diff --git a/.gitattributes b/.gitattributes index 064ee9fb533a38c20fbab9b33e2c1fe2e7fed41a..df412caa1f198b26fad35ed906c94ec390b4bad0 100644 --- a/.gitattributes +++ b/.gitattributes @@ -239,3 +239,4 @@ data_all_eng_slimpj/shuffled/split/split_finalaa/part-00.finalaa filter=lfs diff data_all_eng_slimpj/shuffled/split/split_finalaa/part-18.finalaa filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalaa/part-17.finalaa filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalaa/part-02.finalaa filter=lfs diff=lfs merge=lfs -text +data_all_eng_slimpj/shuffled/split/split_finalaa/part-07.finalaa filter=lfs diff=lfs merge=lfs -text diff --git a/data_all_eng_slimpj/shuffled/split/split_finalaa/part-07.finalaa b/data_all_eng_slimpj/shuffled/split/split_finalaa/part-07.finalaa new file mode 100644 index 0000000000000000000000000000000000000000..db3516511b5d4d7fed4d5697d1807c8e0eb47bf7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split/split_finalaa/part-07.finalaa @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:480d6058973a8a8cc80cded44525b7410a59d571072568f1879e0dd7234468f8 +size 12576986227 diff --git a/data_all_eng_slimpj/shuffled/split2/finalztuo b/data_all_eng_slimpj/shuffled/split2/finalztuo new file mode 100644 index 0000000000000000000000000000000000000000..d229a7b0967d2434b0af540b298ece2e858cef80 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalztuo @@ -0,0 +1,5 @@ +{"text":"\\section{Motivation}\\label{sec:motivation}\nModern cosmology has confronted the physics community with a number of unexplained observations, such as the accelerating expansion of the universe and the isotropy of the cosmological microwave background. The standard model of cosmology aims to explain these observations by the presence of dark energy in the $\\Lambda$CDM model and an inflationary phase in the early universe. However, the mechanisms behind these explanations, i.e., the constituents of dark energy and the driving force of the inflation, are not yet understood. This opens the possibility to create various models both from particle physics and gravity theory. The model we discuss here belongs to the gravitational category. The basic idea of this model is to replace the metric geometry of spacetime from general relativity by a Finsler length measure. In this article we derive a theory of fluid dynamics based on this Finsler geometric background.\n\nThe work we present in this article has two main ingredients. The first ingredient, which serves as the background geometry, is the Finsler spacetime framework.\\cite{Pfeifer:2011tk,Pfeifer:2011xi,Pfeifer:2013gha,Pfeifer:2014yua} This framework introduces a notion of Finsler geometry which provides a unified description of a Lorentzian causality, observers and gravity, and which can be used as a background for field theories such as electrodynamics. The second ingredient is the kinetic theory of fluids.\\cite{Ehlers:1971,Sarbach:2013fya,Sarbach:2013uba} This theory is based on the idea that fluids can be modeled as being constituted by particles, whose worldlines are geodesics on the background spacetime. In the continuum limit this theory yields equations of motion for a density function on a subspace of the tangent bundle of spacetime. In this article we generalize this kinetic theory of fluids to Finsler spacetimes. 
In this model the density function becomes a function on observer space,\\cite{Gielen:2012fz,Hohmann:2013fca,Hohmann:2014gpa,Hohmann:2015pva} which is the space of physically allowed four-velocities. We apply this formalism to two physically motivated examples.\n\nThe outline of this article is as follows. In section~\\ref{sec:finsler} we provide a brief introduction to the geometry of Finsler spacetimes. From this starting point we construct the space of physical observer velocities in section~\\ref{sec:observer} and discuss its geometric structure. We then use this structure to construct a model of fluid dynamics based on kinetic theory in section~\\ref{sec:fluids}. In section~\\ref{sec:special} we apply this model to two particular examples: a collisionless dust fluid and a fluid with cosmological symmetry. We end with a conclusion in section~\\ref{sec:conclusion}.\n\n\\section{Finsler spacetime geometry}\\label{sec:finsler}\nWe start our discussion with a brief review of the background geometry. In this section we introduce the basic geometric objects on a Finsler spacetime, which will later be used for the construction of fluid dynamics. Our starting point is the following definition:\\cite{Pfeifer:2011tk,Pfeifer:2011xi,Pfeifer:2013gha}\n\nA \\emph{Finsler spacetime} \\((M,L,F)\\) is a four-dimensional, connected, Hausdorff, paracompact, smooth manifold \\(M\\) equipped with continuous real functions \\(L, F\\) on the tangent bundle \\(TM\\) which has the following properties:\n\\begin{enumerate}\n\\item\\label{finsler:lsmooth}\n\\(L\\) is smooth on the tangent bundle without the zero section \\(TM \\setminus \\{0\\}\\).\n\\item\\label{finsler:lhomogeneous}\n\\(L\\) is positively homogeneous of real degree \\(n \\geq 2\\) with respect to the fiber coordinates of~\\(TM\\),\n\\begin{equation}\nL(x,\\lambda y) = \\lambda^nL(x,y) \\quad \\forall \\lambda > 0\\,,\n\\end{equation}\nand defines the Finsler function \\(F\\) via \\(F(x,y) = |L(x,y)|^{\\frac{1}{n}}\\).\n\\item\\label{finsler:lreversible}\n\\(L\\) is reversible: \\(|L(x,-y)| = |L(x,y)|\\).\n\\item\\label{finsler:lhessian}\nThe Hessian\n\\begin{equation}\ng^L_{ab}(x,y) = \\frac{1}{2}\\bar{\\partial}_a\\bar{\\partial}_bL(x,y)\n\\end{equation}\nof \\(L\\) with respect to the fiber coordinates is non-degenerate on \\(TM \\setminus X\\), where \\(X \\subset TM\\) has measure zero and does not contain the null set \\(\\{(x,y) \\in TM | L(x,y) = 0\\}\\).\n\\item\\label{finsler:timelike}\nThe unit timelike condition holds, i.e., for all \\(x \\in M\\) the set\n\\begin{multline}\n\\Omega_x = \\Bigg\\{y \\in T_xM \\Bigg| |L(x,y)| = 1,\\\\\ng^L_{ab}(x,y) \\text{ has signature } (\\epsilon,-\\epsilon,-\\epsilon,-\\epsilon), \\epsilon = \\frac{L(x,y)}{|L(x,y)|}\\Bigg\\}\n\\end{multline}\ncontains a non-empty closed connected component \\(S_x \\subseteq \\Omega_x \\subset T_xM\\).\n\\end{enumerate}\n\nHere we have used coordinates \\((x^a)\\) on \\(M\\) and the \\emph{induced coordinates} \\((x^a,y^a)\\) on \\(TM\\) such that \\((x,y) = y^a\\partial_a|_x \\in T_xM\\). The Finsler function \\(F\\) introduced above defines a length functional\n\\begin{equation}\\label{eqn:finslerlength}\ns[\\gamma] = \\int d\\tau\\,F(\\gamma(\\tau),\\dot{\\gamma}(\\tau))\\,,\n\\end{equation}\nwhich measures the length of a curve \\(\\tau \\mapsto \\gamma(\\tau)\\) on \\(M\\). An important class of curves called \\emph{Finsler geodesics} are those for which the length functional becomes extremal. 
They satisfy the geodesic equation\n\\begin{equation}\\label{eqn:geodesic}\n\\ddot{\\gamma}^a + N^a{}_b(\\gamma,\\dot{\\gamma})\\dot{\\gamma}^b = 0\\,.\n\\end{equation}\nHere we have introduced the \\emph{Cartan non-linear connection}\n\\begin{equation}\\label{eqn:nonlinconn}\nN^a{}_b = \\frac{1}{4}\\bar{\\partial}_b\\left[g^{F\\,ac}(y^d\\partial_d\\bar{\\partial}_cF^2 - \\partial_cF^2)\\right]\\,,\n\\end{equation}\nwhere \\(\\partial_a = \\partial\/\\partial x^a\\), \\(\\bar{\\partial}_a = \\partial\/\\partial y^a\\) and \\(g^{F\\,ab}\\) is the inverse of the \\emph{Finsler metric}\n\\begin{equation}\ng^F_{ab}(x,y) = \\frac{1}{2}\\bar{\\partial}_a\\bar{\\partial}_bF^2(x,y)\\,.\n\\end{equation}\nThe Cartan non-linear connection induces a unique split of the tangent bundle \\(TTM\\) into horizontal and vertical parts, \\(TTM = HTM \\oplus VTM\\). The horizontal tangent bundle \\(HTM\\) is spanned by the vector fields\n\\begin{equation}\n\\{\\delta_a = \\partial_a - N^b{}_a\\bar{\\partial}_b\\}\\,,\n\\end{equation}\nwhile the vertical tangent bundle \\(VTM\\) is spanned by \\(\\{\\bar{\\partial}_a\\}\\). The corresponding basis \\(\\{\\delta_a,\\bar{\\partial}_a\\}\\) that respects this split is called the \\emph{Berwald basis}. Its dual basis of \\(T^*TM\\) is given by\n\\begin{equation}\n\\{dx^a, \\delta y^a = dy^a + N^a{}_bdx^b\\}\\,.\n\\end{equation}\nUsing the Berwald basis we can find another description for Finsler geodesics. For this purpose we consider the canonical lift \\(\\Gamma: \\mathbb{R} \\to TM\\) of a geodesic \\(\\gamma\\) to the tangent bundle \\(TM\\). Writing \\(\\Gamma\\) in coordinates \\((\\Gamma^a,\\bar{\\Gamma}^a) = (\\gamma^a,\\dot{\\gamma}^a)\\) we find the geodesic equation\n\\begin{equation}\n\\dot{\\Gamma}^a = \\bar{\\Gamma}^a\\,, \\quad \\dot{\\bar{\\Gamma}}^a = -N^a{}_b(\\Gamma^a,\\bar{\\Gamma}^a)\\bar{\\Gamma}^b\\,.\n\\end{equation}\nSince this is a first order differential equation, the canonical lifts of Finsler geodesics take the form of integral curves of a vector field \\(\\mathbf{S} = y^a\\delta_a\\) on \\(TM\\), which we call the \\emph{geodesic spray}. Finally, the Berwald basis and the Finsler metric allow the construction of a metric\n\\begin{equation}\\label{eqn:sasakimetric}\nG = -g^F_{ab}\\,dx^a \\otimes dx^b - \\frac{g^F_{ab}}{F^2}\\,\\delta y^a \\otimes \\delta y^b\\,.\n\\end{equation}\non the tangent bundle, which is called the \\emph{Sasaki metric}.\n\n\\section{Observer space geometry}\\label{sec:observer}\nAfter discussing the most important Finsler geometric structures on the tangent bundle \\(TM\\) in the previous section we now restrict ourselves to a particular subspace of \\(TM\\). Recall from the definition of a Finsler spacetime that for each \\(x \\in M\\) there exists a shell \\(S_x \\subset T_xM\\) of future unit timelike vectors, which corresponds to the four-velocities of test masses and physical observers. Their union\n\\begin{equation}\\label{eqn:observerspace}\nO = \\bigcup_{x \\in M}S_x\n\\end{equation}\nis called \\emph{observer space}. This is the space on which we will define fluid dynamics in the remainder of this work.\n\nThe observer space is a seven-dimensional submanifold of the tangent bundle \\(TM\\) and thus inherits a number of geometric structures from the Finsler geometry. In particular, the geodesic spray \\(\\mathbf{S}\\) introduced in the previous section is tangent to the observer space, and thus restricts to a vector field \\(\\mathbf{r}\\) on \\(O\\), called the \\emph{Reeb vector field}. 
This relation between the geodesic spray and the observer space is in fact necessary for the interpretation of \\(O\\) as the space of physical four-velocities: it means that a test mass possessing a physical initial four-velocity and following a geodesic, and thus an integral curve of \\(\\mathbf{S}\\), will have a physical four-velocity at all times. The canonical lifts of all physical geodesics, corresponding to freely falling test masses, are thus given by integral curves of the Reeb vector field \\(\\mathbf{r}\\) on \\(O\\).\n\nThe second important structure on the observer space is the restriction \\(\\tilde{G}\\) of the Sasaki metric \\(G\\) to \\(O\\). This also equips \\(O\\) with a volume form \\(\\Sigma = \\mathrm{Vol}_{\\tilde{G}}\\). This volume form has a number of properties which are relevant for the construction of a theory of fluid dynamics. The most relevant property for the remainder of this article is the fact that it is preserved by the flow of the Reeb vector field, \\(\\mathcal{L}_{\\mathbf{r}}\\Sigma = 0\\).\n\nFrom the Reeb vector field \\(\\mathbf{r}\\) and the volume form \\(\\Sigma\\) we finally define the \\emph{particle measure} \\(\\omega = \\iota_{\\mathbf{r}}\\Sigma\\). It is the unique six-form (up to a constant factor) which has the following properties necessary for the construction of fluid dynamics from the kinetic theory. Most importantly, its restriction to a hypersurface \\(\\sigma \\subset O\\) which is not tangent to the Reeb vector field \\(\\mathbf{r}\\) is a non-vanishing volume form on \\(\\sigma\\). Further, it is closed, \\(d\\omega = 0\\), and preserved by the flow of the Reeb vector field, \\(\\mathcal{L}_{\\mathbf{r}}\\omega = 0\\). The relevance of these properties will become clear in the following section, when we use it as an ingredient to develop a theory of fluid dynamics.\n\n\\section{Kinetic theory and fluid dynamics}\\label{sec:fluids}\nWe now turn our attention to the kinetic theory of fluids\\cite{Ehlers:1971,Sarbach:2013fya,Sarbach:2013uba} and apply it to fluids on the Finsler spacetime background geometry detailed in the preceding section. For this purpose we first briefly review how fluids can be modeled by the geodesic motion of particles. From the geodesic motion on Finsler spacetimes we then derive the equations of motion for a Finsler fluid.\n\nThe basic idea of the kinetic theory of fluids is the assumption that fluids are constituted by particles. In the simplest possible case of a single component fluid, which we will discuss here, all particles have identical properties, such as mass and electric charge, and follow piecewise geodesic curves. This geodesic motion corresponds to the motion of freely falling test masses without any other interaction. The interaction between particles is modeled by collisions, which mark the endpoints of the geodesic pieces of the particle trajectories and correspond to instantaneous transfers of momentum between the particles. 
The physical background of this model of interactions is the assumption that the interaction distances are small compared to the distances between the particles and that the interaction times are short compared to the time between interactions, which is the case for a sufficiently low density.\n\nIn order to construct a continuum theory of fluids from the particle model one introduces the one-particle distribution function \\(\\phi: O \\to \\mathbb{R}^+\\) such that for each oriented hypersurface \\(\\sigma \\subset O\\) the integral\n\\begin{equation}\nN[\\sigma] = \\int_{\\sigma}\\phi\\omega\n\\end{equation}\nis the number \\(N[\\sigma]\\) of particle trajectories whose canonical lifts to \\(O\\) pass through \\(\\sigma\\). Here particle trajectories \\(\\gamma\\) are counted positively (negatively) if the tangent vector of their canonical lift and a positively oriented basis of the tangent space to \\(\\sigma\\) at the intersection point between \\(\\gamma\\) and \\(\\sigma\\) form a positively (negatively) oriented basis of the tangent space of \\(O\\) at that point. Further, \\(\\omega\\) denotes the particle measure introduced in the previous section.\n\nWe now take a closer look at the canonical lifts of the particle trajectories to the observer space. It follows from our assumption of instantaneous momentum transfer that these lifts are discontinuous at collisions, i.e., they have endpoints corresponding to the particles' velocities before and after the collision. To incorporate collisions into the continuum theory we define the collision transfer density \\(\\dot{\\phi}: O \\to \\mathbb{R}\\) such that for each hypervolume \\(V \\subset O\\) the integral\n\\begin{equation}\n\\dot{N}[V] = \\int_V\\dot{\\phi}\\Sigma\n\\end{equation}\nis the number of initial points minus the number of final points of canonical lifts of particle trajectories in \\(V\\). Here \\(\\Sigma\\) denotes the volume form of the Sasaki metric \\(\\tilde{G}\\) on \\(O\\).\n\nNote that we have defined the counting prescription for curves passing a hypersurface such that any curve which has an initial point, but no final point in the hypervolume \\(V\\) passes its boundary \\(\\partial V\\) in the positive direction, and in the negative direction in the opposite case. It thus follows that \\(\\dot{N}[V] = N[\\partial V]\\). From this we further find that\n\\begin{equation}\n\\int_V\\dot{\\phi}\\Sigma = \\dot{N}[V] = N[\\partial V] = \\int_{\\partial V}\\phi\\omega = \\int_Vd(\\phi\\omega) = \\int_V(\\mathcal{L}_{\\mathbf{r}}\\phi)\\Sigma\\,,\n\\end{equation}\nwhere we used Stokes' theorem and the properties of the particle measure \\(\\omega\\). Since this holds for any hypervolume \\(V\\) it follows that\n\\begin{equation}\n\\dot{\\phi} = \\mathcal{L}_{\\mathbf{r}}\\phi\\,.\n\\end{equation}\nThe right hand side of this equation describes the evolution of \\(\\phi\\). We thus obtain the equation of motion of the fluid by specifying a functional \\(\\dot{\\phi}[\\phi]\\) describing collisions between particles. For the simplest possible case of a collisionless fluid we have \\(\\dot{\\phi} = 0\\), and the fluid equation of motion reduces to the Liouville equation \\(\\mathcal{L}_{\\mathbf{r}}\\phi = 0\\).\n\nThis concludes our discussion of the kinetic theory of fluids in general. 
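As a brief consistency check, we note that in the metric limit \\(F^2(x,y) = |g_{ab}(x)y^ay^b|\\) the Cartan non-linear connection reduces to \\(N^a{}_b = \\Gamma^a{}_{bc}y^c\\), where \\(\\Gamma^a{}_{bc}\\) are the Christoffel symbols of \\(g\\), so that for a collisionless fluid the Liouville equation \\(\\mathcal{L}_{\\mathbf{r}}\\phi = 0\\) takes the familiar form of the general relativistic Vlasov equation\n\\begin{equation}\ny^a\\partial_a\\phi - \\Gamma^b{}_{ac}y^ay^c\\bar{\\partial}_b\\phi = 0\\,.\n\\end{equation}\n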
In the following section we will discuss particular examples of fluids and derive their equations of motion.\n\n\\section{Special cases}\\label{sec:special}\nAfter discussing general fluids on Finsler spacetimes in the previous section, we now consider a few special cases. The first case we display in section~\\ref{subsec:dust} is dust, which is conventionally described as a collisionless perfect fluid with vanishing pressure. Starting from the Liouville equation we derive the generalization of the Euler equations to dust on Finsler spacetimes. The second case shown in section~\\ref{subsec:cosmology} is the most general cosmological fluid. We start from a general Finsler spacetime with cosmological symmetry and derive the equations of motion for the most general fluid obeying the same symmetry.\n\n\\subsection{Dust fluid}\\label{subsec:dust}\nThe first example we discuss is a collisionless dust fluid characterized by a matter density \\(\\rho(x)\\) and four-velocity distribution \\(u^a(x)\\), which is a future timelike vector field normalized by the Finsler function, and thus a function \\(u: M \\to O\\). The aim of this section is to derive the equations of motion for \\(\\rho\\) and \\(u\\) from the kinetic theory introduced in the preceding section. For this purpose we will first construct the one-particle distribution function \\(\\phi\\) and then impose the Liouville equation \\(\\mathcal{L}_{\\mathbf{r}}\\phi = 0\\) for a collisionless fluid.\n\nWe start by introducing coordinates \\((\\hat{x}^a,\\theta^{\\alpha}) = (x^a,y^{\\alpha}\/y^0)\\) on \\(O\\) and writing the three-velocity as \\(v^{\\alpha} = u^{\\alpha}\/u^0\\). Together with the proper Dirac delta distribution on the unit timelike shell \\(S_{\\hat{x}}\\) we find the one-particle distribution function\n\\begin{equation}\\label{eqn:dustphi}\n\\phi(\\hat{x},\\theta) = \\frac{1}{m}\\rho(\\hat{x})\\frac{\\delta(\\theta - v(\\hat{x}))}{\\sqrt{h^F(\\hat{x},\\theta)}}\\,,\n\\end{equation}\nwhere \\(h^F\\) denotes the determinant of the restriction\n\\begin{equation}\nh^F_{\\alpha\\beta} = \\frac{\\partial y^a}{\\partial\\theta^{\\alpha}}\\frac{\\partial y^b}{\\partial\\theta^{\\beta}}g^F_{ab}\n\\end{equation}\nof the Sasaki metric to \\(S_{\\hat{x}}\\). A direct calculation of the Lie derivative \\(\\mathcal{L}_{\\mathbf{r}}\\phi\\) is obstructed by the presence of the delta distribution. We therefore introduce an arbitrary smooth test function \\(f: O \\to \\mathbb{R}\\). 
Using the properties of the particle measure \\(\\omega\\) and integrating by parts we then find\n\\begin{equation}\n0 = \\int_{O}f\\left(\\mathcal{L}_{\\mathbf{r}}\\phi\\right)\\Sigma = \\int_{O}d(\\phi\\omega)f = -\\int_{O}df \\wedge \\omega\\phi = -\\int_{O}d(f\\omega)\\phi =\n-\\int_{O}\\phi\\left(\\mathcal{L}_{\\mathbf{r}}f\\right)\\Sigma\\,.\n\\end{equation}\nWriting the Finsler function in the form\n\\begin{equation}\nF = y^0\\tilde{F}(x,y\/y^0) = y^0\\tilde{F}(\\hat{x},\\theta)\n\\end{equation}\nwe obtain the Reeb vector field\n\\begin{equation}\n\\mathbf{r} = y^a\\left[\\hat{\\partial}_a + \\tilde{F}\\left(N^0{}_a\\theta^{\\alpha} - N^{\\alpha}{}_a\\right)\\tilde{\\partial}_{\\alpha}\\right],\n\\end{equation}\nusing the partial derivatives \\(\\hat{\\partial}_a\\) and \\(\\tilde{\\partial}_{\\alpha}\\) with respect to \\(\\hat{x}^a\\) and \\(\\theta^{\\alpha}\\), and the volume form\n\\begin{equation}\n\\Sigma = \\sqrt{g^Fh^F}d^4\\hat{x}d^3\\theta\\,.\n\\end{equation}\nFinally inserting the explicit form for \\(\\phi\\) we arrive at\n\\begin{equation}\n0 = \\int_Od^4\\hat{x}d^3\\theta\\sqrt{g^F}\\delta(\\theta - v)\\rho y^a\\left[\\hat{\\partial}_a + \\tilde{F}\\left(N^0{}_a\\theta^{\\alpha} - N^{\\alpha}{}_a\\right)\\tilde{\\partial}_{\\alpha}\\right]f\\,.\n\\end{equation}\nIntegrating over \\(d^3\\theta\\) cancels the Dirac distribution and yields\n\\begin{equation}\n0 = \\left.\\int_Md^4\\hat{x}\\sqrt{g^F}\\rho u^a\\left[\\hat{\\partial}_af + \\tilde{F}\\left(N^0{}_av^{\\alpha} - N^{\\alpha}{}_a\\right)\\tilde{\\partial}_{\\alpha}f\\right]\\right|_{y^a = u^a(\\hat{x})}\\,,\n\\end{equation}\nwhere it is indicated that all objects on observer space are taken at the point \\((\\hat{x},v(\\hat{x}))\\). Observe that the partial derivative \\(\\hat{\\partial}_af(\\hat{x},v(\\hat{x}))\\) is fixed by \\(f(\\hat{x},v(\\hat{x}))\\) and thus not an independent quantity, while \\(\\tilde{\\partial}_{\\alpha}f(\\hat{x},v(\\hat{x}))\\) depends on the choice of \\(f\\) in a neighborhood of \\(v(\\hat{x})\\), which can be chosen independently. We therefore need to eliminate \\(\\hat{\\partial}_af\\) using integration by parts. 
Since \\(\\hat{\\partial}_a\\) is only a partial derivative, but \\(f(\\hat{x},v(\\hat{x}))\\) contains also an implicit dependence on \\(\\hat{x}\\) via \\(v(\\hat{x})\\), we first need to rewrite the partial derivative into a total derivative using\n\\begin{equation}\n\\hat{\\partial}_af(\\hat{x},v(\\hat{x})) = \\frac{d}{d\\hat{x}^a}f(\\hat{x},v(\\hat{x})) - \\hat{\\partial}_av^{\\alpha}\\tilde{\\partial}_{\\alpha}f(\\hat{x},v(\\hat{x}))\\,.\n\\end{equation}\nIntegration by parts and using the normalization \\(u^0\\tilde{F}(\\hat{x},v(\\hat{x})) = F(\\hat{x},u(\\hat{x})) = 1\\) then finally yields\n\\begin{multline}\n0 = \\int_Md^4\\hat{x}\\sqrt{g^F}\\bigg\\{\\left[\\frac{u^a}{u^0}\\left(N^0{}_a\\frac{u^{\\alpha}}{u^0} - N^{\\alpha}{}_a\\right) - u^a\\hat{\\partial}_a\\left(\\frac{u^{\\alpha}}{u^0}\\right)\\right]\\rho\\tilde{\\partial}_{\\alpha}f\\\\\n- \\left[\\hat{\\partial}_a(\\rho u^a) + \\frac{1}{2}\\rho u^ag^{F\\,bc}\\left(\\hat{\\partial}_ag^F_{bc} + \\hat{\\partial}_au^d\\bar{\\partial}_dg^F_{bc}\\right)\\right]f\\bigg\\}\\bigg|_{y^a = u^a(\\hat{x})}\\,.\n\\end{multline}\nSince now all spacetime derivatives act on objects which depend only on spacetime coordinates, we can rename the coordinates \\(\\hat{x}^a\\) back to \\(x^a\\) and read off the equations of motion\n\\begin{equation}\\label{eqn:dusteomraw1}\nu^a\\left(N^0{}_a\\frac{u^{\\alpha}}{u^0} - N^{\\alpha}{}_a\\right) - u^a\\partial_au^{\\alpha} + \\frac{u^au^{\\alpha}}{u^0}\\partial_au^0 = 0\n\\end{equation}\nand\n\\begin{equation}\\label{eqn:dusteomraw2}\n\\partial_a(\\rho u^a) + \\frac{1}{2}\\rho u^ag^{F\\,bc}\\left(\\partial_ag^F_{bc} + \\partial_au^d\\bar{\\partial}_dg^F_{bc}\\right) = 0\\,.\n\\end{equation}\nThese equations can further be simplified. Using the properties of the Cartan non-linear connection we can rewrite the first equation~\\eqref{eqn:dusteomraw1} in the form\n\\begin{equation}\\label{eqn:dusteom1}\n0 = u^a(\\partial_au^b + N^b{}_a) = \\nabla u^b\\,,\n\\end{equation}\nwhere we have introduced the dynamical covariant derivative \\(\\nabla\\). Similarly, the second equation~\\eqref{eqn:dusteomraw2} can be written as\n\\begin{equation}\\label{eqn:dusteom2}\n0 = \\partial_a(\\rho u^a) + \\frac{1}{2}\\rho u^ag^{F\\,bc}\\delta_ag^F_{bc} = \\nabla_{\\delta_a}(\\rho u^a)\\,,\n\\end{equation}\nwhere we have introduced the covariant derivative \\(\\nabla_{\\delta_a}\\) of the Cartan linear connection. In the case of a metric Finsler function \\(F^2(x,y) = |g_{ab}(x)y^ay^b|\\) these equations reduce to the well-known Euler equations\n\\begin{equation}\nu^b\\nabla_bu^a = 0 \\quad \\text{and} \\quad \\nabla_a(\\rho u^a) = 0\n\\end{equation}\nfor a pressureless fluid.\n\n\\subsection{Cosmological fluid}\\label{subsec:cosmology}\nAs the final example we will derive the Liouville equation for a fluid with cosmological symmetry. For this purpose we introduce coordinates \\((t,r,\\vartheta,\\varphi)\\) on the spacetime manifold \\(M\\) and the corresponding induced coordinates \\((t,r,\\vartheta,\\varphi,y^t,y^r,y^{\\vartheta},y^{\\varphi})\\) on \\(TM\\). 
In these coordinates the cosmological symmetry generators take the form\n\\begin{gather}\n\\xi_1 = \\sqrt{1 - kr^2}\\left(\\sin\\vartheta\\cos\\varphi\\partial_r + \\frac{\\cos\\vartheta\\cos\\varphi}{r}\\partial_{\\vartheta} - \\frac{\\sin\\varphi}{r\\sin\\vartheta}\\partial_{\\varphi}\\right)\\,,\\nonumber\\\\\n\\xi_2 = \\sqrt{1 - kr^2}\\left(\\sin\\vartheta\\sin\\varphi\\partial_r + \\frac{\\cos\\vartheta\\sin\\varphi}{r}\\partial_{\\vartheta} + \\frac{\\cos\\varphi}{r\\sin\\vartheta}\\partial_{\\varphi}\\right)\\,,\\nonumber\\\\\n\\xi_3 = \\sqrt{1 - kr^2}\\left(\\cos\\vartheta\\partial_r - \\frac{\\sin\\vartheta}{r}\\partial_{\\vartheta}\\right)\\,, \\quad\n\\xi_6 = \\partial_{\\varphi}\\,,\\nonumber\\\\\n\\xi_4 = \\sin\\varphi\\partial_{\\vartheta} + \\frac{\\cos\\varphi}{\\tan\\vartheta}\\partial_{\\varphi}\\,, \\quad\n\\xi_5 = -\\cos\\varphi\\partial_{\\vartheta} + \\frac{\\sin\\varphi}{\\tan\\vartheta}\\partial_{\\varphi}\\,,\\label{eqn:cosmovect}\n\\end{gather}\nwhere \\(k \\in \\{-1,0,1\\}\\) determines the spatial curvature of the corresponding spacetime. For cosmologically symmetric fluid dynamics we require that both the background geometry and the one-particle distribution function of the fluid obey this symmetry.\n\nWe start by deriving the most general cosmologically symmetric background geometry. A Finsler spacetime is symmetric under the action of a vector field \\(\\xi\\) if and only if the geometry function \\(L\\), and thus also the Finsler function \\(F\\), is invariant under the complete lift of \\(\\xi\\) to the tangent bundle,\\cite{Pfeifer:2011xi,Pfeifer:2013gha,Hohmann:2015pva}\n\\begin{equation}\n\\left(\\xi^a\\partial_a + y^b\\partial_b\\xi^a\\bar{\\partial}_a\\right)F = 0\\,.\n\\end{equation}\nIn the present case of a cosmological symmetry it is most convenient to introduce new coordinates on \\(TM\\) via the definition\n\\begin{gather}\nt = \\hat{t}\\,, \\quad r = \\hat{r}\\,, \\quad \\vartheta = \\hat{\\vartheta}\\,, \\quad \\varphi = \\hat{\\varphi}\\,, \\quad y^t = \\hat{y}\\,,\\nonumber\\\\\ny^r = \\hat{w}\\cos\\hat{u}\\sqrt{1 - k\\hat{r}^2}\\,, \\quad y^{\\vartheta} = \\frac{\\hat{w}}{\\hat{r}}\\sin\\hat{u}\\cos\\hat{v}\\,, \\quad y^{\\varphi} = \\frac{\\hat{w}}{\\hat{r}\\sin\\hat{\\vartheta}}\\sin\\hat{u}\\sin\\hat{v}\\,.\n\\end{gather}\nAfter calculating the complete lifts of the vector fields~\\eqref{eqn:cosmovect} in these coordinates it turns out that the most general Finsler function with cosmological symmetry is given by \\(F = F(\\hat{t},\\hat{y},\\hat{w})\\). Further, from the fact that \\(F\\) is 1-homogeneous in the coordinates \\(\\hat{y}\\) and \\(\\hat{w}\\) follows that it must be of the form\n\\begin{equation}\nF(\\hat{t},\\hat{y},\\hat{w}) = \\hat{y}\\tilde{F}\\left(\\hat{t},\\hat{w}\/\\hat{y}\\right)\\,.\n\\end{equation}\nFor the discussion of geodesic motion it is convenient to introduce yet another set of coordinates on \\(TM\\) as\n\\begin{equation}\n\\tilde{t} = \\hat{t}\\,, \\quad \\tilde{r} = \\hat{r}\\,, \\quad \\tilde{\\vartheta} = \\hat{\\vartheta}\\,, \\quad \\tilde{\\varphi} = \\hat{\\varphi}\\,, \\quad \\tilde{u} = \\hat{u}\\,, \\quad \\tilde{v} = \\hat{v}\\,, \\quad \\tilde{y} = \\hat{y}\\tilde{F}\\left(\\hat{t},\\frac{\\hat{w}}{\\hat{y}}\\right)\\,, \\quad \\tilde{w} = \\frac{\\hat{w}}{\\hat{y}}\\,.\n\\end{equation}\nIn these coordinates the observer space \\(O\\) is given as a connected component of the submanifold \\(\\tilde{y} = 1\\), so that one can use the remaining seven coordinates to parametrize \\(O\\). 
The Reeb vector field then takes the form\n\\begin{multline}\n\\mathbf{r} = \\frac{1}{\\tilde{F}}\\Bigg(\\tilde{\\partial}_t + \\tilde{w}\\cos\\tilde{u}\\sqrt{1 - k\\tilde{r}^2}\\tilde{\\partial}_r + \\frac{\\tilde{w}\\sin\\tilde{u}\\cos\\tilde{v}}{\\tilde{r}}\\tilde{\\partial}_{\\vartheta} + \\frac{\\tilde{w}\\sin\\tilde{u}\\sin\\tilde{v}}{\\tilde{r}\\sin\\tilde{\\vartheta}}\\tilde{\\partial}_{\\varphi}\\\\\n- \\frac{\\tilde{w}\\sin\\tilde{u}\\sqrt{1 - k\\tilde{r}^2}}{\\tilde{r}}\\tilde{\\partial}_u - \\frac{\\tilde{w}\\sin\\tilde{u}\\sin\\tilde{v}}{\\tilde{r}\\tan\\tilde{\\vartheta}}\\tilde{\\partial}_v - \\frac{\\tilde{F}_{tw}}{\\tilde{F}_{ww}}\\tilde{\\partial}_w\\Bigg)\\,,\n\\end{multline}\nwhere the subscripts \\(t\\) and \\(w\\) indicate derivatives with respect to \\(\\tilde{t}\\) and \\(\\tilde{w}\\), respectively.\n\nWe finally come to the discussion of fluids on the Finsler spacetime background derived above. Since the background geometry obeys the cosmological symmetry defined by the vector fields~\\eqref{eqn:cosmovect}, their canonical lifts are tangent to the observer space \\(O\\). A fluid obeys the same symmetry if and only if its one-particle distribution function \\(\\phi\\) is invariant under the restriction of these canonical lifts to \\(O\\). In the present case the most general one-particle distribution function satisfying this condition takes the form \\(\\phi = \\phi(\\tilde{t},\\tilde{w})\\). Its Lie derivative with respect to the Reeb vector field is thus given by\n\\begin{equation}\n\\mathcal{L}_{\\mathbf{r}}\\phi = \\frac{1}{\\tilde{F}}\\left(\\phi_t - \\frac{\\tilde{F}_{tw}}{\\tilde{F}_{ww}}\\phi_w\\right)\\,.\n\\end{equation}\nFor the simplest possible case of a collisionless fluid the equation of motion hence takes the form\n\\begin{equation}\n\\tilde{F}_{ww}\\phi_t = \\tilde{F}_{tw}\\phi_w\\,.\n\\end{equation}\nThis is the Liouville equation for a fluid with cosmological symmetry.\n\n\\section{Conclusion}\\label{sec:conclusion}\nWe have derived a model for fluid dynamics on Finsler geometric backgrounds based on the kinetic theory of fluids and applied our model to two important special cases. Our results show that Finsler spacetimes provide a suitable background geometry for fluid dynamics and that the obtained fluid equations of motion reduce to their well-known limits if the background geometry is metric.\n\nOf course any model of fluid dynamics must be complemented by a suitable model of gravitational dynamics in order to derive consistent solutions. For the Finsler spacetimes we used here an extension of general relativity exists.\\cite{Pfeifer:2011xi,Pfeifer:2013gha} The source of the gravitational field in this model is a scalar function on the tangent bundle of spacetime. Deriving this energy-momentum scalar for a kinetic fluid and the corresponding gravitational field equations will be a topic of future research.\n\n\\section*{Acknowledgments}\nThe author is happy to thank Christian Pfeifer for extensive discussions and fruitful collaboration. He gratefully acknowledges the full financial support of the Estonian Research Council through the Postdoctoral Research Grant ERMOS115 and the Startup Research Grant PUT790.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nSuppose a point target is arbitrarily placed on the unit-circumference circle. The target then proceeds to move at some constant velocity $v$ (either known or unknown). 
An agent is interested to determine the target's position and velocity to within some resolution $\\delta$, with an error probability at most $\\varepsilon$, as quickly as possible. To that end, the agent can probe any region of his choosing (contiguous or non-contiguous) on the circle for the presence of the target, say once per second. He then receives a binary measurement pertaining to the presence of the target in the probed region, which is corrupted by additive binary noise. While the noise sequence is assumed to be independent over time, its magnitude will generally depend on the size of the probed region. This postulate is practically motivated if one imagines that the circle is densely covered by many small sensors; probing a region then corresponds to activating the relevant sensors and obtaining a measurement that is a (Boolean) function of the sum of the noisy signals from these sensors. We therefore further operate under the assumption that the larger the probed region, the higher the noise level. Our goal is to characterize the relation between $\\varepsilon$, $\\delta$, and the expected time $\\mathbb{E}(\\tau)$ until the agent's goal is met, for both adaptive and non-adaptive search strategies. \n\nThe case of stationary target search with measurement independent noise $p$ is well known (see e.g. \\cite{burnashev1974interval}) to be equivalent to the problem of channel coding with noiseless feedback over a Binary Symmetric Channel (BSC) with crossover probability $p$, where the message corresponds to the target, the number of messages pertains to inverse of the resolution, the channel noise plays the role of measurement noise, and the existence of noiseless feedback pertains to the fact that the agent may use past measurements to adapt his probing strategy. Based on the results of \\cite{burnashev_exp} it can be readily shown that using adaptive strategies one can achieve \n\\begin{equation*}\n\\mathbb{E}(\\tau) = \\frac{\\log{(1\\slash\\delta)}}{C(p)} + \\frac{\\log{(1\\slash\\varepsilon)}}{C_1(p)} + \\mathrm{O}(\\log\\log{\\frac{1}{\\delta\\varepsilon}})\n\\end{equation*}\nwhere $C(p)$ is the Shannon capacity of the BSC with crossover probability $p$, and $C_1(p) = D(p\\|1-p)$. This result is also the best possible up to sub-logarithmic terms. For non-adaptive strategies, standard channel coding results \\cite{GallagerBook} indicate for any fixed $0 0$, $\\Pr(\\mathcal{A}^c) = 2^{-2^{O(N)}}$. \n\\end{lemma}\n\\begin{remark}\nUnder the event $\\mathcal{A}$, we can safely assume that the measurements are observed through a BSC$(p[q^*+\\epsilon])$, since we can always artificially add noise to the observations at any time $n$ for which $|S_n| < q^*+\\epsilon$. \n\\end{remark}\n\nOur codebook induces a set of \\textit{trajectory codewords} $\\{x_{m(w_0,v),n}\\}_{n,w_0,v}$. Note that each trajectory codeword corresponds to a set of possible initial positions and velocities. With a slight abuse of notations, we denote the trajectory codewords by $\\{\\bx_k\\}_{k=1}^K$. After $N$ queries, we find the trajectory codeword that has the highest likelihood under the assumption that the measurements are observed through a BSC$(p[q^*+\\epsilon])$. We now show that the likelihood of the correct trajectory codeword is with high probability higher than that of all trajectory codewords whose associated initial position or velocity are at least $(\\delta,N)$-far. 
Hence, the initial position and velocity of the decoded trajectory will be $(\\delta,N)$-close to the correct one, with high probability. Note that if the target had been stationary, we would have searched for the highest likelihood row just as in channel coding. \n\n\n\nWe write the average probability of error as \n\\begin{equation*}\n\\overline{P}_e = \\Pr(\\mathcal{A})\\Pr(e|\\mathcal{A}) + \\Pr(\\mathcal{A}^c)\\Pr(e|\\mathcal{A}^c). \n\\end{equation*}\nThe second term vanishes double exponentially fast. For the other term we have\n\\begin{align*}\n \t\\Pr(e|\\mathcal{A})=\\sum_{\\bx_k}\\Pr(\\bx_k|\\mathcal{A})P_{\\mathcal{A}}(\\by|\\bx_k)\\Pr(e|\\bx_k,\\by,\\mathcal{A}),\n\\end{align*}\nwhere $\\by$ are the noisy observations and $P_{\\mathcal{A}}(\\by|\\bx_m)$ is the $BSC(q^*+\\epsilon)$ induced by the event $\\mathcal{A}$ (and possible randomization). Let $\\mathcal{E}_{k'}$ denote the event that the trajectory codeword $\\bx_{k'}$ is chosen instead of $\\bx_k$. Let $T_k$ be the set of all $k'$ for which either the velocity or the initial position of each of the trajectories associated with $\\bx_k'$, are more than $\\delta$-far from those of $\\bx_k$. \n\\begin{align}\\label{eq:Tk}\n \t\\Pr(e|\\bx_k,\\by,\\mathcal{A}) \\leq \\sum_{k'\\in T_k}\\Pr(\\mathcal{E}_{k'}|\\mathcal{A})\n\\end{align}\nand \n\\begin{align}\n \t\\Pr(\\mathcal{E}_{k'}|\\mathcal{A}) = \\sum_{\\bx_{k'}: P_{\\mathcal{A}}(\\by|\\bx_k)\\leq P_{\\mathcal{A}}(\\by|\\bx_{k'})} \\Pr(\\bx_{k'}|\\bx_k,\\mathcal{A})\\label{eq:CondErr}\n\\end{align}\nNote that unlike \\cite[eq. 5.6.8]{GallagerBook}, we cannot assume the trajectory codewords are independent under event $\\mathcal{A}$. Furthermore, for $k'\\in T_k$ the trajectories may intersect once. We therefore have that $\\Pr(\\bx_k, \\bx_{k'}|\\mathcal{A}))\\leq \\frac {\\Pr(\\bx_k,\\bx_{k'})}{1-\\Pr(\\mathcal{A}^c)} \\leq \\frac {Q(\\bx_k)Q(\\bx_{k'})}{(1-\\Pr(\\mathcal{A}^c))q_{min}}$ and $\\Pr(\\bx_k|\\mathcal{A})\\geq Q(\\bx_k) - \\Pr(\\mathcal{A}^c)$, where $Q(\\cdot)$ denotes the random coding prior, and $q_{min}$ denotes the probability of the least probable binary symbol under Q. Using this and Bayes rule, for $N$ large enough we have:\n\\begin{align}\n&P(\\bx_{k'}|\\bx_k,\\mathcal{A}) \\leq \\frac{Q(\\bx_k)Q(\\bx_{k'})}{(1-\\Pr(\\mathcal{A}^c))(Q(\\bx_k)-\\Pr(\\mathcal{A}^c))q_{min}} \\notag\n\\\\ &\\leq \\frac{Q(\\bx_{m'})}{(1-\\Pr(\\mathcal{A}^c)\/q_{min}^N)^2q_{min}} = \\frac{Q(\\bx_{m'})}{q_{min}}(1+2^{-2^{O(N)}})\\label{eq:CondProb}\n\\end{align}\nAfter substituting \\eqref{eq:CondProb} in \\eqref{eq:CondErr} and \\eqref{eq:Tk} and plugging in $\\delta=2^{-NR}$, we can follow Gallager's derivation of the random error exponent \\cite{GallagerBook} almost verbatim, with the following two distinctions: 1) By Lemma \\ref{lem:trajec} the effective number of messages is now $|T_k| = K = M^2\\cdot\\mathrm{O}(\\textrm{poly}(N))$; and 2) for any finite $N$, the exponent is multiplied by a constant pertaining to the double exponential penalty and to $q_{min}$, but this constant converges to unity as $N$ grows. The exponent is positive as long as $R\\leq I(q^*,p[q^*+\\epsilon])\/2$. As $\\epsilon$ is arbitrary, this concludes the proof of achievability. \n\n\n\\section{Adaptive strategies}\nIn this section, we consider the gain to be reaped by allowing the search decisions to be made adaptively. For simplicity, we assume here that the velocity is known in advance, and hence without loss of generality can be assumed to be zero. 
We will again use dithering to make the initial position appear uniformly random. Here, the duration of search $\\tau$ will generally be a random stopping time dependent on the measurements sample path. Moreover, the choice of probing regions $S_n$, for $n$ up to the horizon $\\tau$, \ncan now depend on past measurements. We characterize this gain in terms of the maximal targeting rate, and the targeting rate-reliability tradeoff. As we shall see, adaptivity allows us to achieve the maximal possible rate and reliability, i.e., those associated with the minimal observation noise $p[0]$. \n\n\\subsection{Non Adaptive Search with Validation}\nAs a first attempt at an adaptive strategy, we continue with the non-adaptive search from the previous section, but allow the agent to validate the outcome of the search phase. We will consider two validation schemes, due to Forney \\cite{Forney1968} and Yamamoto-Itoh \\cite{YamaItoh1980}. \n\nIn \\cite{Forney1968}, Forney considered a communication system in which a decoder, at the end of the transmission can signal the encoder to either repeat the message or continue to the next one. Namely, it is assumed that a one bit ``decision feedback'' can be sent back to the transmitter at the end of each message block. This is achieved by adding an erasure option to the decision regions, that allows the decoder\/agent to request a ``retransmission'' if uncertainty is too high, i.e., to restart the exact same coding\/search process from scratch. More concretely, given $Y^N$, a codeword $k$ will be declared as the output if $\\frac {P(y^N|\\bx_k)} {\\sum_{k'\\neq k}P(y^N|\\bx_k')}\\geq 2^{NT}$, where $T>0$ governs the tradeoff between the probability of error and the probability erasure. Let $\\mathcal{E}$ denote the event of erasure. The expected search duration will be $\\frac N {1-\\Pr(\\mathcal{E})}$. While having negligible effect on the rate (as long as $\\Pr(\\mathcal{E})$ vanishes as N grows), the results of \\cite{Forney1968} immediately imply that such a scheme drastically improves the error exponent compared to non-adaptive schemes (see Fig.\\ref{Fig:Exponents}). \n\nThe second validation scheme we consider was proposed by Yamamoto and Itoh in \\cite{YamaItoh1980} in the context of channel coding with clean feedback. Unlike Forney's scheme which requires only one bit of feedback, this scheme requires the decoder to feed back its decision. While perfect feedback is impractical in a communication system, in our model it is inherent and can be readily harnessed. After completing the search phase with resolution $\\delta$, the agent continues to probe the estimated target location, namely an interval of size $\\delta$. If the probed region contains the target, the output of the validation phase should look like a sequence of '1's passing through a BSC$(p[\\delta])$. Thus, if the validation output is typical w.r.t. to a binary source with $\\Pr('1')=1-p[\\delta]$, the agent outputs that region as the final decision. Otherwise, the whole search is repeated from scratch. Specifically, After the $N$ queries of the non-adaptive search, we probe the aforementioned region $\\lambda N$ more times, where $0\\leq \\lambda\\leq \\infty$ determines the tradeoff between rate and reliability. Let $\\mathcal{E}$ denote the event that the search is repeated. This happens if the wrong region has been chosen, or otherwise if the observations in the validation step were not typical. 
Both these events will have vanishing probabilities and therefore the rate will be negligibly affected; the average search length is now $\\mathbb{E}(\\tau) = \\frac {N(1+\\lambda)}{1-\\Pr(\\mathcal{E})}$. Following the derivations of \\cite{YamaItoh1980} with $\\lambda=\\frac {I(q^*;p[q^*])} {R} -1$, and noting that $\\delta$ can be made arbitrarily small, we obtain: \n\\begin{lemma}\nThe targeting rate-reliability tradeoff for non-adaptive scheme with a Yamamoto-Itoh validation is given by\n\\begin{equation*}\nE = C_1(p[0])\\cdot \\left(1-\\frac{R}{I(q^*;p[q^*])}\\right) \n\\end{equation*} \n\\end{lemma}\nNote that with this search strategy, we get better reliability than the optimal one for the $BSC(q^*)$ with feedback (given by Burnashev \\cite{burnashev1974interval}) since the validation is done over the least noisy channel (see Fig.\\ref{Fig:Exponents}).\n\n\n\\subsection{Two-Phase Search with Validation}\nIn this section, we show that a simple two-phase scheme with validation achieves the best possible performance, improving upon non-adaptive strategies (with and without validation) both in maximal targeting rate and in targeting rate-reliability tradeoff. \n \n\\begin{theorem}\nLet $p[\\cdot]$ be a measurement noise function. For any $\\alpha\\in (0,\\tfrac{1}{2})$, there exists a search scheme with error probability $\\varepsilon$ and resolution $\\delta$, satisfying \n\\begin{equation*}\n\\label{binC}\n\\mathbb{E} [\\tau]\\le \\left(\\frac{\\log(1\\slash\\alpha)}{C(p[q^*])} + \\frac{\\log (1\\slash\\delta)}{C(p[\\alpha])} + \\frac{\\log(1\\slash\\epsilon)}{C_1(p[\\delta])}\\right)\\left(1+\\mathrm{o}(1)\\right).\n\\end{equation*}\n\\end{theorem}\n\\begin{corollary}\nBy letting $\\alpha$ vanish much slower than $\\delta$, we conclude that the maximal targeting rate for adaptive schemes is given by \n \\begin{equation*}\n C(p[0]) \\stackrel{\\textnormal{def}}{=} \\max_{q\\in(0,\\frac{1}{2})} I(q,p[0])= I(\\tfrac{1}{2}, p[0]),\n \\end{equation*}\nwhich is the capacity of the least noisy BSC associated with the measurements, which is the best possible. The associated targeting rate-reliability tradeoff is \n\\begin{equation*}\n E(R) = C_1(p[0])\\left(1 -\\frac{R}{C(p[0])}\\right).\n\\end{equation*}\nwhich is also the best possible. \n\\end{corollary}\n\\begin{remark}\n Juxtaposing Theorem \\ref{thrm:non-adapt} and the Corollary above, we conclude that (unlike the case of constant interval-independent noise) adaptive search strategies outperform the optimal non-adaptive strategy in both targeting rate and reliability. \n\\end{remark}\n\\begin{proof}\nWe prove the theorem for a fixed $\\alpha$ and $\\delta,\\varepsilon\\to 0$. In the first search phase, the agent employs the optimal non-adaptive search strategy with $\\tau=\\log{N}$ and resolution $\\alpha$, i.e. with a vanishing rate $R = \\frac{\\log{1\\slash\\alpha}}{\\log{N}}$. At the end of this phase, the agent knows an interval of size $\\alpha$ containing the target with probability $1- \\textrm{o}(N)$.\n\nIn the second phase, the agent ``zooms-in'' and performs the search only within the interval obtained in the first phase. To that end, the agent employs the optimal non-adaptive search strategy with $\\tau=\\lambda N-\\log{N}$ and resolution $\\delta = 2^{-(\\lambda N-\\log{N})R}$, i.e. with rate $R = \\frac{\\log{1\\slash\\delta}}{\\lambda N-\\log{N}}$, with the query sets properly shrunk by a factor of $\\alpha$. 
We note that in this phase, all queried sets are of size smaller than $\\alpha\/2$, hence the associated noise is less that $p[\\alpha]$. Therefore, if the rate $R< C[p[\\alpha]]$, then at the end of this phase the agent knows an interval of size $\\delta$ containing the target with probability $1- \\textrm{o}(N)$. \n\nAt this point, the agent perform the Yamamoto-Itoh validation step of length $(1-\\lambda)N$, which queries a fixed interval of size $\\delta$. If not successful, the agent repeats the whole two-phase search from scratch. The expected stopping time of this procedure is $\\frac{N}{1-o(N)}$, and the error probability decays exponentially with an exponent controlled by trading off the search and validation as before, yielding the associated Burnashev behavior for the channel $p[\\delta]$. \n\\end{proof}\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{Exponents} \n\\caption{Error exponents (known velocity) for noise growing linearly with size: $p[0]=0.1, p[\\frac 1 2]=0.45$ (a) Random coding (b) Decision feedback (c) Burnashev's upper bound for BSC$(p[q*])$ (d) Yamamoto-Itoh validation for the non-adaptive scheme (e) Yamamoto-Itoh validation for BSC$(p[0])$}.\\label{Fig:Exponents}\n\\end{figure}\n\n\\section{Conclusions and Further Research}\nIn this paper, we considered the problem of acquiring a target moving with known\/unknown velocity on a circle starting from an unknown position, under the physically motivated observation model where the noise intensity increases with the size of the queried region. For a known velocity, we showed that unlike the constant noise model, there can be a large gap in performance (both in targeting rate and reliability) between adaptive and non-adaptive search strategies. The various rate-reliability tradeoffs discussed herein are depicted in Fig. \\ref{Fig:Exponents}. Furthermore, we demonstrated that the cost of accommodating an unknown velocity in the non-adaptive setting, is a factor of two in the targeting rate, as intuition may suggest. \n\nOne may also consider other search performance criteria, e.g., where the agent is cumulatively penalized by the size of either the queried region or its complement, according to the one containing the target. The rate-optimal scheme presented herein, which is based on a two-phase random search, may be far from optimal in this setup. In such cases we expect that sequential search strategies, e..g, ones based on posterior matching \\cite{Shayevitz11,naghshvar2013extrinsic}, would exhibit superior performance as they naturally shrink the queried region with time.\n\nOther research directions include more complex stochastic motion models, as well as searching for multiple targets (a ``multi-user'' setting). For the latter, preliminary results indicate that the gain reaped by using adaptive strategies vs. non-adaptive ones diminishes as the number of targets increases. \n \n\\section{Acknowledgement}\nThe authors would like to thank an anonymous reviewer for throughly reading the paper and for many useful comments. \n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Model Hamiltonian}\n\n\\subsection*{Main Terms}\n\nOur model consists of two Ni-Ligand clusters, roughly equivalent to two NiO$_6$ octahedra, each containing a Ni $2p$ shell, a Ni $3d$ shell, and a ligand shell. 
The ligand shell orbitals are defined as linear combinations of the actual oxygen $2p$ like Wannier-orbitals having the same rotation properties as the Ni $d$ orbitals within the $O_h$ point group used here \\cite{Ballhausen1968S,Haverkort_Wannier_PRB_2010S}. The general form of our Hamiltonian is:\n\\begin{equation}\nH = H_{LF_A} + H_{LF_B} + H_{mix},\n\\end{equation}\nwhere $A$ and $B$ refer to the short and long bond sites, respectively (or identical sites when no breathing distortion is present) and $H_{mix}$ is the part of the Hamiltonian that couples the two clusters together. Taken separately, $H_{LF_A}$ and $H_{LF_B}$ are independent multiplet ligand field theory Hamiltonians. For $H_{LF_A}$, we have:\n\\begin{equation}\nH_{LF_A} = H_U^{dd} + H_U^{pd} + H_{\\bm{l} \\cdot \\bm{s}}^{d} + H_{\\bm{l} \\cdot \\bm{s}}^{p} + H_{o}^{p} + H_{o}^{d} + H_{o}^{L} + H_{hyb}^{dL},\n\\end{equation}\nwith, \\\\\n\\begin{tabular}{ll}\n$H_U^{dd}$ & the Coulomb repulsion between two Ni $3d$ electrons including all multiplet effects,\\\\\n$H_U^{pd}$ & the Coulomb repulsion between a Ni $2p$ core and $3d$ valence electron including all multiplet effects,\\\\\n$H_{\\bm{l} \\cdot \\bm{s}}^{d}$ & the Ni $3d$ spin orbit interaction,\\\\\n$H_{\\bm{l} \\cdot \\bm{s}}^{p}$ & the Ni $2p$ core level spin orbit interaction,\\\\\n$H_{o}^{p}$ & the onsite energy of the Ni $2p$ core orbitals,\\\\\n$H_{o}^{d}$ & the orbital dependent onsite energy of the Ni $3d$ valence orbitals,\\\\\n$H_{o}^{L}$ & the orbital dependent onsite energy of the Ligand orbitals, and\\\\\n$H_{hyb}^{dL}$ & the hybridization strength between the Ni $3d$ and Ligand orbitals.\n\\end{tabular}\n\nThe Hamiltonian for site $B$ is analogous to $H_{LF_A}$. Below we list each term in the Hamiltonian in more detail.\n\n\\subsection*{\\texorpdfstring{On-site Energy of the Ni $2p$ Orbitals - $H_{o}^{p}$}{}}\n\nThe onsite energy of the Ni $2p$ core electrons is given as:\n\\begin{equation}\nH_{o}^{p} = \\epsilon_p \\sum_{\\tau} \\bm{p}^{\\dag}_{\\tau} \\bm{p}^{\\phantom{\\dag}}_{\\tau},\n\\end{equation}\nwith $\\tau$ labeling the 6 different Ni $2p$ spin-orbitals with $m=-1,0,1$ and $\\sigma=\\pm1\/2$, $\\bm{p}^{\\dag}_{\\tau}$ ($\\bm{p}^{\\phantom{\\dag}}_{\\tau}$) the operator creating (annihilating) an electron in orbital $\\tau$ and $\\epsilon_p$ defined in terms of $U_{dd}$, $U_{pd}$ and $\\Delta$ as \\cite{Zaanen_ZSA_PRL_1985S, Zaanen_NiPES_PRB_1986S, degroot2008S}:\n\\begin{equation}\n\\epsilon_{p} = \\frac{ 10\\Delta + \\left(1+n_d\\right) \\left(n_d\\frac{U_{dd}}{2}-\\left(10+n_d\\right)U_{pd}\\right)}{16+n_d}\n\\end{equation}\nwhere $n_d$ is the formal number of $3d$ electrons per Ni ($n_d = 7$ for the nickelates studied here).\n\n\\subsection*{\\texorpdfstring{On-site Energy of the Ni $3d$ Orbitals - $H_{o}^{d}$}{}}\n\nThe onsite energy of the Ni $3d$ valence electrons is given as:\n\\begin{equation}\nH_{o}^{d} = \\epsilon_d \\sum_{\\tau} \\bm{d}^{\\dag}_{\\tau} \\bm{d}^{\\phantom{\\dag}}_{\\tau} + Dq_i \\left( 6 \\sum_{\\tau\\in e_g} \\bm{d}^{\\dag}_{\\tau} \\bm{d}^{\\phantom{\\dag}}_{\\tau} - 4 \\sum_{\\tau\\in t_{2g}} \\bm{d}^{\\dag}_{\\tau} \\bm{d}^{\\phantom{\\dag}}_{\\tau} \\right),\n\\end{equation}\nwith $\\tau$ labeling the 10 different Ni $3d$ spin-orbitals belonging either to the $t_{2g}$ irreducible representation ($yz$, $xz$, and $xy$) or to the $e_g$ irreducible representation ($3z^2-r^2$, and $x^2-y^2$) with either spin up or spin down, $\\bm{d}^{\\dag}_{\\tau}$ ($\\bm{d}^{\\phantom{\\dag}}_{\\tau}$) the operator creating 
(annihilating) an electron in orbital $\\tau$, $\\epsilon_d$ the shell average energy defined in terms of $U_{dd}$, $U_{pd}$ and $\\Delta$ as \\cite{Zaanen_ZSA_PRL_1985S, Zaanen_NiPES_PRB_1986S, degroot2008S}:\n\\begin{align}\n\\epsilon_d = \\frac{10\\Delta - n_d\\left(31+n_d\\right)\\frac{U_{dd}}{2}-90U_{pd}}{16+n_d},\n\\end{align}\nand $Dq_i$ the onsite part of the cubic crystal-field splitting. The value of $Dq_i$ depends on the the breathing distortion, being larger for the smaller site. In terms of the non-breathing value $Dq_0$, the breathing distortion is approximated by Harrison's rules for hybridization \\cite{Harrison1983S} as\n\\begin{align}\nDq_i = Dq_0 \\left(1 + \\frac{\\delta d_i}{d_0}\\right)^{-4}\n\\end{align}\nwhere, $d_0$ is the average bond length and $\\delta d$ is the positive (negative) displacement from average for the long (short) bond octahedron $B$ ($A$).\n\n\\subsection*{\\texorpdfstring{On-site Energy of the Ligand Orbitals - $H_{o}^{L}$}{}}\n\nThe ligand orbitals are linear combinations of the valence states of the infinite solid such that the local Ni $d$ orbitals directly interact only with these orbitals \\cite{Haverkort_Wannier_PRB_2010S}. For each Ni spin-orbital there is exactly one Ligand orbital, independent of crystal symmetry and breathing distortion. The ligand orbitals can have a different onsite energy depending if they belong to the local $t_{2g}$ or $e_g$ irreducible representation. The Hamiltonian thus takes a very similar form as $H_{o}^{d}$ namely:\n\\begin{equation}\nH_{o}^{L} = \\epsilon_L \\sum_{\\tau} \\bm{L}^{\\dag}_{\\tau} \\bm{L}^{\\phantom{\\dag}}_{\\tau} + Tpp_i \\left( \\sum_{\\tau\\in e_g} \\bm{L}^{\\dag}_{\\tau} \\bm{L}^{\\phantom{\\dag}}_{\\tau} - \\sum_{\\tau\\in t_{2g}} \\bm{L}^{\\dag}_{\\tau} \\bm{L}^{\\phantom{\\dag}}_{\\tau} \\right),\n\\end{equation}\nwith $\\tau$ labeling the 10 different Ligand spin-orbitals belonging either to the $t_{2g}$ irreducible representation ($yz$, $xz$, and $xy$) or to the $e_g$ irreducible representation ($3z^2-r^2$, and $x^2-y^2$) with either spin up or spin down, $\\bm{L}^{\\dag}_{\\tau}$ ($\\bm{L}^{\\phantom{\\dag}}_{\\tau}$) the operator creating (annihilating) an electron in orbital $\\tau$, $\\epsilon_L$ the shell average energy defined in terms of $U_{dd}$, $U_{pd}$ and $\\Delta$ as \\cite{Zaanen_ZSA_PRL_1985S, Zaanen_NiPES_PRB_1986S, degroot2008S}:\n\\begin{align}\n\\epsilon_{L} = \\frac{\\left(1+n_d\\right)\\left(n_d\\frac{U_{dd}}{2}+6U_{pd}\\right) - \\left(6+n_d\\right)\\Delta}{16+n_d},\n\\end{align}\nand $T_{pp,i}$ roughly the hopping strength between two ligand O $2p$ orbitals \\cite{Haverkort_Wannier_PRB_2010S}. 
The value of $T_{pp,i}$ depends on the breathing distortion and can be expressed in terms of $\\delta d_i$, the positive (negative) displacement from average for the long (short) bond octahedron $B$ ($A$) and the non-breathing value $T_{pp}$, using rules defined by Harrison \\cite{Harrison1983S} as\n\\begin{align}\nT_{pp,i} = T_{pp} \\left(1 + \\frac{\\delta d_i}{d_0}\\right)^{-3}.\n\\end{align}\n\n\n\\subsection*{\\texorpdfstring{Hybridization Between Ni $3d$ and Ligand Orbitals - $H_{hyb}^{dL}$}{}}\n\nThe interaction between the Ni $3d$ orbitals and the Ligand orbitals is given as:\n\\begin{equation}\n\\label{Eqn:Hhyb}\nH_{hyb}^{dL} = \\sqrt{1-x} \\left( V_{e_{g}} \\sum_{\\tau\\in e_g} \\left( \\bm{d}^{\\dag}_{\\tau} \\bm{L}^{\\phantom{\\dag}}_{\\tau} + \\bm{L}^{\\dag}_{\\tau} \\bm{d}^{\\phantom{\\dag}}_{\\tau} \\right) + V_{t_{2g}} \\sum_{\\tau\\in t_{2g}} \\left( \\bm{d}^{\\dag}_{\\tau} \\bm{L}^{\\phantom{\\dag}}_{\\tau} + \\bm{L}^{\\dag}_{\\tau} \\bm{d}^{\\phantom{\\dag}}_{\\tau} \\right) \\right),\n\\end{equation}\nwith $\\tau$ labeling the 10 different Ni $3d$ or Ligand spin-orbitals belonging either to the $t_{2g}$ irreducible representation ($yz$, $xz$, and $xy$) or to the $e_g$ irreducible representation ($3z^2-r^2$, and $x^2-y^2$) with either spin up or spin down, $\\bm{d}^{\\dag}_{\\tau}$, $\\bm{L}^{\\dag}_{\\tau}$ ($\\bm{d}^{\\phantom{\\dag}}_{\\tau}$, $\\bm{L}^{\\phantom{\\dag}}_{\\tau}$) the operator creating (annihilating) an electron in orbital $\\tau$ and either the $d$ or Ligand shell. $V_{e_{g}}$ and $V_{t_{2g}}$ are the individual hopping strengths between the $d$ and Ligand orbitals. The parameter $x$ determines the ratio between the hopping within a single ligand field cluster and between two ligand-field clusters.\n\n\\subsection*{\\texorpdfstring{Coupling Between Cluster $A$ and $B$ - $H_{mix}$}{}}\n\nThe interaction between cluster $A$ and cluster $B$ is given as:\n\\begin{align}\n\\label{Eqn:Hmix}\nH_{mix} = \\sqrt{x} \\bigg( &V_{e_{g}} \\sum_{\\tau\\in e_g} \\left( \\bm{d}^{\\dag}_{A,\\tau} \\bm{L}^{\\phantom{\\dag}}_{B,\\tau} + \\bm{L}^{\\dag}_{B,\\tau} \\bm{d}^{\\phantom{\\dag}}_{A,\\tau} + \\bm{d}^{\\dag}_{B,\\tau} \\bm{L}^{\\phantom{\\dag}}_{A,\\tau} + \\bm{L}^{\\dag}_{A,\\tau} \\bm{d}^{\\phantom{\\dag}}_{B,\\tau} \\right) \\\\ \n\\nonumber &+ V_{t_{2g}} \\sum_{\\tau\\in t_{2g}} \\left( \\bm{d}^{\\dag}_{A,\\tau} \\bm{L}^{\\phantom{\\dag}}_{B,\\tau} + \\bm{L}^{\\dag}_{B,\\tau} \\bm{d}^{\\phantom{\\dag}}_{A,\\tau} + \\bm{d}^{\\dag}_{B,\\tau} \\bm{L}^{\\phantom{\\dag}}_{A,\\tau} + \\bm{L}^{\\dag}_{A,\\tau} \\bm{d}^{\\phantom{\\dag}}_{B,\\tau}\\right) \\bigg),\n\\end{align}\nwith the individual terms defined as in the previous section. The definition of the hybridization interaction using the parameter $x$ is such that the overall hopping strength is independent of the coupling between the two clusters. For perfect periodic boundary conditions $x=\\frac{1}{2}$. In this case one can create bonding and anti-bonding linear combinations of the ligand orbitals of cluster $A$ and cluster $B$ and the anti-bonding linear combination will be non-bonding with respect to the Ni $3d$ orbitals. Although this senario on first sight looks like a good cluster model, it highly overestimates the Ni-Ni exchange interactions. For nearest neighbor clusters only one out of six oxygens is shared and one might expect $x\\approx 1\/6$ to yield reasonable results, assuming 180 degree Ni-O-Ni bonds. 
We used $x$ as an empirical parameter, as one can not claim convergence with respect to cluster size for a two site calculation. Due to the presence of non-local excitations, the XAS spectrum is quite sensitive to $x$ and we find best agreement with experiment using $x = 0.35^2 = 0.1225$, which is quite close to the expected value of $1\/6$. The fact that we find a value slightly smaller than $1\/6$ is likely due in part to the octahedral tilts in the real materials, which decrease the Ni-O-Ni bond angle from 180 degrees and reduce the effective Ni-Ni hopping. \n\n\\subsection*{\\texorpdfstring{Coulomb Repulsion Between Two Ni $3d$ Electrons - $H_{U}^{dd}$}{}}\n\nThe onsite Coulomb repulsion between two $d$ electrons is defined as:\n\\begin{align}\nH_{U}^{dd} &= \\sum_{i,j} \\frac{1}{2}\\frac{e^2}{|r_i-r_j|}\\\\\n\\nonumber &= \\sum_{\\tau_1,\\tau_2,\\tau_3,\\tau_4} U_{\\tau_1,\\tau_2,\\tau_3,\\tau_4} \\bm{d}^{\\dag}_{\\tau_1} \\bm{d}^{\\dag}_{\\tau_2} \\bm{d}^{\\phantom{\\dag}}_{\\tau_3} \\bm{d}^{\\phantom{\\dag}}_{\\tau_4},\n\\end{align}\nwith,\n\\begin{align}\n U_{\\tau_1,\\tau_2,\\tau_3,\\tau_4} = -\\frac{1}{2} \\delta_{\\sigma_1,\\sigma_3} \\delta_{\\sigma_2,\\sigma_4}\n \\sum_{k=0,2,4} \n c^{\\left(k\\right)}\\left[l_1=2,m_1;l_3=2,m_3\\right]\n c^{\\left(k\\right)}\\left[l_4=2,m_4;l_2=2,m_2\\right]\n \\times F^{\\left(k\\right)},\n\\end{align}\nwhere $\\tau$ are combined spin and orbital indices, $\\sigma$ are spin indices, $l$ and $m$ are angular momentum indices ($l=2$ for $d$ electrons), $F^{(k)}$ are the radial (Slater) integrals, and \n\\begin{align}\nc^{\\left(k\\right)} \\left[l_1,m_1;l_2,m_2\\right] = \n\\Bra{Y_{m_1}^{\\left(l_1\\right) } }\nC^{\\left(k\\right)}_{m_1-m_2} \n\\Ket{Y_{m_2}^{\\left(l_2\\right)}}\n\\end{align}\nare angular integrals of spherical harmonics $Y_{m}^{\\left(l\\right) }$ and renormalized spherical harmonics $C^{\\left(k\\right)}_{m} = \\sqrt{\\frac{4 \\pi}{2 k +1}} Y_{m}^{\\left(k\\right)}$.\nThe Slater integrals $F^{(2)}$ and $F^{(4)}$ are related to the multipole interaction between two $d$ electrons and responsible for the multiplet splitting between the different levels. They can be approximated by $J_H$ albeit at the loss of the experimentally observed multiplet structure. $F^{(0)}$ is the spherical averaged Coulomb interaction, i.e. the monopole part of the interaction. 
$F^{(0)}$ is related to $U$ by:\n\\begin{equation}\nF^{(0)} = U + \\frac{2}{63}(F^{(2)} + F^{(4)}).\n\\end{equation}\n\n\n\\subsection*{\\texorpdfstring{Coulomb repulsion between a Ni $2p$ and Ni $3d$ electron - $H_{U}^{pd}$}{}}\n\nThe onsite interaction between the Ni $2p$ and Ni $3d$ electrons is given as:\n\\begin{align}\nH_{U}^{pd} &= \\sum_{\\tau_1,\\tau_2,\\tau_3,\\tau_4} 2 U_{\\tau_1,\\tau_2,\\tau_3,\\tau_4}^G \\bm{d}^{\\dag}_{\\tau_1} \\bm{p}^{\\dag}_{\\tau_2} \\bm{p}^{\\phantom{\\dag}}_{\\tau_3} \\bm{d}^{\\phantom{\\dag}}_{\\tau_4} + 2 U_{\\tau_1,\\tau_2,\\tau_3,\\tau_4}^F \\bm{d}^{\\dag}_{\\tau_1} \\bm{p}^{\\dag}_{\\tau_2} \\bm{d}^{\\phantom{\\dag}}_{\\tau_3} \\bm{p}^{\\phantom{\\dag}}_{\\tau_4},\n\\end{align}\nwith \n\\begin{align}\n U_{\\tau_1,\\tau_2,\\tau_3,\\tau_4}^F = -\\frac{1}{2} \\delta_{\\sigma_1,\\sigma_3} \\delta_{\\sigma_2,\\sigma_4}\n \\sum_{k=0,2} \n c^{\\left(k\\right)}\\left[l_1=2,m_1;l_3=2,m_3\\right]\n c^{\\left(k\\right)}\\left[l_4=1,m_4;l_2=1,m_2\\right]\n \\times F^{\\left(k\\right)}_{pd},\n\\end{align}\nand \n\\begin{align}\n U_{\\tau_1,\\tau_2,\\tau_3,\\tau_4}^G = -\\frac{1}{2} \\delta_{\\sigma_1,\\sigma_3} \\delta_{\\sigma_2,\\sigma_4}\n \\sum_{k=1,3} \n c^{\\left(k\\right)}\\left[l_1=2,m_1;l_3=1,m_3\\right]\n c^{\\left(k\\right)}\\left[l_4=2,m_4;l_2=1,m_2\\right]\n \\times G^{\\left(k\\right)}_{pd}.\n\\end{align}\nThe monopole part of the $p$-$d$ interaction ($F^{(0)}_{pd}$) is related to $Q = U_{pd}$ by:\n\\begin{equation}\nF^{(0)}_{pd} = U_{pd} + \\frac{1}{15}G^{1}_{pd} + \\frac{3}{70}G^{3}_{pd}\n\\end{equation}\n\n\\vspace*{10pt}\n\n\\subsection*{\\texorpdfstring{Ni $3d$ and $2p$ spin-orbit coupling - $H_{\\bm{l} \\cdot \\bm{s}}^{d}$ and $H_{\\bm{l} \\cdot \\bm{s}}^{p}$}{}}\n\nThe spin-orbit coupling interaction within either the $2p$ or $3d$ shell of Ni is given as:\n\\begin{align}\nH_{\\bm{l \\cdot s}} = \\xi\\sum_{i} \\bm{l}_i \\cdot \\bm{s}_i =\n\\xi\\sum_{m=-l}^{m=l} \\sum_{\\sigma} m\\sigma \\bm{a}^{\\dagger}_{m\\sigma} \\bm{a}_{m\\sigma} +\n\\frac{\\xi}{2}\\sum_{m=-l}^{m=l-1} \\sqrt{l+m+1}\\sqrt{l-m}\n\\left( \n\\bm{a}^{\\dagger}_{m+1,\\downarrow} \\bm{a}_{m,\\uparrow} + \n\\bm{a}^{\\dagger}_{m,\\uparrow} \\bm{a}_{m+1,\\downarrow}\n\\right)\n\\end{align}\nwhere here $m$ is the orbital index, $l$ the angular momentum, i.e. either $p$ or $d$, $\\sigma$ is the spin index, and $\\xi$ the coupling constant. The operator $\\bm{a}^{\\dagger}$ corresponds to $\\bm{d}^{\\dagger}$ or $\\bm{p}^{\\dagger}$ for the valence $3d$ or core $p$ shells respectively.\n\n\\vspace*{10pt}\n\n\\subsection*{Hopping Parameter Definitions}\n\nBoth the inter- and intra-cluster hybridization interactions are affected by the breathing distortion. This dependence is contained in the hopping integrals $V_{e_{g}}$ and $V_{t_{2g}}$ of equations \\ref{Eqn:Hhyb} and \\ref{Eqn:Hmix}. In terms of the non-breathing values $V_{e_{g},0}$ and $V_{t_{2g},0}$, the hybridization parameters are defined using Harrison's rules \\cite{Harrison1983S}, as\n\\begin{align}\nV_{e_{g}} &= V_{e_{g},0} \\left( 1 + \\frac{\\delta d}{d_0} \\right)^{-4} \\\\\nV_{t_{2g}} &= V_{t_{2g},0} \\left( 1 + \\frac{\\delta d}{d_0} \\right)^{-4}\n\\end{align}\nwhere $\\delta d$ is the bond length displacement as defined above. The values used for $V_{e_{g},0}$ and $V_{t_{2g},0}$ are provided in the section below. \n\n\\clearpage\n\n\\section*{Parameter Values}\n\nParameter values enter into our model in the form of Coulomb interactions, on-site energies, spin orbit interactions, and hopping integrals. 
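As a small worked example of how these inputs combine, the two relations for $F^{(0)}$ and $F^{(0)}_{pd}$ given above can be evaluated numerically; the sketch below simply anticipates the eV values listed in the next paragraph and is included for illustration only.
\\begin{verbatim}
# Minimal sketch: monopole Slater integrals from the relations in the text.
U_dd, U_pd   = 6.0, 7.0          # monopole Coulomb parameters (eV)
F2_dd, F4_dd = 10.622, 6.636     # d-d multipole Slater integrals (eV)
G1_pd, G3_pd = 5.066, 2.882      # p-d exchange Slater integrals (eV)

F0_dd = U_dd + (2.0/63.0)*(F2_dd + F4_dd)       # F^(0) of the d-d interaction
F0_pd = U_pd + G1_pd/15.0 + (3.0/70.0)*G3_pd    # F^(0) of the p-d interaction

print(round(F0_dd, 3))   # approximately 6.548 eV
print(round(F0_pd, 3))   # approximately 7.461 eV
\\end{verbatim}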
The values for these parameters have been quite well established over several decades of core level spectroscopy and other techniques, and here we do not deviate in any way from standard values (discussions of typical values can be found in references \\onlinecite{degroot2008S, Zaanen_NiPES_PRB_1986S, Zaanen_ZSA_PRL_1985S, Haverkort_Wannier_PRB_2010S, Ballhausen1968S}, among many others). For the monopole Coulomb interaction parameters we use $U_{dd} = 6$ eV and $U_{pd} = 7$ eV. For the charge transfer energy we use $\\Delta = -0.5$ eV (see the following paragraph for a discussion of the exact definition of $\\Delta$). For the non-breathing-distorted values of on-site energies we use $10Dq = 0.95$ eV and $T_{pp} = 0.75$ eV. For non-breathing-distorted intra-cluster hopping integrals we use $V_{e_{g},0} = 3.0$ eV and $V_{t_{2g},0} = 1.74$ eV. For inter-cluster hopping, we find best agreement with experiment using $V_{I} = \\sqrt{x} = 0.35$. Spin orbit interaction parameters are taken as the atomic values for Ni $3d^7$, $\\xi_{2p} = 11.506$ eV and $\\xi_{3d} = 91$ meV. Finally, the multipole Coulomb interaction parameters are taken as 80\\% of their atomic Hartree-Fock values for Ni $3d^7$ \\cite{Cowan1981S}: $F^2_{dd} = 10.622$, $F^4_{dd} = 6.636$, $F^2_{pd} = 6.680$, $G^1_{pd} = 5.066$, and $G^3_{pd} = 2.882$, all expressed in units of electron volts.\n\nThere exist several different ways to define the charge transfer energy $\\Delta$. As detailed in the preceding sections, in our model $\\Delta$ contributes to the shell average energies $\\epsilon_d$, $\\epsilon_L$, and $\\epsilon_p$. Thus, our value of $\\Delta = -0.5$ eV sets the energy of the $d^8\\underline{L}$ configuration 0.5 eV lower than the $d^7$ configuration \\emph{before} the inclusion of multiplet, spin-orbit, crystal field, and ligand-ligand hopping terms. These terms all modify the energy separation between the lowest $d^7$ and $d^8\\underline{L}$ states, thus affecting the effective charge transfer energy. For example, the ligand-ligand hopping shifts the ligand $e_g$ orbitals up by $T_{pp}$ and shifts the ligand $t_{2g}$ orbitals down by $T_{pp}$, which ends up shifting the $d^8\\underline{L}$ configuration lower by $T_{pp}$ with respect to the $d^7$ configuration, thus decreasing the effective delta further. Once all of the on-site energy effects are considered, we find that the lowest $d^8\\underline{L}$ eigenstate is 0.6 eV lower than the lowest $d^7$ eigenstate (before the inclusion of hybridization), thus giving an effective charge transfer energy of $-0.6$ eV.\n\n\n\\section*{Spectroscopy Simulations}\n\nAfter finding the ground state wavefunction of our model, we simulate the core XAS and MCD spectra using a Lanczos-based Green's function method \\cite{Dagotto_RMP1994S, Haverkort_Wannier_PRB_2010S, Haverkort_DMFTXAS_EPL2014S}. For XAS we calculate the isotropic signal via the sum of spectra using $z$, left circular, and right circular polarized dipole transition operators. The MCD response (i.e. the fundamental MCD spectrum, or $f^{\\left(1\\right)}$ component of the atomic scattering tensor) is the difference between left and right circular polarized spectral functions.\n\nThe resonant magnetic diffraction signal of the nickelates has been intensively studied of late, with experiments probing the collinearity of moments via the azimuthal dependence of the diffraction signal \\cite{Frano_OrbControl_PRL_2013S}. 
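Before specializing to the diffraction geometry, the polarization bookkeeping of the previous paragraph can be made explicit with a toy example. In the sketch below the three polarized spectra are plain Lorentzians standing in for the Lanczos Green's-function output (line positions, widths and the energy grid are illustrative and carry no physical significance); only the two combinations matter, namely the polarization sum for the isotropic XAS and the left minus right difference for the MCD.
\\begin{verbatim}
# Minimal sketch of the polarization combinations used for XAS and MCD.
import numpy as np

omega = np.linspace(850.0, 875.0, 1001)      # illustrative photon-energy grid (eV)

def lorentzian(w, w0, gamma, amp):
    return amp*gamma/np.pi / ((w - w0)**2 + gamma**2)

I_z     = lorentzian(omega, 853.0, 0.4, 1.00)   # z-polarized spectrum (toy)
I_left  = lorentzian(omega, 853.1, 0.4, 1.10)   # left circular (toy)
I_right = lorentzian(omega, 852.9, 0.4, 0.90)   # right circular (toy)

xas_iso = I_z + I_left + I_right   # isotropic XAS: sum over the three polarizations
mcd     = I_left - I_right         # fundamental MCD (f^(1)) spectrum

print(xas_iso.max(), mcd.max())
\\end{verbatim}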
For this work we are interested in the energy dependence of the RMD signal, so for simplicity we assume a collinear arrangement of spins with a structure factor of \n\\begin{align}\nS\\left[\\bm{q}=\\left(\\sfrac{1}{4},\\sfrac{1}{4},\\sfrac{1}{4}\\right)\\right] = \n1\\cdot f^{\\left(1\\right)}_{A1} + i\\cdot f^{\\left(1\\right)}_{B1} - 1\\cdot f^{\\left(1\\right)}_{A2} - i\\cdot f^{\\left(1\\right)}_{B2}\n\\end{align} \nwhere $f^{\\left(1\\right)}$ denote the complex valued, energy dependent, magnetic circular dichroic form factors and $A1$ and $A2$ refer to short bond sites in different antiferromagnetic planes (similar for long bond sites $B$). Given an antiferromagnetic alignment of $A1$ and $A2$ and thus a negation of the form factor ($f^{\\left(1\\right)}_{A1} = -f^{\\left(1\\right)}_{A2} \\equiv f^{\\left(1\\right)}_{A}$, and similar for $B$), our structure factor simplifies to\n\\begin{align}\nS\\left[\\bm{q}=\\left(\\sfrac{1}{4},\\sfrac{1}{4},\\sfrac{1}{4}\\right)\\right] = \n2 \\left( f^{\\left(1\\right)}_{A} + i\\cdot f^{\\left(1\\right)}_{B} \\right)\n\\end{align} \nand, neglecting polarization effects for simplicity, the spectral intensity is therefore\n\\begin{align}\n\\label{Eqn:RMD}\nI\\left(\\omega\\right) \\propto \\left| f^{\\left(1\\right)}_{A}\\left(\\omega\\right) + i f^{\\left(1\\right)}_{B}\\left(\\omega\\right) \\right|^2\n\\end{align}\nwhere we have restored the dependence on energy, $\\omega$. Note that this expression does assume a particular domain structure, as $S_A \\leq S_B$ in our model, i.e. the ordering in Eqn. \\ref{Eqn:RMD} is (~$\\SpinUp[6pt]~\\SpinUp[10pt]~\\SpinDn[6pt]~\\SpinDn[10pt]~$). One can assume the alternate ordering (~$\\SpinUp[10pt]~\\SpinUp[6pt]~\\SpinDn[10pt]~\\SpinDn[6pt]~$) with $f^{\\left(1\\right)}_{A}$ and $f^{\\left(1\\right)}_{B}$ switched in Eqn. \\ref{Eqn:RMD} and obtain a slightly different energy dependence of the magnetic scattering. The spectra of both arrangements are shown along with the XAS for $\\delta d = 0.03$\\AA~in Fig. \\ref{Fig:SRMD} below. While both spectra are qualitatively similar, having the magnetic scattering intensity strongest at the energy of the first XAS peak and a small shoulder at higher energy, there are differences in overall intensity and in the fine details of the peaks. In the manuscript, we plot the average of these two responses.\n\n\n\\begin{figure}[t]\n\\includegraphics[width=4in]{Fig_Supp_XAS_RMD}\n\\caption{Comparison of resonant magnetic diffraction at a breathing distortion of $\\delta d = 0.03$\\AA ~for the two different spin arrangements shown. }\n\\label{Fig:SRMD}\n\\end{figure}\n\n\\vspace*{40pt}\n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nArabish is the romanization of Arabic Dialects (ADs) used for informal messaging, especially in social networks.\\footnote{Also known as Arabizi (from the combination between the Arabic words \\textipa{\"}Arab\\textipa{\"}, \\textipa{[\"Qarab]} and \\textipa{\"}English\\textipa{\"} \\textipa{[i:n\"Zli:zi:]}, or the English one: \\textipa{\"}easy\\textipa{\"}, \\textipa{[\"i:zi]}), Franco-Arabic, Arabic Chat Alphabet, ACII-ized Arabic, and many others. 
\\textipa{\"}Arabish\\textipa{\"} is probably the result of the union between \\textipa{[\"Qarab]} and \\textipa{\"}English\\textipa{\"}.} This writing system provides an interesting ground for linguistic research, computational as well as sociolinguistic, mainly due to the fact that it is a spontaneous representation of the ADs, and because it is a linguistic phenomenon in constant expansion on the web. \nDespite such potential, little research has been dedicated to Tunisian Arabish (TA). In this paper we describe the work we carried to develop a flexible and multi-purpose TA resource. This will include a TA corpus, together with some tools that could be useful for analyzing the corpus and for its extension with new data.\n\nFirst of all, the resource will be useful to give an overview of the TA. At the same time, it will be a reliable representation of the Tunisian dialect (TUN) evolution over the last ten years: the collected texts date from 2009 to present. This selection was done with the purpose to observe to what extent the TA orthographic system has evolved toward a writing convention.\nTherefore, the TArC will be suitable for phonological, morphological, syntactic and semantic studies, both in the linguistic and the Natural Language Processing (NLP) domains.\nFor these reasons, we decided to build a corpus which could highlight the structural characteristics of TA through different annotation levels, including Part of Speech (POS) tags and lemmatization. In particular, to facilitate the match with the already existing tools and studies for the Arabic language processing, we provide a transcription in Arabic characters at token level, following the Conventional Orthography for Dialectal Arabic guidelines \n\\emph{CODA*} (CODA \\emph{star}) \\cite{habash-etal-2018-unified} and taking into account the specific guidelines for TUN (CODA TUN) \\cite{DBLP:conf\/lrec\/ZribiBMEBH14}.\nFurthermore, even if the translation is not the main goal of this research, we have decided to provide an Italian translation of the TArC's texts.\\footnote{We considered the Italian translation as an integrated part of the annotation phase that would have cost us less effort in addition to us if carried out in our mother-tongue. The possibility of a TArC English translation is left open for a later time.}\n\nEven though in the last few years ADs have received an increasing attention by the NLP community, many aspects have not been studied yet and one of these is the Arabish code-system. The first reason for this lack of research is the relatively recent widespread of its use: before the advent of the social media, Arabish usage was basically confined to text messaging.\nHowever, the landscape has changed considerably, and particularly thanks to the massive registration of users on Facebook since 2008. At that time, in Tunisia there were still no Arabic keyboards, neither for Personal Computers, nor for phones, so Arabic-speaking users designed TA for writing in social media (Table \\ref{tab1}). \nA second issue that has held back the study of Arabish is its lack of a standard orthography, and the informal context of use. 
It is important to note that also the ADs lack a standard code-system, mainly because of their oral nature.\nIn recent years the scientific community has been active in producing various sets of guidelines for dialectal Arabic writing in Arabic characters: CODA (Conventional Orthography for Dialectal Arabic) \\cite{habash-etal-2012-conventional}.\n\\newline\n\nThe remainder of the paper is organized as follows: section~\\ref{sec:StART} is an overview of NLP studies on TUN and TA; section~\\ref{sec:TD_TA} describes TUN and TA; section~\\ref{sec:TArC} presents the TArC corpus building process; section~\\ref{sec:pro} explains preliminary experiments with a semi-automatic transcription and annotation procedure, adopted for a faster and simpler construction of the TArC corpus; conclusions are drawn in section~\\ref{sec:concl}\n\n\\section{Related Work}\n\\label{sec:StART}\n\nIn this section, we provide an overview of work done on automatic processing of TUN and TA. As briefly outlined above, many studies on TUN and TA aim at solving the lack of standard orthography. The first Conventional Orthography for Dialectal Arabic (CODA) was for Egyptian Arabic \\cite{habash-etal-2012-conventional} and it was used by \n \\newcite{bies2014transliteration} for Egyptian Arabish transliteration into Arabic script.\nThe CODA version for TUN (CODA TUN) was developed by \\newcite{DBLP:conf\/lrec\/ZribiBMEBH14}, and was used in many studies, like \\newcite{boujelbane2015traitements}. Such work presents a research on automatic word recognition in TUN. Narrowing down to the specific field of TA, CODA TUN was used in \\newcite{masmoudi2015arabic} to realize a TA-Arabic script conversion tool, implemented with a rule-based approach.\nThe most extensive CODA is CODA*, a unified set of guidelines for 28 Arab city dialects \\cite{habash-etal-2018-unified}. For the present research, CODA* is considered the most convenient guideline to follow due to its extensive applicability, which will support comparative studies of corpora in different ADs. \nAs we already mentioned, there are few NLP tools available for Arabish processing in comparison to the amount of NLP tools realized for Arabic.\nConsidering the lack of spelling conventions for Arabish, previous effort has focused on automatic transliteration from Arabish to Arabic script, e.g. \\newcite{chalabi2012romanized}, \\newcite{darwish2013arabizi}, and \\newcite{al2014automatic}.\nThese three work are based on a character-to-character mapping model that aims at generating a range of alternative words that must then be selected through a linguistic model. 
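To make this strategy concrete, the sketch below maps each Arabish character or digraph to a small set of candidate Arabic letters (a handful of correspondences of the kind collected in table~\\ref{tab1}) and enumerates the alternative transliterations that the linguistic model then has to rank. It is a toy illustration only: the greedy segmentation and the length-based scoring function are placeholders and do not reproduce any of the cited systems.
\\begin{verbatim}
# -*- coding: utf-8 -*-
# Minimal sketch of a character-to-character mapping approach: generate
# candidate Arabic-script transliterations of an Arabish token, then rank
# them with a placeholder score (standing in for a real linguistic model).
from itertools import product

CANDIDATES = {                      # a few correspondences, cf. table 1
    "3": ["ع"], "7": ["ح"], "9": ["ق"], "5": ["خ"], "kh": ["خ"],
    "ch": ["ش"], "b": ["ب"], "k": ["ك"], "l": ["ل"], "r": ["ر"],
    "t": ["ت", "ط"], "s": ["س", "ص"], "d": ["د", "ض"],
    "a": ["", "ا"], "e": ["", "ا"], "i": ["", "ي"], "ou": ["و"],
}

def segment(token):
    """Greedy left-to-right segmentation into known digraphs/characters."""
    units, i = [], 0
    while i < len(token):
        step = 2 if token[i:i+2] in CANDIDATES else 1
        units.append(token[i:i+step])
        i += step
    return units

def candidates(token):
    """All transliterations reachable through the character mapping."""
    pools = [CANDIDATES.get(u, [u]) for u in segment(token)]
    return ["".join(p) for p in product(*pools)]

def score(word):
    """Placeholder for the linguistic model: prefer shorter outputs."""
    return len(word)

print(sorted(candidates("kelb"), key=score))   # 'kelb' (dog): best guess first
\\end{verbatim}
Actual systems replace the toy scoring above with a proper linguistic model, which is precisely the selection step the cited works focus on.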
A different method is presented in \\newcite{younes2018sequence}, in which the authors present a sequence-to-sequence-based approach for TA-Arabic characters transliteration in both directions \\cite{Sutskever-2014-SSL-2969033.2969173,younes2018sequence}.\n\nRegardless of the great number of work done on TUN automatic processing, there are not a lot of TUN corpora available for free \\cite{younes2018survey}.\nTo the best of our knowledge there are only five TUN corpora freely downloadable: one of these is the PADIC \\citelanguageresource{PADIC}, composed of 6,400 sentences in six Arabic dialects, translated in \\emph{Modern Standard Arabic} (MSA), and annotated at sentence level.\\footnote{The Arabic dialects of the PADIC are: TUN (Sfax), two dialects of Algeria, Syrian, Palestinian and Moroccan \\cite{meftouh2018padic}.} Two other corpora are the Tunisian Dialect Corpus Interlocutor (TuDiCoI) \\citelanguageresource{Tudicoi} and the Spoken Tunisian Arabic Corpus (STAC) \\citelanguageresource{stac}, which are both morpho-syntactically annotated. The first one is a spoken task-oriented dialogue corpus, which gathers a set of conversations between staff and clients recorded in a railway station. TuDiCoI consists of 21,682 words in client turns \\cite{graja2013discriminative}.\\footnote{The annotation was carried out only for 7,814 word.}\nThe STAC is composed of 42,388 words collected from audio files downloaded from the web (as TV channels and radio stations files) \\cite{zribi2015spoken}. A different corpus is the TARIC \\citelanguageresource{Taric}, which contains 20 hours of TUN speech, transcribed in Arabic characters \\cite{masmoudi2014corpus}.\\footnote{The 20 hours recorded are equivalent to 71,684 words.}\n The last one is the TSAC \\citelanguageresource{Tsac}, containing 17k comments from Facebook, manually annotated to positive and negative polarities \\cite{medhaffar2017sentiment}. This corpus is the only one that contains TA texts as well as texts in Arabic characters. As far as we know there are no available corpora of TA transcribed in Arabic characters which are also morpho-syntactically annotated.\nIn order to provide an answer to the lack of resources for TA, we decided to create TArC, a corpus entirely dedicated to the TA writing system, transcribed in CODA TUN and provided with a lemmatization level and POS tag annotation. \n\n\\section{Characteristics of Tunisian Arabic and Tunisian Arabish}\n\\label{sec:TD_TA}\n\nThe Tunisian dialect (TUN) is the spoken language of Tunisian everyday life, commonly referred to as \\RL{\u0627\u0644\u062f\u064e\u0651\u0627\u0631\u0650\u062c\u064e\u0629}, \\textit{ad-d\u0101rija}, \\RL{\u0627\u0644\u0639\u064e\u0627\u0645\u0650\u0651\u064a\u064e\u0651\u0629}, \\textit{al-'\u0101mmiyya}, or \\RL{\u0627\u0644\u062a\u064f\u0651\u0648\u0646\u0652\u0633\u0650\u064a}, \\textit{\\textipa{@t-t\u016bns\u012b}}.\nAccording to the traditional diatopic classification, TUN belongs to the area of Maghrebi Arabic, of which the other main varieties are Libyan, Algerian, Moroccan and the\n\\textsubdot{H}ass\u0101n\u012bya variety of Mauritania\\footnote{The main geographical macro-areas, also called geolects, are the area of the Levant or Syro-Palestinian, Egypt and Sudan, Mesopotamia, Maghreb (North Africa) and the Arabian Peninsula. 
\n} \\cite{durand2009dialettologia}.\nArabish is the transposition of ADs, which are mainly spoken systems, into written form, thus turning into a quasi-oral system (this topic will be discussed in section \\ref{subsec:TA_system}). In addition, Arabish is not realized through Arabic script and consequently it is not subject to the Standard Arabic orthographic rules. As a result, it is possible to consider TA as a faithful written representation of the spoken TUN \\cite{akbar2019arabizi}. \n\n\\subsection{Tunisian Arabic}\n\nThe following list provides an excerpt of the principal features of TUN, which, through the TArC, would be researched in depth among many others. \\footnote{For a detailed description please refer to \\cite{durand2009dialettologia}, \\cite{marccais1977esquisse}.}\n\n\nAt the phonetic level, some of the main characteristics of TUN, and Maghrebi Arabic in general, are the following: \\\\\n\\begin{adjustwidth}{1em}{0pt}\n\\textbf{*} Strong influence of the Berber substratum, to which it is possible to attribute the conservative phonology of TUN consonants.\n\\end{adjustwidth}\n\n\\begin{adjustwidth}{1em}{0pt}\n\\textbf{*} Presence of new emphatic phonemes, above all [\\textsubdot{r}], [\\textsubdot{l}], [\\textsubdot{b}].\\\\ \n\\textbf{*} Realization of the voiced post-alveolar affricate [\\textdyoghlig] as fricative \\textipa{[Z]}. \\\\\n\\textbf{*} Overlapping of the pharyngealized voiced alveolar stop \\textipa{[d\\super Q]}, <\\RL{\u0636}>, with the fricative \\textipa{[D\\super Q]}, <\\RL{\u0638}>. \\\\\n\\textbf{*} Preservation of a full glottal stop \\textipa{[P]} mainly in cases of loans from Classical Arabic (CA) or exclamations and interjections of frequent use.\n\\textbf{*} Loss of short vowels in open syllables.\n\\newpage\n\\textbf{*} Monophthongization.\\footnote{Reduction of the diphthongs \\textipa{[aw]} and \\textipa{[aj]} to \\textipa{[u:]} and \\textipa{[i:]} in pre-Hilalian dialects, and to \\textipa{[o:]} and \\textipa{[e:]} in the Hilalian ones.} In TUN <\\RL{\u0628\u064e\u064a\u062a}>, \\textipa{[\"baijt]}, \\textipa{\"}house\\textipa{\"}, becomes \\textipa{[\"bi:t]} meaning \\textipa{\"}room\\textipa{\"}. \\\\\n\\textbf{*} Palatalization of \u0101: Im\u0101la, <\\RL{\u0625\u0645\u0627\u0644\u0629}>, literally \\textipa{\"}inclination\\textipa{\"}.\n(In TUN the phenomenon is of medium intensity.)\nThereby the word <\\RL{\u0628\u0627\u0628}>, \\textipa{[\"ba:b]}, \\textipa{\"}door\\textipa{\"},\nbecomes \\textipa{[\"bE:b]}. \\\\ \n\\textbf{*} Metathesis.\n(Transposition of the first vowel of the word.\nIt occurs when non-conjugated verbs or names without suffix begin with the\nsequence CCvC, where C stands for ungeminated consonant, and\n\\textipa{\"}v\\textipa{\"} for short vowel. 
When a suffix is added to this type of\nname, or a verb of this type is conjugated, the first vowel changes position\ngiving rise to the CvCC sequence.)\nIn TUN it results in: \\textipa{\"}(he) has\nunderstood\\textipa{\"}: \\\\ <\\RL{\u0641\u0652\u0647\u0650\u0645}>, \\textipa{[\"fh@m]}, \\textipa{\"}(she) has\nunderstood\\textipa{\"}: <\\RL{\u0641\u0650\u0647\u0652\u0645\u0650\u062a}>, \\textipa{[\"f@hm@t]} or\n\\textipa{\"}leg\\textipa{\"}: <\\RL{\u0631\u0652\u062c\u0650\u0644}>, \\textipa{[\"rZ@l]}, \\textipa{\"}my\nleg\\textipa{\"}: <\\RL{\u0631\u0650\u062c\u0652\u0644\u0650\u064a}>, \\textipa{[\"r@Zli]}.\\\\\n\n\\end{adjustwidth} \n\nRegarding the morpho-syntactic level, TUN presents:\\\\\n\\begin{adjustwidth}{1em}{0pt}\n\\textbf{*} Addition of the prefix \/-n\/ to first person verbal morphology in \\textit{mu\\textsubdot{d}\u0101ri'} (imperfective).\\\\\n\\textbf{*} Realization of passive-reflexive verbs through the morpheme \/-t\/ \\footnote{The morpheme \/-t\/ can be traced back to the same morpheme present in the V and VI verbal patterns of CA \\cite{mion2004osservazioni}.} prefixed to the verb as in the example:\\\\ <\\RL{\u0633\u0648\u0631\u064a\u0651\u0629 \u0645\u0627\u0644\u062d\u064e\u0641\u0652\u0635\u064a\u0651\u0629 \u062a\u0652\u062a\u0650\u0644\u0652\u0628\u0650\u0633}>, \n\\textipa{[su:\"ri:j:a m@l-\\textcrh af\"s\\super Qij:a t-\"t@lb@s]}, \\textipa{\"}the shirts of \\textsubdot{H}af\\textsubdot{s}iya\\footnote{ \\textsubdot{H}af\\textsubdot{s}iya is a neighborhood in the Med\u012bna of Tunis, known for its great daily fr\u012bp (second-hand market).} are not bad\\textipa{\"}, (lit: \\textipa{\"}they dress\\textipa{\"}). \\\\\n\\textbf{*} Loss of gender distinction at the 2\\super{nd} and 3\\super{rd} persons, at verbal and pronominal level. \\\\\n\\textbf{*} Disappearance of the dual form from verbal and pronominal inflexion.\nThere is a residual of pseudo-dual in some words fixed in time in their dual form. \\\\\n\\textbf{*} Loss of relative pronouns flexion and replacement with the invariable form <\\RL{\u0627\u0650\u0644\u0651\u064a}>, \\textipa{[@l:i]}.\\\\\n\\textbf{*} Use of presentatives \/\\textsubdot{r}\u0101-\/ and \/h\u0101-\/ with the meaning of \\textipa{\"}here\\textipa{\"}, \\textipa{\"}look\\textipa{\"}, as in the example in TUN: <\\RL{\u0631\u0627\u0646\u064a \u0645\u064e\u062e\u0652\u0646\u0648\u0642}>,\n\\textipa{[\"}\\textsubdot{r}\\textipa{a:ni: m@x\"nu:q]}, \\textipa{\"}here I am asphyxiated (by problems)\\textipa{\"}, or in <\\RL{\u0647\u0627\u0643 \u062f\u064e\u0628\u064e\u0651\u0631\u0652\u062a\u0652\u0647\u0627}>, \\textipa{[\"ha:-k d@\"b:@rt-ha:]}, \\textipa{\"}here you are, finding it (the solution)\\textipa{\"} hence: \\textipa{\"}you were lucky\\textipa{\"}. \\\\\n\\textbf{*} Presence of circumfix negation marks, such as <~<\\RL{\u0645\u0627}>, \\textipa{[ma]} + verb + <\\RL{\u0634}>, \\textipa{[S]}>. The last element of this structure must be omitted if there is another negation, such as the Tunisian adverb <\\RL{\u0639\u064f\u0645\u0652\u0631}>, \\textipa{[\"Qomr]}, \\textipa{\"}never\\textipa{\"}, as in the structure: <\\textipa{[\"Qomr]} + personal pronoun suffix + \\textipa{[m@]} + perfect verb>. 
This construction is used to express the concept of \\textipa{\"}never having done\\textipa{\"} the action in question, as in the example:\n<\\RL{\u0639\u064f\u0645\u0631\u064a \u0645\u0627 \u0643\u064f\u0646\u0652\u062a \u0646\u0650\u062a\u0652\u0635\u064e\u0648\u064f\u0651\u0631...}>, \\textipa{[\"Qomr-i ma \"k@nt n@ts\\super Qaw:@r]}, \\textipa{\"}I never imagined that...\\textipa{\"}. \n\\newline Instead, to deny an action pointing out that it will never repeat itself again, a structure widely used is <[ma] + \\textipa{[\"Qa:d]} + \\textipa{[S]} + imperfective verb>, where the element within the circumfix marks is a grammaticalized element of verbal origin from CA: <\\RL{\u0639\u0627\u062f}>, \\textipa {[\"Qa:d]}, meaning \\textipa{\"}to go back, to reoccur\\textipa{\"}, which gives the structure a sense of denied repetitiveness, as in the sentence:\n\\newline <\\RL{\u0647\u0648 \u0645\u0627 \u0639\u0627\u062f\u0650\u0634 \u064a\u064e\u0631\u0652\u062c\u064e\u0639}>, \\textipa{[\"hu:wa ma \"Qa:d-S \"j@rZaQ]}, \n\n\\textipa{\"}he will not come back\\textipa{\"}.\n\\newline Finally, to deny the nominal phrase, in TUN both the\n<\\RL{\u0645\u0648\u0634}>, \\textipa{[\"mu:S]}, and the circumfix marks are frequently used. \nFor the negative form of the verb \\textipa{\"}to be\\textipa{\"} in the present, circumfix marks can be combined with the personal suffix pronoun, placed between the marks, as in <\\RL{\u0645\u064e\u0627\u0646\u0650\u064a\u0634}>, \\textipa{[ma\"ni:S]}, \\textipa{\"}I am not\\textipa{\"}.\\\\ Within the negation marks we can also find other types of nominal structures, such as: <\\textipa{[fi:]} + \\textipa{[\"bE:l]}(\\textipa{\"}mind\\textipa{\"})\n+ personal pronoun suffix>, which has a value equivalent to the verb \\textipa{\"}be aware of\\textipa{\"}, as in the example: \\\\ <\\RL{\u0645\u0627 \u0641\u064a \u0628\u0627\u0644\u064a\u0634}>, \\textipa{[ma fi: bE:l-\"i:-S]}, \\textipa{\"}I did not know\\textipa{\"}.\n\\end{adjustwidth}\n\n\\subsection{Tunisian Arabish}\n\\label{subsec:TA_system}\n\nAs previously mentioned, we consider Arabish a quasi-oral system.\nWith \\textit{quasi-orality} it is intended the form of communication typical of Computer-Mediated Communication (CMC), characterized by informal tones, dependence on context, lack of attention to spelling and especially the ability to create a sense of collectivity \\cite{hert1999quasi}\\footnote{Even though the CMC is generally a type of asynchronous communication.}. \n\nTA and TUN have not a standard orthography, with the exception of the CODA TUN. Nevertheless, TA is a spontaneous code-system used since more than ten years, and is being conventionalized by its daily usage. \n\nFrom the table~\\ref{tab1}, where the coding scheme of TA is illustrated, it is possible to observe that there is no one-to-one correspondence between TA and TUN characters and that often Arabish presents overlaps in the encoding possibilities. 
\nThe main issue is represented by the not proper representation by TA of the emphatic phones: \\textipa{{[D\\super Q]}}, \\textipa{[t\\super Q]} and \\textipa{[s\\super Q]}.\n\n\\begin{table}[htbp]\n\\begin{center}\n\\begin{tabularx}{\\columnwidth}{c|c|c||c|c|c}\n\n \\hline\n \\textit{IPA}& \\textit{TUN} & \\textit{TA} & \\textit{IPA}& \\textit{TUN}& \\textit{TA} \\\\\n \\hline\n \\hline\n &&&&& \\\\\n \\textipa{[a:]} & \\RL{\u0629} & a, e, h & \\textipa{[a][a:]} & \\RL{\u0649}, \\RL{\u0627} & a, e, \u00e9, \u00e8\\\\\n \n \\textipa{[P]} & \\RL{\u0621} & 2 & \\textipa{[D\\super Q]} & \\RL{\u0636} & dh, th, d\\\\ \n \n \\textipa{[b]} & \\RL{\u0628} & b, p & \\textipa{[t\\super Q]} & \\RL{\u0637} & 6, t\\\\\n \n \\textipa{[t]} & \\RL{\u062a} & t & \\textipa{[D\\super Q]} & \\RL{\u0638} & th, dh\\\\\n \n \\textipa{[T]} & \\RL{\u062b} & th & \\textipa{[Q]} & \\RL{\u0639} & 3, a\\\\\n \n \\textipa{[Z]} & \\RL{\u062c} & j & \\textipa{[G]} & \\RL{\u063a} & 4, gh\\\\\n \n \\textipa{[\\textcrh]} & \\RL{\u062d} & 7, h & \\textipa{[f]} & \\RL{\u0641} & f\\\\\n \n \\textipa{[x]} & \\RL{\u062e} & 5, kh & \\textipa{[q]} & \\RL{\u0642} & 9, q\\\\\n \n \\textipa{[d]} & \\RL{\u062f} & d & \\textipa{[k]} & \\RL{\u0643} & k\\\\\n \n \\textipa{[D]} & \\RL{\u0630} & dh & \\textipa{[l]} & \\RL{\u0644} & l\\\\\n \n \\textipa{[r]} & \\RL{\u0631} & r & \\textipa{[m]} & \\RL{\u0645} & m\\\\\n \n \\textipa{[z]} & \\RL{\u0632} & z & \\textipa{[n]} & \\RL{\u0646} & n\\\\\n \n \\textipa{[s]} & \\RL{\u0633} & s & \\textipa{[h]} & \\RL{\u0647} & 8, h\\\\\n \n \\textipa{[S]} & \\RL{\u0634} & ch, (sh) & \\textipa{[w][u:]} & \\RL{\u0648} & ou, w\\\\\n \n \\textipa{[s\\super Q]} & \\RL{\u0635} & s & \\textipa{[j][i:]} & \\RL{\u064a} & i, y\\\\\n &&&&& \\\\\n \\hline\n\n\\end{tabularx}\n\\caption{Arabish code-system for TUN}\n\\label{tab1}\n \\end{center}\n\\end{table}\n\nOn the other hand, being TA not codified through the Arabic alphabet, it can well represent the phonetic realization of TUN, as shown by the\nfollowing examples: \n\n\\textbf{ *} The Arabic alphabet is generally used for formal\nconversations in Modern Standard Arabic (MSA), the Arabic of formal situations, or in that of \nClassical Arabic (CA), the Arabic of the Holy Qur'\u0101n, also known as\n'The Beautiful Language'. Like MSA and CA, also Arabic Dialects\n(ADs) can be written in the Arabic alphabet, but in this case it is\npossible to observe a kind of hypercorrection operated by the speakers in order to respect the writing rules of MSA. For example, in TUN texts written in Arabic script, it is possible to find a 'silent vowel' (namely an epenthetic \\textipa{\"}alif\n<\\RL{\u0627}>) written at the beginning of those words starting with\nthe sequence '\\#CCv', which is not allowed in MSA.\n\n\\textbf{ *} Writing TUN in Arabic script, the Code-Mixing or Switching in foreign language will be unnaturally reduced. \n\n\\textbf{ *} As described in table~\\ref{tab1}, the Arabic alphabet\nis provided with three short vowels, which correspond to the three long ones: \\textipa{[a:]}, \\textipa{[u:]}, \\textipa{[i:]}, but TUN presents\na wider range of vowels. 
Indeed, regarding the early presented\ncharacteristics of TUN, the TA range of vowels offers better possibility to represent most of the TUN characteristics outlined in the previous subsection, in particular:\n\n\\begin{itemize}[nosep]\n \\item Palatalization.\n \\item Vowel metathesis.\n \\item Monophthongization.\\footnote{Regarding the last two phenomena, they can be visible in Arabic script only in case of texts provided with short vowels, which are quite rare.}\n\\end{itemize}\n\n\n\n\\section{Tunisian Arabish Corpus}\n\\label{sec:TArC}\n\nIn order to analyze the TA system, we have built a TA Corpus based on social media data, considering this as the best choice to observe the quasi-oral nature of the TA system.\n\n\\subsection{Text collection}\n\\label{subsec:txt-coll}\nThe corpus collection procedure is composed of the following steps: \n\\begin{enumerate}[nosep]\n\\item Thematic categories detection.\n\\label{step1} \n\\item Match of categories with sets of semantically related TA keywords.\n\\label{step2}\n\\item Texts and metadata extraction. \\\\\n\\label{step3}\n\\end{enumerate}\n\n\\textbf{Step~\\ref{step1}.} In order to build a Corpus that was as representative as possible of the linguistic system, it was considered useful to identify wide thematic categories that could represent the most common topics of daily conversations on CMC.\n\nIn this regard, two instruments with a similar thematic organization have been employed: \n\\begin{itemize}[nosep]\n\\item\\textbf{'A Frequency Dictionary of Arabic'} \\\\ \\cite{buckwalter2014frequency} In particular its 'Thematic Vocabulary List' (TVL). \n\\item\\textbf{'Loanword Typology Meaning List'} \\\\ A list of 1460 meanings\\footnote{The 'Loanword Typology Meaning List' is a result of a joint project by Uri Tadmor and Martin Haspelmath: the 'Loanword Typology Project' (LWT), launched in 2004 and ended in 2008.} (LTML) \\cite{haspelmath2009loanwords}. \n\\end{itemize}\nThe TVL consists of 30 groups of frequent words, each one represented by a thematic word.\nThe second consists of 23 groups of basic meanings sorted by representative word heading.\nConsidering that the boundaries between some categories are very blurred, some categories have been merged, such as \\textipa{\"}Body\\textipa{\"} and \\textipa{\"}Health\\textipa{\"}, (see table~\\ref{tab2}). Some others have been eliminated, being not relevant for the purposes of our research, e.g. \\textipa{\"}Colors\\textipa{\"}, \\textipa{\"}Opposites\\textipa{\"}, \\textipa{\"}Male names\\textipa{\"}. In the end, we obtained 15 macro-categories listed in table~\\ref{tab2}. \\\\\n\n\\textbf{Step~\\ref{step2}.} Aiming at easily detect texts and the respective \\textit{seed URLs}, without introducing relevant query biases, we decided to avoid using the category names as query keywords \\cite{schafer2013web}. Therefore, we associated to each category a set of TA keywords belonging to the basic Tunisian vocabulary. We found that \na semantic category with three meanings was enough to obtain a sufficient number of keywords and URLs for each category. For example, to the category \\textipa{\"}Family\\textipa{\"} the meanings: \\textipa{\"}son\\textipa{\"}, \\textipa{\"}wedding\\textipa{\"}, \\textipa{\"}divorce\\textipa{\"} have been associated in all their TA variants, obtaining a set of 11 keywords (table~\\ref{tab2}).\\\\\n\n\\begin{table}[htbp]\n\\begin{center}\n\\begin{tabularx}{\\columnwidth}{|X|X|} \n\n \\hline\n \\textbf{Macro-Categories}& \\textbf{Words Associated} \\\\\n \\hline\n 1. 
Family \\newline\\textit{son, wedding, divorce}& weld, wild, 3ars, 3ers, \\newline tla9, 6la9, tlaq, 6laq, tle9, tleq, 6leq\\\\\n \\hline\n 2. Clothing \\newline\\textit{dress, shoes, t-shirt} &robe, lebsa, rouba, \\newline sabat, spedri, spadri, \\newline marioul, maryoul, \\newline meryoul, merioul\\\\\n \\hline\n 3. Automobiles \\newline\\textit{gasoil, engine, \\newline occasion} & mazout, motor, moteur, motour, forsa\\\\\n \\hline\n 4. Animals \\newline\\textit{cock, dog, cat} & sardouk, kelb, kalb, \\newline9attous, gattous\\\\\n \\hline\n 5. Body and Health \\newline\\textit{sick, doctor, health} & maridh, marith, mridh, ettbib, tbib, sa77a, sa7a, sahha, saha \\\\\n \\hline\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\\end{tabularx}\n\\caption{Example of the fifteen thematic categories}\n\\label{tab2}\n \\end{center}\n\\end{table}\n\n\\textbf{Step~\\ref{step3}.} \nWe collected about 25,000 words and the related metadata as first part of our corpus, which are being semi-automatically transcribed into Arabic characters (see next sections).\nWe planned to increase the size of the corpus at a later time.\nRegarding the metadata, we have extracted the information published by users, focusing on the three types of information generally used in ethnographic studies: \n\\begin{enumerate}[nosep]\n\\item Gender: Male (M) and Female (F). \n\\item Age range: [10-25], [25-35], [35-50], [50-90]. \n\\item City of origin.\n\\end{enumerate}\n\n\\subsection{Corpus Creation}\n\\label{subsec:Corpus_cr}\n\nIn order to create our corpus, we applied a word-level annotation. This phase was preceded by some data pre-processing steps, in particular tokenization. \nEach token has been associated with its annotations and metadata (table~\\ref{tab3}).\nIn order to obtain the correspondence between Arabish and Arabic morpheme transcriptions, tokens were segmented into morphemes.\nThis segmentation was carried out completely manually for a first group of tokens.\\footnote{Arabic, in general, is a language with a high level of synthesis, that means that it can concentrate within a token more syntactic and grammatical information through the addition of different morphemes.}\nIn its final version, each token is associated with a total of 11 different annotations, corresponding to the number of the annotation levels we chose.\nAn excerpt of the corpus after tokens annotation is depicted in table~\\ref{tab3}.\n\nFor the sake of clarity, in table~\\ref{tab3} we show:\\\\\n\\textbf{ *} The A column, \\textit{Cor}, indicates the token\\textipa{\"}s source code. For example, the code \\textit{3fE}, which stands for \\textit{3rab fi Europe}, is the forum from which the text was extracted.\\\\\n\\textbf{ *} The B column, \\textit{Textco}, is the publication date of the text. \\\\%e.g. '150902' stand for '2015\/09\/02'\n\\textbf{ *} The C column, \\textit{Par}, is the row index of the token in the paragraph.\\\\\n\\textbf{ *} The D column, \\textit{W}, is the index of the token in the sentence. When \\textipa{\"}W\\textipa{\"} corresponds to a range of numbers, it means that the token has been segmented in to its components, specified in the rows below.\\\\\n\\textbf{ *} The E column, \\textit{Arabi\\textipa{S}}, corresponds to the token transcription in Arabish.\\\\\n\\textbf{ *} The F column, \\textit{Tra}, is the transcription into Arabic characters.\\\\\n\\textbf{ *} The G column, \\textit{Ita}, is the translation to Italian. 
\\\\\n\\textbf{ *} The H column, \\textit{Lem}, corresponds to the lemma.\\\\\n\\textbf{ *} The I column, \\textit{POS}, is the Part-Of-Speech tag of the token. The tags that have been used for the POS tagging are conform to the annotation system of Universal Dependencies.\\\\\n\\textbf{ *} The last three columns (J, K, L) contain the metadata: \\textit{Var}, \\textit{Age}, \\textit{Gen}.\n\n\n\\begin{table}[htbp]\\small\n\\begin{center}\n\\begin{tabularx}{\\columnwidth}{|XlllllX|}\n \\hline \n \\textbf{A} & \\textbf{B} & \\textbf{C} & \\textbf{D} & \\textbf{E} & \\textbf{F} & \\textbf{G}\\\\\n \\hline\n \\hline\n Cor\\label{Cor} & Textco & Par & W & Arabi\\textipa{S} & Tra & Ita\\\\\n \\hline\n \\hline\n &&&&&&\\\\\n 3fE & 150902 & 2 & 1 & kifech & \\RL{\u0643\u064a\u0641\u0627\u0634} & come\\\\\n \n 3fE & 150902 & 2 & 2 & tchou- & \\RL{\u062a\u0634\u0648\u0641\u0648\u0627} & vi\\\\\n & & & & fou & & pare\\\\\n \n 3fE & 150902 & 2 & 3-4 & l3icha & \\RL{\u0627\u0644\u0639\u064a\u0634\u0629} & la vita\\\\\n \n 3fE & 150902 & 2 & 3 & l & \\RL{\u0627\u0644\u0640} & -\\\\\n \n 3fE & 150902 & 2 & 4 & 3icha & \\RL{\u0639\u064a\u0634\u0629} & -\\\\\n \n 3fE & 150902 & 2 & 5-6 & fil & \\RL{\u0641\u0627\u0644\u0640} & all\\textipa{\"}\\\\\n \n 3fE & 150902 & 2 & 5 & f & \\RL{\u0641\u0640} & -\\\\\n \n 3fE & 150902 & 2 & 6 & il & \\RL{\u0627\u0644\u0640} & -\\\\\n \n 3fE & 150902 & 2 & 7 & 4orba & \\RL{\u063a\u0631\u0628\u0629} & estero\\\\\n \n 3fE & 150902 & 2 & 8 & ? & \\RL{\u061f} & ?\\\\\n &&&&&&\\\\\n \\hline\n \\hline\n & \\textbf{H} & \\textbf{I} & \\textbf{J} & \\textbf{K} & \\textbf{L} &\\\\\n \\hline\n \\hline\n & Lem & POS & Var & Age & Gen &\\\\\n \\hline\n \\hline \n &&&&&&\\\\\n & \\RL{\u0643\u064a\u0641\u0627\u0634} & adv & Bnz & 25-35 & M &\\\\\n \n & \\RL{\u0634\u0627\u0641} & verb & Bnz & 25-35 & M &\\\\\n \n & \\RL{\u0639\u064a\u0634\u0629} & noun & Bnz & 25-35 & M &\\\\\n \n & \\RL {\u0627\u0644\u0640} & det & Bnz & 25-35 & M &\\\\\n \n & \\RL{\u0639\u064a\u0634\u0629} & noun & Bnz & 25-35 & M &\\\\\n \n & \\RL{\u0641\u064a} & prep & Bnz & 25-35 & M &\\\\\n \n & \\RL{\u0641\u064a} & prep & Bnz & 25-35 & M &\\\\\n \n & \\RL {\u0627\u0644\u0640} & det & Bnz & 25-35 & M &\\\\\n \n & \\RL{\u063a\u0631\u0628\u0629} & noun & Bnz & 25-35 & M &\\\\\n \n & \\RL{\u061f} & pct & Bnz & 25-35 & M &\\\\\n &&&&&&\\\\\n \\hline \n\\end{tabularx}\n\\caption{An Excerpt of the TArC structure. In the column \\textit{Var}, \\textipa{\"}Bnz\\textipa{\"} stands for \\textipa{\"}Bizerte\\textipa{\"} a northern city in Tunisia. Glosses: w1:\\textit{how}, w2:\\textit{do you(pl) see}, w3-4:\\textit{the life}, w5-6:\\textit{at the}, w7:\\textit{outside}, w8:\\textit{?}}\n\\label{tab3}\n \\end{center}\n\\end{table}\n\n\n\nSince TA is a spontaneous orthography of TUN, we considered important to adopt the CODA* guidelines as a model to produce a unified lemmatization for each token (column \\textit{Lem} in table~\\ref{tab3}).\nIn order to guarantee accurate transcription and lemmatization, we annotated manually the first 6,000 tokens with all the annotation levels.\n\nSome annotation decisions were taken before this step, with regard to specific TUN features:\n\n \\textbf{* Foreign words.} We transcribed the Arabish words into Arabic characters, except for Code-Switching terms. In order to not interrupt the sentences continuity we decide to transcribe Code-Mixing terms into Arabic script. 
However, at the end of the corpus creation process, these words will be analyzed, making the distinction between acclimatized loans and Code-Mixing. \n \\newline\\newline \n \n The first ones will be transcribed into Arabic characters also in \\textit{Lem}, as shown in table~\\ref{tab4}. The second ones will be lemmatized in the foreign language, mostly French, as shown in table~\\ref{tab5}.\n \\newline\\textbf{* Typographical errors.} Concerning typos and typical problems related to the informal writing habits in the web, such as repeated characters to simulate prosodic features of the language, we have not maintained all these characteristics in the transcription (column \\textit{Tra}). Logically, these were neither included in \\textit{Lem}, according to the CODA* conventions, as shown in table~\\ref{tab5}.\n \\newline\\textbf{* Phono-Lexical exceptions.} We used the grapheme \\newline<\\RL{\u06a8}>, \\textipa{[q]}, only in loanword transcription and lemmatization. As can be seen in table~\\ref{tab6}, the Hilalian phoneme [g] of the Turkish loanword \\textipa{\"}gawriyya\\textipa{\"}, has been transcribed and lemmatized with the grapheme <\\RL{\u0642}>, \\textipa{[g]}.\n \\newline\\textbf{* Glottal stop.} As explained in CODA TUN, real initial and final glottal stops have almost disappeared in TUN. They remain in some words that are treated as exceptions, e.g. <\\RL{\u0623\u0633\u0626\u0644\u0629}>, \\textipa{[\"PasPla]}, \\textipa{\"}question\\textipa{\"} \\cite{DBLP:conf\/lrec\/ZribiBMEBH14}. Indeed, we transcribe the glottal stops only when it is usually pronounced, and if it does not, we do not write the glottal stops at the beginning of the word or at the end, neither in the transcription, nor in the lemmas. \\\\\n\n\n\\begin{table}[htbp]\\small\n\\begin{center}\n\\begin{tabularx}{\\columnwidth}{|XccccX|}\n \\hline\n W & Arabi\\textipa{S} & Tra & Ita & Lem & POS \\\\\n \\hline\n \\hline\n 4 & konna & \\RL{\u0643\u0646\u0651\u0627} & siamo stati & \\RL{\u0643\u0627\u0646} & verb \\\\\n \n 5 & far7anin & \\RL{\u0641\u0631\u062d\u0627\u0646\u064a\u0646} & contenti & \\RL{\u0641\u0631\u062d\u0627\u0646} & adj \\\\\n \n 6 & , & \\RL{,} & , & \\RL{,} & punct \\\\\n \n 7 & merci & \\RL{\u0645\u0631\u0633\u064a} & grazie & \\RL{\u0645\u0631\u0633\u064a} & intj \\\\\n \n \n \\hline\n\n\\end{tabularx}\n\\caption{Loanword example in the corpus. Glosses: w4:\\textit{we were}, w5:\\textit{happy}, w6:\\textit{,} , w7:\\textit{thanks}}\n\\label{tab4}\n \\end{center}\n\\end{table}\n\n\n\n\n\\begin{table}[htbp]\\small\n\\begin{center}\n\\begin{tabularx}{\\columnwidth}{|XccccX|} \n \\hline\n W & Arabi\\textipa{S} & Tra & Ita & Lem & POS \\\\\n \\hline\n \\hline\n 1 & R7 & recette & ricetta & recette & noun \\\\\n \n 2 & patee & p\u00e2t\u00e9 & pat\u00e8 & p\u00e2t\u00e9 & noun \\\\\n \n 3 & dieri & \\RL{\u062f\u064a\u0627\u0631\u064a} & fatto in casa & \\RL{\u062f\u064a\u0627\u0631\u064a} & adj \\\\\n \n 4 & w & \\RL{\u0648} & e & \\RL{\u0648} & cconj \\\\\n \n 5 & bniiiiin & \\RL{\u0628\u0646\u064a\u0646} & buonissimo & \\RL{\u0628\u0646\u064a\u0646} & adj \\\\\n \\hline\n \n\\end{tabularx}\n\\caption{Prosody example in the corpus. 
Glosses: w1:\\textit{recipe}, w2:\\textit{p\u00e2t\u00e9}, w3:\\textit{homemade}, w4:\\textit{and}, w5:\\textit{delicious}}\n\\label{tab5}\n \\end{center}\n\\end{table}\n\n\n\\begin{table}[htbp]\n\\begin{center}\n\\begin{tabularx}{\\columnwidth}{|XccccX|}\n \\hline\n W & Arabi\\textipa{S} & Tra & Ita & Lem & POS \\\\ \n \\hline\n \\hline\n 1 & Mtala9 & \\RL{\u0645\u0637\u0644\u0642} & divorziato & \\RL{\u0645\u0637\u0644\u0642} & noun \\\\\n \n 2 & min & \\RL{\u0645\u0646} & da & \\RL{\u0645\u0646} & noun \\\\\n \n 3 & gawriya & \\RL{\u06a8\u0627\u0648\u0631\u064a\u0629} & (un\\textipa{\"})europea & \\RL{\u06a8\u0627\u0648\u0631\u064a} & adj \\\\\n \\hline\n \n \n\\end{tabularx}\n\\caption{Phono-Lexical exceptions in the corpus. Glosses: w1:\\textit{divorced}, w2:\\textit{from}, w3:\\textit{European(f)}}\n\\label{tab6}\n \\end{center}\n\\end{table}\n\n\n\n\\textbf{* Negation Marks.} CODA TUN proposes to keep the MSA rule of maintaining a space between the first negation mark and the verb, in order to uniform CODA TUN to the first CODA \\cite{habash-etal-2012-conventional}. However, as \\newcite{DBLP:conf\/lrec\/ZribiBMEBH14} explains, in TUN this rule does not make really sense, but it should be done to preserve the consistency among the various CODA guidelines. \nIndeed, in our transcriptions we report what has been produced in Arabish following CODA TUN rules, while in lemmatization we report the verb lemma. At the same time we segment the negative verb in its minor parts: the circumfix negation marks and the conjugated verb. For the first one, we describe the negative morphological structure in the \\textit{Tra} and \\textit{Lem} columns, as in table~\\ref{tab7}. For the second one, as well as the other verbs, we provide transcription and lemmatization.\n\n\\begin{table}[htbp]\\small\n\\begin{center}\n\\begin{tabularx}{\\columnwidth}{|llllll|}\n \\hline\n W & Arabi\\textipa{S} & Tra & Ita & Lem & POS \\\\\n \\hline\n \\hline\n 14-15 & manajem- & \\RL{\u0645\u0627 \u0646\u062c\u0645\u0646\u0627\u0634} & non & \\RL{\u0646\u062c\u0651\u0645} & verb \\\\\n & nech & & abbiam &&\\\\\n & & & potuto &&\\\\\n \n 14 & ma + ch & \\RL{\u0645\u0627+\u0634} & - & \\RL{\u0634}+V+\\RL{\u0645\u0627} & part \\\\\n \n 15 & najemne & \\RL{\u0646\u062c\u0645\u0646\u0627} & - & \\RL{\u0646\u062c\u0651\u0645} & verb \\\\\n \\hline\n \n\\end{tabularx}\n\\caption{Circumfix negation marks in the corpus. Glosses: w14-15:\\textit{we could not}}\n\\label{tab7}\n \\end{center}\n\\end{table}\n\n\n\\section{Incremental and Semi-Automatic Transcription}\n\\label{sec:pro}\n\nIn order to make the corpus collection easier and faster, we adopted a semi-automatic procedure based on sequential neural models \\cite{DBLP-journals\/corr\/abs-1904-04733,DinarelliGrobol-Seq2BiseqTransformer-2019}.\nSince transcribing Arabish into Arabic is by far the most important information to study the Arabish code-system, the semi-automatic procedure concerns only transcription from Arabish to Arabic script.\n\nIn order to proceed, we used the first group of (roughly) 6,000 manually transcribed tokens as training and test data sets in a 10-fold cross validation setting with 9-1 proportions for training and test, respectively. As we explained in the previous section, French tokens were removed from the data. 
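(For readers who wish to reproduce the evaluation protocol, the sketch below shows the 10-fold split with 9-1 train\/test proportions using scikit-learn; the token pairs and the commented-out training call are placeholders, since the sequential neural transcriber itself is not reproduced here.)
\\begin{verbatim}
# Minimal sketch of the 10-fold cross-validation protocol described above.
from sklearn.model_selection import KFold

pairs = [("kelb", "كلب")] * 6000        # placeholder (Arabish, Arabic-script) pairs

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(pairs)):
    train = [pairs[i] for i in train_idx]
    test  = [pairs[i] for i in test_idx]
    # model = train_transcriber(train)   # placeholder for the neural model
    # accuracy = evaluate(model, test)   # token-level accuracy on the held-out fold
    print(fold, len(train), len(test))   # 9/10 for training, 1/10 for testing
\\end{verbatim}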
More precisely, whole sentences containing \\emph{non-transcribable} French tokens (code-switching) were removed from the data.\nSince at this level there is no way for predicting when a French word can be transcribed into Arabic and when it has to be left unchanged, French tokens create some noise for an automatic, probabilistic model. After removing sentences with French tokens, the data reduced to roughly 5,000 tokens. We chose this amount of tokens for annotation blocks in our incremental annotation procedure.\n\nWe note that by combining sentence, paragraph and token index in the corpus, whole sentences can be reconstructed. However, from 5,000 tokens roughly 300 sentences could be reconstructed, which are far too few to be used for training a neural model.\\footnote{Preliminary experiments gave indeed quite poor results, below 50\\% token-level accuracy on average.}\nInstead, since tokens are transcribed at morpheme level, we split Arabish tokens into characters, and Arabic tokens into morphemes, and we treated each token itself as a sequence.\nOur model learns thus to map Arabish characters into Arabic morphemes.\n\nThe 10-fold cross validation with this setting gave a token-level accuracy of roughly 71\\%. This result is not satisfactory on an absolute scale, however it is more than encouraging taking into account the small size of our data.\nThis result means that less than 3 tokens, on average, out of 10, must be corrected to increase the size of our corpus. \nWith this model we automatically transcribed into Arabic morphemes, roughly, 5,000 additional tokens, corresponding to the second annotation block.\nThis can be manually annotated in at least 7,5 days, but thanks to the automatic annotation accuracy, it was manually corrected into 3 days.\\footnote{We based our estimations of the annotation time needed on the time we spent correcting tokens, which is actually faster because tokens are already transcribed, they don't need to be transcribed from scratch.}\nThe accuracy of the model on the annotation of the second block was roughly 70\\%, which corresponds to the accuracy on the test set.\nThe manually-corrected additional tokens were added to the training data of our neural model, and a new block was automatically annotated and manually corrected.\nBoth accuracy on the test set and on the annotation block remained at around 70\\%. This is because the block added to the training data was significantly different from the previous and from the third.\nAdding the third block to the training data and annotating a fourth block with the new trained model gave in contrast an accuracy of roughly 80\\%.\nThis incremental, semi-automatic transcription procedure is in progress for the remaining blocks, but it is clear that it will make the corpus annotation increasingly easier and faster as the amount of training data will grow up.\n\nOur goal concerning transcription, is to have the 25,000 tokens mentioned in section~\\ref{subsec:txt-coll} annotated automatically and manually corrected. These data will constitute our gold annotated data, and they will be used to automatically transcribe further data.\n\n\\section{Conclusions}\n\\label{sec:concl}\n\nIn this paper we presented TArC, the first Tunisian Arabish Corpus annotated with morpho-syntactic information. We discussed the decisions taken in order to highlight the phonological and morphological features of TUN through the TA corpus structure. 
Concerning the building process, we have shown the steps undertaken and our effort intended to make the corpus as representative as possible of TA. We therefore described the texts collection stage, as well as the corpus building and the semi-automatic procedure adopted for transcribing TA into Arabic script, taking into account CODA* and CODA TUN guidelines. At the present stage of research, TArC consists of 25.000 tokens, however our work is in progress and for future research we plan to enforce the semi-automatic transcription, which has already shown encouraging results (accuracy = 70\\%). We also intend to realize a semi-automatic TA Part-Of-Speech tagger.\nThus, we aim to develop tools for TA processing and, in so doing, we strive to complete the annotation levels (transcription, POS tag, lemmatization) semi-automatically in order to increase the size of the corpus, making it available for linguistic analyses on TA and TUN. \n\n\n\n\n\n\n\n\n\\section{Bibliographical References}\n\\label{reference}\n\\bibliographystyle{lrec2020}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro}\n\nThere is cumulative evidence that theories with exponential non-local operators of the form\n\\begin{equation}\\label{serep0}\ne^{-(\\Box\/M^2)^n}\n\\end{equation}\nhave interesting renormalization properties. After early studies of quantum scalar field theories \\cite{AE1,AE2,Efi77,Efi01} and gauge and gravitational theories \\cite{Kra87,Mof1,HaMo,EMKW,Cor1,Cor2,Cor3}, in recent years there has been a surge of interest in non-local classical and quantum gravity \\cite{Kuz89,Tom97,Bar1,Bar2,BMS,Kho06,cuta8,BKM1,Mof3,Mod1,BGKM,Mod2,Mod3,Mod4,BMTs,BCKM,CaMo2,MoRa1,TBM,MoRa2,MoRa3,Edh18,BKLM}. A non-local theory of gravity aims to fulfill a synthesis of minimal requirements: (i) spacetime is a continuum where Lorentz invariance is preserved at all scales; (ii) classical local (super-)gravity should be a good approximation at low energy; (iii) the quantum theory must be perturbatively super-renormalizable or finite; (iv) the quantum theory must be unitary and ghost free, without extra pathological degrees of freedom in addition to those present in the classical theory; (v) typical classical solutions must be singularity-free.\n\nThe typical structure of the gravitational action in $D$ topological dimensions is\n\\begin{equation}\\nonumber\nS_g = \\frac{1}{2\\kappa^2} \\int d^D x \\sqrt{-g}\\,\\left[R-2\\Lambda + R_{\\mu \\nu} \\, \\mathcal{F}_2(\\Box) \\, R^{\\mu \\nu} + R \\mathcal{F}_0(\\Box) R\\right]\n\\end{equation}\nwhere $\\kappa^2=8\\pi G$ is the gravitational constant and $\\mathcal{F}_{0,2}$ are \\emph{form factors} dependent on the dimensionless ratio $r_*\\Box:=\\Box\/M^2$, where $M=1\/\\sqrt{r_*}$ is the characteristic energy scale of the system, $\\Box=\\nabla_\\mu \\nabla^\\mu$ is the Laplace--Beltrami or d'Alembertian operator and $\\nabla_\\nu V_\\mu := \\partial_\\nu V_\\mu-\\Gamma^\\sigma_{\\mu\\nu}V_\\sigma$ is the covariant derivative of a vector $V_\\mu$. 
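As a quick illustration of requirement (ii), expanding the exponential operator \\Eq{serep0} at small $\\Box\/M^2$ shows that such form factors reduce to the identity in the infrared, so that the local dynamics is recovered at low energy. The following short SymPy computation (illustrative only) performs the expansion for $n=1,2$.
\\begin{verbatim}
# Minimal sketch: infrared expansion of exp(-(Box/M^2)^n), with z = Box/M^2.
import sympy as sp

z = sp.symbols('z')
for n in (1, 2):
    print(n, sp.series(sp.exp(-z**n), z, 0, 4))
# n=1: 1 - z + z**2/2 - z**3/6 + O(z**4)
# n=2: 1 - z**2 + O(z**4)
\\end{verbatim}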
Our conventions for the curvature invariants are\n\\begin{eqnarray}\n&& \\Gamma^\\rho_{\\mu\\nu}:= \\frac12 g^{\\rho\\sigma}\\left(\\partial_{\\mu} g_{\\nu\\sigma}+\\partial_{\\nu} g_{\\mu\\sigma}-\\partial_\\sigma g_{\\mu\\nu}\\right)\\,,\\label{leci}\\\\\n&& R^\\rho_{~\\mu\\sigma\\nu}:= \\partial_\\sigma \\Gamma^\\rho_{\\mu\\nu}-\\partial_\\nu \\Gamma^\\rho_{\\mu\\sigma}+\\Gamma^\\tau_{\\mu\\nu}\\Gamma^\\rho_{\\sigma\\tau}-\\Gamma^\\tau_{\\mu\\sigma}\\Gamma^\\rho_{\\nu\\tau}\\,,\\label{rite}\\\\\n&& R_{\\mu\\nu}:= R^\\rho_{~\\mu\\rho\\nu}\\,,\\qquad R:= R_{\\mu\\nu}g^{\\mu\\nu}\\,.\n\\end{eqnarray}\nThe particular choice of form factors\n\\begin{equation}\n\\mathcal{F}_2(\\Box) = \\frac{e^{-r_*\\Box} -1}{\\Box} \\,,\\qquad \\mathcal{F}_0(\\Box) = -\\frac{e^{-r_*\\Box}-1}{2\\Box} \\,,\\nonumber\n\\end{equation}\nleads to the action \\cite{Mod1,Mod2,Mod3,Mod4,CaMo2}\n\\begin{equation}\nS_g = \\frac{1}{2\\kappa^2}\\int d^D x \\sqrt{-g}\\,\\left[R-2\\Lambda+ G_{\\mu\\nu} \\, \\gamma_{r_*}(\\Box) \\, R^{\\mu\\nu} \\right],\\label{nlffg}\n\\end{equation}\nwhere $G_{\\mu\\nu}$ is the Einstein tensor \\Eq{Eiten} and\n\\begin{equation}\\label{fofag}\n\\gamma_{r_*}(\\Box) := \\frac{e^{-r_*\\Box}-1}{\\Box}\\,.\n\\end{equation}\nThis model is dictated by the above program (i)--(v) and may also be regarded as a phenomenological non-local limit of M-theory \\cite{CaMo2}. The role of the non-local operator $1\/\\Box$ is to compensate for the second-order derivatives in curvature invariants. Its definition is presented in appendix \\ref{app1}. To date, the perturbative renormalizability of the theory with \\Eq{fofag} has been proven only with the use of the resummed propagator \\cite{TBM}, while infinities have not yet been tamed in the orthodox expansion with the bare propagator. Nevertheless, this theory encodes all the main features of those non-local quantum gravities that have been shown to be renormalizable, and its dynamics is simpler to deal with.\n\nEven without considering gravity and the quantum limit, there is a general conceptual issue usually characterizing non-local physics. Namely, the Cauchy problem can be ill defined or highly non-standard in non-local theories \\cite{Lew33,Car36,PU,Pau53}. In fact, while there is a time-honored tradition on \\emph{linear} differential equations with infinitely many derivatives that admit a fair mathematical treatment \\cite{Lew33,Car36}, \\emph{non-linear} non-local equations such as those appearing in non-local field theories are a very different and much trickier business. For any tensorial field $\\varphi(t,{\\bf x})$, the Cauchy problem entails an infinite number of initial conditions $\\varphi(t_{\\rm i},{\\bf x})$, $\\dot\\varphi(t_{\\rm i},{\\bf x})$, $\\ddot\\varphi(t_{\\rm i},{\\bf x})$, \\dots, representing an infinite number of degrees of freedom. As the Taylor expansion of $\\varphi(t,{\\bf x})$ around $t=0$ is given by the full set of initial conditions, specifying the Cauchy problem would be tantamount to knowing the solution itself, if analytic \\cite{MoZ}. This makes it very difficult to find analytic solutions to the equations of motion, even on Minkowski spacetime. Fortunately, the exponential operator \\Eq{serep0} is under much greater control than other non-local operators, since (at least for finite $n$) the diffusion-equation method is available to find analytic solutions \\cite{roll,cuta2,cuta3,cuta4,MuNu3,cuta5,cuta6,cuta7} which are well defined when perturbative expansions are not \\cite{cuta2}. 
The Cauchy problem can be rendered meaningful, both in the free theory \\cite{Car36,PU,BK1} and in the presence of interactions \\cite{cuta3}. Consider a real scalar field $\\phi(x)$ dependent on spacetime coordinates $x=(t,{\\bf x})$. According to the diffusion-equation method, one promotes $\\phi(t,{\\bf x})$ to a field $\\Phi(r,t,{\\bf x})$ living in an extended spacetime with a fictitious extra coordinate $r$. This field is assumed to obey the diffusion equation $(\\Box-\\partial_r)\\Phi(r,t,{\\bf x})=0$, implemented at the level of the $(D+1)$-dimensional action by introducing an auxiliary scalar field $\\chi(r,x)$ (dynamically constrained to be $\\chi=\\Box\\Phi$). Since the diffusion equation is linear in $\\Phi$ (and $\\chi$, consequently), the Laplace--Beltrami operator $\\Box$ commutes with the diffusion operator $\\partial_r$ and exponential operators act as translations on the extra coordinate, $e^{s\\Box}\\Phi(r,t,{\\bf x})=e^{s\\partial_r}\\Phi(r,t,{\\bf x})=\\Phi(r+s,t,{\\bf x})$. One can then show that, from the point of view of spacetime coordinates, the $(D+1)$-dimensional system is fully localized and that the only initial conditions to be specified are $\\Phi(r,t_{\\rm i},{\\bf x})$, $\\dot\\Phi(r,t_{\\rm i},{\\bf x})$, $\\chi(r,t_{\\rm i},{\\bf x})$, $\\dot\\chi(r,t_{\\rm i},{\\bf x})$ \\cite{cuta3}. The infinite number of initial conditions $\\phi(t_{\\rm i},{\\bf x})$, $\\dot\\phi(t_{\\rm i},{\\bf x})$, $\\ddot\\phi(t_{\\rm i},{\\bf x})$, \\dots have been transferred into two initial conditions, which are actually boundary conditions in $r$, for an auxiliary field. When interactions are turned off, $\\chi$ vanishes and one obtains the single degree of freedom, represented by $\\phi(t_{\\rm i},{\\bf x})$ and $\\dot\\phi(t_{\\rm i},{\\bf x})$, of the free local theory.\\footnote{This is obvious when integrating by parts the kinetic term, $\\phi f(\\Box) \\phi\\to h(\\Box)\\phi \\Box h(\\Box) \\phi$, and absorbing non-locality with the field redefinition $\\tilde\\phi=h(\\Box)\\phi$.} The original system is recovered when $r$ acquires a specific, fixed value proportional to the scale $r_*$. This value depends on the solution and is determined by solving the localized equations at $r=\\beta r_*$, where $\\beta$ is a constant. The resulting solutions $\\phi(x)=\\Phi(\\beta r_*,x)$ are not exact in general but they satisfy the equations of motion to a very good level of approximation \\cite{roll,cuta5,cuta7}. \n\nFor non-local gravity, one would like to apply the same method to the metric itself or to curvature invariants $\\mathcal{R}(g)$, but this is not possible in a direct way. Calling $\\mathcal{R}(r,x)$ the curvature invariants of a putative localized theory, since the diffusion equation $(\\Box-\\partial_r)\\mathcal{R}(r,x)=0$ would be non-linear in the metric $g_{\\mu\\nu}$, one would have\n\\begin{equation}\\label{probl}\n[\\Box(g),\\partial_r]\\mathcal{R}(g)\\neq 0\\,,\n\\end{equation}\nand one would be unable to trade non-local operators for shifts in the extra direction. Moreover, the diffusion method applies for exponential operators, while in the actual quantum-gravity action \\Eq{nlffg} non-locality is more complicated.\n\nIn this paper, we address this problem. First, we will use a field redefinition (already employed in other non-local gravities, although not for \\Eq{nlffg} \\cite{BMS,BCKM}, and similar to those used in scalar-tensor theories and modified gravity models) to transfer all non-locality to an auxiliary field $\\phi_{\\mu\\nu}$. 
Next, we impose the diffusion equation on $\\phi_{\\mu\\nu}$: the linearity problem is thus immediately solved and one can proceed to localize the non-local system, count the initial conditions and identify the degrees of freedom, which are finite in number. From there, one can begin the study of the dynamical solutions of the classical Einstein equations, but this goes beyond the scope of the present work. Counting non-local degrees of freedom is a subject surrounded by a certain halo of mystery and confusion in the literature. To make it hopefully clearer, we will make a long due comparison of the counting procedure and of its outcome in the methods proposed to date: the one based on the diffusion equation and the delocalization approach by Tomboulis \\cite{Tom15}.\n\n\\subsection{Plan of the paper}\n\nIn preparation for the study of non-local gravity, the diffusion-equation method is reviewed in section \\ref{scala} for a scalar field. This example is very useful because it contains virtually all the main ingredients we will need to localize non-local gravity and rewrite it in a user-friendly way: localized action, auxiliary fields, slicing choice, matching of the non-local and localized equations of motion, counting of degrees of freedom, solution of the Cauchy problem, and so on. The non-local scalar is introduced in section \\ref{scala1}, while the localization procedure is described in section \\ref{ized}. The counting of initial conditions and degrees of freedom is carried out in section \\ref{scala3}, where we find that this number is, respectively, 4 and 1 for the real non-local scalar with non-linear interactions. Section \\ref{solu} reviews another practical use of the diffusion-equation method, the construction of analytic solutions of the equations of motion. In section \\ref{deloc}, we compare the diffusion-equation method with the results obtained in other approaches, mainly the delocalization method by Tomboulis \\cite{Tom15}. A generalization of the method to non-local operators $\\exp H(\\Box)$ with polynomial exponents is proposed in section \\ref{Hge}, while non-polynomial profiles $H(\\Box)$ require some extra input which is discussed in a companion paper \\cite{CMN3}.\n\nThe non-local gravitational action \\Eq{nlffg} is studied in section \\ref{eoms}, where we find the background-independent covariant Einstein equations for \\emph{any} form factor $\\gamma(\\Box)$ and recast the system in terms of an auxiliary field. Contrary to other calculations in the literature \\cite{BCKM,Kos13,CKMT}, we find the equations of motion for an exponential-type form factor \\Eq{fofag} in terms of parametric integrals rather than from the series expansion of the non-local operators. This new form is crucial both to solve the initial-value problem and to find explicit solutions with the diffusion-equation method.\n\nThe localized system corresponding to the non-local gravitational action \\Eq{nlffg} is introduced and discussed in section \\ref{locnlg}. After defining the localized action in section \\ref{loac}, we obtain the equations of motion in section \\ref{loeom}, which agree with the non-local ones. The counting of initial conditions and degrees of freedom is done in section \\ref{lodof}, where we find that they amount to, respectively, 4 and $D(D-2)$. 
Appendices contain several technical details and the full derivation of the equations of motion.\n\nTherefore, although in sections \\ref{ized} and \\ref{locnlg} we will concentrate on the form factor \\Eq{fofag} for which renormalization is likely but still under debate, our results with auxiliary fields (section \\ref{aux}) will be valid for an arbitrary form factor, while in section \\ref{Hge} and in \\cite{CMN3} we will generalize the diffusion-equation approach to form factors associated with finite quantum theories.\n\n\\subsection{Summary of main equations and claims}\n\nTo orient the reader, we summarize here the key formul\\ae:\n\\begin{itemize}\n\\item Scalar field theory.\n\t\\begin{itemize}\n\t\\item Non-local action: \\Eq{fac}.\n\t\\item Non-local equation of motion: \\Eq{tpheom}.\n\t\\item Localized action: \\Eq{act}.\n\t\\item Localized equations of motion: \\Eq{eomchi2}, \\Eq{eomP12}, \\Eq{eomP22}.\n\t\\item Constraints on localized dynamics: \\Eq{rstcon1}, \\Eq{rstcon2}.\n\t\\item Number of field degrees of freedom: \\Eq{dof1}.\n\t\\item Number of initial conditions: \\Eq{dofic}.\n\t\\end{itemize}\n\\item Gravity.\n\t\\begin{itemize}\n\t\\item Non-local action: \\Eq{nlffgb}.\n\t\\item Non-local equations of motion: \\Eq{EinEq1}.\n\t\\item Non-local action with auxiliary field: \\Eq{nlff3}.\n\t\\item Non-local equations of motion with auxiliary field: \\Eq{eomnl1}.\n\t\\item Localized action: \\Eq{gactcm}.\n\t\\item Localized equations of motion: \\Eq{difPg}, \\Eq{chimnde}, \\Eq{uff}, \\Eq{lasto}.\n\t\\item Constraints on localized dynamics: \\Eq{slic}, \\Eq{Fmu2}.\n\t\\item Number of field degrees of freedom: \\Eq{dof3}.\n\t\\item Number of initial conditions: \\Eq{dof2}.\n\t\\end{itemize}\n\\end{itemize}\n\n\n\\section{Diffusion-equation method: scalar field}\\label{scala}\n\nBefore considering gravity, it will be useful to illustrate the main philosophy behind, and the advantages of, the diffusion-equation method. For this purpose, we review its application to a classical scalar field theory \\cite{cuta3}, expanding the discussion therein to cover all important points that will help us to understand the results for non-local gravitational theories. We present a simplified version of the scalar system, with no nested integrals, no free parameters in the diffusion equation, and fewer assumptions than in \\cite{cuta3}. The original version of \\cite{cuta3} can be found in appendix \\ref{app2}. A comparison between the scalar and gravitational systems will be made in section \\ref{loeom}.\n\n\n\\subsection{Non-local system: traditional approach and problems}\\label{scala1}\n\nConsider the scalar-field action in $D$-dimensional Minkowski spacetime (with signature $-,+,\\cdots,+$)\n\\begin{equation}\\label{fac}\nS_{\\phi} = \\int d^D x\\,\\mathcal{L}_\\phi,\\qquad \\mathcal{L}_\\phi =\\frac12\\phi\\Box e^{-r_*\\Box}\\phi-V(\\phi),\n\\end{equation}\nwhere $r_*$ is a constant of mass dimension $[r_*]=-2$ and $V(\\phi)$ is a potential. We chose the exponential operator as the simplest example where the diffusion method works, but we will relax this assumption later to include operators of the form $\\exp H(\\Box)$ not contemplated in the original treatment in \\cite{cuta3}. Applying the variational principle to $S_\\phi$, the equation of motion is\n\\begin{equation}\\label{tpheom}\n\\Box e^{-r_*\\Box}\\phi-V'(\\phi)=0,\n\\end{equation}\nwhere a prime denotes a derivative with respect to $\\phi$. 
The action \\Eq{fac} and the dynamical equation \\Eq{tpheom} are a prototype of, respectively, a \\emph{non-local system} and a \\emph{non-local equation of motion}.\n\nThe initial-condition problem associated with \\Eq{tpheom} suffers from the conceptual issues outlined in the introduction. Rather than repeating the same mantra again, we recast the Cauchy problem as a problem of representation of the non-local operator $\\exp(-r_*\\Box)$. To find a solution of \\Eq{tpheom}, one must first define the left-hand side. The most obvious way to represent the exponential is via its series,\n\\begin{equation}\\label{opse}\ne^{-r_*\\Box}=\\sum_{n=0}^{+\\infty}\\frac{(-r_*\\Box)^n}{n!}=1-r_*\\Box+\\frac12 r_*^2\\Box^2+\\dots\\,.\n\\end{equation}\nTo find solutions, one can use different strategies. One of the oldest and most disastrous is to truncate the non-local operator up to some finite order $n_{\\rm max}$. In doing so, one introduces instabilities corresponding to the Ostrogradski modes of a higher-derivative theory which has little or nothing to do with the starting theory \\cite{MoZ,cuta2}. Exact procedures such as the root method exist for linear equations of motion \\cite{PU,EW1,BK1,Ver09} but they have the disadvantage of being applicable only to non-interacting systems. Another possibility is to choose a profile $\\phi(x)$ and apply the operator \\Eq{opse}, but the series does not converge in general \\cite{cuta2}. This does not necessarily mean that the chosen profile is not a solution of the equations of motion. Rather, the series representation \\Eq{opse} is ill defined for a portion of the space of solutions. Even in the case where an exact solution is found, however, this may be non-unique for a given set of initial conditions \\cite{EW1,MoZ,Tom15}.\n\n\n\\subsection{Localized system}\\label{ized}\n\nThe diffusion-equation method \\cite{roll,cuta3,MuNu3,cuta7}, some elements of which can be found already in \\cite{PU} (section III.B.3), bypasses the above-mentioned issues by converting the Cauchy problem into a boundary problem.\\footnote{A similar attempt was made in \\cite{CCG}.} All the non-locality is transferred into a fictitious extra direction $r$ and infinite initial conditions for the scalar field $\\phi(t,{\\bf x})$ are converted to a \\emph{finite} number of field conditions on the $r=\\beta r_*$ slice along the extra direction, where $\\beta$ is a positive dimensionless constant (i.e., it is the physical value of $r$ measured in $r_*$ units). In other words, the rectangle $[0,\\beta r_*]\\times [t_{\\rm i},t_{\\rm f}]$ can be spanned either along the $t$ (time) direction, as done when trying to solve the problem of initial conditions by brute force at $t=t_{\\rm i}$, or along the $r$ direction, as done in the boundary-value problem with the diffusion method; see Fig.\\ \\ref{fig1} here and Fig.\\ 1 of \\cite{MuNu3}.\n\\begin{figure}\n\\centering\n\\includegraphics[width=8.2cm]{fig1}\n\\caption{\\label{fig1} Diffusion-equation method describing the dynamics of the scalar field theory \\Eq{fac} as the dynamics of the localized system \\Eq{act} on the slice $r=\\beta r_*$.}\n\\end{figure}\n\nWe will also be able to find the exact number of conditions required and to compare these results with those from other methods \\cite{Tom15}.\n\n\\subsubsection{Lagrangian formalism}\n\nThe main idea is to exploit the fact that the exponential operator in \\Eq{fac} acts as a translation operator if $\\phi$ obeys a diffusion equation. 
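\n\nAs a side illustration (not part of the derivation that follows), this translation property is easy to check numerically. The minimal Python sketch below works in one Euclidean dimension, uses the Gaussian heat kernel as initial data and implements $e^{s\\partial_x^2}$ as multiplication by $e^{-sk^2}$ in Fourier space; the grid parameters and the values of $r$ and $s$ are arbitrary choices made only for this example.\n\\begin{verbatim}\nimport numpy as np\n\n# If Phi obeys the heat equation d_r Phi = d_x^2 Phi, then\n# exp(s d_x^2) Phi(r, x) should equal Phi(r + s, x).\nx = np.linspace(-20.0, 20.0, 2048)\ndx = x[1] - x[0]\nk = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)\n\ndef heat_kernel(r, x):\n    # Solution of the heat equation with Phi(0, x) = delta(x).\n    return np.exp(-x**2 \/ (4.0 * r)) \/ np.sqrt(4.0 * np.pi * r)\n\nr, s = 0.5, 0.3\nPhi_r = heat_kernel(r, x)\n\n# exp(s d_x^2) = multiplication by exp(-s k^2) in Fourier space.\nPhi_shift = np.fft.ifft(np.exp(-s * k**2) * np.fft.fft(Phi_r)).real\n\n# Should be tiny (machine-precision level):\nprint(np.max(np.abs(Phi_shift - heat_kernel(r + s, x))))\n\\end{verbatim}\n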
Using this property, we can convert the non-local system into a \\emph{localized} one where the diffusion equation is part of the dynamical equations, the field is evaluated at different points in an extra direction (along which the system is thus non-local), and only second-order derivative operators appear in the action and in the equations of motion. In this way, one can make sense of the Cauchy problem in the localized system and also in the non-local one, after establishing the conditions for which the two systems are equivalent \\cite{cuta3}. This construction goes through some initial guesswork about the form of the correct localized system, especially regarding the integration domain of certain parts of the action, but this is not difficult in general. Both the scalar case \\Eq{fac} and the gravitational action \\Eq{nlffg} are simple enough to create no big trouble.\n\nLet us therefore forget temporarily about the non-local system \\Eq{fac} and consider the $(D+1)$-dimensional local system \n\\begin{eqnarray}\n\\mathcal{S}[\\Phi,\\chi]&=&\\int d^D x\\,d r \\left(\\mathcal{L}_{\\Phi}+\\mathcal{L}_{\\chi}\\right)\\,,\\label{act}\\\\\n\\mathcal{L}_{\\Phi}&=&\\frac12\\Phi(r,x)\\Box \\Phi(r-r_*,x)-V[\\Phi(r,x)]\\,,\\label{locPh2}\\\\\n\\mathcal{L}_{\\chi}&=&\\frac12 \\int_0^{r_*} d q\\,\\chi(r-q,x)(\\partial_{r'}-\\Box)\\Phi(r',x)\\,.\\label{locch2}\n\\end{eqnarray}\nwhere $r$ is an extra direction, $r_*$ is a specific value of $r$, $\\Phi$ and $\\chi$ are $(D+1)$-dimensional scalar fields and\n\\begin{equation}\\label{rprime}\nr'=r+q-r_*\\,,\n\\end{equation}\nhence $\\partial_{r'}=\\partial_q$. The action \\Eq{act} is second-order (hence local) in spacetime derivatives and non-local in $r$ (because the fields take different arguments). The integration range of $r$ in \\Eq{act} is arbitrary; it can be set equal to $r\\in [0,+\\infty)$ or any other interval containing $[0,\\beta r_*]$ (the slices $r=0$ and $r=\\beta r_*$ play a special role: the former is the value at which the initial condition in $r$ of the diffusion equation is specified, while the latter will be the physical value of the parameter $r$, for a given $\\beta$).\n\nThe equations of motion are calculated from the infinitesimal variations of the action, using the functional derivative $\\delta f(r,x)\/\\delta f(\\bar r, \\bar x)=\\delta(r-\\bar r)\\delta^{(D)}(x-\\bar x)$ for a field $f$. Since $\\bar x$ and $\\bar r$ are arbitrary, one can always assume the support of these delta distributions to lie within the integration domains in \\Eq{act}, so that integrations in $x$, $r$ and $q$ are removed and the fields are evaluated at $x=\\bar x$ and $r=\\bar r$. Bars will be removed in the final equations of motion.\n\nThe first variation we calculate is with respect to $\\chi$. To keep notation light, let us ignore the trivially local $x$-dependence from now on. Doing it step by step,\n\\begin{eqnarray}\n0 &=& \\frac{\\delta\\mathcal{S}[\\Phi,\\chi]}{\\delta\\chi(\\bar r,\\bar x)}=\\frac12 \\int d r \\int_0^{r_*}d q\\,\\delta(r-q-\\bar r)(\\partial_{r'}-\\Box)\\Phi(r')\\nonumber\\\\\n&=& \\frac12\\int_{\\bar r}^{r_*+\\bar r} d r (\\partial_{r'}-\\Box)\\Phi(r')\\Big|_{r'=2r-\\bar r-r_*}.\\label{inter2}\n\\end{eqnarray}\nThe integration of the Dirac distribution in $q$ gives the prescription $00$. 
In the second case, the range in \\Eq{inter2} is reduced to $[\\bar r, r_*]$, since $\\bar r0$ is a real constant, and\n\\begin{equation}\\label{rstcon20}\n\\chi(r,x) = \\Box\\Phi(r,x)\\,.\n\\end{equation}\nIn fact, in this case $\\chi$ obeys the same diffusion equation \\Eq{eomchi2} as $\\Phi$, so that the two contributions in \\Eq{inter22} must vanish separately, thus yielding the two equations of motion (restoring $x$-dependence)\n\\begin{eqnarray}\n\\hspace{-.8cm} 0&=&\\frac12[\\Box\\Phi(r-r_*,x)+\\chi(r-r_*,x)]+\\frac12[\\Box\\Phi(r+r_*)-\\chi(r+r_*)]-V'[\\Phi(r,x)]\\,,\\label{eomP12}\\\\\n\\hspace{-.8cm} 0&=&(\\partial_r-\\Box)\\chi(r,x)\\,.\\label{eomP22}\n\\end{eqnarray}\nThen, when evaluating \\Eq{eomP12} at $r=\\beta r_*$ the first term yields $(1\/2)2\\Boxe^{-r_*\\Box}\\Phi(\\beta r_*,x)=\\Boxe^{-r_*\\Box}\\phi(x)$, the second term vanishes and \\Eq{eomP12} reproduces \\Eq{tpheom} exactly. See Fig.\\ \\ref{fig2} for a toy example. Note that imposing \\Eq{rstcon1} only at $r=\\beta r_*$,\n\\begin{equation}\\label{rstcon2}\n\\chi(\\beta r_*,x) = \\Box\\Phi(\\beta r_*,x)\\,,\n\\end{equation}\nor at any given $r=\\tilde r$ instead of for all $r$ would again yield \\Eq{rstcon1}, provided $\\chi$ obeyed \\Eq{eomP22}. In fact, parametrizing with $\\sigma=r-\\tilde r$, $\\chi(r,x)= \\chi(\\sigma+ \\tilde r,x) = e^{\\sigma\\Box}\\chi(\\tilde r,x)= e^{\\sigma\\Box}\\Box\\Phi(\\tilde r,x)=\\Boxe^{\\sigma\\Box}\\Phi(\\tilde r,x)= \\Box\\Phi(\\sigma+\\tilde r,x)=\\Box\\Phi(r,x)$.\n\nThe introduction of the parameter $\\beta$ in \\Eq{rstcon1} reflects the fact that the choice of the slice where the $(D+1)$-dimensional scalar field coincides with the $D$-dimensional field does not affect the final result. For instance, one could have chosen $\\beta=0$ and identified $\\Phi(0,x)=\\phi(x)$ (the ``initial'' condition in $r$ of the diffusion equation), $\\chi(0,x)=\\Box\\phi(x)$. However, in section \\ref{solu} we will argue that equation \\Eq{rstcon1} is far better suited than $\\Phi(0,x)=\\phi(x)$ for the task of finding dynamical solutions. This is why we introduced a strictly positive $\\beta$ in the first place.\n\\begin{figure}\n\\centering\n\\includegraphics[width=8.cm]{fig2a}\n\\includegraphics[width=8.cm]{fig2b}\n\\caption{\\label{fig2} In $D=1$ flat Euclidean space, the solution of the diffusion equation \\Eq{eomchi2} with initial condition $\\Phi(0,x)=\\delta(x)$ is $\\Phi(r,x)=\\exp[-x^2\/(4r)]\/\\sqrt{4\\pi r}$. This solution is represented in the $(r,x)$ plane as an orange surface (concavity upwards) in the left plot, together with $\\chi(r,x)=\\partial_x^2\\Phi(r,x)$ (blue surface, concavity downwards). The section of these surfaces at $r=\\beta r_*=0.5$ (black thick line) are shown in the right plot.}\n\\end{figure}\n\nTo summarize the logic here, given the non-local system \\Eq{fac} one can always write down the system \\Eq{act}--\\Eq{locch2} localizing it. This localized system is not in one-to-one correspondence with the non-local system but it always admits, among its solutions, the solutions of the non-local system. These solutions are defined by the boundary condition \\Eq{rstcon1} together with the local condition \\Eq{rstcon2}. The sub-set of solutions of the localized system obeying these conditions are solutions to the original non-local one, since the above conditions are valid on shell (i.e., applying \\Eq{eomchi2} and \\Eq{eomP22} to \\Eq{eomP12}). 
In other words, \\Eq{rstcon1} and \\Eq{rstcon2} define the sub-set of solutions of the localized system that recover the equations of motion and solutions of the original non-local system. Recalling that the localized system \\Eq{act} must be reducible to the non-local one \\Eq{fac} only at a certain slice $r=\\beta r_*$ in the extra direction, it is clear that we do not need to study the most general $(D+1)$-dimensional evolution of the localized dynamics, which is obtained by dropping \\Eq{rstcon2}. \n\nNotice that it is not possible, while keeping the diffusing structure unaltered, to change the status of \\Eq{rstcon2} from a condition imposed by hand to a consequence of the dynamics. For instance, one could try to add an extra term $\\mathcal{L}_\\lambda=\\lambda(r,x)[\\Box\\Phi(r,x)-\\chi(r,x)]$ to the action \\Eq{act}, which would give \\Eq{rstcon2} when varying $\\mathcal{S}[\\Phi,\\chi,\\lambda]$ with respect to the Lagrange multiplier $\\lambda$. However, equations \\Eq{inter2} and \\Eq{inter22} would become, respectively, $(\\dots)-\\lambda=0$ and $(\\dots)+\\Box\\lambda=0$, where the extra terms would vanish separately if, again, we imposed by hand\n\\begin{equation}\\label{lamb}\n\\lambda(r,x)=0\\,.\n\\end{equation}\nThis condition, replacing \\Eq{rstcon2}, amounts to forbid source terms in the diffusion equation \\Eq{eomchi2}. Indeed, the infinitely many degrees of freedom of the original non-local system are encoded in equation \\Eq{rstcon2} or in the alternative equation \\Eq{lamb}, both of which are a condition on the \\emph{infinitely many} $r$-values of the fields $\\Phi$ and $\\chi$. Thus, demanding to get a fully self-determined diffusing localized system equivalent to the non-local one is not only impossible,\\footnote{Of course, this claim does not apply to an arbitrary localized system not diffusing with the standard sourceless diffusion equation of Brownian motion.} but also meaningless, since the equivalence between the localized and the non-local system on one hand and the statement of the initial-value problem for the non-local system on the other hand must both go through the setting of an infinite number of conditions external to the dynamics.\n\nFor future use, we highlight three important features of the localization procedure which will apply, in their essence, also to the non-local gravity action \\Eq{nlffg}.\n\\begin{enumerate}\n\\item By the diffusion-equation method, one does not establish a one-to-one correspondence between the localized system \\Eq{act} and the non-local system \\Eq{fac}. Rather, we showed that there exist field conditions on the $r=\\beta r_*$ slice such that the localized system has the same spacetime dynamics as the non-local system. This correspondence on a slice is depicted in Fig.\\ \\ref{fig1}.\n\\item To get the correct result, it was crucial to make a careful choice of the arguments in the diffusion-equation term \\Eq{locch2} and a careful treatment of the boundary terms when integrating \\Eq{locch2} by parts as in \\Eq{inpa2}. Without such boundary terms, \\Eq{eomP12} would have been unable to reproduce \\Eq{tpheom} on the $r=\\beta r_*$ slice with the correct numerical factors.\n\\item The localized system is second-order in spacetime derivatives, for both $\\Phi$ and $\\chi$. 
Therefore, the Cauchy problem for this system, when restricted to spacetime directions $x^\\mu$, is solved by \\emph{four initial conditions} at some $t=t_{\\rm i}$:\n\\begin{equation}\n\\Phi(r,t_{\\rm i},{\\bf x}),\\,\\dot\\Phi(r,t_{\\rm i},{\\bf x}),\\,\\chi(r,t_{\\rm i},{\\bf x}),\\,\\dot\\chi(r,t_{\\rm i},{\\bf x})\\,.\n\\end{equation}\nIn particular, these conditions are valid at $r=\\beta r_*$, so that the Cauchy problem of the non-local system \\Eq{fac} is also solved by four initial conditions, corresponding (via \\Eq{rstcon2}) to $\\phi(t_{\\rm i},{\\bf x})$ and its first three time derivatives. We will find a similar result in non-local gravity as well, with four initial conditions for the metric. In general, given a non-local action with exponential non-locality for a tensorial field $\\phi_{\\mu\\nu\\cdots}$ representing $n$ physical degrees of freedom, the diffusion-equation method relies on a second-order localized system for a field $\\Phi_{\\mu\\nu\\cdots}$ and an auxiliary field $\\chi^{\\mu\\nu\\cdots}$ with the same symmetry properties as $\\phi$, thus leading to $2n$ initial conditions.\n\\end{enumerate}\n\n\\subsubsection{Ghost mode}\n\nIn this subsection, we analyze a hidden ghost mode which, however, does not influence the non-local dynamics. To understand this aspect, we will employ a reformulation of the localized dynamics (equation \\Eq{tildeL}), physically equivalent to \\Eq{act}--\\Eq{locch2}, which is convenient for studying the degrees of freedom of the theory but is unsuitable for the practical treatment (Cauchy problem, solutions, and so on) of the dynamics, due to problems we will comment on in due course.\n\nIt is very well known that the kinetic term in \\Eq{fac} can be symmetrized after integrating by parts, so that the Lagrangian becomes\n\\begin{equation}\\label{facbis}\n\\mathcal{L}_\\phi =\\frac12(e^{-\\frac12r_*\\Box}\\phi)\\Box(e^{-\\frac12r_*\\Box}\\phi)-V(\\phi).\n\\end{equation}\nFrom here, one can make the field redefinition $\\tilde\\phi=e^{-\\frac12r_*\\Box}\\phi$ so often used in $p$-adic and string field theory. We will do something similar by considering the localized version of \\Eq{facbis}, which is given by \\Eq{act} with ($x$-dependence omitted everywhere)\n\\begin{eqnarray}\n\\tilde\\mathcal{L}_{\\Phi}=\\frac12\\Phi\\left(r-\\textstyle{\\frac12} r_*\\right)\\Box \\Phi\\left(r-\\textstyle{\\frac12} r_*\\right)-V[\\Phi(r,x)]\\label{locPh2bis}\n\\end{eqnarray}\nreplacing \\Eq{locPh2}. 
We note that the integral in \\Eq{locch2} is pleonastic for the Laplace--Beltrami term, since both $\\chi$ and $\\Phi$ obey the diffusion equation:\n\\begin{eqnarray}\n\\intd^Dx\\int_0^{r_*} d q\\,\\chi(r-q)\\Box\\Phi(r')&=&\\intd^Dx\\int_0^{r_*} d q\\,e^{(\\frac12r_*-q)\\Box}\\chi\\left(r-\\textstyle{\\frac12} r_*\\right)e^{(q-\\frac12r_*)\\Box}\\Box\\Phi\\left(r-\\textstyle{\\frac12} r_*\\right)\\nonumber\\\\\n&=&\\intd^Dx\\int_0^{r_*} d q\\,\\chi\\left(r-\\textstyle{\\frac12} r_*\\right)\\Box\\Phi\\left(r-\\textstyle{\\frac12} r_*\\right)\\nonumber\\\\\n&=&r_*\\intd^Dx\\,\\chi\\left(r-\\textstyle{\\frac12} r_*\\right)\\Box\\Phi\\left(r-\\textstyle{\\frac12} r_*\\right)\\,.\n\\end{eqnarray}\nHowever, replacing \\Eq{locch2} with a mixed term\n\\begin{equation}\\nonumber\n\\tilde\\mathcal{L}_{\\chi}=-\\frac{r_*}{2}\\chi\\left(r-\\textstyle{\\frac12} r_*\\right)\\Box\\Phi\\left(r-\\textstyle{\\frac12} r_*\\right)+\\frac12 \\int_0^{r_*} d q\\,\\chi(r-q)\\partial_{r'}\\Phi(r')\n\\end{equation}\nwould not give the correct equations of motion, as we will see shortly. The reason is that $\\tilde\\mathcal{L}_{\\chi}$ is originated by an on-shell condition, a trick that invalidates the variational principle. To find the correct Lagrangian, we generalize this term with a generic functional of the fields, $\\Phi\\rightarrow f[\\Phi,\\chi]$. A last step we take (not necessary, but useful to simplify the physical interpretation) is to consider the field redefinitions\n\\begin{equation}\n\\varphi(r,x) :=\\Phi\\left(r-\\textstyle{\\frac12} r_*,x\\right)-\\frac{r_*}{2}\\chi\\left(r-\\textstyle{\\frac12} r_*,x\\right),\\qquad \\psi(r,x):=\\frac{r_*}{2}\\chi\\left(r-\\textstyle{\\frac12} r_*,x\\right)\\,,\n\\end{equation}\nso that\n\\begin{equation}\\label{Pvfpsi}\n\\Phi\\left(r-\\textstyle{\\frac12} r_*\\right)=\\varphi(r)+\\psi(r)\\,,\n\\end{equation}\nand the total Lagrangian on Minkowski spacetime is\n\\begin{eqnarray}\n\\tilde\\mathcal{L}&=&-\\frac12\\partial_\\mu\\Phi\\left(r-\\textstyle{\\frac12} r_*\\right)\\partial^\\mu\\Phi\\left(r-\\textstyle{\\frac12} r_*\\right)-V[\\Phi(r)]+\\frac{r_*}{2}\\partial_\\mu\\chi\\left(r-\\textstyle{\\frac12} r_*\\right)\\partial^\\mu\\Phi\\left(r-\\textstyle{\\frac12} r_*\\right)\\nonumber\\\\\n&&+\\frac12 \\int_0^{r_*} d q\\,\\chi(r-q)\\partial_{r'}f(r')\\nonumber\\\\\n&=&-\\frac12\\partial_\\mu\\varphi(r)\\partial^\\mu\\varphi(r)-V\\left[\\varphi\\left(r+\\textstyle{\\frac12} r_*\\right)+\\psi\\left(r+\\textstyle{\\frac12} r_*\\right)\\right]+\\frac12\\partial_\\mu\\psi(r)\\partial^\\mu\\psi(r)+\\frac{1}{r_*}\\,I(r)\\,,\\nonumber\\\\ \\label{tildeL}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\\label{utile}\nI(r) &:=& \\int_{0}^{r_{*}}d q\\,\\psi\\left(r-q+\\textstyle{\\frac12} r_*\\right)\\partial_q f\\left(r+q-\\textstyle{\\frac12} r_*\\right)\\nonumber\\\\\n&=& \\psi\\left(r-\\textstyle{\\frac12} r_*\\right)f\\left(r+\\textstyle{\\frac12} r_*\\right)-\\psi\\left(r+\\textstyle{\\frac12} r_*\\right)f\\left(r-\\textstyle{\\frac12} r_*\\right)\\nonumber\\\\\n&&-\\int_0^{r_*}d q\\,f\\left(r+q-\\textstyle{\\frac12} r_*\\right)\\partial_q\\psi\\left(r-q+\\textstyle{\\frac12} r_*\\right).\n\\end{eqnarray}\nThe function $f$ is determined in appendix \\ref{appI} by requiring the recovery of the non-local dynamics on the $r=\\beta r_*$ slice.\n\nObserving \\Eq{tildeL}, one sees that the canonical scalar $\\varphi$ propagates with a kinetic term of the correct sign, while the canonical scalar $\\psi$ (hence $\\chi$) is a ghost. 
This detail went unnoticed in \\cite{cuta3}.\n\nThere are two issues affecting \\Eq{tildeL} and described in appendix \\ref{appI}, but we should not lose sight of the reason why we introduced this Lagrangian. One may choose either \\Eq{act}--\\Eq{locch2} or \\Eq{tildeL} depending on what one wants to study. For the analysis of the Cauchy problem and of the dynamical solutions, the action \\Eq{act}--\\Eq{locch2} is to be preferred, and in fact we will analyze non-local gravity under the same scheme. On the other hand, for the characterization (ghost-like or not) of the localized degrees of freedom the Lagrangian \\Eq{tildeL}, or the Hamiltonian \\Eq{hamil} we will derive from it in the next subsection, is more indicated. The counting of the localized degrees of freedom (section \\ref{scala3}) can be performed indifferently in the original system \\Eq{act}--\\Eq{locch2}, in the Lagrangian \\Eq{tildeL}, or in the Hamiltonian formalism derived from \\Eq{tildeL}.\n\n\n\\subsubsection{Hamiltonian formalism}\n\nTo count the number of degrees of freedom in a non-local theory, we must first count the number of localized degrees of freedom in the associated localized $(D+1)$-dimensional theory. In the case of the scalar field, this information is already available in Lagrangian formalism, but for completeness we can obtain the same result from Hamiltonian formalism. The example presented in this subsection will illustrate the general method and its caveats. Its application in the localization of the scalar field was sketched in \\cite{cuta3}, but here we will fill several gaps in that discussion. The actual counting of localized degrees of freedom will be done in section \\ref{scala3}.\n\nAlthough we do not write the non-local system \\Eq{fac} in Hamiltonian formalism, we can reach a lesser but still instructive goal, namely, the formulation of the Hamiltonian approach for the associated localized system. However, if we take the localized system \\Eq{act}--\\Eq{locch2} as a starting point we soon meet several problems, all of which stem from the non-locality with respect to the $r$ direction. Momenta acquire a rather obscure non-invertible form and one cannot write down a Hamiltonian in phase space. However, the system is not constrained. We can avoid all the trouble by acting directly on \\Eq{tildeL}. Calling $\\tilde L := \\int d^{D-1}{\\bf x}\\intd r\\,\\tilde\\mathcal{L}$ the Lagrangian, we can define the phase space and the Hamiltonian. 
The momenta are\n\\begin{equation}\n\\pi_\\varphi(r,x) := \\frac{\\delta\\tilde L}{\\delta\\dot\\varphi(r,x)}=\\dot\\varphi(r,x),\\qquad \\pi_\\psi(r,x) := \\frac{\\delta\\tilde L}{\\delta\\dot\\psi(r,x)}=-\\dot\\psi(r,x)\\,.\\label{pic}\n\\end{equation}\nNotice that, if we had calculated the momenta directly from \\Eq{locPh2} and \\Eq{locPh2bis}, we would have obtained $\\pi_\\Phi(r)=(1\/2)[\\dot\\Phi(r-r_*)+\\dot\\Phi(r+r_*)-\\int_0^{r_*}d s\\,\\dot\\chi(r-2s+r_*)]$ and $\\pi_\\chi(r) =-(1\/2)\\int_0^{r_*} d s\\,\\dot\\Phi(r+2s-r_*)$, which are not invertible locally with respect to $\\dot\\Phi(r)$ and $\\dot\\chi(r)$.\n\nThe non-vanishing equal-time Poisson brackets in terms of the spatial $(D-1)$-vectors ${\\bf x}$ are\n\\begin{eqnarray}\n\\{\\varphi(r_1,x_1),\\,\\pi_\\varphi(r_2,x_2)\\}_{t_1=t_2} &=& \\delta(r_1-r_2)\\,\\delta^{(D-1)}({\\bf x}_1-{\\bf x}_2)\\,,\\\\\n\\{\\psi(r_1,x_1),\\,\\pi_\\psi(r_2,x_2)\\}_{t_1=t_2} &=& \\delta(r_1-r_2)\\,\\delta^{(D-1)}({\\bf x}_1-{\\bf x}_2)\\,,\n\\end{eqnarray}\nwhile the Hamiltonian of the system is ($x$-dependence omitted again)\n\\begin{eqnarray}\nH &:=&\\int d^{D-1}{\\bf x} d r \\left[\\pi_\\varphi(r)\\dot\\varphi(r)+\\pi_\\psi(r)\\dot\\psi(r)\\right]-\\tilde L\\nonumber\\\\\n &=& \\int d^{D-1}{\\bf x} d r \\left\\{\\frac12\\pi_\\varphi^2(r)+\\frac12\\nabla_i\\varphi(r)\\nabla^i\\varphi(r)-\\frac12\\pi_\\psi^2(r)-\\frac12\\nabla_i\\psi(r)\\nabla^i\\psi(r)\\right.\\nonumber\\\\\n\t&&\\qquad\\qquad\\qquad\\left.+V[\\Phi(r)]-\\frac{1}{r_*} I[\\psi(r),\\Phi(r)]\\right\\}\\,,\\label{hamil}\n\\end{eqnarray}\nwhere it is understood that $\\Phi(r)=\\varphi(r+r_*\/2)+\\psi(r+r_*\/2)$. Since $\\Phi$ is shifted in $r$, $H$ is non-local in $r$ due to the terms in the last line of \\Eq{hamil}. Nevertheless, the Hamiltonian is written solely in terms of phase-space variables and the phase-space fields are completely local in spacetime coordinates.\n\nThe evolution equations for the fields $\\varphi$ and $\\psi$ trivially gives the momenta, $\\dot\\varphi(r)=\\{\\varphi(r),H\\}=\\delta H\/\\delta\\pi_\\varphi(r)=\\pi_\\varphi(r)$, $\\dot\\psi(r)=\\{\\psi(r),H\\}=\\delta H\/\\delta\\pi_\\psi(r)=-\\pi_\\psi(r)$, while the Hamiltonian evolution of the momenta give the localized equations of motion \\Eq{eomH1} and \\Eq{eomH2}:\n\\begin{equation}\n\\dot\\pi_\\varphi(\\bar r) = \\{\\pi_\\varphi(\\bar r),\\,H\\}=-\\frac{\\delta H}{\\delta\\varphi(\\bar r)}\\,,\\qquad \\dot\\pi_\\psi(\\bar r) = \\{\\pi_\\psi(\\bar r),\\,H\\}= -\\frac{\\delta H}{\\delta\\psi(\\bar r)}\\,.\\label{eomH22}\n\\end{equation}\n\n\n\\subsection{Initial conditions and degrees of freedom}\\label{scala3}\n\nThe question about how many initial conditions we should specify for the non-local scalar system is related to another one: How many\ndegrees of freedom are hidden in equation \\Eq{tpheom}? In higher-derivative theories, the presence of many degrees of freedom (Ostrogradski modes) is well known. For a system with $n$ derivatives, the Cauchy problem is uniquely solved by $n$ initial conditions. However, there is an uncrossable divide between higher-derivative and non-local theories, and one cannot conclude that non-local theories need $n=\\infty$ initial conditions; conversely, truncating a non-local theory to finite order leads to a physically different model \\cite{EW1,cuta2}.\n\nTo understand the problem, we review its root and also some confusion surrounding it. 
First of all, there is agreement in the literature about the fact that the \\emph{free} system with constant, linear or quadratic $V(\\phi)$\n has \\emph{two} initial conditions. In the absence of interactions, the Cauchy problem associated with \\Eq{tpheom} is specified only by $\\phi(t_{\\rm i},{\\bf x})$ and $\\dot\\phi(t_{\\rm i},{\\bf x})$. The entire functional $\\exp(-r_*\\Box)$ introduces no new poles in the spectrum of $\\phi$ and the system is equivalent to the local one with $r_*=0$, as is obvious from the field redefinition $\\tilde\\phi=e^{-r_*\\Box\/2}\\phi$.\\footnote{Another method, completely equivalent, is to work in Laplace momentum space.} This was first recognized as early as 1950 in the seminal paper by Pais and Uhlenbeck \\cite{PU} (section III.B.3) and reiterated more recently, sometimes using very different terminology and techniques, in other works \\cite{EW1,Sim90,BK1,cuta3}.\n\nMore contrived is the case with interactions. The reader unfamiliar with non-local theories may wonder why interactions should make any difference when counting the number of initial conditions. The reason is that, in this case, there is no field redefinition absorbing the non-local operator of the kinetic term. Any other rewriting will not work, either. For instance, a non-local kinetic term can always be expressed as a convolution with a kernel \\cite{PU}. Consider the scalar-field Lagrangian $\\mathcal{L}_\\phi=\\phi f(\\Box)\\,\\phi-V(\\phi)$ with generic form factor $f(\\Box)$. In momentum space, calling $F$ the Fourier transform of $f$,\n\\begin{eqnarray}\n\\phi(x) f(\\Box)\\,\\phi(x) &=& \\phi(x)\\intd^D k\\,f(-k^2)\\,\\delta(k^\\mu-i\\nabla^\\mu)\\,\\phi(x)\\nonumber\\\\\n\t\t\t\t\t\t\t\t\t\t\t &=& \\phi(x)\\intd^D k\\,\\left[\\int\\frac{d^D z}{(2\\pi)^D}\\,F(z)\\,e^{-i z^\\mu k_\\mu}\\right]\\,\\delta(k^\\mu-i\\nabla^\\mu)\\,\\phi(x)\\nonumber\\\\\n\t\t\t\t\t\t\t\t\t\t\t &=& \\phi(x)\\int\\frac{d^D z}{(2\\pi)^D}\\,F(z)\\,e^{z^\\mu\\nabla_\\mu}\\phi(x)\\nonumber\\\\\n\t\t\t\t\t\t\t\t\t\t\t &=& \\phi(x)\\int\\frac{d^D z}{(2\\pi)^D}\\,F(z)\\,\\phi(x+z)\\nonumber\\\\\n\t\t\t\t\t\t\t\t\t\t\t &\\stackrel{y:=z+x}{=}&\\phi(x)\\int\\frac{d^D y}{(2\\pi)^D}\\,F(y-x)\\,\\phi(y)\\,.\\label{ker}\n\\end{eqnarray}\nSpecifying the form factor $f(-k^2)$ determines the spectrum of the field. In general, the poles of the propagator correspond to the zeros of $f(-k^2)$ and to the poles of $F(z)$. This correspondence is straightforward for a massless dispersion relation $f(-k^2)=-k^{2n}$, where $F(z)\\propto\\delta^{(2n)}(z)$ and $(2n)$ denotes the derivative of order $2n$ of the delta. The derivative order of the delta is the order of the pole. Polynomial dispersion relations have a similar structure, e.g., $f(-k^2)=-k^2-a k^{2n}$ gives $F(z)\\propto \\delta^{(2)}(z)+a\\delta^{(2n)}(z)$. For $n=2$, this dispersion relation corresponds to one massive and one massless scalar mode, for a total of two double poles.\\footnote{In fact, $f^{-1}(-k^2)=-[k^2(1+ak^2)]^{-1}=-k^{-2}+(a^{-1}+k^2)^{-1}$. The second mode is a ghost (positive residue).} Furthermore, when $f(\\Box)$ is non-local the field spectrum depends on whether the form factor is entire or not. In the case of \\Eq{tpheom}, the propagator \n\\begin{equation}\\label{prof}\nf^{-1}(-k^2)=-\\frac{e^{-r_*k^2}}{k^2}\n\\end{equation}\nhas a massless double pole, while $F(z)\\propto (2r_*+z^2)\\exp[z^2\/(4r_*)]$ has a double massive pole. 
In the last two cases, the order and nature (massless or massive) of the particle poles and the poles of $F$ is less transparent, although their counting agrees.\n\nFrom this exercise, it should become clear that hiding infinitely many derivatives into integrals with non-trivial kernels such as \\Eq{ker}, or to transfer part of these derivatives onto the scalar potential and then converting them into integral operators, does not help in solving the Cauchy problem, since the two formulations are equivalent (on the space of real analytic functions \\cite{MoZ}). In \\cite{CMN3}, we complement this no-go result with its way out: If the kernel $F$ can be found by solving some finite-order differential equation extra with respect to the dynamical equations, then its contribution to the Cauchy problem becomes under full control.\n\nThe novelty brought in by the diffusion-equation method is that it allows one to go beyond the free theory and count the extra number of initial conditions. Surprisingly, in the scalar-field case this number is two, which sums to the two initial conditions of the free theory for a total of four. This result goes against the belief, implicitly endorsed in some literature, that the information from the free theory is complete. In particular, when one says that the number of initial conditions for solving the Cauchy problem of the theory \\Eq{fac} is two, one should specify that this is true only for the free, perturbative case.\n\nWe reach this conclusion in three steps: (i) counting the number of field degrees of freedom of the localized theory; (ii) specifying the number of initial conditions (in time) for each localized field; (iii) restricting our attention to the slice $r=\\beta r_*$ where the non-local dynamics is recovered, and proceeding with the counting thereon. In Lagrangian formalism, we saw that there are two independent localized fields, either the pair $\\Phi$ and $\\chi$ or the pair $\\varphi$ and $\\psi$. Consistently, the same result is obtained in Hamiltonian formalism, where there are two non-vanishing independent momenta $\\pi_\\varphi$ and $\\pi_\\psi$. Since the dynamics is second-order in spacetime derivatives, there are two initial conditions per field, for a total of four.\n\\begin{equation}\\label{dof1}\n\\parbox[c]{13cm}{{\\bf Number of degrees of freedom: scalar field.} \\emph{The localized real scalar field theory \\Eq{act}--\\Eq{locch2} in $D+1$ dimensions has two scalar degrees of freedom $\\Phi$ and $\\chi$. On the $r$-slice where the system is equivalent to the non-local real scalar field theory \\Eq{fac} in $D$ dimensions, the degree of freedom $\\chi$ is no longer independent. Consequently, the non-local theory has \\underline{one} non-perturbative scalar degree of freedom $\\phi$.}}\n\\end{equation}\n\\begin{equation}\\label{dofic}\n\\parbox[c]{13cm}{{\\bf Number of initial conditions: scalar field.} \\emph{The Cauchy problem on spacetime slices of the localized real scalar field theory \\Eq{act}--\\Eq{locch2} in $D+1$ dimensions is specified by four initial conditions $\\Phi(r,t_{\\rm i},{\\bf x})$, $\\dot\\Phi(r,t_{\\rm i},{\\bf x})$, $\\chi(r,t_{\\rm i},{\\bf x})$, $\\dot\\chi(r,t_{\\rm i},{\\bf x})$. 
As a consequence, the Cauchy problem of the non-local non-perturbative real scalar field theory \\Eq{fac} in $D$ dimensions is specified by \\underline{four} initial conditions $\\phi(t_{\\rm i},{\\bf x})$, $\\dot\\phi(t_{\\rm i},{\\bf x})$, $\\ddot\\phi(t_{\\rm i},{\\bf x})$, $\\dddot\\phi(t_{\\rm i},{\\bf x})$.}}\n\\end{equation}\n\nThe nature of the new degree of freedom $\\chi$ is quite peculiar. As we saw above with a diagonalization trick (used, for instance, also in \\cite{DeWo2}), this field is a ghost and, in fact, the Hamiltonian \\Eq{hamil} is unbounded from below.\n From the point of view of the $(D+1)$-dimensional localized system \\Eq{act}--\\Eq{locch2}, $\\chi$ arises as a Lagrange multiplier introduced to enforce the diffusion equation of $\\Phi$; $\\chi$ itself does not appear in its own equation of motion \\Eq{eomchi2}. Its $(D+1)$-dimensional dynamics, given by the equation of motion of $\\Phi$, is non-trivial (in Hamiltonian formalism, the momentum $\\pi_\\chi\\propto \\pi_\\psi$ does not vanish) but it only amounts to diffusion, equation \\Eq{eomP22}. Eventually, it turned up that it is associated with $\\Phi$ by the second-order derivative relation \\Eq{rstcon2}. From the point of view of the $D$-dimensional non-local system, $\\chi$ disappears because its diffusion is frozen at a given slice, and the dynamics is written solely in terms of $\\phi$, its derivatives and its potential. At this point, there is only one degree of freedom whose \\emph{perturbative classical} propagator \\Eq{prof} describes a non-ghost massive scalar mode. The potentially dangerous ghost mode in the $(D+1)$-dimensional system turns out to be non-dynamical in $D$-dimensions and in the \\emph{free} theory. \n\nIn the interacting non-local theory, $\\chi$ does play a part in the dynamics, but in the form of the potential for $\\phi$. Combined with equation \\Eq{tpheom}, the local condition \\Eq{rstcon2} explains in part the finite proliferation of degrees of freedom in the interacting case. Since \\Eq{rstcon2} implies $\\chi(r,x)=\\Box\\Phi(r,x)$ for all $r$, then from \\Eq{tpheom} one has\n\\begin{equation}\\label{usefll}\n\\chi[(\\beta-1)r_*,x]=\\Box\\Phi[(\\beta-1)r_*,x]=\\Boxe^{-r_*\\Box}\\Phi(\\beta r_*,x)=V'[\\Phi(\\beta r_*,x)]=V'[\\phi(x)]\\,.\n\\end{equation}\nIf $V\\propto\\phi^2$, then $\\chi[(\\beta-1)r_*,x]\\propto \\Phi(\\beta r_*,x)=\\phi(x)$ and there is no extra degree of freedom with respect to the $V=0$ case. For a cubic or higher-order polynomial, $\\chi[(\\beta-1)r_*,x]$ is not linearly equivalent to $\\phi$. Non-linearities can generate new degrees of freedom (a typical example is $f(R)$ gravity, which contains a hidden scalar mode apart from the graviton) but not in this case, since the field $\\chi$ is not dynamical on the $r=\\beta r_*$ slice where \\Eq{usefll} holds.\n\n\n\\subsection{Solutions}\\label{solu}\n\nSolutions of non-local theories can be categorized into perturbative and non-perturbative. Perturbative solutions can have two meanings, either as the solutions obtained when truncating the non-local operators to a finite order (a procedure we will not discuss here \\cite{EW1,MoZ,cuta2}) or as the solutions obtained, order by order, starting from the free theory and modeling interactions as a perturbative series \\cite{MoZ,EW1,JLM}. When all non-locality acts on interactions, the two meanings coincide. 
Non-perturbative solutions are all those solutions that cannot be reached in these ways and, in general, they constitute the great majority of all possible solutions of the system. The diffusion-equation method permits to get access precisely to these solutions with generic non-perturbative potential \\cite{roll,cuta2,cuta4,cuta5,cuta6,cuta8}. \n\nWhen introducing the condition \\Eq{rstcon1}, we commented on the fact that the identification of the localized dynamics with the non-local one could take place at any $r=\\tilde r$ slice, including at $r=\\tilde r=0$ where $\\Phi(0,x)=\\phi(x)$. However, for the sake of the construction of actual solutions this choice is not fortunate, since it corresponds to the initial condition of the heat kernel. In other words, setting the initial condition (in $r$) of the $(D+1)$-dimensional system to be the solution of the non-local system would take us back to the usual paradox with non-local dynamics, namely, that knowing all the infinite number of initial conditions (in time) $\\phi(t_{\\rm i},{\\bf x}),\\,\\dot\\phi(t_{\\rm i},{\\bf x}),\\,\\ddot\\phi(t_{\\rm i},{\\bf x}),\\,\\dots$ is tantamount to already knowing the Taylor expansion of the full solution around $t=t_{\\rm i}$. It is more logical, then, to impose \\Eq{rstcon1} (the non-local solution is the outcome of the diffusion from $r=0$ to $r=\\beta r_*$ rather than of anti-diffusion from $r=\\beta r_*$ to $r=0$) and to set the initial condition $\\Phi(0,x)$ in $r=0$ as something else. This ``something else'' can be most naturally recognized as the solution $\\phi_{\\rm loc}(x)$ of the \\emph{local} system obtained by setting $r_*=0$ in equations \\Eq{fac} and \\Eq{tpheom}:\n\\begin{equation}\\label{filoc}\n\\Phi(0,x)=\\phi_{\\rm loc}(x)\\,.\n\\end{equation}\nThen, the solution of the diffusion equation \\Eq{eomchi2} can be found in integral form in momentum space. Calling $-k^2$ the eigenvalue of the Laplace--Beltrami operator $\\Box$ and writing\n\\begin{equation}\n\\phi_{\\rm loc}(x)=\\int_{-\\infty}^{+\\infty}\\frac{d^D k}{(2\\pi)^D}\\,e^{-i k\\cdot x}\\tilde\\phi_{\\rm loc}(k)\\,,\n\\end{equation}\none has\n\\begin{equation}\n\\phi(x)=\\Phi(\\beta r_*,x)=e^{\\beta r_*\\Box}\\Phi(0,x)=\\int_{-\\infty}^{+\\infty}\\frac{d^D k}{(2\\pi)^D}\\,e^{-i k\\cdot x}e^{-\\beta r_*k^2}\\tilde\\phi_{\\rm loc}(k)\\,.\\label{soluz}\n\\end{equation}\nSince we know $\\phi_{\\rm loc}(x)$, we also know its Fourier transform $\\tilde\\phi_{\\rm loc}(k)$ and we can obtain the full non-local solution $\\phi(x)$. Examples of solutions of the scalar-field equation of motion \\Eq{tpheom} using the diffusion-equation method can be found in \\cite{cuta2} (on a Friedmann--Lema\\^{i}tre--Robertson--Walker (FLRW) cosmological background), \\cite{roll,cuta7} (Minkowski background, rolling tachyon of open string field theory), \\cite{cuta3,cuta6} (Minkowski and FLRW backgrounds, $V\\propto\\Phi^n$ and $V\\propto\\exp(\\lambda\\Phi)$), \\cite{cuta4,cuta5} (lump solutions on Minwkoski, FLRW and Euclidean backgrounds; kink solutions on Euclidean background), and \\cite{cuta8} (FLRW solutions in a scalar-tensor non-local theory). Solutions of $p$-adic models, corresponding to \\Eq{tpheom} without the $\\Box$ in the kinetic term, have been considered in \\cite{cuta4,cuta7}. In some of these cases, a diffusion equation with opposite sign of the diffusion operator has been used, in which case the representation \\Eq{soluz} may be ill-defined. 
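\n\nAs a concrete illustration of the recipe \\Eq{filoc}--\\Eq{soluz}, the following minimal Python sketch evaluates \\Eq{soluz} in one dimension by a fast Fourier transform; the Gaussian lump chosen as $\\phi_{\\rm loc}$ and the values of $\\beta$ and $r_*$ are arbitrary placeholders for this example and are not taken from the references above.\n\\begin{verbatim}\nimport numpy as np\n\n# Toy 1D version of Eq. (soluz): the non-local profile is obtained from\n# a chosen local profile by Gaussian damping in Fourier space,\n# phi(x) = Phi(beta r_*, x) = exp(beta r_* Box) phi_loc(x).\nbeta, r_star = 0.9, 1.0\nx = np.linspace(-40.0, 40.0, 4096)\ndx = x[1] - x[0]\nk = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)\n\nphi_loc = np.exp(-x**2)          # sample local profile phi_loc(x)\nphi_loc_k = np.fft.fft(phi_loc)  # its Fourier transform\n\n# Multiply by exp(-beta r_* k^2) and transform back.\nphi = np.fft.ifft(np.exp(-beta * r_star * k**2) * phi_loc_k).real\n\n# Diffusion widens the lump and lowers its peak:\nprint(phi_loc.max(), phi.max())\n\\end{verbatim}\nThe same operation can equivalently be read as a convolution of $\\phi_{\\rm loc}$ with a Gaussian heat kernel of width $\\sim\\sqrt{\\beta r_*}$.\n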
The opposite sign of the diffusion operator is not a problem, since there exists a more general integral form of the solution valid for any sign (see section 3.3 of \\cite{cuta7}).\n\nNote that, in general, convergence of the integral \\Eq{soluz} will require $\\beta>0$. Also, setting $\\beta=1$ in \\Eq{usefll} would yield $\\Box\\phi_{\\rm loc}=V'(\\phi)$, implying $\\phi_{\\rm loc}=\\phi$. To avoid this inconsistency, we exclude the value $\\beta=1$. Moreover, for any \\emph{given} potential $V(\\phi)$ and for a generic $0<\\beta<1$ the profile $\\phi(x)$ is not a solution to the non-local equation of motion, not even approximately. Therefore, what one usually finds is an approximate solution $\\phi(x)$ for a certain range of $x$. The actual value of $\\beta$ determines the limits of the $x$ range, since the profile typically depends on the combination $x^2\/(4\\beta r_*)$. For instance, an approximate solution valid at large $x$ requires $\\beta>1$. However, cases are known where the profile $\\phi(x)$ is an approximate solution for any $x$ (even small) with a very good degree of accuracy, which means that there exists a value of $\\beta$ such that the equation of motion is solved up to a maximal deviation of a few percent or less for some $x$, and with much greater accuracy everywhere else. These systems are related to (\\cite{roll,cuta4,cuta5}) or inspired by (\\cite{cuta2,cuta6}) string field theory. On the other hand, there may be special cases where $\\phi$ is an exact solution, but these in general require a specifically tailored potential. Some examples of this inverse problem are given in \\cite{cuta3}.\n\n\n\\subsection{Comparison with Tomboulis approach}\\label{deloc}\n\nAnother approach handling non-perturbative solutions was proposed by Tomboulis \\cite{Tom15} after the diffusion-equation method. Here, by a field redefinition one transfers non-local operators from the kinetic into the potential term, with a procedure analogous to that leading to \\Eq{ker}. The latter is then written as an integral kernel, as above. This type of ``delocalized'' hyperbolic partial integro-differential equation is characterized by the phenomenon, due to the smearing of the kernel in \\Eq{ker}, of ``spill-over'' (or delays) outside the standard causal cones of the local hyperbolic initial-value problem. Depending on the system, delays may be present only in the past or both in the past and in the future; a toy illustration of the kernel smearing is sketched below. 
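\n\nA minimal sketch of this smearing, in a Euclidean one-dimensional toy setting and with a Gaussian kernel standing in for the kernel of \\Eq{ker}, is the following; it is only meant to visualize how a compactly supported profile acquires support outside its original region, not to reproduce the Lorentzian delay structure discussed in \\cite{Tom15}.\n\\begin{verbatim}\nimport numpy as np\n\n# Acting with exp(r_* d_x^2) on a profile supported in |x| < 1 is a\n# convolution with a Gaussian kernel, so part of the result 'spills\n# over' outside the original support.\nr_star = 1.0\nx = np.linspace(-30.0, 30.0, 4096)\ndx = x[1] - x[0]\nk = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)\n\nphi = np.where(np.abs(x) < 1.0, 1.0, 0.0)   # box profile, support |x| < 1\nphi_smeared = np.fft.ifft(np.exp(-r_star * k**2) * np.fft.fft(phi)).real\n\n# Fraction of the smeared profile lying outside the original support:\noutside = np.sum(np.abs(phi_smeared)[np.abs(x) > 1.0]) * dx\ntotal = np.sum(np.abs(phi_smeared)) * dx\nprint(outside \/ total)   # sizeable, illustrating the spill-over\n\\end{verbatim}\n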
In the diffusion-equation approach, the value of the $D$-dimensional field $\\phi(t_{\\rm i},{\\bf x})$ and all its derivatives $\\phi^{(n)}(t_{\\rm i},{\\bf x})$ at one time instant $t=t_{\\rm i}$ is replaced by a field configuration $\\Phi(r,t,{\\bf x})$ living in $D+1$ dimensions, evaluated at a certain slice $r=\\beta r_*$ in the extra direction. In Tomboulis' approach, one specifies one or more functions rather than field values at one instant in the past, if delays occur only in the past light cone. If delays occur also in the future cone, as in systems with Lorentz-invariant interactions, then analogous specifications of functions must be done for them. For these systems, the type of non-local kernel has the same spill-over at all sides of the causal cone and the number of specifications in the future is finite and equal to the number of specifications in the past. Thus, both methods predict a \\emph{finite, even number of conditions} (initial-value or boundary-value) for non-local scalar field theories. The specific prediction of the diffusion method is \\Eq{dof1}, for any non-quadratic potential.\n\\item Consequently, because solutions are determined by picking conditions on $r$-slices in one case and past-future delay specifications in the other case, there are no implicit choices nor hidden conditions in the construction of such solutions, which are therefore unique once the explicit conditions are specified. This solves the long-standing problem of non-local theories where proving the existence of a solution by a brute-force \\emph{Ansatz} does not imply, in the absence of any localization or delocalization method, its uniqueness \\cite{EW1}.\n\\end{itemize}\n\n\n\\subsection{Generalizing to \\texorpdfstring{$\\exp H(\\Box)$}{} operators}\\label{Hge}\n\nFinally, let us comment on an extension of the above procedure to a non-locality of the form $\\exp\\Box\\to\\exp H(\\Box)$ for some function $H$. In this case, one simply replaces $\\Box$ with $H(\\Box)$ in the Lagrange-multiplier equation \\Eq{locch2}. Everything else follows suit. However, the system \\Eq{act} is local and the Cauchy problem is well defined only if $H(\\Box)$ is a polynomial in the Laplace--Beltrami operator, $H(\\Box)=\\sum_{n=1}^N a_n\\Box^n$. In this case, the number of initial conditions increases from 4 to $4n$: the value of the scalars $\\Phi$ and $\\chi$ at the initial time plus their first $2n-1$ derivatives.\n\nFor entire functions $H(\\Box)$, the ``localized'' system would be non-local. It may still be possible to localize \\Eq{act} for special cases, for instance if $H(\\Box)=\\exp\\Box$. However, in the most general case the diffusion method is insufficient to deal with these non-localities different from a pure exponential, unless an extra convolution equation is added to the system \\cite{CMN3}.\n\n\n\\section{Non-local gravity: equations of motion}\\label{eoms}\n\nConsider the gravitational action\n\\begin{equation}\n\\boxd{S_g = \\frac{1}{2\\kappa^2}\\int d^D x \\sqrt{-g}\\,\\left[R-2\\Lambda+G_{\\mu\\nu} \\, \\gamma(\\Box) \\, R^{\\mu\\nu} \\right],\\label{nlffgb}}\n\\end{equation}\nwhere $\\gamma(\\Box)$ is a completely arbitrary form factor. In this section, we determine its dynamics in two ways. First, by a brute-force calculation, eventually specializable to the form factor \\Eq{fofag}. 
Second, by recasting the system in terms of an auxiliary field.\n\n\n\\subsection{Einstein equations: pure gravity}\n\nTo compute the Einstein equations for a generic form factor $\\gamma(\\Box)$, one must expand the latter in series of the Laplace--Beltrami operator $\\Box$,\n\\begin{equation}\\label{ggen}\n\\gamma=\\sum_{n=0}^{+\\infty}c_n\\Box^n\\,,\n\\end{equation}\nwhere $c_n$ are constants, and vary with respect to the metric. We couple \\Eq{nlffg} to matter minimally. Varying the total action $S=S_g+S_{\\rm m}$ with respect to the contravariant metric $g^{\\mu\\nu}$, the matter part is dispensed with by the usual definition of energy-momentum tensor\n\\begin{equation}\\label{emt}\nT_{\\mu\\nu} :=-\\frac{2}{\\sqrt{-g}}\\frac{\\delta S_{\\rm m}}{\\delta g^{\\mu\\nu}}\\,.\n\\end{equation}\nIn general, also matter fields will be non-local, but we do not consider their details here. The variations of curvature invariants and form factors are reported in appendix \\ref{app3} and the full derivation of the final result is given in appendix \\ref{app4}:\n\\begin{eqnarray}\n\\kappa^2 T_{\\mu\\nu} &=& (1+\\gamma\\Box) G_{\\mu\\nu}+\\Lambda g_{\\mu\\nu}-\\frac{1}{2} g_{\\mu\\nu}\\,G_{\\sigma\\t}\\gamma R^{\\sigma\\t}+2G^\\sigma_{\\ (\\mu} \\gamma G_{\\nu)\\sigma}+g_{\\mu\\nu}\\nabla^\\sigma\\nabla^\\t\\gamma G_{\\sigma\\t}\\nonumber\\\\\n&& -2\\nabla^\\sigma\\nabla_{(\\mu}\\gamma G_{\\nu)\\sigma}+\\frac12(G_{\\mu\\nu} \\gamma R+R\\gamma G_{\\mu\\nu})+\\Theta_{\\mu\\nu}(R_{\\sigma\\t},G^{\\sigma\\t})\\,,\\label{EinEq1}\n\\end{eqnarray}\nwhere the expression of $\\Theta_{\\mu\\nu}(R_{\\sigma\\t},G^{\\sigma\\t})$ is given by \\Eq{usef5a} for any form factor $\\gamma$. The non-local equation \\Eq{EinEq1} can be compared with similar ones found elsewhere \\cite{BCKM,Kos13}.\n\nFor the particular choice of form factor \\Eq{fofag},\n\\begin{eqnarray}\n(1+\\gamma\\Box) G_{\\mu\\nu}&=&e^{-r_*\\Box}G_{\\mu\\nu}\\,,\\\\\n\\Theta_{\\mu\\nu}(R_{\\sigma\\t},G^{\\sigma\\t}) &=&-\\int_0^{r_*}d q\\,\\bar\\Theta_{\\mu\\nu}[e^{-q\\Box} R_{\\sigma\\t},\\gamma_{r_*-q}(\\Box)G^{\\sigma\\t}]\\,,\\label{Theta2}\n\\end{eqnarray}\nwhere $\\bar\\Theta_{\\mu\\nu}$ is given by equation \\Eq{Theta}. The last expression, derived in appendix \\ref{app4}, is fully explicit.\n\n\n\\subsection{Einstein equations: auxiliary field}\\label{aux}\n\nAn alternative form of the Einstein equations makes use of an auxiliary field \\cite{BMS,BCKM}. Here, we apply this method to \\Eq{nlffg} for the first time. Consider the action\n\\begin{equation}\n\\tilde S[g,\\phi] = \\frac{1}{2\\kappa^2}\\int d^D x \\sqrt{-g}\\,\\left[R-2\\Lambda-2\\phi^{\\mu\\nu} f_1(\\Box)\\, R_{\\mu\\nu}+\\left(\\phi^{\\mu\\nu}-\\frac{1}{D-2} g^{\\mu\\nu}\\phi\\right)\\,f_2(\\Box)\\,\\phi_{\\mu\\nu}\\right],\\label{nlff2}\n\\end{equation}\nwhere $\\phi_{\\mu\\nu}$ is a symmetric two-tensor, $\\phi=\\phi_\\sigma^{\\ \\sigma}=g^{\\mu\\nu}\\phi_{\\mu\\nu}$ is its trace and $f_{1,2}$ are some arbitrary form factors. 
The equations of motion for $\\phi_{\\mu\\nu}$ are given by the variation $\\delta\\tilde S\/\\delta\\phi^{\\mu\\nu}=0$:\n\\begin{equation}\\label{phieom}\n-f_1(\\Box)\\, R_{\\mu\\nu}+f_2(\\Box)\\,\\phi_{\\mu\\nu}-\\frac{1}{D-2} g_{\\mu\\nu}\\,f_2(\\Box)\\,\\phi=0\\,.\n\\end{equation}\nTaking the trace of \\Eq{phieom}, plugging it back and inverting for $\\phi_{\\mu\\nu}$, one sees that\n\\begin{equation}\\label{bG}\n\\phi_{\\mu\\nu}=[f_2^{-1}f_1](\\Box)\\,G_{\\mu\\nu}+\\lambda_{\\mu\\nu}\\,,\\qquad \\phi=-\\left(\\frac{D}{2}-1\\right)[f_2^{-1}f_1](\\Box)\\,R+\\lambda_\\mu^{\\ \\mu}\\,,\n\\end{equation}\nwhere $\\lambda_{\\mu\\nu}$ is the homogeneous solution of $f_2(\\Box)\\lambda_{\\mu\\nu}=0$. Using \\Eq{bG} and $f_2(\\Box)\\lambda_{\\mu\\nu}=0$ in \\Eq{nlff2} and integrating by parts, one gets the Lagrangian\n\\[\n2\\kappa^2\\tilde\\mathcal{L} =R-2\\Lambda-G_{\\mu\\nu}f_1f_2^{-1}f_1 R^{\\mu\\nu}-\\lambda_{\\mu\\nu} f_1 R^{\\mu\\nu}\\,.\n\\]\nComparing this with \\Eq{nlffg}, we conclude that $\\tilde S=S_g$ on shell provided\n\\begin{equation}\\label{ffg}\n[f_1f_2^{-1}f_1](\\Box)=-\\gamma(\\Box)\\,,\\qquad \\lambda_{\\mu\\nu}=0\\,.\n\\end{equation}\nThere are various possible choices for the form factors $f_1$ and $f_2$; physically they are all equivalent as long as \\Eq{ffg} holds. The simplest choice\n\\beta\nf_2=f_1=-\\gamma\n\\end{equation}\nfor an arbitrary form factor \\Eq{ggen} satisfies the first condition in \\Eq{ffg}, while only form factors with $c_0\\neq 0$ (i.e., those with trivial kernel) also guarantee that the second condition in \\Eq{ffg} is obeyed. In fact, $\\gamma(\\Box)=c_0+c_1\\Box+O(\\Box^2)$, so that $\\gamma(\\Box)\\lambda_{\\mu\\nu}=0$ if, and only if, $\\lambda_{\\mu\\nu}\\equiv 0$. The form factor \\Eq{fofag} is of this type, since $\\gamma_{r_*}(\\Box)=-r_*+(r_*^2\/2)\\Box+O(\\Box^3)$. 
In other words, there is no homogeneous solution we should worry about when recasting \\Eq{nlffg} as \\Eq{nlff2}, contrary to what happens when making field redefinitions in $f(\\Box^{-1}R)$ non-local gravity \\cite{NoOd,Kos08,MaMa,ZKSZ}.\n\nThus, \\Eq{nlff2} becomes\n\\begin{equation}\n\\boxd{\\tilde S[g,\\phi] = \\frac{1}{2\\kappa^2}\\int d^D x \\sqrt{-g}\\,\\left[R-2\\Lambda+\\left(2R_{\\mu\\nu}-\\phi_{\\mu\\nu}+\\frac{1}{D-2} g_{\\mu\\nu}\\phi\\right)\\,\\gamma(\\Box)\\,\\phi^{\\mu\\nu}\\right].\\label{nlff3}}\n\\end{equation}\nIn appendix \\ref{app5}, we show that the covariant equations of motion of the theory \\Eq{nlff3} are\n\\begin{eqnarray}\n\\kappa^2 T_{\\mu\\nu} &=& G_{\\mu\\nu}+\\Box\\gamma \\phi_{\\mu\\nu}+\\Lambda g_{\\mu\\nu}-\\frac12 g_{\\mu\\nu} X_{\\sigma\\t}\\gamma\\phi^{\\sigma\\t}+2\\phi_{(\\mu}^{\\ \\sigma}\\gamma\\phi_{\\nu)\\sigma}+g_{\\mu\\nu}\\nabla^\\sigma\\nabla^\\t \\gamma\\phi_{\\sigma\\t}\\nonumber\\\\\n\t\t\t\t\t\t\t\t&& -2\\nabla^\\sigma\\nabla_{(\\mu} \\gamma\\phi_{\\nu)\\sigma}-\\frac{1}{D-2}(\\phi_{\\mu\\nu}\\gamma\\phi+\\phi\\gamma\\phi_{\\mu\\nu})+\\Theta_{\\mu\\nu}(X_{\\sigma\\t},\\phi^{\\sigma\\t})\\,,\\label{eomnl1}\\\\\nX_{\\sigma\\t}&:=&2R_{\\sigma\\t}-\\phi_{\\sigma\\t}+\\frac{1}{D-2}g_{\\sigma\\t}\\phi\\,,\\label{X}\n\\end{eqnarray}\naccompanied by the equation of motion $\\delta \\tilde S[g,\\phi]\/\\delta\\phi^{\\mu\\nu}=0$ and its trace:\n\\begin{equation}\\label{eomnl2}\n\\phi_{\\mu\\nu}=G_{\\mu\\nu}\\qquad \\Rightarrow \\qquad \\phi=G=-\\frac{D-2}{2}R\\,,\\qquad X_{\\mu\\nu}=R_{\\mu\\nu}\\,.\n\\end{equation}\nWe call \\Eq{eomnl1} Einstein equations because they come from the variation of the metric and \\Eq{eomnl2} Einstein-like equations because they resemble the Einstein equations of general relativity, where $\\phi_{\\mu\\nu}$ plays the role of a stress-energy tensor. Notice from \\Eq{eomnl2} that the field $\\phi_{\\mu\\nu}$ is local and does not hide $1\/\\Box$ operators. This check \\emph{a posteriori} guarantees that the ordinary variational principle (where fields and their first derivatives vanish at infinity) has been correctly applied.\n\nConsistently, \\Eq{EinEq1} and \\Eq{eomnl1} agree on shell, i.e., when \\Eq{eomnl2} is used (see appendix \\ref{app6}).\n\n\n\\subsection{Brief remarks on causality}\n\nWhenever a factor $\\Box^{-1}$ appears in a non-local theory, causality may be in trouble. The line of reasoning is well known and relies on the definition of the inverse d'Alembertian through the Green equation \\Eq{bcK}, where one must specify a contour prescription for the Green function $\\mathcal{K}$. The main point is that even if the causal (retarded) propagator $\\mathcal{K}_{\\rm ret}(x-y)$ is used to define the $\\Box^{-1}$ operator, a variation of the action with respect to the fundamental fields always gives rise to the even combination\n\\begin{equation}\\nonumber\n\\mathcal{K}_{\\rm ret}(x-y)+\\mathcal{K}_{\\rm ret}(y-x)=:\\mathcal{K}_{\\rm ret}(x-y)+ \\mathcal{K}_{\\rm adv}(x-y)\\,.\n\\end{equation}\nThe retarded Green function is not even, and changing sign to its argument gives the advanced Green function $\\mathcal{K}_{\\rm ret}(y-x) = \\mathcal{K}_{\\rm adv}(x-y)$, which is anti-causal. 
Therefore, the equations of motion obtained from theories with non-localities of the type $\\Box^{-1}$ (typically, theories where the quantum effective action, not the classical one, is non-local) are necessarily acausal \\cite{BDFM}.\n\nHowever, this argument does not apply in our case because the non-localities we deal with do not need any prescription for the $\\Box^{-1}$ factor, as it always appears in a combination $\\gamma(\\Box)=c_0+c_1\\Box+O(\\Box^2)$ which is analytic when ``$\\Box =0$'' (in particular, $\\gamma_{r_*}(\\Box)=-r_*+(r_*^2\/2)\\Box+O(\\Box^3)$). In other words, in all the theories of quantum gravity with a fundamental non-locality, non-localities (at the level of the classical action, not of the quantum effective one) are always of the type \\Eq{effeb} with $f(0)=0$. As a consequence of this fact, the Green function associated with the non-local operator $\\gamma(\\Box)$ is symmetric.\n\nA one-dimensional example in flat space will further clarify the matter. The non-local operator containing $\\Box^{-1}$ is (\\ref{fofag}), which can be written as \\Eq{fofag2}. Its Green function $K(x-y)$ is the solution of the equation\n\\begin{equation}\\label{exerspace}\n\\gamma_{r_*}(\\Box_x)K(x-y)=-\\int_{0}^{r_*} d s\\, e^{- s\\Box_x} \\, K(x-y)= \\delta(x-y)\\,,\n\\end{equation}\nor, in momentum space, \n\\begin{equation}\\label{exermom}\n\\int_{0}^{r_*} d s\\, e^{s k^2} \\, \\tilde K(k)=-1\\,.\n\\end{equation}\nWhile the inverse of the $\\Box$ operator needs to be prescribed because the naive solution $1\/k^2$ of the Green equation does not define a tempered distribution, the solution of eq.\\ (\\ref{exermom}) does not need to be regularized, as its algebraic solution\n\\begin{equation} \\label{exersolk}\n\\tilde K (k) = - \\left[ \\int_{0}^{r_*} d s\\,e^{s k^2} \\right]^{-1} = - \\frac{k^2}{e^{r_* k^2}-1}\n\\end{equation}\nalready defines a tempered distribution. Its Fourier transform cannot be written in closed form but is very well behaved and, most importantly, is manifestly symmetric under the exchange $x\\leftrightarrow y$,\\footnote{In the $\\Box^{-1}$ case, the regularization procedure needed to define $1\/k^2$ as a tempered distribution prevents the Green function to be symmetric under the exchange $x \\leftrightarrow y$, leading to the known mismatch between causality and symmetry of the propagator \\cite{BDFM}.}\n\\begin{equation}\n\\label{exersolx}\nK(x-y)= -\\frac{1}{\\pi} \\int_0^\\inftyd k\\, \\frac{k^2\\, \\cos [k (x-y)]}{e^{r_* k^2}-1}\\,.\n\\end{equation}\n\t\nThe fact that the $\\Box^{-1}$ operator in the form factor $\\gamma(\\Box)$ of fundamentally non-local quantum gravity does not introduce causality breaking is not, by itself, a guarantee of causality of these theories but, at least, it shows that standard arguments against causality, plaguing effective non-local field theories, do not apply in our case. The problem of causality in fundamentally non-local theories is subtle \\cite{Tom15} and might not admit an all-or-nothing solution, in the sense that the theory might retain macrocausality \\cite{GiMo} while including acceptable violations of microcausality. 
This interesting possibility will be explored elsewhere.\n\n\n\\section{Localization of non-local gravity}\\label{locnlg}\n\nIn section \\ref{eoms}, we started from a gravitational action $S_g[g(x)]$ and introduced an auxiliary tensor field $\\phi_{\\mu\\nu}$ so that we could rewrite the original action as a functional of this field and the metric, $S_g[g(x)]= \\tilde S[g_{\\mu\\nu}(x),\\phi_{\\mu\\nu}(x)]$, where the gravitational part of $\\tilde S$ is given by the integral of \\Eq{nlff3}. In this section, we will construct a functional $\\mathcal{S}_g[g_{\\mu\\nu}(x),\\Phi_{\\mu\\nu}(r,x),\\chi_{\\mu\\nu}(r,x),$ $\\lambda_{\\mu\\nu}(r,x)]$ representing a system living in $D+1$ dimensions and local in spacetime coordinates. Here we show that the two systems coincide at a section $r=\\beta r_*$ in the $(D+1)$-dimensional space,\n\\begin{equation}\\label{sss}\n\\mathcal{S}_g[g_{\\mu\\nu}(\\beta r_*,x),\\Phi_{\\mu\\nu}(\\beta r_*,x),\\chi_{\\mu\\nu}(\\beta r_*,x)]=\\tilde S_g[g_{\\mu\\nu}(x),\\phi_{\\mu\\nu}(x),\\chi_{\\mu\\nu}(x)]=S_g[g_{\\mu\\nu}(x)]\\,,\n\\end{equation}\nwhere the equalities are meant to be valid on-shell, i.e., at the level of the dynamics. This statement, which can be immediately extended to actions that include also matter fields, is the extension to gravity of the results of section \\ref{scala} \\cite{cuta3} for a scalar field in Minkowski spacetime.\n\n\n\\subsection{Localized action}\\label{loac}\n\nWe apply the procedure illustrated in section \\ref{scala} to the gravitational theory with form factor \\Eq{fofag}. We have seen that \\Eq{nlffg} is physically equivalent to the action \\Eq{nlff3}, which can also be written as\n\\begin{equation}\\label{phint}\n\\tilde S[g,\\phi] = \\frac{1}{2\\kappa^2}\\int d^D x \\sqrt{-g}\\,\\left[R-2\\Lambda-\\int_0^{r_*}d s\\left(2R_{\\mu\\nu}-\\phi_{\\mu\\nu}+\\frac{1}{D-2} g_{\\mu\\nu}\\phi\\right)e^{-s\\Box}\\phi^{\\mu\\nu}\\right]\n\\end{equation}\nthanks to \\Eq{fofag2}. Using \\Eq{phint} instead of \\Eq{nlffg} will allow us to enforce the diffusion equation to $(D+1)$-dimensional fields without facing the commutation problem \\Eq{probl} mentioned in the introduction and the fact that the metric field does not obey a linear diffusion equation. This problem is solved by letting only auxiliary fields diffuse, while the gravitational field does not diffuse at all: it is a dynamical field living in a fixed $r=\\beta r_*$ slice. Therefore, an interesting difference with respect to the scalar-field case is that here some fields (which we will call $\\Phi_{\\mu\\nu}(r,x)$ and $\\chi_{\\mu\\nu}(r,x)$) are free to evolve in the whole $(D+1)$-dimensional bulk, while others (the metric $g_{\\mu\\nu}(x)$ and the Ricci tensor $R_{\\mu\\nu}(x)$ derived from it) are confined into the slice where the higher-dimensional localized system is made equivalent to the non-local one. This configuration strongly reminds us of braneworld scenarios where $r$ is the direction transverse to a brane at $r=\\beta r_*$ and the Einstein--Hilbert Lagrangian contributes with a term $[R(x)-2\\Lambda]\\,\\delta(r-\\beta r_*)$. Another possibility, which we will follow from now on and yields the same result, is to consider an $r$-dependent $g_{\\mu\\nu}(r,x)$ dynamically constrained to be constant along $r$:\n\\begin{equation}\n\\boxd{\\mathcal{S}_g=\\frac{1}{2\\kappa^2}\\intd^Dx\\,d r\\,\\sqrt{-g(r)}\\left(\\mathcal{L}_R+\\mathcal{L}_\\Phi+\\mathcal{L}_\\chi+\\mathcal{L}_\\lambda\\right),\\label{gactcm}}\n\\end{equation}\n\\begin{eqnarray}\n\\mathcal{L}_R\\!\\! 
&=&\\!\\! R(r)-2\\Lambda\\,,\\\\\n\\mathcal{L}_\\Phi\\!\\! &=&\\!\\! -\\int_0^{r_*}d s\\,\\left[2\\mathcal{R}_{\\mu\\nu}(r)-\\Phi_{\\mu\\nu}(r)+\\frac{1}{D-2}g_{\\mu\\nu}(r)\\Phi(r)\\right]\\Phi^{\\mu\\nu}(r-s)\\,,\\label{gact2cm}\\\\\n\\mathcal{L}_\\chi\\!\\! &=&\\!\\! -\\int_0^{r_*}d s\\int_0^{s}d q\\,\\chi_{\\mu\\nu}(r-q)(\\partial_{r'}-\\Box)\\Phi^{\\mu\\nu}(r')\\,,\\label{difeqgcm}\\\\\n\\mathcal{L}_\\lambda\\!\\! &=&\\!\\! \\lambda_{\\mu\\nu}(r)\\,\\partial_r g^{\\mu\\nu}(r)\\,,\\label{lagcm}\n\\end{eqnarray}\nwhere the metric is $r$-dependent just like the other fields, its Ricci curvature is denoted with a curly $\\mathcal{R}_{\\mu\\nu}$, we introduced a Lagrange multiplier $\\lambda_{\\mu\\nu}$, we omitted the $x$-dependence everywhere, $\\Phi= g^{\\mu\\nu}\\Phi_{\\mu\\nu}$ is the trace of the symmetric rank-2 tensor $\\Phi_{\\mu\\nu}$, and\n\\begin{equation}\\label{rprime2}\nr'=r+q-s\\,,\n\\end{equation}\nso that $\\partial_{r'}=\\partial_q$ in \\Eq{difeqgcm}. All tensorial indices still run from 0 to $D-1$, so that the theory \\Eq{gactcm} is a fake $D+1$ system, which is not $(D+1)$-covariant anyway due to the diffusion equation term. In analogy with \\Eq{X}, it will be convenient to define the tensorial combination\n\\begin{equation}\\label{Xr}\nX_{\\mu\\nu}(r):=2\\mathcal{R}_{\\mu\\nu}(r)-\\Phi_{\\mu\\nu}(r)+\\frac{1}{D-2}g_{\\mu\\nu}(r)\\Phi(r)\\,.\n\\end{equation}\n\nComparing with the scalar field theory \\Eq{act}, there are four major differences one should note: (a) all fields are rank-2 tensors; (b) there is an extra integration $-\\int_0^{r_*}d s$ accounting for the more complicated form factor \\Eq{fofag2}; (c) because of (b), the $q$-integral in \\Eq{difeqgcm} is nested, while in \\Eq{locch2} it is definite; (d) because of (c), \\Eq{rprime2} replaces the scalar-field parameter \\Eq{rprime}.\n\n\n\\subsection{Localized equations of motion}\\label{loeom}\n\nIn intermediate steps of the derivation, we will omit the $x$-dependence in all fields as well as the discussions of section \\ref{scala} on integration domains. The equation of motion for $\\lambda_{\\mu\\nu}$ establishes the independence of the metric from the extra coordinate $r$:\n\\begin{equation}\\label{eomla}\n0=\\frac{\\delta\\mathcal{S}_g}{\\delta\\lambda^{\\mu\\nu}(\\bar r)}=\\partial_{\\bar r} g_{\\mu\\nu}(\\bar r,x)\\qquad\\Rightarrow\\qquad g_{\\mu\\nu}(r,x)=g_{\\mu\\nu}(x)\\,.\n\\end{equation}\nTherefore, in the following we can apply this equation on shell and ignore any change (shift, integration, and so on) in the $r$-argument of the metric, of the Laplace--Beltrami operator, and of curvature invariants, unless stated otherwise. 
The equations of motion turn out to be\n\\begin{eqnarray}\n0&=&(\\partial_r-\\Box)\\Phi_{\\mu\\nu}(r,x)\\,,\\label{difPg}\\\\\n0&=&(\\partial_r-\\Box)\\chi_{\\mu\\nu}(r,x)\\,,\\label{chimnde}\\\\\n0&=&\\int_0^{r_*}d s\\left[X_{\\mu\\nu}(\\bar r-s)+X_{\\mu\\nu}(\\bar r+s)-2\\mathcal{R}_{\\mu\\nu}(r-s)+\\chi_{\\mu\\nu}(\\bar r-s)-\\chi_{\\mu\\nu}(\\bar r+s)\\right]\\,,\\label{uff}\\\\\n\\kappa^2T_{\\mu\\nu} &=& G_{\\mu\\nu}+\\Lambda g_{\\mu\\nu}-\\int_0^{r_*}d s\\left\\{\\vphantom{\\frac{1}{D-2}}-\\frac12\\,g_{\\mu\\nu}X_{\\sigma\\t}(r)\\Phi^{\\sigma\\t}(r-s)+2\\Phi_{\\sigma(\\mu}(r)\\Phi_{\\nu)}^{\\ \\ \\sigma}(r-s)\\right.\\nonumber\\\\\n&&+\\Box\\Phi_{\\mu\\nu}(r-s)+g_{\\mu\\nu}\\nabla^\\sigma\\nabla^\\t\\Phi_{\\sigma\\t}(r-s)-2\\nabla^\\sigma\\nabla_{(\\mu}\\Phi_{\\nu)\\sigma}(r-s)\\nonumber\\\\\n&& \\left.-\\frac{1}{D-2}\\left[\\Phi_{\\mu\\nu}(r)\\Phi(r-s)+\\Phi(r)\\Phi_{\\mu\\nu}(r-s)\\right]-\\int_0^{s}d q\\,\\bar\\Theta_{\\mu\\nu}[\\chi_{\\sigma\\t}(r-q),\\Phi^{\\sigma\\t}(r+q-s)]\\right\\}\\!.\\nonumber\\\\ \\label{lasto}\n\\end{eqnarray}\n\nLet us see where they come from. The equation of motion for $\\chi_{\\mu\\nu}$ is\n\\begin{eqnarray}\n\\hspace{-.8cm}0 &=& \\frac{\\delta\\mathcal{S}_g}{\\delta\\chi^{\\mu\\nu}(\\bar r)}=-\\int d r \\int_0^{r_*}d s\\int_0^{s}d q\\,\\delta(r-q-\\bar r)(\\partial_{r'}-\\Box)\\Phi_{\\mu\\nu}(r')\\nonumber\\\\\n &\\stackrel{\\textrm{\\tiny \\Eq{eomla}}}{=}& -\\int_{\\bar r}^{\\bar r+r_*} d r\\int_0^{r_*}d s\\,(\\partial_{r'}-\\Box)\\Phi_{\\mu\\nu}(r')\\Big|_{r'=2r-\\bar r-s}\\nonumber\\\\\n &=&-\\int_0^{r_*}d\\rho\\int_0^{r_*}d s\\,(\\partial_{r'}-\\Box)\\Phi_{\\mu\\nu}(r')\\Big|_{r'=2\\rho+\\bar r-s}\\,,\\label{inter2g}\n\\end{eqnarray}\nwhere we first integrated in $q$, then restricted the integration in $r$ from the condition $01$ be a real number and we define the absolute value on $F_0$ by \n$\\mid\\! 
t^n u \\!\\mid= \\alpha^{-n}$ for $n \\in \\mathbb{Z}$ and $u \\in \n(\\widehat R_0)^\\times$.\n \n Let $F_1$, $F_2$ be subfields of $F_0$ containing $T$.\n We further assume that we are given \n $t$-adically complete $T$-submodules $V \\subset F_1 \\cap \\widehat R_0$\n and $W \\subset F_2 \\cap \\widehat R_0$ \nsatisfying the following conditions:\n\n\n\\begin{equation}\\label{cond_I}\n V + W = \\widehat R_0;\n\\end{equation}\n\n\\vskip-4mm\n\n\n\\begin{equation}\\label{cond_II} \nV \\cap t\\widehat R_0= t V \\quad \\hbox{ and} \\quad\nW \\cap t\\widehat R_0= t W.\n\\end{equation}\n\n\n \\noindent Note that Condition \\eqref{cond_II} is equivalent to \n\n\n\\begin{equation}\\label{cond_IIbis} \n V \\cap t^n\\widehat R_0= t^n V \\quad \\hbox{and} \\quad \nW \\cap t^n\\widehat R_0= t^n W \\quad \\hbox{ for \\, each } \\quad n \\geq 1.\n\\end{equation}\n\n\n\n\n\\begin{sremark} {\\rm Condition \\eqref{cond_II} above is added there compared with \\cite[2.4]{HHK}\nbut we do not require at this stage that $F_1$ is dense in $F_0$.\n }\n\\end{sremark}\n\n\nWe equip the submodules $V[\\frac{1}{t}]$ of $F_0$ \nof the induced metric (and similarly for $W[\\frac{1}{t}]$).\n\n\n\\begin{slemma}\\label{lem_banach} (1) $V$ is closed in $\\widehat R_0$.\n\n\\smallskip\n\n\\noindent (2) For $v \\in V[\\frac{1}{t}] \\setminus \\{0\\}$, we have $$\n\\mid v\\mid= \\mathrm{Inf}\\{ \\alpha^{n} \\mid t^n v \\in V \\} .\n$$\n\n\n\\smallskip\n\n\\noindent (3) We have $V= \\mathrm{Inf}\\{ v \\in V[\\frac{1}{t}] \\mid \n\\enskip \\mid v \\mid \\geq 0 \\} $ and $V$ is a clopen submodule of\n$V[\\frac{1}{t}]$.\n\n\n\\smallskip\n\n\\noindent (4) $V[\\frac{1}{t}]$ is closed in $F_0$ and is a Banach $K$-space. \n \n\\end{slemma}\n\n\n\n\\begin{proof}\n (1) Our assumption is that the map $V \\to \\limproj V\/t^{m+1} V$ is an isomorphism. Let $(x_n)$ be a sequence of $V$ \n which converges in $\\widehat R_0$. For each $m \\geq 0$, condition \\eqref{cond_IIbis} shows that \n the map $V\/t^{m+1} V \\to \\widehat R_0\/t^{m+1} \\widehat R_0$ \n is injective so that the sequence $(x_n)$ modulo $t^{m+1} V$ is stationary to some $v_m \\in V\/t^{m+1} V$.\n The $v_m$'s define a point $v$ of $V$ and the sequence $(x_n)$ converges to $v$.\n \n \\smallskip\n \n \n \\noindent (2) We are given $v = t^m v'\\in V[\\frac{1}{t}]$ with $v' \\in V \\setminus t V$\n and we have \\break \n $\\mid\\! v \\! \\mid= \\mathrm{Inf}\\{ \\alpha^{n} \\mid t^n v \\in \\widehat R_0 \\}\n = \\alpha^{m} \\mathrm{Inf}\\{ \\alpha^{n} \\mid t^n v' \\in \\widehat R_0 \\}$.\n Condition (II) implies that $v' \\in \\widehat R_0 \\setminus t \\widehat R_0$ so that \n $\\mid\\! v \\! \\mid=0$. We conclude that $\\mid\\! v \\! \\mid= \\mathrm{Inf}\\{ \\alpha^{n} \\mid t^n v \\in V \\}$.\n \n \n \n \\smallskip\n \n \\noindent (3) It readily follows from the assertion (2).\n \n \n \n \\smallskip\n \n \\noindent (4) Let $(x_n)$ be a sequence of $V[\\frac{1}{t}]$ converging to some $x \\in F_0$.\n We want to show that $x$ belongs to $V[\\frac{1}{t}]$ so that we can assume that $x\\not = 0$\n and that $\\mid\\! x_n \\! \\mid = \\mid\\! x \\! \\mid = \\alpha^m$ for all $n \\geq 0$.\n Assertion (2) shows that $t^m x_n$ is a sequence of $V$ and (1) shows that its limit\n $t^m x$ belongs to $V$. Thus $x \\in V[\\frac{1}{t}]$. We have shown that $V[\\frac{1}{t}]$ is closed in $F_0$.\n \n Finally since $F_0$ is a Banach $K$--space so is $V[\\frac{1}{t}]$.\n\\end{proof}\n\n\n\nThe following statement extends partially \\cite[th. 2.5]{HHK} and \\cite[prop. 
4.1]{HHK2}.\n\n\n\\begin{sproposition}\\label{prop_analytic}\nLet $a,b,c$ be positive integers.\nLet $\\Omega \\subset (F_0)^a \\times (F_0)^b$ be an open\nneighborhood of $(0,0)$ and let \n$f: \\Omega \\to (F_0)^c$ be an analytic map.\nWe denote by $f^a: \\Omega \\cap (F_0)^a \\to (F_0)^c$\nand $f^b: \\Omega \\cap (F_0)^b \\to (F_0)^c$. We assume that \n\n\n\\smallskip\n\n(i) $f(0,0)=0$;\n\n\\smallskip\n\n(ii) the differentials $Df^a_{0}: (F_0)^{a} \\to (F_0)^c$ and $Df^b_{0}: (F_0)^{a} \\to (F_0)^c$\nsatisfy $$\nDf^a_{0}\\Bigl( V[\\frac{1}{t}]^a \\Bigr) + Df^b_{0}\\Bigl( V[\\frac{1}{t}]^b\\Bigr)=(F_0)^c.\n$$\n\n\n\\smallskip\n\n\\noindent Then there is a real number\n$\\epsilon > 0$ such that for all $y \\in (F_0)^c$ with $\\mid \\! y \\! \\mid \\, \n \\leq \\epsilon$, there exist\n$v \\in V^a$ and $w \\in W^b$ such that\n$(v, w) \\in \\Omega$ and $f(v, w) = y$.\n\n\\end{sproposition}\n\n\n\\begin{proof}\nWe consider the continuous embedding $i: V[\\frac{1}{t}]^a \\times W[\\frac{1}{t}]^b \\to (F_0)^a \\times (F_0)^b$ and define $\\widetilde \\Omega=i^{-1}(\\Omega)$ and \nthe function $\\widetilde f= f \\circ i : \\widetilde \\Omega \\to (F_0)^c$.\n\n\\begin{sclaim} The map $\\widetilde f$ \nis strictly differentiable at $(0,0)$ and\n$D\\widetilde{f}_{(0,0)}: V[\\frac{1}{t}]^a \\times W[\\frac{1}{t}]^b \\to (F_0)^c$ is onto. \n\\end{sclaim}\n\n\nSince $f$ is $F_0$--analytic at $(0,0)$, it is strictly differentiable \\cite[I.5.6]{Sc}, that is, there exists\nan open neighborhood $\\Theta$ of $(0,0)$ and a positive real number $\\beta$ such that \n$$\n\\mid \\! f(x_2) -f(x_1) - Df_{(0,0)}.(x_2-x_1) \\! \\mid \\enskip \\leq \\enskip \\beta \n\\mid \\! x_2 -x_1 \\! \\mid \\qquad \\forall x_1,x_2 \\in \\Theta.\n$$\nIt is then strictly derivable as function between the \nBanach $K$--spaces $(F_0)^a \\times (F_0)^b \\to (F_0)^c$.\nOn the other hand the embedding $i: V[\\frac{1}{t}]^a \\times W[\\frac{1}{t}]^b \\to (F_0)^a \\times (F_0)^b$\nis $1$--Liftschitz so is strictly differentiable at $(0,0)$. As composite of strictly differentiable functions,\n$\\widetilde f$ is strictly differentiable at $(0,0)$ \\cite[\\S 1.3.1]{BF}. Furthermore\nthe differential \n$D\\widetilde{f}_{(0,0)}$ is the composite of\n$$\nV[\\frac{1}{t}]^a \\times W[\\frac{1}{t}]^b \\xrightarrow{\\quad i \\quad} (F_0)^a \\times (F_0)^b \\xrightarrow{\\enskip Df^a_{0} + Df^b_{0} \\enskip} (F_0)^c.\n$$\nCondition (ii) says exactly that $D\\widetilde{f}_{(0,0)}$ is surjective. The Claim is proven.\n\n\nWe apply the implicit function theorem to the function $\\widetilde f$ \\cite[\\S 1.5.2]{BF}\n(see \\cite[\\S 4]{Sc} for concocting a proof). Lemma \\ref{lem_banach}.(4)\nshows that $V[\\frac{1}{t}]^a \\times W[\\frac{1}{t}]^b $ is a Banach $K$--space and so is $F_0$.\nThere exists then an open neighborhood $\\Upsilon \\subset \\widetilde \\Omega$\nof $(0,0)$ in $V[\\frac{1}{t}]^a \\times W[\\frac{1}{t}]^b$ such that $\\widetilde f_{\\mid \\Upsilon}$ is open.\nUp to shrink $\\Upsilon$ we can assume that \n$\\Upsilon \\subset V \\times W$ according to Lemma \\ref{lem_banach}.(3).\nThere exists then a real number $\\epsilon >0$ \nsuch that \n$$\n\\bigl\\{ y \\in (F_0)^c \\mid \\enskip \\mid \\! y \\! \\mid \\leq \\epsilon \\bigr\\} \\enskip \\subset \\enskip\n\\bigl\\{ y \\in (F_0)^c \\mid \\enskip \\mid \\! y \\! \\mid < 2 \\epsilon \\bigr\\} \\subset \\widetilde f(\\Upsilon).\n$$\nWe conclude that \nfor all $y \\in (F_0)^c$ with $\\mid \\! y \\! 
\\mid \\, \n \\leq \\epsilon$, there exist\n$v \\in V^a$ and $w \\in W^b$ such that\n$(v, w) \\in \\Omega$ and $f(v, w) = y$.\n\\end{proof}\n\n\n \n \n\n\\begin{scorollary}\\label{cor_analytic}\nLet $n$ be a positive integer.\nLet $\\Omega \\subset (F_0)^n \\times (F_0)^n$ be an open neighborhood of $(0,0)$ and let \n$f: \\Omega \\to (F_0)^n$ be an analytic map which satisfies \n\n\\smallskip\n\n(i) $f(0,0)=0$;\n\n\\smallskip\n\n(ii) $f(x,0)=f(0,x)=x$ over an open neighborhood $\\Upsilon$ of $0$.\n\n\n\\smallskip\n\n\\noindent Then there is a real number\n$\\epsilon > 0$ such that for all $a $ with $\\mid \\! a \\! \\mid \\, \n \\leq \\epsilon$, there exist\n$v \\in V^n$ and $w \\in W^n$ such that\n$(v, w) \\in \\Omega$ and $f(v, w) = a$.\n\n\\end{scorollary}\n\n\\begin{proof} In this case we have $a=b=c=n$ and $Df^a_{0} = Df^b_{0}= \\mathrm{Id}_{ (F_0)^n}$\nso that Proposition \\ref{prop_analytic} applies.\n\\end{proof}\n\n\n\n\\subsection{Kneser-Tits' subgroups}\n\nContinuing in the previous setting, \nwe assume furthermore that $F_1$ is $t$-adically dense\nin $F_0$. Let $F \\subset F_1 \\cap F_2$ be a subfield. \nFor dealing later with Weil restriction issues it is \nconvenient to deal with a finite field extension $E$ of $F$.\n\n\n\\begin{sproposition}\\label{prop_KT} \n Let $H$ be a semisimple simply connected\n$E$--group scheme assumed strictly isotropic.\nWe put $G=R_{E\/F}(H)$. \nFor each overfield $L$ of $F$, we put \n$G(L)^+= H(L \\otimes_F E)^+$ where the second group \nis that defined in \\S \\ref{subsec_PS}.\n Then we have the decomposition\n$$\nG(F_0)^+= G(F_1)^+ \\, G(F_2)^+.\n$$\n\\end{sproposition}\n \n \n\\begin{proof} Without lost of generality we can assume that $F$ is infinite. The proof is based on an analytic argument requiring \nsome preparation.\n\nLet $P$ be a strictly proper parabolic $E$--subgroup of $H$. \nLet $U$ be its \nunipotent radical and $U_{last}$ the last part of Demazure's filtration\n\\cite[\\S 3.2]{GPS}. Let $u: \\mathbb{G}_{a,E}^d \\buildrel\\sim\\over\\lgr U_{last}$ be a $E$--group isomorphism.\nAccording to \\cite[Lemma 3.4(3)]{GPS}, $E_P(E). \\mathop{\\rm Lie}\\nolimits(U_{last})(E)$\ngenerates $\\mathop{\\rm Lie}\\nolimits(H)(E)$.\nThere exists $g_1,\\dots, g_n \\in E_P(E)$ such that \n$$\n\\mathop{\\rm Lie}\\nolimits(H)(E)= \\, {^{g_1}\\!\\mathop{\\rm Lie}\\nolimits}(U_{last})(E) \\oplus \\, ^{g_2}\\!\\mathop{\\rm Lie}\\nolimits(U_{last})(E) \\oplus \\dots \n\\oplus \\, ^{g_n}\\!\\mathop{\\rm Lie}\\nolimits(U_{last})(E) . $$\n\nWe consider the map $h: (\\mathbb{G}_{a,E}^d)^n \\to H$, $h(x_1,\\dots, x_n)= \\, {^{g_1}u}(x_1) \\, \\dots \\, \n^{g_n}\\!u(x_n)$. Its differential at $0$ is \n$dh_{0,0}: E^{dn} \\cong \\mathop{\\rm Lie}\\nolimits(U_{last})(E)^n \\to \\mathop{\\rm Lie}\\nolimits(H)(E)$, $(X_1,\\dots, X_n) \\mapsto \\sum_{i=1}^n \\, \\, ^{g_i}\\!X_i$, \nso is an isomorphism. The map $h$ is then \\'etale at a neighborhood of $0$.\nIt follows that $h_\\sharp= R_{E\/F}(h): R_{E\/F}\\bigl((\\mathbb{G}_{a,E}^d)^n\\bigr) \\to G=R_{E\/F}(H)$\nis \\'etale also at a neighborhood of $0$ \\cite[A.5.2.(4)]{CGP}\n\n\nSince the field $F_0$ is henselian, \nthe local inversion theorem holds \\cite[prop. 2.1.4]{GGMB}. \nWe mean that there exists an open neighborhood $\\Upsilon \\subset (F_0)^n$ such that\n the restriction \n$h_{\\sharp \\mid \\Omega}: \\Upsilon \\to G(F_0)$ is a topological open embedding.\n\nWe consider now the product morphism $q: \\Upsilon \\times \\Upsilon \\to G(F_0)$,\n$q(x,y)= h_\\sharp(x) h_\\sharp(y)$. 
We put $\\Omega= q^{-1}( h_\\sharp(\\Upsilon))$, this an open\nsubset of $(F_0)^{2n}$.\nThen the restriction $q_{\\mid \\Omega}$ defines an (unique) analytical map\n$f: \\Omega \\to \\Upsilon$ \nsuch that $q(x,y)= h_\\sharp( f(x,y) )$.\n\nBy construction we have $f(0,x)= f(x,0)=x$ for $x$ in a neighborhood of $0 \\in (F_0)^n$.\nWe apply Corollary \\ref{cor_analytic} to $f$ so that there exists $\\epsilon >0$\nsuch that for each $a \\in \\Upsilon$ with $\\mid a \\mid \\, \\leq \\, \\epsilon$, there exist\n$v \\in V^n$ and $w \\in W^n$ such that\n$(v, w) \\in \\Omega$ and $f(v, w) = a$.\nWe denote by $\\Upsilon_\\epsilon= \\Upsilon \\cap B(0, \\epsilon)$.\nThen $h_\\sharp^{-1}( \\Upsilon_\\epsilon)$ is an open neighborhood of \n$0$ in $(F_0)^n$.\n\n\nLet us now prove that $G(F_0)^+= G(F_1)^+ \\times G(F_2)^+$.\nSince $F_1$ is dense in $F_0$, \n$G(F_1)^+$ is dense in $G(F_0)^+$.\nIt is then enough to show that $\\Upsilon_\\epsilon \\subset G(F_1)^+ \\times G(F_2)^+$.\nLet $g =h(a) \\in h_\\sharp( \\Upsilon_\\epsilon)$.\nThen $a= f(v,w)$ with $(v, w) \\in \\Omega$.\nIt follows that $g =h( a)= h_\\sharp( f(v,w)) = q(v,w) = h_\\sharp(v) \nh_\\sharp(w) \\in G(F_1)^+ \\times G(F_2)^+$.\n\\end{proof}\n\n \nThis could be refined as follows.\n\n\\newpage\n\n\n\n \n\\begin{sproposition}\\label{prop_R_eq} Let \n$H$ be a reductive $E$-group and put $G=R_{E\/F}(H)$.\n\n\\smallskip\n\n\\noindent (1) $RG(F_1) \\, RG(F_2)$ contains an open neighborhood of $1$ in $G(F_0)$.\n\n\n\\smallskip\n\n\\noindent (2) If $RG(F_1)$ is dense in $RG(F_0)$, then $RG(F_1) \\, RG(F_2)=RG(F_0)$;\n\n\n\\smallskip\n\n\\noindent (3) If $H$ is semisimple simply connected and $H_{F_1 \\otimes_F E}$ is \nstrictly isotropic, then we have\n$$\nG(F_1)^+ \\, RG(F_2)=G(F_0)^+ .\n$$\n\n\n\\smallskip\n\n\\noindent (4) If $H$ is semisimple simply connected and $H_{F_i \\otimes_F E}$ is strictly isotropic for $i=1,2$,\nthen $G(F_1)^+ \\, G(F_2)^+=G(F_0)^+$.\n\n\\end{sproposition}\n \n \nThe subgroups $G(F_1)^+$, $G(F_0)^+ $ are defined as in Proposition \\ref{prop_KT}.\n \n\\begin{proof}\n (1) Let $T \\subset H$ be a maximal $E$--torus and let $1 \\to S \\to Q \\xrightarrow{s} T \\to 1$\n be a resolution of $T$ where $Q$ is a quasitrivial torus and $S$ is a torus.\n We have $Q=R_{C\/E}(\\mathbb{G}_m)$ where $C$ is an \\'etale $E$--algebra so that\n $Q$ is an open subset of the affine $E$--space $\\mathbf{W}(C)$.\n \n We use now Raghunathan's technique \\cite[\\S 1.2]{R}.\n There exists $h_1,\\dots, h_r \\in H(E)$ such that \n$\\mathop{\\rm Lie}\\nolimits(H)(E)= \\, ^{g_1}\\!\\mathop{\\rm Lie}\\nolimits(T)(E)\\oplus \\, ^{g_2}\\!\\mathop{\\rm Lie}\\nolimits(T)(E) \\oplus \\, \\dots \\oplus \\; ^{g_r}\\!\\mathop{\\rm Lie}\\nolimits(T)(E) $.\nWe consider the map $h: (Q^n)_{E} \\to H$, $h(x_1,\\dots, x_n)= \\, ^{g_1}\\!s(x_1) \\, \\dots \\, \n^{g_n}\\!s(x_n)$. Its differential at $0$ is \\break \n$dh_{0}: C^r \\to \\mathop{\\rm Lie}\\nolimits(H)(E)$, $(c_1,\\dots, c_r) \\mapsto \\sum_i \\, \\, ^{g_i}\\!ds(c_i)$, \nand is onto (observe that $\\mathop{\\rm Lie}\\nolimits(Q)(E) \\to \\mathop{\\rm Lie}\\nolimits(T)(E)$ is surjective). 
\n\nWe cut now $Q^r$ by some suitable affine $E$--subspace $\\mathbf{W}(C^n)$ of $\\mathbf{W}(C)^r$\nsuch that the restriction $h'$ of $h$ to $X=Q^r \\cap \\mathbf{W}(C^n)$\nis such that $dh'_{0}: \\mathrm{Tan}_{X,1} \\to \\mathop{\\rm Lie}\\nolimits(H)(F)$ is an isomorphism.\nNote that $X$ and $E$ have same dimension $n$ over $E$.\n\nThe map $h'$ is then \\'etale at a neighborhood of $1$.\nIt follows that \\break $h'_\\sharp= R_{E\/F}(h'): R_{E\/F}\\bigl(X\\bigr) \\to G=R_{E\/F}(H)$\nis \\'etale also at a neighborhood of $0$ \\cite[A.5.2.(4)]{CGP}\n\nSince the field $F_0$ is henselian, \nthe local inversion theorem holds \\cite[prop. 2.1.4]{GGMB}. \nWe mean that there exists an open neighborhood $\\Upsilon \\subset (F_0)^n$ such that\n the restriction \n$h'_{\\sharp \\mid \\Omega}: \\Upsilon \\to G(F_0)$ is a topological open embedding.\n\nWe consider now the product morphism $q: \\Upsilon \\times \\Upsilon \\to G(F_0)$,\n$q(x,y)= h'_\\sharp(x) \\, h'_\\sharp(y)$. We put $\\Omega= q^{-1}( h'_\\sharp(\\Upsilon))$, this an open\nsubset of $(F_0)^{2n}$.\nThen the restriction $q_{\\mid \\Omega}$ defines a (unique) analytical map $f: \n \\Omega \\to \\Upsilon$ \nsuch that $q(x,y)= h'_\\sharp( f(x,y) )$.\n\nBy construction we have $f(0,x)= f(x,0)$ for $x$ in a neighborhood of $0 \\in F_0^n$.\nWe apply Corollary \\ref{cor_analytic} to $f$ so that there exists $\\epsilon >0$\nsuch that for each $a \\in \\Upsilon$ with $\\mid a \\mid \\, \\leq \\epsilon$, there exist\n$v \\in V^n$ and $w \\in W^n$ such that\n$(v, w) \\in \\Omega$ and $f(v, w) = a$.\nWe denote by $\\Upsilon_\\epsilon= \\Upsilon \\cap B(0, \\epsilon)$.\nThen ${h'_\\sharp}^{-1}( \\Upsilon_\\epsilon)$ is an open neighborhood of \n$0$ in $(F_0)^n$.\n\nWe claim that $\\Upsilon_\\epsilon \\subset RG(F_1) \\, RG(F_2)$.\nLet $g =h'_\\sharp(a) \\in h( \\Upsilon_\\epsilon)$.\nThen $a= f(v,w)$ with $(v, w) \\in \\Omega$.\nIt follows that $g= h'_\\sharp( a)= h_\\sharp( f(v,w)) = q(v,w) = h'_\\sharp(v) h'_\\sharp(w) \\in RG(F_1) \\times RG(F_2)$.\n\n\n \\smallskip\n \n \\noindent (2) \n If $RG(F_1)$ is furthermore dense in $RG(F_0)$, then (1) shows that \n $RG(F_1) \\, RG(F_2)$ is a dense open subset of $RG(F_0)$.\n Thus $RG(F_1) \\, RG(F_2)=RG(F_0)$ \n \n \n \\smallskip\n \n \\noindent (3) We assume that $H$ is semisimple simply connected and \n that $G_{F_1 \\otimes_F E_1}$ is strictly isotropic.\n According to Lemma \\ref{lem_GS}.(2), we have\n $G(F_1)^+=RG(F_1)=RH(F_1 \\otimes_F E)=H^+(F_1 \\otimes_F E)$\n and similarly for $F_0$.\n \n \nSince $H^+(F_1 \\otimes_F E)^+$ is dense in $H^+(F_0 \\otimes_F E)$, it follows that \n$RG(F_1)$ is dense in $RG(F_0)$. Assertion \n (2) yields then $RG(F_1) \\, RG(F_2)=RG(F_0)$.\n Lemma \\ref{lem_R_eq}.(2) states that $G(F_1)^+=RG(F_1)$\n and $G(F_0)^+=RG(F_0)$. We conclude that \n $G(F_1)^+ \\, RG(F_2)=G(F_0)^+$.\n \n\n \\smallskip\n \n \\noindent (4) We have furthermore $G(F_2)^+= RG(F_2)$ so that (3)\n yields $G(F_1)^+ \\, G(F_2)^+ =G(F_0)^+$.\n\\end{proof}\n\n \n \\begin{sremark} {\\rm Note that (1) shows in particular that \n $RG(F_0)$ is an open subgroup of $G(F_0)$.\n }\n \\end{sremark}\n \n \n The main result of the section is the following patching statement on twisted flag varieties.\n \n\n \\begin{stheorem} \\label{thm_key} We put $F=F_1 \\cap F_2$. 
\n Let $H$ be a reductive $E$--group and let \n $X$ be a twisted flag $E$--variety of $H$.\n Assume that $X$ is auto-opposite, that is,\n the stabilizer $P$ of an $E_s$--point of $X$\n is conjugated to an opposite parabolic subgroup of $P$ \\cite[\\S 4.9]{BoT65}. We put $G=R_{E\/F}(H)$ and $Z=R_{E\/F}(X)$.\n If $Z(F_1) \\not = \\emptyset$\n and $Z(F_2) \\not = \\emptyset$, then $X(F) \\not = \\emptyset$.\n \\end{stheorem}\n\n \n \\begin{proof} Without loss of generality, we can assume that $H$ is\n semisimple simply connected and that it is absolutely $E$--simple.\n This implies that $H$ is strictly $F_i \\otimes_F E$--isotropic for $i=1,2$.\n Let $x_i \\in Z(F_i)= X( F_i \\otimes_F E)$ and denote by $P_i=\\mathop{\\rm Stab}\\nolimits_{G_{F_2}}(x_2)$\n its stabilizer, this is parabolic $F_i \\otimes_F E$--subgroup of $H$. We denote by $U_i\/E$ its unipotent radical. \n\n Since the conjugacy class of $P_1$ is autopposite, there exists \n $h \\in H(E \\otimes_F F_0)$ such that $P_{1, E \\otimes_F F_0}$\n is opposite to $^hP_{2, E \\otimes_F F_0}$ \\cite[XXVI.5.3]{SGA3}.\nAccording to Lemma \\ref{lem_GS0}, we have \n$H(E \\otimes_F F_0)= H^+(E \\otimes_F F_0) \\, P_2(E \\otimes_F F_0)$\nso we can assume that $h \\in H^+(E \\otimes_F F_0)$.\nAccording to Proposition \\ref{prop_R_eq}.(4), \nwe can write $h=h_1 \\, h_2$ with $h_i \\in H^+(E \\otimes_F F_i)$ for $i=1,2$.\nUp to replace $P_1$ by $^{h_1^{-1}}P_1$ and $P_2$ by $^{h_2^{-1}}P_2$\nwe can then assume that $P_{1,F_0}$ is opposite to $P_{2,F_0}$.\n According to \\cite[XXVI.5.1]{SGA3}, we have a decomposition \n\n\\begin{equation} \\label{michel} \n H(F_0 \\otimes_F E) =U_2(F_0 \\otimes_F E) \\, U_1(F_0 \\otimes_F E) \\,\n P_2(F_0\\otimes_F E).\n \\end{equation}\n \n\\noindent Let $h \\in H(F_0 \\otimes_F E)$ such that $x_1=h.x_2$. The preceding decomposition permits to write\n$h= u_2 \\, u_1 \\, p_2$ with $u_2 \\in U_2(F_0\\otimes_F E)$, \n $u_1 \\in U_1(F_0 \\otimes_F E)$\nand $p_2 \\in P_2(F_0 \\otimes_F E)$. It follows\nthat $$\nx_1= u_1^{-1}.x_1 = (u_1^{-1}h).x_2= (u_1^{-1} \\, u_2 \\, u_1) \\, . \\, (p_2.x_2) =(u_1^{-1} \\, u_2 \\, u_1) \\, . \\, x_2\n$$\n hence $x_1 \\in H(F_0 \\otimes_F E)^+.x_2$.\n According to Proposition \\ref{prop_R_eq}.(4), we have $G(F_0)^+= G(F_1)^+ \\, G(F_2)^+$\n so that $u_1^{-1} \\, u_2 \\, u_1 =g_1 \\, g_2$ with $g_i \\in G(F_i)^+$ for $i=1,2$.\n It follows that $g_1^{-1} \\, . \\, x_1 = g_2^{-1} \\, . \\, x_2$, this defines a point of $X(E)=Z(F)$.\n \\end{proof}\n\n \n \n \n \\section{Relation with the original HHK method}\n \nWe recall the setting.\nLet $T$ be an excellent complete discrete valuation ring with fraction field $K$, residue field $k$ and\nuniformizing parameter $t$. \nLet $F$ be a one-variable function field over $K$\nand let $\\goth X$ be a normal model of $F$, i.e. a normal connected\nprojective $T$-curve with function field $F$.\n We denote by $Y$ the closed fiber of $\\goth X$ and fix a separable closure $F_s$ of $F$.\n \n For each point $P \\in Y$, let $R_P$ be the local ring of\n $\\goth X$ at $P$; its completion $\\widehat R_P$ is a domain \n with fraction field denoted by $F_P$.\n \n \n For each subset $U$ of $Y$ that is contained\n in an irreducible component of $Y$ and does not meet the other components, \nwe define\n $R_U= \\bigcap\\limits_{P \\in U} R_P \\subset F$. 
We denote by $\\widehat R_U$\n the $t$--adic completion of $R_U$.\n The rings $R_U$ and $\\widehat R_U$ are excellent normal domains \n and we denote by $F_U$ the fraction field of $\\widehat R_U$\n \\cite[Remark 3.2.(b)]{HHK3}.\n \n Each height one prime $\\mathfrak{p}$ in $\\widehat R_P$ that contains\n $t$ defines a branch of $Y$ at $P$ lying \n on some irreducible component of $Y$. \n The $t$-adic completion $\\widehat R_{\\mathfrak{p}}$ of the local ring\n $R_{\\mathfrak{p}}$ of $\\widehat R_P$ at $\\mathfrak{p}$ is a complete DVR\n whose fraction field is denoted by $F_{\\mathfrak{p}}$. \n The field $F_{\\mathfrak{p}}$ contains also $F_U$ if $U$ \n is an irreducible open subset of $Y$ such that $P \\in \\overline{U} \\setminus U$.\n We have then a diagram of fields\n \n\\[\\xymatrix{\n & F_{\\mathfrak{p}} & \\\\ \n F_P \\ar[ru] & & F_U . \\ar[lu]\n}\\]\n \n \n \n \n \\begin{sexample} \\label{sexample1}{\\rm We assume that $T=k[[t]]$ and \n take $X= \\mathbb{P}^1_K$, $\\goth X= \\mathbb{P}^1_T$,\n $P= \\infty_k$, $U=\\mathbb{A}^1_k= \\mathop{\\rm Spec}\\nolimits(k[x])$.\n The ring $R_U$ contains $k[[t]][x]$ and is \n its localization with respect to the multiplicative set $S$\n of elements which are units modulo $t$.\n The $t$-adic completion of $R_U$ is $\\widehat R_U=\n k[x][[t]]$; we have $F_U= Frac( \\widehat R_U)=\n k(x)((t))$.\n The local ring of $\\goth X$ at $P=\\infty_k$ is $R_P=k[[t]][x^{-1}]_{(x^{-1},t)}$\n so that its completion is $\\widehat R_P= k[[t,x^{-1}]]$;\n in particular $F_P=k((t,x^{-1}))$.\n \n We take $\\mathfrak{p}= t \\widehat R_P \\subset \\widehat R_P$\n and the $t$-adic completion $\\widehat R_{\\mathfrak{p}}$ \n of the local ring\n $R_{\\mathfrak{p}}$ of $\\widehat R_P$ at $\\mathfrak{p}$ is a complete DVR\n which is $k((x^{-1}))[[t]]$. In particular $F_\\goth p= k((x^{-1}))((t))$.\n }\n \\end{sexample}\n\n \\medskip\n \n \n \\begin{ssetting} \\label{setting_hhk} {\\rm\n Let $\\mathcal P$ be a non-empty finite set of closed points of $Y$ that contains\n all the closed points at which distinct irreducible components meet.\n Let $\\mathcal U$ be the set of connected components of $Y \\setminus \\mathcal P$ and let $\\mathcal B$\n be the set of branches of $Y$ at points of $\\mathcal P$.\n This yields a finite inverse system of field $F_P, F_U, F_\\goth p$ (for $P \\in \\mathcal P$;\n $U \\in \\mathcal U$, $\\goth p \\in \\mathcal B$) where $F_P, F_U \\subset F_\\goth p$ if $\\goth p$\n is a branch of $Y$ at $P$ lying in the closure of $U$.\n }\n\\end{ssetting}\n\n\n\n\\begin{slemma} \\label{lem_dense0} We assume that $X= \\mathbb{P}^1_K$, $\\goth X= \\mathbb{P}^1_T$,\n $P= \\infty_k$, $U=\\mathbb{A}^1_k= \\mathop{\\rm Spec}\\nolimits(k[x])$ and $\\goth p$ the branch\n of $P$. 
We put $F_1=F_P$, $F_2=F_U$\n and $F_0=F_\\goth p$.\n \n\n\\smallskip\n\n\\noindent (1) $F_1$ is $t$-dense in $F_0$.\n\n\n\\smallskip\n\n\\noindent (2) We put $V= F_1 \\cap \\widehat R_\\mathfrak{p}$ and $W= F_2 \\cap \\widehat R_\\mathfrak{p}$.\nThen $V$ and $W$ satisfy conditions \\eqref{cond_I} and \\eqref{cond_II}.\n \n\\end{slemma}\n\n\n\n \\begin{proof}\n \\noindent (1) We are given $u_0\/v_0 \\in F_0$ with $u_0, v_0 \\in \\widehat R_\\mathfrak{p}$, $v_0 \\not =0$.\n There exists elements $u, v \\in R_\\goth p$ very close respectively of $u_0,v_0$ with $v \\not =0$.\n Let $s_1, s_2 \\in R_p \\setminus R_p \\goth p$ such that $s_1 u \\in R_P$ and $s_2 v \\in R_\\goth p$.\n Then $u_0\/v_0$ is very close of $(s_1s_2 u) \/ (s_1s_2 v) \\in F_1$.\n \n \n \\smallskip\n \n \\noindent (2) Condition \\eqref{cond_II} is obviously fullfilled since $t \\in F_1 \\cap F_2$.\n For establishing condition \\eqref{cond_I}, we are given an element $f$ of $\\widehat R_\\goth p$ and \n may write it as $f= \\sum\\limits_{i=0}^\\infty x^{m_i} \\Bigl( \\sum_{j=0}^\\infty a_{i,j} \\frac{1}{x^j} \\Bigr) t^i$\n where the $m_i$'s are non-negative integers and $a_{i,j} \\in T$. \n We decompose $$\n f= f_1 + f_2= \\sum_{i=0}^\\infty x^{m_i} \\Bigl( \\sum_{j=m_i}^\\infty a_{i,j} \\frac{1}{x^j} \\Bigr) t^i\n \\quad + \\quad\n \\sum_{i=0}^\\infty x^{m_i} \\Bigl(\\sum_{j=0}^{m_i-1} a_{i,j} \\frac{1}{x^j} \\Bigr) t^i \n $$\n We observe that $f_2$ belongs to $\\widehat R_P$ so belongs to $V$.\nWe recall that $R_U$ is the localization of $T[x]$ \nwith respect to the elements which are units modulo $t$. \nWe conclude that $f_2$ belongs to $W$ as desired.\n \\end{proof}\n \n \n \n \n \\begin{stheorem} \\label{thm_main_hhk} Let $G$ be a reductive $F$--algebraic group.\n \n \n \\smallskip\n \n \\noindent (1) Let $Z$ be a twisted flag projective $F$--variety for $G$.\n Then $Z(F) \\not= \\emptyset$ if and only if $Z(F_U) \\not = \\emptyset$ for \n each $U \\in \\mathcal U$ and $Z(F_P) \\not = \\emptyset$ for \n each $P \\in \\mathcal P$.\n \n \\smallskip\n \n \\noindent (2) For each $U \\in \\mathcal U$ (resp.\\ each $P \\in \\mathcal P$), \n we fix an $F$--embeddings $i_U: F_s \\to F_{U,s}$ (resp.\\ $i_P: F_s \\to F_{P,s}$)\n providing identifications $\\Delta(G_{F_s}) \\buildrel\\sim\\over\\lgr \\Delta(G_{F_{U,s}})$\n (resp.\\, $\\Delta(G_{F_s}) \\buildrel\\sim\\over\\lgr \\Delta(G_{F_{P,s}})$).\n The Tits index $\\Delta_0(G)$ is the smallest subset\n of $\\Delta(G_{F_s})$\n which is stable under the $\\star$--action of $\\mathop{\\rm Gal}\\nolimits(F_s\/F)$ and such that \n $\\Delta_0(G) \\subset \\Delta_0(G_{F_U})$ for each $U \\in \\mathcal U$ and $\\Delta_0(G) \\subset \\Delta_0(G_{F_P})$ for each $P \\in \\mathcal P$.\n \n \\end{stheorem}\n\n The recollection for star action and Tits index is done in the beginning of the paper.\n \n \\begin{proof}\n The outline is to prove (1) under an assumption\n of autoppositeness, to prove (2) and then to prove (1) in the general case.\n \n \\smallskip\n \n \\noindent (1) We assume that $Z$ parameterizes an\n auto-opposite conjugacy\n class of parabolic subgroups of $G$. \n We use a Weil restriction argument as in the proof of \\cite[Thm. 4.2]{HHK2}.\nThis involves a finite morphism $f: \\goth X \\to \\mathbb{P}^1_T$ such that \n$\\mathcal P=f^{-1}(\\infty_k)$. Write $\\uF$ for the function field of $\\mathbb{P}^1_T$, and let $d=[F:F']$. 
We put\n$\\uU= \\mathbb{P}^1_k \\setminus \\{\\infty \\}$, $\\uP= \\infty_k$ and\n$\\underline{\\goth p}= (U, \\mathfrak{p})$ and\n $F_0=\\uF_{\\underline{\\goth p}}$, $F_1=\\uF_{\\uP}$ and $F_2=F_{\\uU}$. Also patching holds for the diamond $(\\uF, F_1,F_2,F_0)$ according to \\cite[thm 3.9]{HH}\n so in particular $K(x)= \\uF= F_1 \\cap F_2 \\subset F_0$. \n We put $V= F_1 \\cap \\widehat R_\\mathfrak{p}$ and $W= F_2 \\cap \\widehat R_\\mathfrak{p}$.\n Lemma \\ref{lem_dense0} shows that \n $F_1$ is dense in $F_0$ and that $V$ and $W$ satisfy conditions \\eqref{cond_I} and \\eqref{cond_II}.\n\n We consider the Weil restriction $\\uG= R_{F\/\\uF}(G)$, it is a reductive $\\uF$--group\nwhich acts on the $\\uF$--variety $\\uZ=R_{F\/\\uF}(Z)$.\nWe have $$\n\\uZ(F_1)= Z(F_1 \\otimes_{\\uF} F) = \\prod_{P \\in \\mathcal P} Z(F_P)\n$$\naccording to \\cite[Lemma 6.2.(a)]{HH}.\nSimilarly we have\n$$\n\\uZ(F_2)= Z(F_2 \\otimes_{\\uF} F) = \\prod_{U \\in \\mathcal U} Z(F_U)\n$$\nOur assumptions imply that $\\uZ(F_1) \\not = \\emptyset$ and \n$\\uZ(F_2) \\not = \\emptyset$. Theorem \\ref{thm_key} implies \nthat $\\uZ(\\uF) \\not = \\emptyset$. Thus $\\uZ(\\uF)= Z(F)$ is non-empty.\n\n \\smallskip \n \n \\noindent (2) Let $\\Theta$ be the smallest subset of $\\Delta(G_{F_s})$\n which is stable under the $\\star$--action of $\\mathop{\\rm Gal}\\nolimits(F_s\/F)$ and such that \n $\\Delta_0(G) \\subset \\Delta_0(G_{F_U})$ for each $U \\in \\mathcal U$ and $\\Delta_0(G) \\subset \\Delta_0(G_{F_P})$ for \n each $P \\in \\mathcal P$.\n Since $\\Theta$ is an intersection of auto-opposite subsets of $\\Delta(G_{F_s})$, it is auto-opposite.\n \n We observe that $\\Delta_0(G) \\subset \\Theta$ since it is \n stable under the star action and \n satisfies $\\Delta_0(G) \\subset \\Delta_0$ for each $U \\in \\mathcal U$ and $\\Delta_0(G) \\subset \\Delta_0(G_{F_P})$ for each $P \\in \\mathcal P$.\n For the converse inclusion we consider the $F$--variety $Z$ of parabolic\n subgroups of type $\\Theta$ (which is auto-opposite).\n For each $U \\in \\mathcal U$, we have $\\Theta \\subset \\Delta_0(G_{F_U})$ so that\n $Z(F_U) \\not = \\emptyset$; similarly we have \n $\\Theta \\subset \\Delta_0(G_{F_P})$ for each $P \\in \\mathcal P$\n so that $Z(F_P) \\not = \\emptyset$.\n Part (1) yields that $Z(F) \\not = \\emptyset$. Thus $\\Theta \\subset \\Delta_0(G)$\n and $\\Theta = \\Delta_0(G)$\n \n \\smallskip\n \n\\noindent (3) We consider now the case of an arbitrary flag $F$--variety $Z$ for $G$. It is associated to a subset $\\Upsilon$ of $\\Delta(G_{F_s})$\n which is invariant under the $\\star$--action of $\\mathop{\\rm Gal}\\nolimits(F_s\/F)$. \n If $Z(F) \\not = \\emptyset$, we have that $Z(F_U) \\not = \\emptyset$ for each \n $U \\in \\mathcal U$ and that $Z(F_P) \\not = \\emptyset$ for each \n $P \\in \\mathcal P$. Conversely if $Z(F_U) \\not = \\emptyset$ for each \n $U \\in \\mathcal U$ and $Z(F_P) \\not = \\emptyset$ for each \n $P \\in \\mathcal P$. \n It follows that $ \\Upsilon \\subset \\Delta_0(G_{F_U})$\n for each $U \\in \\mathcal U$.\n Part (2) of the statement yields $\\Upsilon \\subset \\Delta_0(G)$\n so that $Z(F) \\not = \\emptyset$. 
\n \\end{proof}\n\n \n \n \\begin{scorollary} \\label{cor_main_hhk} Let $G$ be a reductive $F$--algebraic group.\n \n \\smallskip\n \n \\noindent (1) Let $Z$ be a twisted flag projective $F$--variety for $G$.\n Then $Z(F) \\not= \\emptyset$ if and only if $Z(F_P) \\not = \\emptyset$ for \n each $P \\in Y$.\n \n \\smallskip\n \n \\noindent (2) For each $P \\in Y$, \n we fix an $F$--embedding $i_P: F_s \\to F_{P,s}$\n providing identifications \n $\\Delta(G_{F_s}) \\buildrel\\sim\\over\\lgr \\Delta(G_{F_{P,s}})$.\n The Tits index $\\Delta_0(G)$ is the smallest subset of $\\Delta(G_{F_s})$\n which is stable under the $\\star$--action of $\\mathop{\\rm Gal}\\nolimits(F_s\/F)$ and such that \n$\\Delta_0(G) \\subset \\Delta_0(G_{F_P})$ for each $P \\in Y$.\n \n \\end{scorollary}\n\n \n \\begin{proof}\n (1) We assume $Z(F_P) \\not = \\emptyset$ for \n each $P \\in Y$. Let $Y_1,\\dots, Y_d$ be the irreducible components of $Y$\n with respective generic points $\\eta_1, \\dots, \\, \\eta_d$. \n According to \\cite[prop. 5.8]{HHK2}, there exists non-empty affine subsets \n $U_i \\subset Y_i$ ($i=1,\\dots, d$) such that $Z(F_{U_i}) \\not = \\emptyset$\n for $i=1,\\dots, d$ and $U_i \\cap U_j = \\emptyset$ for $i0$, set $X^{\\varepsilon}_t = \\varepsilon X_{t \/\\varepsilon^2}$, $t\\geq 0$. \nLet $\\mathcal{D}_T = D([0,T], \\mathbb{R}^d)$ denote the Skorokhod space, \nand let $\\mathcal{D}_\\infty=D([0,\\infty), \\mathbb{R}^d)$.\nWrite $d_S$ for the Skorokhod metric and $\\mathcal{B}(\\mathcal{D}_T)$ for the $\\sigma$-field of \nBorel sets in the corresponding topology. \nLet $X$ be the canonical process on $\\mathcal{D}_\\infty$ or $\\mathcal{D}_T$, $P_{\\text{BM}}$ be Wiener \nmeasure on $(\\mathcal{D}_\\infty, \\mathcal{B}(\\mathcal{D}_\\infty))$ and let $E_{\\text{BM}}$ be the \ncorresponding expectation. \nWe will write $W$ for a standard Brownian motion.\nIt will be convenient to assume that $\\{\\mu_e\\}_{e\\in E_d}$ are \ndefined on a probability space $(\\Omega, \\mathcal{F}, \\bP)$, and that\n$X$ is defined on $(\\Omega, \\mathcal{F}) \\times (\\mathcal{D}_\\infty, \\mathcal{B}(\\mathcal{D}_\\infty))$ \nor $(\\Omega, \\mathcal{F}) \\times (\\mathcal{D}_T, \\mathcal{B}(\\mathcal{D}_T))$. \nWe also define the averaged or annealed measure ${\\bf P}$ on \n$(\\mathcal{D}_\\infty, \\mathcal{B}(\\mathcal{D}_\\infty))$ or $(\\mathcal{D}_T, \\mathcal{B}(\\mathcal{D}_T))$ by\n\\begin{equation} \\label{e:bfPdef}\n {\\bf P}(G) = \\bE P^0_{\\omega}(G). \n\\end{equation}\n\n\\begin{definition}\\label{j1.2}\nFor a bounded function $F$ on $\\mathcal{D}_T$ and a constant matrix $\\Sigma$, let \n$\\Psi^F_\\varepsilon = {E}^0_\\omega F(X^{\\varepsilon})$ and \n$\\Psi^F_\\Sigma = E_{\\text{BM}} F(\\Sigma W)$. 
We will use $I$ to denote the identity matrix.\n\n\\smallskip \\noindent (i) We say that the {\\em Quenched Functional CLT} (QFCLT) holds \nfor $X$ with limit $\\Sigma W$ if for every $T>0$ and \nevery bounded continuous function $F$ on $\\mathcal{D}_T$ we \nhave $\\Psi^F_\\varepsilon \\to \\Psi^F_\\Sigma$ as $\\varepsilon\\to 0$, with $\\Pp$-probability 1.\\\\\n(ii) We say that the {\\em Weak Functional CLT} (WFCLT) \nholds for $X$ with limit $\\Sigma W$ if for every $T>0$ and every \nbounded continuous function $F$ on $\\mathcal{D}_T$ we have \n$\\Psi^F_\\varepsilon \\to \\Psi^F_\\Sigma$ as $\\varepsilon\\to 0$, in $\\Pp$-probability.\\\\\n(iii) We say that the {\\em Averaged (or Annealed) Functional CLT}\n(AFCLT) holds for $X$ with limit $\\Sigma W$ if for every $T>0$ and every \nbounded continuous function $F$ on $\\mathcal{D}_T$ we have \n$ \\bE \\Psi^F_\\varepsilon \\to \\Psi_{\\Sigma}^F$.\nThis is the same as standard weak convergence with respect to the probability measure ${\\bf P}$. \n\\end{definition}\n\nIf we take $\\Sigma$ to be non-random then, since $F$ is bounded, it is\nimmediate that QFCLT $\\Rightarrow$ WFCLT. In general for the QFCLT the matrix\n$\\Sigma$ might depend on the environment $\\mu_\\cdot({\\omega})$. However, if\nthe environment is stationary and ergodic, then $\\Sigma$ is a shift invariant\nfunction of the environment, so must be $\\bP$--a.s. constant.\nIn \\cite{DFGW} it is proved that if $\\mu_e$ is a stationary ergodic \nenvironment with $\\bE \\mu_e<\\infty$ then the WFCLT holds. In \\cite[Theorem 1.3]{BBT1} \nit is proved that for the random conductance model the AFCLT and WFCLT are equivalent.\n\n\\begin{definition}\nWe say an environment $(\\mu_e)$ on ${\\mathbb Z}^d$ is {\\em symmetric} if the law of $(\\mu_e)$ is \ninvariant under symmetries of ${\\mathbb Z}^d$. \n\\end{definition}\n\nIf $(\\mu_e)$ is stationary, ergodic and symmetric, and the WFCLT holds with\nlimit $\\Sigma W$ then the limiting covariance matrix $\\Sigma^T \\Sigma$ must also\nbe invariant under symmetries of ${\\mathbb Z}^d$, so must be a constant \ntimes the identity.\n\nIn a previous paper \\cite{BBT1} we proved the following theorem:\n\n\\begin{theorem}\\label{T:oldmain}\nLet $d=2$ and $p<1$.\nThere exists a symmetric stationary ergodic environment $\\{\\mu_e\\}_{e\\in E_2}$\nwith $\\bE (\\mu_e^p \\vee \\mu_e^{-p})<\\infty$ \nand a sequence $\\varepsilon_n \\to 0$ such that\\\\\n(a) the WFCLT holds for $X^{\\varepsilon_n}$ with limit $W$, \ni.e., for every $T>0$ and every \nbounded continuous function $F$ on $\\mathcal{D}_T$ we have \n$\\Psi^F_{\\varepsilon_n} \\to \\Psi^F_I$ as $n\\to \\infty$, in $\\Pp$-probability,\n\\\\\nbut \\\\\n(b) the QFCLT does not hold for $X^{\\varepsilon_n}$ with limit $ \\Sigma W$ for any $\\Sigma$. 
\n\\end{theorem}\n\nIn this paper we prove that for an environment similar to\nthat in Theorem \\ref{T:oldmain} the WFCLT holds for $X^{\\varepsilon}$ as $\\varepsilon \\to 0$,\nand not just along a subsequence.\n\n\\begin{theorem}\\label{T:main}\nLet $d=2$ and $p<1$.\nThere exists a symmetric stationary ergodic environment $\\{\\mu_e\\}_{e\\in E_2}$\nwith $\\bE (\\mu_e^p \\vee \\mu_e^{-p})<\\infty$ \nsuch that\\\\\n(a) the WFCLT holds for $X^{\\varepsilon}$ with limit $W$, \ni.e., for every $T>0$ and every \nbounded continuous function $F$ on $\\mathcal{D}_T$ we have \n$\\Psi^F_{\\varepsilon} \\to \\Psi^F_I$ as $\\varepsilon \\to 0$, in $\\Pp$-probability,\n\\\\\nbut \\\\\n(b) the QFCLT does not hold for $X^{\\varepsilon}$ with limit $ \\Sigma W$ for any $\\Sigma$. \n\\end{theorem}\n\nFor more remarks on this problem see \\cite{BBT1}.\n\n\\smallskip \\noindent {\\bf Acknowledgment.}\nWe are grateful to Emmanuel Rio, Pierre Mathieu, Jean-Dominique Deuschel \nand Marek Biskup for some very useful discussions.\n\n\\section{Description of the environment}\\label{const} \n\nHere we recall the environment given in \\cite{BBT1}. We refer the reader to that\npaper for proofs of some basic properties.\n\nLet $\\Omega = (0,\\infty)^{E_2}$, and $\\mathcal{F}$ be the Borel $\\sigma$-algebra defined \nusing the usual product topology. Then every $t\\in{\\mathbb Z}^2$ defines a transformation \n$T_t (\\omega)=\\omega +t$ of $\\Omega$. Stationarity and ergodicity of the measures \ndefined below will be understood with respect to these transformations. \n\nAll constants (often denoted $c_1, c_2$, etc.) are assumed to be strictly positive and finite.\nFor a set $A \\subset {\\mathbb Z}^2$ let $E(A)\\subset E_2$ be the set of all edges with both endpoints in\n$A$. Let $E_h(A)$ and $E_v(A)$ respectively\nbe the set of horizontal and vertical edges in $E(A)$.\nWrite $x \\sim y$ if $\\{x,y\\}$ is an edge in ${\\mathbb Z}^2$. Define the exterior boundary of $A$ by\n$$ {\\partial} A =\\{ y \\in {\\mathbb Z}^2 -A: y \\sim x \\text{ for some } x \\in A \\}. $$\nLet also\n$$ {\\partial}_i A = {\\partial}({\\mathbb Z}^2 -A). $$ \nDefine balls in the $\\ell^\\infty$ norm by $\\mathcal{B}(x,r)= \\{y: ||x-y||_\\infty \\le r\\}$; of \ncourse this is just the square with center $x$ and side $2r$.\n\nLet $\\{a_n\\}_{n\\geq 0}$, $\\{ \\beta_n\\}_{n \\ge 1}$ and $\\{b_n\\}_{n\\geq 1}$ be \nstrictly increasing sequences of positive integers growing to infinity with $n$,\nwith \n$$ 1=a_0 < b_1 < \\beta_1 < a_1 \\ll b_2 < \\beta_2< a_2 \\ll b_3 \\dots $$\nWe will impose a number of conditions on these sequences in the course\nof the paper. We collect the main ones here.\nThere is some redundancy in the conditions, for easy reference.\n\n\\begin{enumerate}[(i)]\n\\item $a_n$ is even for all $n$. \n\\item For each $n \\ge 1$, $a_{n-1}$ divides $b_n$, \nand $b_n$ divides $\\beta_n$ and $a_n$. 
\n\\item $b_1 \\geq 10^{10}$.\n\\item $a_n\/\\sqrt{2n} \\le b_n \\le a_n \/ \\sqrt{n} $ for all $n$, and\n$b_n \\sim a_n\/\\sqrt{n}$.\n\\item $b_{n+1} \\ge 2^n b_n$ for all $n$.\n\\item $b_n > 40 a_{n-1}$ for all $n$.\n\\item $b_n$ is large enough so that the estimates (5.1) and (6.1) of \\cite{BBT1} hold.\n\\item $100 b_n < \\beta_n \\le b_n n^{1\/4} < 2 \\beta_n < a_n\/10$ for $n$ large enough.\n\\end{enumerate}\n\nIn addition, at various points in the proof, we will assume that $a_n$ is sufficiently much\nlarger than $b_{n-1}$ so that a process $X^{(n-1)}$ defined below is such that for $a\\ge a_n$\nthe rescaled process\n$$ (a^{-1} X^{(n-1)}_{a^2 t}, t\\ge 0)$$\nis sufficiently close to Brownian motion.\nWe will mark the places in the proof where we impose these extra conditions by ($\\clubsuit$) .\n\n\n\\smallskip\\noindent\nWe begin our construction by defining a collection of squares in ${\\mathbb Z}^2$. Let\n\\begin{align*} \nB_n &= [0, a_n]^2, \\\\\nB_n' &= [0, a_n-1]^2 \\cap {\\mathbb Z}^2 ,\\\\\n\\mathcal{S}_n(x) &= \\{ x + a_n y + B_n': \\, y \\in {\\mathbb Z}^2 \\}.\n\\end{align*} \nThus $\\mathcal{S}_n(x)$ gives a tiling of ${\\mathbb Z}^2$ by disjoint squares of side $a_n-1$\nand period $a_n$.\nWe say that the tiling $\\mathcal{S}_{n-1}(x_{n-1})$ is a refinement\nof $\\mathcal{S}_n(x_n)$ if every square $Q \\in \\mathcal{S}_n(x_n)$ is a finite\nunion of squares in $\\mathcal{S}_{n-1}(x_{n-1})$. It is clear that \n$\\mathcal{S}_{n-1}(x_{n-1})$ is a refinement of $\\mathcal{S}_n(x_n)$ if\nand only if $x_n = x_{n-1}+ a_{n-1}y$ for some $y \\in {\\mathbb Z}^2$.\n\nTake $\\mathcal{O}_1$ uniform in $B'_1$, and for $n\\geq 2$\ntake $\\mathcal{O}_n$, conditional on $(\\mathcal{O}_1, \\dots, \\mathcal{O}_{n-1})$, \nto be uniform in $B'_n \\cap ( \\mathcal{O}_{n-1} + a_{n-1}{\\mathbb Z}^2)$. We now define random tilings by letting\n\\begin{equation*}\n \\mathcal{S}_n = \\mathcal{S}_n( \\mathcal{O}_n), \\, n \\ge 1. \n\\end{equation*}\n\nLet $\\eta_n$, $K_n$ be positive constants; we will have $\\eta_n \\ll 1 \\ll K_n$.\nWe define conductances on $E_2$ as follows. \nRecall that $a_n$ is even, and let $a_n' = \\frac12 a_n$. Let\n$$ C_n = \\{ (x,y) \\in B_n \\cap {\\mathbb Z}^2: y \\ge x, x+y \\le a_n \\}. $$\nWe first define conductances $\\nu^{n,0}_e$ for $e \\in E(C_n)$. Let\n\\begin{align*}\nD_n^{00} &= \\big\\{ (a'_n - \\beta_n,y), a'_n - 10 b_n \\le y \\le a'_n + 10 b_n \\big\\}, \\\\\nD_n^{01} &= \\big\\{ (x, a'_n + 10 b_n), (x, a'_n + 10 b_n + 1), (x, a'_n - 10 b_n), (x, a'_n - 10 b_n -1), \\\\\n\\nonumber \n & \\quad \\quad \\quad a'_n -\\beta_n -b_n \\le x \\le a'_n -\\beta_n + b_n \\big\\}.\n\\end{align*}\nThus the set $D^{00}_n \\cup D_n^{01}$ resembles the letter I (see Fig.~\\ref{fig1}).\n\nFor an edge $e \\in E(C_n)$ we set \n\\begin{align*} \n \\nu^{n,0}_{e} &= \\eta_n \\quad \\text {if } e \\in E_v(D^{01}_n), \\\\\n \\nu^{n,0}_{e} &= K_n \\quad \\text {if } e \\in E(D^{00}_n), \\\\\n \\nu^{n,0}_{e} &= 1 \\quad \\text {otherwise.} \n\\end{align*} \n\n\\begin{figure} \\includegraphics[width=4cm]{fig1_1}\n\\caption{The set $D^{00}_n \\cup D_n^{01}$ resembles the letter I.\nBlue edges have very low conductance. The red line represents edges with very \nhigh conductance. Drawing not to scale. 
\n}\n\\label{fig1}\n\\end{figure}\n\nWe then extend $\\nu^{n,0}$ by symmetry to $E(B_n)$.\nMore precisely,\nfor $z =(x,y) \\in B_n$, let $R_1 z=( y,x)$ and $R_2z = (a_n-y,a_n-x)$, so that\n$R_1$ and $R_2$ are reflections in the lines $y=x$ and $x+y=a_n$.\nWe define $R_i$ on edges by $R_i (\\{x,y\\}) = \\{R_i x, R_i y \\}$ for $x,y \\in B_n$. \nWe then extend $\\nu^{0,n}$ to $E( B_n)$ so that\n$\\nu^{0,n}_e = \\nu^{0,n}_{R_1 e }=\\nu^{0,n}_{R_2 e }$ for $e \\in E(B_n)$.\nWe define the {\\em obstacle} set $D_n^0$ by setting\n$$ D_n^0 = \\bigcup_{i=0}^1 \\big( D_n^{0,i} \\cup R_1(D_n^{0,i}) \\cup R_2(D_n^{0,i})\n \\cup R_1R_2 (D_n^{0,i} ) \\big). $$\nNote that $\\nu^{n,0}_e=1$ for every edge adjacent to the boundary of $B_n$,\nor indeed within a distance $ a_n\/4$ of this boundary.\nIf $e=(x,y)$, we will write $e-z = (x-z,y-z)$. \nNext we extend $\\nu^{n,0}$ to $E_2$ by periodicity, i.e.,\n$\\nu^{n,0}_e = \\nu^{n,0}_{e+ a_n x}$ for all $x\\in {\\mathbb Z}^2$.\nWe define the conductances $\\nu^n$ by translation by $\\mathcal{O}_n$, so that\n\\begin{equation*}\n \\nu^n_e =\\nu^{n,0}_{e-\\mathcal{O}_n}, \\, e \\in E_2.\n\\end{equation*}\nWe also define the obstacle set at scale $n$ by\n\\begin{equation}\\label{ma26.1}\n D_n = \\bigcup_{ x \\in {\\mathbb Z}^2} (a_n x + \\mathcal{O}_n + D^0_n ).\n\\end{equation}\nWe will sometimes call the set $D_n$ the set of $n$th level obstacles.\n\n\nWe define the environment $\\mu^n_e$ inductively by\n\\begin{align*}\n \\mu^n_e &= \\nu^{n}_e \\quad \\text{ if } \\nu^n_e \\neq 1, \\\\\n \\mu^n_e &= \\mu^{n-1}_e \\quad \\text{ if } \\nu^n_e=1.\n\\end{align*}\nOnce we have proved the limit exists, we will set\n\\begin{equation} \\label{e:mudef}\n \\mu_e = \\lim_n \\mu^n_e.\n\\end{equation}\n\n\n\n\\begin{lemma} \\label{L:erg} (See \\cite[Theorem 3.1]{BBT1}).\\\\\n(a) The environments $(\\nu^n_e, e\\in E_2)$, $(\\mu^n_e, e\\in E_2)$\nare stationary, symmetric and ergodic.\\\\\n(b) The limit \\eqref{e:mudef} exists $\\bP$--a.s. \\\\\n(c) The environment $(\\mu_e, e \\in E_2)$ is stationary, symmetric and ergodic.\n\\end{lemma}\n\n\n\nNow let \n\\begin{align}\\label{j27.4}\n\\mathcal{L}_n f(x) = \\sum_{y} \\mu^n_{xy} (f(y)-f(x)), \n\\end{align}\nand $X^{(n)}$ be the associated Markov process. Set\n\\begin{equation} \\label{e:etadef}\n \\eta_n = b_n^{-(1+1\/n)}, \\, n \\ge 1.\n\\end{equation}\nFrom Section 4 of \\cite{BBT1} we have:\n\n\\begin{theorem} \\label{T:eK}\nFor each $n$ there exists a constant $K_n$, depending on $\\eta_1, K_1, \\dots \\eta_{n-1}, K_{n-1}$,\nsuch that the QFCLT holds for $X^{(n)}$ with limit $W$.\n\\end{theorem}\n\nFor each $n$ the process $X^{(n)}$ has invariant measure which is counting measure \non ${\\mathbb Z}^2$. For $x \\in \\mathbb{R}^2$ and $a>0$ write $[xa]$ for the point in ${\\mathbb Z}^2$ closest to $xa$.\n(We use some procedure to break ties.) We have the following bounds on the transition\nprobabilities of $X^{(n)}$ from \\cite{BZ}. We remark that the constant $M_n$ below is\nnot effective -- i.e. the proof does not give any control on its value. 
\nWrite $k_t(x,y) = (2\\pi t)^{-1} \\exp( -|x-y|^2\/2t)$ for the transition density of Brownian motion\nin $\\mathbb{R}^2$, and\n$$ p^{{\\omega},n}_t(x,y) = P^x_{\\omega}( X^{(n)}_t =y )$$\nfor the transition probabilities for $X^{(n)}$.\n\n\\begin{lemma} \\label{L:hkXn} \nFor each $0< \\delta < T$ there exists $M_n=M_n(\\delta,T)$ such that for $a \\ge M_n$ \n\\begin{equation} \\label{e:GB1}\n\\frac12 k_t(x,y) \\le a^{2} p^{{\\omega},n}_{a^2t}([xa],[ya]) \\le 2 k_t(x,y) \\, \\hbox { for all }\n \\delta \\le t \\le T, |x|, |y| \\le T^2.\n\\end{equation}\n\\end{lemma} \n\n\n\n\n\\section{Preliminary results}\n\n\nSince a proof of Theorem \\ref{T:oldmain}(b) was given in \\cite{BBT1}, \nall we need to prove is part (a) of Theorem \\ref{T:main}.\nThe argument consists of several lemmas. We start with some preliminary \nresults on weak convergence of probability measures on the space of c\\`adl\\`ag functions. \nRecall the definitions of the measures $\\bP$ and $P^0_{\\omega}$.\n\nRecall that $\\mathcal{D} := \\mathcal{D}_1 = D([0,1], \\mathbb{R}^2)$ denotes the space of c\\`adl\\`ag functions \nequipped with the Skorokhod metric ${\\rm d_S}$ defined as follows (see \\cite[p.~111]{B}). \nLet $\\Lambda$ be the family of continuous strictly increasing functions $\\lambda$ \nmapping $[0,1]$ onto itself. In particular, $\\lambda(0) =0$ and $\\lambda(1) =1$. \nIf $x(t), y(t) \\in \\mathcal{D}$ then \n\\begin{align*}\n{\\rm d_S}(x,y) = \\inf_{\\lambda \\in \\Lambda}\n\\max\\Big( \\sup_{t\\in[0,1]} |\\lambda(t) - t|, \\sup_{t\\in[0,1]} |y(\\lambda(t)) - x(t)| \\Big).\n\\end{align*}\nFor $x(t) \\in \\mathcal{D}$, let $\\Osc(x, \\delta) = \\sup\\{|x(t)-x(s)|: s,t\\in[0,1], |s-t|\\le \\delta\\}$.\n\n\\begin{lemma}\\label{d21.2}\nSuppose that $\\sigma: [0,1] \\to [0,1]$ is continuous, non-decreasing and $\\sigma(0) = 0$ \n(we do not require that $\\sigma(1) = 1$). \nSuppose that $|\\sigma(t) - t| \\le \\delta$ for all $t\\in[0,1]$.\nLet $\\varepsilon\\geq0$, $\\delta_1>0$, $x, y \\in \\mathcal{D}$ with\n${\\rm d_S}(x(\\,\\cdot\\,), y(\\,\\cdot\\,))\\le \\varepsilon$, and\n$\\Osc(x, \\delta) \\vee \\Osc(y, \\delta) \\le \\delta_1$. Then\n${\\rm d_S}(x(\\sigma(\\,\\cdot\\,)), y(\\sigma(\\,\\cdot\\,))) \\le \\varepsilon + 2\\delta_1$.\n\\end{lemma}\n\n\\begin{proof}\nFor any $\\varepsilon_1> \\varepsilon$ there exists $\\lambda\\in \\Lambda$ such that,\n\\begin{align*}\n\\max\\Big( \\sup_{t\\in[0,1]} |\\lambda(t) - t|,\n\\sup_{t\\in[0,1]} |y(\\lambda(t)) - x(t)| \\Big)\\le\\varepsilon_1.\n\\end{align*}\nWe have for $\\lambda$ satisfying the above condition,\n\\begin{align*}\n&\\sup_{t\\in[0,1]} |y(\\sigma(\\lambda(t))) - x(\\sigma(t))|\\\\\n&\\qquad \\le \n\\sup_{t\\in[0,1]} (|y(\\sigma(\\lambda(t))) - y(\\lambda(t))|\n+ |y(\\lambda(t)) - x(t)| + |x(t) - x(\\sigma(t))|) \\\\\n&\\qquad \\le \\Osc(y,\\delta) + \\varepsilon_1 + \\Osc(x,\\delta) \\le \\varepsilon_1 + 2 \\delta_1. \n\\end{align*}\nHence,\n\\begin{align*}\n\\max\\Big( \\sup_{t\\in[0,1]} |\\lambda(t) - t|,\n\\sup_{t\\in[0,1]} |y(\\sigma(\\lambda(t))) - x(\\sigma(t))|\n\\Big) \\le \\varepsilon_1 + 2 \\delta_1.\n\\end{align*}\nTaking infimum over all $\\varepsilon_1 > \\varepsilon$ we obtain\n${\\rm d_S}(x(\\sigma(\\,\\cdot\\,)), y(\\sigma(\\,\\cdot\\,)))\n\\le \\varepsilon + 2\\delta_1$.\n\\end{proof}\n\nLet ${d_P}$ denote the Prokhorov distance between probability measures on a probability space defined \nas follows (see \\cite[p.~238]{B}). 
\nRecall that \n$\\Omega = (0,\\infty)^{E_2}$ and $\\mathcal{F}$ is the Borel $\\sigma$-algebra defined \nusing the usual product topology.\nWe will use measurable spaces $(\\mathcal{D}_T, \\mathcal{B}(\\mathcal{D}_T))$ and \n $(\\Omega, \\mathcal{F}) \\times (\\mathcal{D}_T, \\mathcal{B}(\\mathcal{D}_T))$, for a fixed $T$ (often $T=1$).\nNote that $\\mathcal{D}_T$ and $\\Omega \\times \\mathcal{D}_T$ are metrizable, with the metrics generating the usual topologies. A ball around a set $A$ with radius $\\varepsilon$ will\nbe denoted $\\mathcal{B}(A,\\varepsilon)$ in either space. \nFor probability measures $P$ and $Q$, \n${d_P}(P,Q) $ is the infimum of $\\varepsilon>0$ such that $P(A) \\le Q(\\mathcal{B}(A,\\varepsilon)) + \\varepsilon$ and \n$Q(A) \\le P(\\mathcal{B}(A,\\varepsilon)) + \\varepsilon$ for all Borel sets $A$.\nConvergence in the metric ${d_P}$ is equivalent to the weak convergence of measures.\nBy abuse of notation we will sometimes write arguments of the function \n${d_P}(\\,\\cdot\\,,\\,\\cdot\\,)$ as processes rather than their distributions: for example we will write\n${d_P}( \\{(1\/a)X^{(n)}_{ta^2}, t\\in[ 0,1]\\}, P_{\\text{BM}})$.\nWe will use ${d_P}$ for the Prokhorov distance\nbetween probability measures on $(\\Omega, \\mathcal{F}) \\times (\\mathcal{D}_T, \\mathcal{B}(\\mathcal{D}_T))$. We will write ${d_P}_\\omega$ for the metric on the space\n$(\\mathcal{D}_T, \\mathcal{B}(\\mathcal{D}_T))$.\nIt is straightforward to verify that if, for some processes $Y$ and $Z$, \n${d_P}_\\omega(Y,Z) \\le \\varepsilon$ for $\\bP$--a.a. $\\omega$, then ${d_P}(Y,Z) \\le \\varepsilon$.\n\nWe will sometimes write $W(t)=W_t$ and similarly for other processes.\n\n\\begin{lemma}\\label{d21.1}\nThere exists a function $\\rho: (0,\\infty) \\to (0,\\infty)$ such that $\\lim_{\\delta\\downarrow 0}\n \\rho(\\delta) = 0$ and the following holds.\nSuppose that $\\delta,\\delta'\\in (0,1)$ and $\\sigma: [0,1] \\to [0,1]$ is a non-decreasing \nstochastic process such that $t-\\sigma_t \\in [0,\\delta]$ for all $t$, with probability greater \nthan $1-\\delta'$. Suppose that $\\{W_t, t\\geq 0\\}$ has the distribution $P_{\\text{BM}}$ and \n$W^*_t = W(\\sigma_t)$ for $t\\in[0,1]$. \nThen ${d_P}(\\{W^*_t, t\\in[0,1]\\}, P_{\\text{BM}}) \\le \\rho(\\delta) + \\delta'$.\n\\end{lemma}\n\n\\begin{proof}\nSuppose that $W, W^*$ and $\\sigma$ are defined on the sample space with a \nprobability measure $P$.\nIt is \neasy to see that we can choose $\\rho(\\delta)$ so that \n$\\lim_{\\delta\\downarrow 0} \\rho(\\delta) = 0$\nand $P(\\Osc(W,\\delta) \\geq\\rho(\\delta) )<\\rho(\\delta)$. \nSuppose that the event \n$F := \\{\\Osc(W,\\delta) <\\rho(\\delta)\\}\\cap \\{ \\forall t\\in[0,1]: t-\\sigma_t \\in [0,\\delta]\\}$ holds. 
\nThen taking $\\lambda(t) = t $,\n\\begin{align*}\n{\\rm d_S}(W, W^*) &\\le \\max\\Big( \\sup_{t\\in[0,1]} |\\lambda(t) - t|,\n\\sup_{t\\in[0,1]} |W(\\lambda(t)) - W^*(t)| \\Big) \\\\\n&= \\sup_{t\\in[0,1]} |W(t) - W(\\sigma(t))| \\le \\Osc(W, \\delta) < \\rho(\\delta).\n\\end{align*}\nWe see that if $F$ holds and $W \\in A \\subset \\mathcal{D}$ then \n$W^*(\\,\\cdot\\,)\\in \\mathcal{B}( A,\\rho(\\delta))$.\nSince $P(F^c) \\le \\rho(\\delta) + \\delta'$, we obtain\n\\begin{align*}\nP&(W \\in A) \\\\\n&\\leq P(\\{W\\in A\\} \\cap F) + P(F^c) \n\\leq P(\\{W^*\\in \\mathcal{B}( A,\\rho(\\delta))\\} \\cap F) \n+\\rho(\\delta) + \\delta'\\\\\n&\\leq P(W^*\\in \\mathcal{B}( A,\\rho(\\delta))) \n+\\rho(\\delta) + \\delta'.\n\\end{align*}\nSimilarly we have\n$P(W^*\\in A ) \\le P(W\\in \\mathcal{B}( A,\\rho(\\delta)) ) + \\rho(\\delta) + \\delta'$, and\nthe lemma follows.\n\\end{proof}\n\n\\begin{lemma}\\label{ma26.5}\nSuppose that for some processes $X, Y$ and $Z$ on the interval $[0,1]$ we have $Z= X+Y$ and $P(\\sup_{0\\leq t \\leq 1} |X_t| \\leq \\delta) \\geq 1-\\delta$. \nThen ${d_P}(\\{Z_t, t\\in[0,1]\\}, \\{Y_t, t\\in[0,1]\\}) \\le \\delta$.\n\\end{lemma}\n\n\\begin{proof}\nSuppose that the event \n$F := \\{\\sup_{0\\leq t \\leq 1} |X_t| \\leq \\delta\\}$ holds. \nThen taking $\\lambda(t) = t $,\n\\begin{align*}\n{\\rm d_S}(Z,Y) &\\le \\max\\Big( \\sup_{t\\in[0,1]} |\\lambda(t) - t|,\n\\sup_{t\\in[0,1]} |Z(\\lambda(t)) - Y(t)| \\Big) \\\\\n&= \\sup_{t\\in[0,1]} |Z(t) - Y(t)| \\le \\delta.\n\\end{align*}\nWe see that if $F$ holds and $Z \\in A \\subset \\mathcal{D}$ then \n$Y(\\,\\cdot\\,)\\in \\mathcal{B}( A,\\delta)$.\nSince $P(F^c) \\le \\delta$, we obtain\n\\begin{align*}\nP(Z \\in A) &\\leq P(\\{Z\\in A\\} \\cap F) + P(F^c) \n\\leq P(\\{Y\\in \\mathcal{B}( A,\\delta)\\} \\cap F) \n+ \\delta\\\\\n&\\leq P(Y\\in \\mathcal{B}( A,\\delta)) \n+ \\delta.\n\\end{align*}\nSimilarly we have\n$P(Y\\in A ) \\le P(Z\\in \\mathcal{B}( A,\\delta) ) + \\delta$, and\nthe lemma follows.\n\\end{proof}\n\nRecall that the function $e\\to \\mu^n_e$ is periodic with period $a_n$.\nHence the random field $\\{\\mu^n_e\\}_{e\\in E_2}$ takes only finitely many values --\nthis is a much stronger statement than the fact that $\\mu^n_e$\ntakes only finitely many values.\n\n\nBy Theorem \\ref{T:eK} for each $n \\ge 1$,\n$$ \\lim_{a\\to \\infty} \n{d_P}( \\{ (1\/a)X^{(n)}_{ta^2}, t\\in[ 0,1]\\}, P_{\\text{BM}}) =0. $$ \nThus ($\\clubsuit$) we can take $a_{n+1}$ so large that for every $\\omega$, $n \\ge 1$\nand $a\\geq a_{n+1}$, \n\\begin{equation} \\label{e:PdistBM}\n{d_P}_\\omega( \\{(1\/a)X^{(n)}_{ta^2}, t\\in[ 0,1]\\}, P_{\\text{BM}}) \\le 2^{-n}.\n\\end{equation}\n\n\\bigskip\n\nLet $\\theta$ denote the usual shift operator for Markov processes, that is, $X^{(n)}_t \\circ \\theta_s = X^{(n)}_{t+s}$ for all $s,t\\geq 0$ (we can and do assume that $X^{(n)}$ is the canonical process on an appropriate probability space). \nRecall that\n$\\mathcal{B}(x,r) =\\{y: ||x-y||_\\infty \\le r\\}$ denote balls in the $\\ell^\\infty$ norm\nin ${\\mathbb Z}^2$ (i.e. squares), $a_n' = a_n\/2$, $B_n=[0,a_n]^2$ and $u_n =(a_n', a'_n)$. 
Note that $u_n$ is \nthe center of $B_n$.\nWe choose $ \\beta_n$ so that\n\\begin{align}\\label{j2.10}\n b_n n^{1\/8}\n< \\beta_n \\leq \\lfloor b_n n^{1\/4}\\rfloor < 2 \\beta_n < a_n\/10,\n\\end{align}\nand we assume that $n$ is large enough so that the above inequalities hold.\nLet $\\mathcal{C}_n =\\{ u_n + \\mathcal{O}_n+ a_n {\\mathbb Z}^2\\}$ be the set of centers of the squares in $\\mathcal{S}_n$, and let\n\\begin{equation} \\label{e:Hdef}\n \\calK(r) = \\bigcup_{z \\in \\mathcal{C}_n} \\mathcal{B}(z,r).\n\\end{equation}\nNow let\n\\begin{align*}\n\\Gamma^1_n &= \\calK(2\\beta_n), \\\\\n\\Gamma^2_n &= {\\mathbb Z}^2 \\setminus \\calK(4 \\beta_n).\n\\end{align*}\nNow define stopping times as follows.\n\\begin{align*}\nS^n_0 &= T^n_0 = 0,\\\\\nU^n_k & = \\inf\\{t\\geq S^n_{k-1}: X^{(n)}_t \\in \\Gamma^2_n\\}, \\qquad k \\geq 1,\\\\\nS^n_k & = \\inf\\{t \\geq U^n_k: X^{(n)}_t \\in \\Gamma^1_n\\}, \\qquad k \\geq 1,\\\\\nV^n_1 & = \\inf \\Big\\{t \\in \\bigcup_{k\\geq 1} [U^n_k, S^n_k]: \nX^{(n)}_t \\in X^{(n)}(T^n_0) + a_{n-1} {\\mathbb Z}^2 \\Big\\}, \\\\\nT^n_k & = \\inf\\{t\\geq V^n_{k}: X^{(n)}_t \\in \\Gamma^1_n\\}, \\qquad k \\geq 1,\\\\\nV^n_k & = V^n_{1} \\circ \\theta_{T^n_{k-1}} , \\qquad k \\geq 2.\n\\end{align*}\nLet\n$$ J= \\bigcup_{k=1}^\\infty [V^n_k, T^n_k]; $$\nfor $t \\in J$ the process $X^{(n)}$ is a distance at least\n$\\beta_n$ away from any $n$th level obstacle.\nNow set for $t\\geq 0$,\n\\begin{align*}\n\\sigma^{n,1}_t &= \\int_0^t {\\bf 1}_J(s) ds = \\sum_{k=1}^\\infty \\left(T^n_{k} \\land t - V^n_{k} \\land t\\right),\\\\\n\\sigma^{n,2}_t &= t-\\sigma^{n,1}_t = \\sum_{k=0}^\\infty \\left(V^n_{k+1} \\land t - T^n_{k} \\land t\\right).\n\\end{align*}\nLet $\\widehat \\sigma^{n,j}$ denote the right continuous inverses of these processes, given by\n\\begin{equation*}\n\\widehat \\sigma^{n,j}_t = \\inf\\{s\\geq 0: \\sigma^{n,j}_s \\geq t\\}, \\, j=1,2.\n\\end{equation*}\nFinally let \n\\begin{align*}\nX^{n,1}_t &= X^{(n)}_0 + \\int_0^t {\\bf 1}_J(s) dX^{(n)}_s \\\\\n &= X^{(n)}_0 + \\sum_{k=0}^\\infty\n\\left(X^{(n)}(T^n_k \\land t) - X^{(n)}(V^n_{k} \\land t)\\right), \\\\\n\\widehat X^{n,1}_t &= X^{(n)}_0 + X^{n,1}(\\widehat \\sigma^{n,1}_t), \\\\\nX^{n,2}_t &= X^{(n)}_0 + \\int_0^t {\\bf 1}_{J^c}(s) dX^{(n)}_s \\\\\n &= X^{(n)}_0 + \\sum_{k=0}^\\infty\n\\left(X^{(n)}(V^n_{k+1} \\land t) - X^{(n)}(T^n_{k} \\land t)\\right), \\\\\n\\widehat X^{n,2}_t &= X^{(n)}_0 + X^{n,2}(\\widehat \\sigma^{n,2}_t).\n\\end{align*}\n\nThe point of this construction is the following.\nFor every fixed $\\omega$, the function $e\\to\\mu_e^{n-1}$ is invariant under the shift by \n$x a_{n-1}$ for any $x\\in {\\mathbb Z}^2$, and \n$X^{(n)}(V^n_{k+1} ) = X^{(n)}(T^n_{k}) + x a_{n-1} $ for some $x\\in {\\mathbb Z}^2$. \nIt follows that for each $\\omega\\in \\Omega$, we have the following equality of distributions: \n\\begin{equation} \\label{e:Xhatdsn}\n\\{\\widehat X^{n,1}_t, t\\geq 0\\} {\\buildrel (d) \\over {\\ =\\ }} \\{X^{(n-1)}_t, t\\geq 0\\}.\n\\end{equation}\nThe basic idea of the argument which follows is to write $X^{(n)}= X^{n,1} + X^{n.2}$.\nBy Theorem \\ref{T:eK}, or more precisely by \\eqref{e:PdistBM}, the process $X^{n,1}$ is close\nto Brownian motion, so to prove Theorem \\ref{T:main} we need to prove that $X^{n,2}$\nis small.\n\n\\bigskip\nWe state the next lemma at a level of generality greater than what we need in this article. 
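
\smallskip \noindent {\bf Remark.} Before doing so we record, purely as an illustration, how the decomposition $X^{(n)} = X^{n,1} + X^{n,2}$ and the time changes $\sigma^{n,1}$, $\sigma^{n,2}$ introduced above can be computed from a trajectory sampled on a regular time grid. Nothing in the proofs relies on this sketch; the arrays {\tt pos} and {\tt in\_J} are assumed to come from a simulation of $X^{(n)}$ together with the intervals $[V^n_k, T^n_k]$, and the right continuous inverses are only approximated on the sampling grid.
\begin{verbatim}
# Illustration only: the decomposition X^(n) = X^(n,1) + X^(n,2) and the
# time-changed process hat X^(n,1), for a trajectory sampled with mesh dt.
#   pos  : (N,2) array, pos[i] = position of X^(n) at time i*dt
#   in_J : (N,)  boolean array, in_J[i] True iff i*dt lies in J
import numpy as np

def decompose(pos, in_J, dt):
    incr = np.diff(pos, axis=0)                   # increments of X^(n)
    w = in_J[:-1, None].astype(float)             # indicator of J at the left endpoints
    X1 = pos[0] + np.vstack([np.zeros(2), np.cumsum(w * incr, axis=0)])
    X2 = pos[0] + np.vstack([np.zeros(2), np.cumsum((1.0 - w) * incr, axis=0)])
    sigma1 = dt * np.concatenate([[0.0], np.cumsum(in_J[:-1].astype(float))])
    sigma2 = dt * np.arange(len(pos)) - sigma1    # sigma^(n,2)_t = t - sigma^(n,1)_t
    return X1, X2, sigma1, sigma2                 # X1 + X2 = pos when pos[0] = 0

def hat_X1(X1, sigma1, t):
    # hat X^(n,1) at time t: X^(n,1) evaluated at the right-continuous
    # inverse of sigma^(n,1), approximated on the sampling grid
    idx = np.searchsorted(sigma1, t, side="left")
    return X1[min(idx, len(X1) - 1)]
\end{verbatim}
In this discretised form the identity \eqref{e:Xhatdsn} can be checked empirically, by comparing the empirical law of the time changed first component with that of a simulated copy of $X^{(n-1)}$.
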
A variant of our lemma is in the book \\cite{AF} but we could not find a statement that would match perfectly our needs.\nConsider a finite graph $G=(\\mathcal{V},E)$ and suppose that for any edge $\\overline{xy}$, $\\mu_{xy}$ is a non-negative real number. Assume that $\\sum_{y\\sim x} \\mu_{xy} >0$ for all $x$.\nFor $f: \\mathcal{V} \\to \\mathbb{R}$ set\n$$ \\sE (f,f) = \\sum_{\\{x,y\\} \\in E} \\mu_{xy} (f(y)-f(x))^2. $$\nSuppose that $A_1, A_2 \\subset \\mathcal{V}$, $A_1 \\cap A_2 = \\emptyset$, and\nlet \n\\begin{align*}\n\\mathcal{H} &=\\{ f:\\mathcal{V} \\to \\mathbb{R} \\text{ such that } f(x)=0 \\text{ for } x\\in A_1, f(y)=1 \\text{ for } y \\in A_2\\},\\\\\n \\mathbf{r}^{-1} &= \\inf\\{ \\sE(f,f): f \\in \\mathcal{H} \\}. \n\\end{align*}\nThus $\\mathbf{r}$ is the effective resistance between $A_1$ and $A_2$.\nLet $Z$ be the continuous time Markov process on $\\mathcal{V}$ with the generator $\\mathcal{L}$ given by\n\\begin{align}\\label{ma21.1}\n\\mathcal{L} f(x) = \\sum_y \\mu_{xy} (f(y) - f(x)).\n\\end{align}\nLet $T_i = \\inf\\{t\\geq 0: Z_t \\in A_i\\}$ for $i=1,2$,\n and let $Z^{(i)}$ be $Z$ killed at time $T_i$.\n\n\\begin{lemma}\\label{L:com}\nThere exist probability measures $\\nu_1$ on $A_1$ and $\\nu_2$ on $A_2$ such that\n\\begin{align*}\nE^{\\nu_2} T_1 + E^{\\nu_1} T_2 = \\mathbf{r} |\\mathcal{V}| . \n\\end{align*}\nMoreover, for $i=1,2$, $\\nu_i$ is the capacitary measure of $A_i$ for the process $Z^{(3-i)}$.\n\\end{lemma}\n\n\\begin{proof}\nLet $h_{12}(x) = P^x( T_1 < T_2)$. \nSet $D= \\mathcal{V}-A_1$ and recall that $Z^{(i)}$ is $Z$ killed at time $T_i$.\nLet $G_2$ be the Green operator for $Z^{(2)}$, and $g_2(x,y)$ be the density of\n$G_2$ with respect to counting measure, so that\n$$ E^x T_2 = \\sum_{y \\in \\mathcal{V}} g_2(x,y). $$\nNote that $g_2(x,y)=g_2(y,x)$.\nLet $e_{12}$ be the capacitary measure of $A_1$ for the process $Z^{(2)}$. Then\n$\\mathbf{r}^{-1} = \\sum_{z \\in A_1} e_{12}(z), $ and\n$$ h_{12}(x) = \\sum_{z \\in A_1} e_{12}(z) g_2(z,x) . $$\nSo, if $ \\nu_1 = \\mathbf{r} e_{12} $, then\n\\begin{align*}\n\\sum_{y \\in \\mathcal{V}} h_{12}(y) &= \n \\sum_{y \\in \\mathcal{V}} \\sum_{x \\in A_1} e_{12}(x) g_2(x,y) \\\\\n&= \\mathbf{r}^{-1} \\sum_{x \\in A_1} \\nu_1(x) \\sum_{y \\in \\mathcal{V}} g_2(x,y) \\\\\n&= \\mathbf{r}^{-1} \\sum_{x \\in A_1} \\nu_1(x) E^x T_1 = \\mathbf{r}^{-1} E^{\\nu_1} T_2.\n\\end{align*}\nSimilarly if $h_{21}(x) =\\bP^x( T_2 < T_1)$ we obtain\n$\\mathbf{r}^{-1} E^{\\nu_2} T_1 = \\sum_{y \\in \\mathcal{V}} h_{21}(y) $, and since $h_{12}+h_{21}=1$, adding these\nequalities proves the lemma.\n\\end{proof}\n\n\\section{ Estimates on the process $X^{n,2}$ } \n\nIn this section we will prove \n\n\\begin{proposition}\\label{d22.2}\nFor every $\\delta>0$ there exists $n_1$ such that for all $n\\geq n_1$, $u\\geq a_n^2$, and $\\omega$ such that $0 \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$, \n\\begin{align}\\label{d22.3}\nP^0_\\omega\n\\left( \\sigma^{n,2}_u \/ u \\le \\delta, \\sup_{0\\le s \\le u} u^{-1\/2} |X^{n,2}_s| \\le \\delta \\right) \\geq 1-\\delta.\n\\end{align}\n\\end{proposition}\n\nThe proof requires a number of steps. We begin with a Harnack inequality.\n\n\\begin{lemma}\\label{L:harn}\nLet $1 \\le \\lambda \\le 10$. \nThere exist $p_1>0$ \n and $n_1 \\ge 1$ with the following properties. \\\\\n(a) Let $x \\in {\\mathbb Z}^2$, let $B_1= \\mathcal{B}(x, \\lambda \\beta_n)$ \nand $B_2= \\mathcal{B}(x, (2\/3) \\lambda \\beta_n)$. 
\nLet $F$ be the event that $X^{(n)}$ makes a closed loop around $B_2$\ninside $B_1 - B_2$\nbefore its first exit from $B_1$.\nIf $n \\ge n_1$ and $D_n \\cap B_1 = \\emptyset$ then\n$P^y_{\\omega}(F) \\ge p_1$ for all $ y \\in B_2$. \\\\\n(b) Let $h$ be harmonic in $B_1$. \nThen \n\\begin{equation} \\label{e:harni}\n \\max_{B_2} h \\le p_1^{-1} \\min_{B_2} h.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n(a) Using ($\\clubsuit$) and \\eqref{e:PdistBM} we can make a Brownian approximation to\n$\\beta_n^{-1} X^{(n)}_\\cdot$ which is good enough so that this estimate holds.\\\\\n(b) Let $y \\in B_1$ be such that $h(y) = \\max_{z \\in B_2} h(z)$.\nThen by the maximum principle there exists a connected path $\\gamma$ from $y$\nto ${\\partial}_i B_1$ with $h(w) \\ge h(y)$ for all $w \\in \\gamma$.\nNow let $y'\\in B_2$. On the event $F$ the process $X^{(n)}$ must hit $\\gamma$, and\nso we have\n$$ h(y') \\ge P^{y'}_{\\omega} (F) \\min_{\\gamma} h \\ge p_1 h(y),$$\nproving \\eqref{e:harni}. \n\\end{proof}\n\n\n\\begin{lemma}\\label{12a29lem}\nFor some $n_1$ and $c_1$, for all $n\\geq n_1$, $k\\geq 1$, and $\\omega$ such that \n$0 \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$, \n\\begin{align}\\label{z1.1}\nE^0_\\omega(U^n_k - S^n_{k-1} \\mid \\mathcal{F}_{S^n_{k-1}} ) \\leq c_1 \\beta_n^2.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nAssume that $\\omega$ is such that $0 \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$.\nBy the strong Markov property applied at $S^n_{k-1}$ for $k >1$, it is enough to prove \nthe Lemma for $k=1$, that is that \n$E^x_\\omega(U^n_1 ) \\le c_1 \\beta_n^2$ for all \n$x \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$.\n Let \n\\begin{align*}\n\\mathcal{V} &= \\mathcal{B}(u_n + \\mathcal{O}_n, 4 \\beta_n+1),\\\\\nA_1 &= \\partial_i \\mathcal{B}(u_n + \\mathcal{O}_n, (3\/2) \\beta_n),\\\\\nA_2 &= \\partial_i \\mathcal{V},\\\\\nA_3 &= \\partial_i \\mathcal{B}(u_n + \\mathcal{O}_n, 2 \\beta_n)\\\\\nT_i &= \\inf\\{t\\geq 0: X^{(n)}_t \\in A_i\\}, \\qquad i =1,2,3.\n\\end{align*}\nLet $Z$ be the continuous time Markov chain defined on $\\mathcal{V}$ by \\eqref{ma21.1}, \nrelative to the environment $\\mu^n$. Note that the transition probabilities from $x$ \nto one of its neighbors are the same for $Z$ and $X^{(n)}$ if $x $ is in the interior \nof $\\mathcal{V}$, i.e., $x\\notin \\partial_i \\mathcal{V} \\cup({\\mathbb Z}^2 \\setminus \\mathcal{V})$. \nNote also that $Z$ and $X^{(n-1)}$ have the same transition probabilities \nin the region between $A_1$ and $A_3$.\nThe expectations and probabilities in this proof will refer to $Z$.\nBy Lemma \\ref{L:com}, there exists a probability measure $\\nu_1$ on $A_1$ such that\n$E^{\\nu_1} T_2 \\leq \\mathbf{r} |\\mathcal{V}|$.\nWe have $|\\mathcal{V}| \\leq c_2 \\beta_n^2$.\n\nTo estimate $\\mathbf{r}$ note that by the choice of the constants $\\eta_{n-1}$ and $K_{n-1}$\nin Theorem \\ref{T:eK}, the resistance (with respect to $\\mu^{n-1}_e$) between two opposite sides of any\nsquare in $\\mathcal{S}_{n-1}$ will be 1. It follows that the resistance \nbetween two opposite sides of any square side $\\beta_n$ which is a union\nof squares in $\\mathcal{S}_{n-1}$ will also be 1. 
So, using Thompson's principle as\nin \\cite{BB3} we deduce that $\\mathbf{r} \\leq c_3$.\n\nSo, by Lemma \\ref{L:com} we have\n\\begin{align}\\label{ma18.1}\nE^{\\nu_1} T_2 \\leq c_4 \\beta_n^2.\n\\end{align}\n\nWe have for some $c_5$, $p_1 >0$ all $n$ and $x\\in \\mathcal{V} \\setminus \\mathcal{B}(u_n + \\mathcal{O}_n, (3\/2) \\beta_n)$,\n\\begin{align*}\nP^x_\\omega ( T_1 \\land T_2 \\leq c_5 \\beta_n^2) > p_1,\n\\end{align*}\nbecause an analogous estimate holds for Brownian motion and ($\\clubsuit$) we have \\eqref{e:PdistBM}. This and a standard argument based on the strong Markov property imply that for $x\\in A_3$,\n\\begin{align*}\nE^x_\\omega ( T_1 \\land T_2 ) \\leq c_6 \\beta_n^2.\n\\end{align*}\n\n\nNow for $y \\in A_1$ and $x \\in \\mathcal{V}$ set \n$$ \\nu_3^x (y) = P^x_\\omega(X^{(n)}(T_1 \\wedge T_2) = y ).$$\n(Note that there exist $x$ with $\\sum_{y \\in A_1} \\nu^x_3(y) < 1$.)\nWe obtain for $n\\geq n_2$ and $x\\in A_3$,\n\\begin{align}\\label{ma21.2}\nE^x_\\omega ( T_2 ) &=\nE^x_\\omega ( T_1 \\wedge T_2 ) + E^x_\\omega ((T_2 -T_1) {\\bf 1}_{ T_1 < T_2} ) \\\\\n& = E^x_\\omega ( T_1 \\wedge T_2 ) + E^{\\nu_3^x} T_2 \n\\leq c_6 \\beta_n^2 + E^{\\nu_3^x}_{\\omega} T_2. \\nonumber\n\\end{align}\nFor $y \\in A_1$ the function $x \\to \\nu^x_3(y)$ is harmonic in $\\mathcal{V} \\setminus A_1$.\nSo we can apply the Harnack inequality Lemma \\ref{L:harn} to deduce that there exists\n$c_7$ such that\n\\begin{equation}\n \\nu^x_3(y) \\le c_7 \\nu^{x'}_3(y) \\hbox{ for all } x,x' \\in A_3, y \\in A_1.\n\\end{equation}\n\n\nThe measure $\\nu_1$ is the hitting distribution on $A_1$ \nfor the process $Z$ starting with $\\nu_2$ (see \\cite[Chap.~3, p.~45]{AF}). So for\nany $x' \\in A_3$,\n\\begin{align*}\n\\nu_1(y) &= P^{\\nu_2}_0 ( Z_{T_1} =y) \n= \\sum_{x \\in A_3} P^{\\nu_2}_0 ( Z_{T_1} =x ) P^x_{\\omega}( Z_{T_1} =y) \\\\\n&\\ge \\sum_{x \\in A_3} P^{\\nu_2}_0 ( Z_{T_1} =x ) P^x_{\\omega}( Z_{T_1 \\wedge T_2} =y) \n\\ge \\min_{x \\in A_3} \\nu^x_3(y) \\ge c_7^{-1} \\nu^{x'}_3(y).\n\\end{align*}\nHence for any $x \\in A_3$,\n$$ E^{\\nu_3^x}_{\\omega} T_2 \\le c_7 E^{\\nu_1}_{\\omega} T_2 \\le c_8 \\beta_n^2,$$\nand combining this with \\eqref{ma21.2} completes the proof. \n\\end{proof}\n\nLet\n\\begin{align*}\nR_n^y =\\inf\\left\\{t\\geq 0 : X^{(n)}_t \\in (y + a_{n-1}{\\mathbb Z}^2) \\cup \\Gamma^1_n\\right\\}.\n\\end{align*}\n\n\\begin{lemma}\nThere exist $c_1>0$ and $p_1<1$ such that for all $x,y \\in {\\mathbb Z}^2$,\n\\begin{align}\\label{ma23.1}\n&P^x_{\\omega}\\left(R_n^y \\geq c_1 b_n^2 \\right) \\le p_1,\\\\\n&P^x_{\\omega}\\left(\\sup_{0\\leq t \\leq R^y_n}|x- X^{(n)}_t| \\geq c_1 b_n \\right) \\le p_1.\n\\label{ma23.2}\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nRecall that the family $\\{\\mu^{n-1}_{x+ \\cdot}\\}_{x\\in {\\mathbb Z}^2}$ of translates of the \nenvironment $\\mu^{n-1}_\\cdot$ contains only a finite number of distinct elements.\nSince each square in $\\mathcal{S}_{n-1}$ contains one point in $(y + a_{n-1}{\\mathbb Z}^2)$, \nif $b_n\/a_{n-1}$ is sufficiently large ($\\clubsuit$) then using the transition density estimates \\eqref{e:GB1}\nas well as \\eqref{e:PdistBM}, we obtain \\eqref{ma23.1} and \\eqref{ma23.2}. 
\n\\end{proof}\n\n\\begin{lemma}\nFor some $n_1$ and $c_1$, for all $n\\geq n_1$, $k\\geq 1$, and $\\omega$ such that $0 \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$, \n\\begin{align}\\label{ma21.5}\nE^0_\\omega(V^n_k - T^n_{k-1} \\mid \\mathcal{F}_{T^n_{k-1}}) \\leq c_1 b_n^2 n^{1\/2}.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nAssume that $\\omega$ is such that $0 \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$.\nLet\n\\begin{align*}\n\\widehat R^n_k =\\inf\\left\\{t\\geq U^n_k : X^{(n)}_t \\in (X^{(n)}(T^n_0) + a_{n-1}{\\mathbb Z}^2) \\cup \\Gamma^1_n\\right\\}.\n\\end{align*}\nLet $F_k = \\{ \\widehat R^n_k < S^n_k\\}$ and $G_k = \\bigcap _{j=1}^k F_j^c$.\nSince $b_n n^{1\/8} < \\beta_n$ for large $n$, we obtain from \\eqref{ma23.2} and definitions of $\\Gamma^1_n, \\Gamma^2_n, U^n_k$ and $S^n_k$ that there exists $p_2>0$ such that for $x\\in \\Gamma^2_n$,\n\\begin{align*}\nP^x_\\omega(F_k \\mid \\mathcal{F}_{U^n_k}) > p_2.\n\\end{align*}\nHence,\n\\begin{align}\\label{n5.2}\nP^x_\\omega(G_k ) < (1-p_2)^k.\n\\end{align}\nNote that if $F_k$ occurs then $V^n_1 \\leq \\widehat R^n_k$.\nWe have, using \\eqref{z1.1}, \\eqref{ma23.1} and \\eqref{n5.2},\n\\begin{align*}\nE^0_\\omega(V^n_1 - T^n_{0} ) \n&\\leq\n\\sum_{k=1}^\\infty\nE^0_\\omega((U^n_k - S^n_{k-1}) {\\bf 1}_{G_{k-1}})\n+ \n\\sum_{k=1}^\\infty\nE^0_\\omega((\\widehat R^n_k - U^n_{k}) {\\bf 1}_{G_{k-1}})\\\\\n&\\leq \\sum_{k=1}^\\infty\nc_2 \\beta_n^2 (1-p_2)^{k-1}\n+ \\sum_{k=1}^\\infty c_3 b_n^2 (1-p_2)^{k-1} \\\\\n&\\leq c_4 \\beta_n^2 \\leq c_5 b_n^2 n^{1\/2}.\n\\end{align*}\nThis proves the lemma for $k=1$.\nThe general case is obtained by applying this estimate to the\nprocess shifted by $T^n_{k-1}$; in other words, by using the strong Markov property.\n\\end{proof}\n\n\\begin{lemma} \\label{L:sigma_1}\nFor every $\\delta>0$ there exists $n_1$ such that for all $n\\geq n_1$, $u\\geq a_n^2$, and $\\omega$ such that $0 \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$,\n\\begin{align}\\label{n8.5}\nP^0_\\omega\n\\left( \\sigma^{n,2}_u \/ u \\le \\delta \\right) \\geq 1-\\delta\/2.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nAssume that $\\omega$ is such that $0 \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$.\nFix an arbitrarily small $\\delta >0$, consider $u \\geq a_n^2$ and let $j_* = \\lceil u\/(b_n^2 n^{5\/8} )\\rceil$. Then\n\\eqref{ma21.5} implies that for some $c_1$ and $n_2$, all $n\\geq n_2$, $u \\geq a_n^2$, \n\\begin{align*}\nE^0_{\\omega} \\left( \\frac 1{j_*} \\sum_{j=1}^{j_*} V^n_j - T^n_{j-1}\\right) \n\\leq c_1 b_n^2 n^{1\/2}.\n\\end{align*}\nHence, for some $n_3$, all $n\\geq n_3$, $u \\geq a_n^2$,\n\\begin{align*\nP^0_{\\omega} \\left( \\frac 1{j_*} \\sum_{j=1}^{j_*} V^n_j - T^n_{j-1} \\ge \\delta b_n^2 n^{9\/16} \\right) \\le \\delta\/8 ,\n\\end{align*}\nand, since $j_* \\delta b_n^2 n^{9\/16} \\leq \\delta u$, \n\\begin{align}\\label{d27.1}\nP^0_{\\omega} \\left( \\sum_{j=1}^{j_*} V^n_j - T^n_{j-1} \\ge \\delta u \\right) \\le \\delta\/8 . \n\\end{align}\n\nRecall $\\mathcal{K} (r)$ from \\eqref{e:Hdef}.\nLet\n\\begin{align*}\n\\widehat V^n_k & = \\inf\\{t\\geq V^n_{k}: X^{(n)}_t \\in \n{\\mathbb Z}^2 \\setminus \\mathcal{K}(b_n n ^{3\/8})\\} \\land T^n_k, \\qquad k \\geq 1,\\\\\n\\widetilde V^n_k & = \\inf\\{t\\geq \\widehat V^n_{k}: |X^{(n)}_t - X^{(n)}(\\widehat V^n_k)| \\geq (1\/2)b_n n^{3\/8} \\} , \\qquad k \\geq 1. 
\n\\end{align*}\nWe can use estimates for Brownian hitting \nprobabilities ($\\clubsuit$) to see that for some $c_2, c_3$ and $n_4$,\nall $n\\geq n_4$, $k$, \n\\begin{align}\\label{j2.13}\nP^0_{\\omega}(\\widehat V^n_k < T^n_k \\mid \\mathcal{F}_{V^n_k}) \\geq c_2\n\\frac{\\log (4 \\beta_n) - \\log (2 \\beta_n)}\n{\\log (2 b_n n^{3\/8})- \\log (2 \\beta_n)} \n\\geq c_3 \/\\log n . \n\\end{align}\nThere exist ($\\clubsuit$) $c_4$ and $n_5$, such that for\nall $n\\geq n_5$, $k\\geq 2$, \n\\begin{align*}\nP^0_{\\omega} &(T^n_k - V^n_k \\geq c_4 b_n^2 n^{3\/4} \\mid \\widehat V^n_k < T^n_k, \\mathcal{F}_{\\widehat V^n_k}) \\\\\n&\\ge\nP^0_{\\omega}(\\widetilde V^n_k - \\widehat V^n_k \\geq c_4 b_n^2 n^{3\/4} \\mid \\widehat V^n_k < T^n_k, \\mathcal{F}_{\\widehat V^n_k}) \\ge 3\/4. \n\\end{align*}\nThis and \\eqref{j2.13} imply that the sequence $\\{T^n_k - V^n_k\\}_{k\\geq 2}$ is stochastically minorized by a sequence of i.i.d.~random variables which take value $c_4 b_n^2 n^{3\/4} $ with probability $c_3 \/\\log n$ and they take value 0 otherwise.\nThis implies that for some $n_6$,\nall $n\\geq n_6$, $u \\geq a_n^2$, \n\\begin{align*\nP^0_{\\omega}\\left( \\frac 1{j_*} \\sum_{j=2}^{j_*} T^n_j - V^n_j \\le b_n^2 n^{3\/4}\/ \\log^2 n \\right) \\le \\delta\/4 \n\\end{align*}\nand, because $j_* b_n^2 n^{3\/4}\/\\log^2 n \\geq u$ assuming $n_6$ is large enough,\n\\begin{align*\nP^0_{\\omega}\\left( \\sum_{j=2}^{j_*} T^n_j - V^n_j \\le u \\right) \\le \\delta\/4 . \n\\end{align*}\nWe combine this with \\eqref{d27.1} and the definition of $\\sigma^{n,2}_u$ to obtain\nfor some $n_7$,\nall $n\\geq n_7$, $u \\geq a_n^2$, \n\\begin{align}\\label{d27.4}\nP^0_{\\omega}(\\sigma^{n,2}_u\/u \\le \\delta) \\geq 1-3\\delta\/8.\n\\end{align}\nThis completes the proof of the lemma.\n\\end{proof}\n\n\\bigskip\n\nLet $Y^n_k = (Y^n_{k,1}, Y^n_{k,2}) = X^{(n)}(V^n_{k+1} ) - X^{(n)}(T^n_{k} )$.\nSet \n$\\bar Y^n_k = \\sup_{T^n_k \\leq t \\leq V^n_{k+1}} |X^{(n)}(t)- X^{(n)}(T^n_{k} )|$.\nFor $x\\in {\\mathbb Z}^2$, let\n $\\Pi_n(x) \\in B'_n -u_n + \\mathcal{O}_n $ be the unique point with the \nproperty that $x-\\Pi_n(x) = a_n y$ for some $y\\in {\\mathbb Z}^2$.\n\n\nWe next estimate the variance of $X^{n,2}(V^n_{m+1}) = \\sum_{k=0}^ m Y^n_k$.\n\n\\begin{lemma} \\label{L:Ynest}\nThere exist $c_1, c_2$ and $n_1$ such that for all $n\\geq n_1$, $k\\geq 0$, $j=1,2$, and $\\omega$,\n\\begin{align}\\label{ma24.6}\nE^0_\\omega |Y^n_{k,j}| &\\leq E^0_\\omega |Y^n_k| \\leq E^0_\\omega |\\bar Y^n_k| \\leq c_1 \\beta_n, \\\\\n\\label{ma24.7}\n\\Var Y^n_{k,j} &\\le \\Var \\bar Y^n_{k} \\le c_2 \\beta_n^2 , \\qquad \\text { under } P^x_\\omega.\n\\end{align} \n\\end{lemma} \n\n\\begin{proof} Let \n\\begin{align}\\label{n8.1}\n\\mathcal{X}^{(n)}_k(t) &= X^{(n)}_t + \\Pi_n(X^{(n)}(T^n_k)) - X^{(n)}(T^n_k),\n\\qquad t\\in [T^n_k, V^n_{k+1}],\n\\end{align}\nand note that \n\\begin{align*}\nY^n_k = (Y^n_{k,1}, Y^n_{k,2}) = \\mathcal{X}^{(n)}_k(V^n_{k+1} ) - \\mathcal{X}^{(n)}_k(T^n_{k} ).\n\\end{align*}\n\nIt follows from the definition that we have $\\sup_{S^n_{k-1} \\leq t \\leq U^n_k} |X^{(n)}(t ) - X^{(n)}(S^n_{k-1} )| \\le 16\\beta_n$, a.s.\nThis, \\eqref{ma23.2} and the definition of $V^n_{k+1}$ imply that $|\\bar Y^n_k|$ is stochastically majorized by an exponential random variable with mean $c_3 \\beta_n$. 
This easily implies the lemma.\n\\end{proof}\n\n\nNext we will estimate the covariance of $Y^n_{k,1}$ and $Y^n_{j,1}$ for $j\\ne k$.\n\n\\begin{lemma} \\label{L:covY}\nThere exist $c_1, c_2$ and $n_1$ such that for all $n\\geq n_1$, $j < k-1$ and $\\omega$ such that $0 \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$, under $P^0_\\omega$,\n\\begin{align}\\label{n8.3} \n\\Cov(Y^n_{j,1},Y^n_{k,1}) & \\le c_1 e^{-c_2 (k-j)} \\beta_n^2.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nAssume that $\\omega$ is such that $0 \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$.\nLet\n\\begin{align*}\n\\Gamma^3_n &= \\Gamma^1_n \\cap \\mathcal{B}(u_n + \\mathcal{O}_n, a_n\/2)\n= \\mathcal{B}(u_n + \\mathcal{O}_n, 2\\beta_n),\\\\\n\\Gamma^4_n &= \\partial_i \\mathcal{B}(u_n + \\mathcal{O}_n, 3\\beta_n),\\\\\n\\tau(A) &= \\inf\\{t\\geq 0: \\mathcal{X}^{(n)}_0(t) \\in A\\}.\n\\end{align*}\n\nSuppose that $x,v\\in \\Gamma^3_n$ and $y \\in \\Gamma^4_n$. \nBy the Harnack inequality proved in Lemma \\ref{L:harn},\n\\begin{align}\\label{ma23.4}\n\\frac{P_\\omega^x (\\mathcal{X}^{(n)}_0(\\tau(\\Gamma^4_n)) = y)}\n{P_\\omega^v (\\mathcal{X}^{(n)}_0(\\tau(\\Gamma^4_n)) = y)}\n\\geq c_3.\n\\end{align}\n\nLet $\\mathcal{T}^n_k$ have the same meaning as $T^n_k$ but relative to the process $\\mathcal{X}^{(n)}_k$ rather than $X^{(n)}$.\nWe obtain from \\eqref{ma23.4} and the strong Markov property applied at $\\tau(\\Gamma^4_n)$ that, \nfor any \n$x,v,y \\in \\Gamma^3_n$ we have\n\\begin{align*\n\\frac{P_\\omega^x (\\mathcal{X}^{(n)}_0(\\mathcal{T}^n_1) = y)}\n{P_\\omega^v (\\mathcal{X}^{(n)}_0(\\mathcal{T}^n_1) = y)}\n\\geq c_3.\n\\end{align*}\nRecall that $T^n_0 =0$.\nThe last estimate implies that, for \n$x,v,y \\in \\Gamma^3_n$,\n\\begin{align*\n\\frac{P_\\omega (\\mathcal{X}^{(n)}_1(T^n_{1}) = y \\mid \\mathcal{X}^{(n)}_0(T^n_0) = x)}\n{P_\\omega (\\mathcal{X}^{(n)}_1(T^n_{1}) = y \\mid \\mathcal{X}^{(n)}_0(T^n_0) = v)}\n\\geq c_3.\n\\end{align*}\nSince the process $X^{(n)}$ is time-homogeneous, this shows that for \n$x,v,y \\in \\Gamma^3_n$ and all $k$,\n\\begin{align}\\label{d29.2}\n\\frac{P_\\omega (\\mathcal{X}^{(n)}_{k+1}(T^n_{k+1}) = y \\mid \\mathcal{X}^{(n)}_k(T^n_k) = x)}\n{P_\\omega (\\mathcal{X}^{(n)}_{k+1}(T^n_{k+1}) = y \\mid \\mathcal{X}^{(n)}_k(T^n_k) = v)}\n\\geq c_3.\n\\end{align}\nWe now apply Lemma 6.1 of \\cite{BTW} (see Lemma 1 of \\cite{BK} for a better presentation of the same estimate) to see that \\eqref{d29.2} implies that \nthere exist constants $C_k$, $k\\geq 1$, such that for every $k$ and all \n$x,v,y \\in \\Gamma^3_n$,\n\\begin{align*\n\\frac{P_\\omega^x (\\mathcal{X}_k^{(n)}(T^n_k) = y)}\n{P_\\omega^v (\\mathcal{X}_k^{(n)}(T^n_k) = y)}\n\\geq C_k.\n\\end{align*}\nMoreover, $C_k\\in(0,1)$, $C_k$'s depend only on $c_3$, and $1-C_k \\le e^{-c_4 k}$ for some $c_4>0$ and all $k$.\nBy time homogeneity of $X^{(n)}$, for $m\\leq j0$ and $c_{5}<1\/4$ and all large $t$, we have \n\\begin{align*}\nP^{\\mathbf{p}(n)}_{\\omega} \\left(\\sup_{1\\le s \\le t} |\\widehat X^{n,1} _s| \\geq c_{4}\\sqrt{t}\\right) \n=P^{\\mathbf{p}(n)}_{\\omega} \\left(\\sup_{1\\le s \\le t} |X^{(n-1)} _s| \\geq c_{4}\\sqrt{t}\\right) < c_{5}. \n\\end{align*}\nSince $\\widehat X^{n,1}_t = X^{n,1}(\\widehat \\sigma^{n,1}_t)$ and $\\widehat \\sigma^{n,1}_t \\geq t$, the last estimate implies that\n\\begin{align*}\nP^{\\mathbf{p}(n)}_{\\omega} \\left(\\sup_{1\\le s \\le t} |X^{n,1} _s| \\geq c_{4}\\sqrt{t}\\right) < c_{5}. 
\n\\end{align*}\nWe also have \nfor some $c_{6}>0$ and $c_{7}<1\/4$, and all large $t$, \n\\begin{align*}\nP^{\\mathbf{p}(n)}_{\\omega} \\left(\\sup_{1\\le s \\le t} |X^{(n)} _s| \\geq c_{6}\\sqrt{t}\\right) < c_{7}. \n\\end{align*}\nSince $X^{n,2} = X^{(n)} - X^{n,1}$, we obtain\nfor some $c_{8}>0$ and $c_{9}<1\/2$ and all large $t$, \n\\begin{align*}\nP^{\\mathbf{p}(n)}_{\\omega} \\left(\\sup_{1\\le s \\le t} |X^{n,2} _s| \\geq c_{8}\\sqrt{t}\\right) < c_{9}. \n\\end{align*}\nThis shows that $X^{n,2}$ does not have a linear drift.\nIt is clear from the law of large numbers that $\\liminf_{t\\to\\infty} \\sigma_t^{n,2}\/t >0$, so $\\widehat X^{n,2}$ does not have a linear drift either.\nWe conclude that ${E}^{\\mathbf{p}(n)}_\\omega Y^n_{k,1} = 0$. \n\nNow suppose that $X^{(n)}_0$ does not necessarily have the distribution $\\mathbf{p}(n)$. \nThe fact that ${E}^{\\mathbf{p}(n)}_\\omega Y^n_{k,1} = 0$ and a calculation similar to that in \\eqref{ma24.8} imply that,\n\\begin{align*\n|{E}^0_\\omega Y^n_{k,1}| \\le c_{10} e^{-c_{11} k} \\beta_n .\n\\end{align*}\n\nLet $c_{12} $ be the constant denoted $c_1$ in \\eqref{ma24.6}.\nThe last estimate and \\eqref{ma24.6} imply that for some $c_{13}$ and all $m\\geq 1$,\n\\begin{align} \\nonumber\n\\left| E^0_\\omega\\sum_{k=0}^{m} Y^n_{k,1}\\right| \n&\\leq \\sum_{k\\geq 0} |{E}^0_\\omega Y^n_{k,1}|\n+ \\sup_{k\\geq 1} E^0_{\\omega} |\\bar Y^n_k| \\\\\n\\label{j1.1}\n&\\leq \\sum_{k\\geq 0} c_{10} e^{-c_{11} k} \\beta_n +c_{12}\\beta_n\n\\leq c_{13} \\beta_n .\n\\end{align}\nAll estimates that we derived for $Y^n_{k,1}$'s apply to $Y^n_{k,2}$'s as well, by symmetry.\n\nNote that $|X^{(n)}(U^n_{k+1} ) - X^{(n)}(T^n_{k} )|\\geq \\beta_n\/2$.\nWe have\n$V^n_{k+1} - T^n_{k} \\geq U^n_{k+1} - T^n_{k} $ so\nwe can assume ($\\clubsuit$) that $b_n\/a_{n-1}$ is so large that for some $p_1>0$ and $n_{2}$, for all $n\\geq n_{2}$ and $k\\geq 1$, \n\\begin{align*\nP_\\omega^x(V^n_{k+1} - T^n_{k} \\geq \\beta_n^2\n\\mid \\mathcal{F}_{T^n_k}) \\geq p_1.\n\\end{align*}\nLet $\\mathcal{V}_m$ be a binomial random variable with parameters \n$m$ and $p_1$. We see that $\\sigma^{n,2}(V^n_{m })= \\sum_{k=0}^m V^n_{k+1} - T^n_{k}$ is stochastically minorized by \n$\\beta_n^2 \\mathcal{V}_m$. \n\nRecall that $u \\geq a_n^2$.\nLet $m_1$ be the smallest integer such that\n\\begin{align}\\label{c1.1}\nP^0_\\omega(V^n_{m_1 } \\leq u) < \\delta\/4.\n\\end{align}\nThen\n\\begin{align}\\label{c1.2}\nP^0_\\omega(V^n_{m_1 -1} \\leq u) \\geq \\delta\/4.\n\\end{align}\nSince $\\delta$ in \\eqref{d27.4} can be arbitrarily small, we have\nfor for some $n_{3}$ and all $n\\geq n_{3}$, \n\\begin{align}\\label{c1.3}\nP^0_\\omega(\\sigma^{n,2}_u\/u \\leq \\delta^4) \\geq 1-\\delta\/8.\n\\end{align}\nThe following estimate follows from the fact that $\\sigma^{n,2}(V^n_{m_1-1 })$ is stochastically minorized by \n$\\beta_n^2 \\mathcal{V}_{m_1-1}$, and from \\eqref{c1.2}-\\eqref{c1.3},\n\\begin{align*}\nP^0_\\omega(\\beta_n^2 \\mathcal{V}_{m_1-1} \\leq \\delta^4 u) \n&\\geq \nP^0_\\omega(\\sigma^{n,2}(V^n_{m_1 -1}) \\leq \\delta^4 u) \\\\\n&\\geq\nP^0_\\omega(\\sigma^{n,2}_u \\leq \\delta^4 u, V^n_{m_1 -1} \\leq u) \n\\geq \\delta\/8.\n\\end{align*}\nThis implies that for some $c_{14}$, we have\n$m_1 \\leq c_{14}\\delta^3 u\/\\beta_n^2$. In other words, $u \\geq m_1 \\beta_n^2\/(c_{14} \\delta^3)$. 
\nNote that for a fixed $\\delta$, we have for large $n$, ($\\clubsuit$) $u^{1\/2}\\delta\/4 - c_{13} \\beta_n \\geq u^{1\/2} \\delta\/8$.\nThese observations, \\eqref{d29.10}, \\eqref{j1.1} and the Chebyshev inequality imply that for $m\\le m_1$,\n\\begin{align}\\label{d29.22}\nP^0_\\omega&\\left(u^{-1\/2}\\left(\\left|\\sum_{k=0}^{m} Y^n_{k,1}\\right|+\\left|\\sum_{k=0}^{m}Y^n_{k,2}\\right|\\right) \\geq \\delta\/2\\right)\\\\\n&\\le\nP^0_\\omega\\left(\\left|\\sum_{k=0}^{m} Y^n_{k,1}\\right| \\geq u^{1\/2} \\delta\/4\\right)\n+ P^0_\\omega\\left(\\left|\\sum_{k=0}^{m} Y^n_{k,2}\\right| \\geq u^{1\/2} \\delta\/4\\right)\n\\nonumber \\\\\n&\\le\nP^0_\\omega\\left(\\left|\\sum_{k=0}^{m} Y^n_{k,1}\n- E^0_\\omega\\sum_{k=0}^{m} Y^n_{k,1}\\right| \\geq u^{1\/2} \\delta\/4\n- c_{13} \\beta_n\\right)\\nonumber \\\\\n&\\qquad + P^0_\\omega\\left(\\left|\\sum_{k=0}^{m} Y^n_{k,2}\n- E^0_\\omega\\sum_{k=0}^{m} Y^n_{k,2}\\right| \\geq u^{1\/2} \\delta\/4\n- c_{13} \\beta_n\\right)\n\\nonumber \\\\\n&\\le \\frac{\\Var\\left(\\sum_{k=0}^{m} Y^n_{k,1}\\right)}\n{u \\delta^2\/64} + \\frac{\\Var\\left(\\sum_{k=0}^{m} Y^n_{k,2}\\right)}\n{u \\delta^2\/64} \\nonumber \\\\\n&\\le \\frac{2c_{2} m_1 \\beta_n^2}\n{(c_{14}^{-1}\\delta^{-3} m_1\\beta_n^2) \\delta^2\/64} \\le c_{15} \\delta.\n\\nonumber\n\\end{align}\nLet $M = \\min\\{m\\geq 1: \nu^{-1\/2}\\left(\\left|\\sum_{k=0}^{m} Y^n_{k,1}\\right|+\\left|\\sum_{k=0}^{m}Y^n_{k,2}\\right|\\right)\n\\geq \\delta\\}$. By the strong Markov property applied at $M$ and \\eqref{d29.22},\n\\begin{align}\\label{d29.23}\n&P^0_\\omega\\left(\n\\sup_{1\\le m \\le m_1}\nu^{-1\/2}\\left(\\left|\\sum_{k=0}^{m} Y^n_{k,1}\\right|+\\left|\\sum_{k=0}^{m}Y^n_{k,2}\\right|\\right)\n\\geq \\delta,\\ u^{-1\/2}\\left(\\left|\\sum_{k=0}^{m_1} Y^n_{k,1}\\right|+\\left|\\sum_{k=0}^{m_1}Y^n_{k,2}\\right|\\right) \\le \\delta\/2\\right)\\\\\n&\\le \nP^0_\\omega\\left( u^{-1\/2}\\left(\\left|\\sum_{k=0}^{m_1-M} Y^n_{k,1}\\right|+\\left|\\sum_{k=0}^{m_1-M}Y^n_{k,2}\\right|\\right) \\geq \\delta\/2 \\mid M < m_1\\right)\n\\le c_{15} \\delta. \\nonumber\n\\end{align}\n\nRecall that $u \\geq m_1 \\beta_n^2\/(c_{14} \\delta^3)$. 
For a fixed $\\delta$ and large $n$, ($\\clubsuit$) $u^{1\/2}\\delta - 2 c_{12} \\beta_n \\geq u^{1\/2} \\delta\/2$.\nIt follows from this, \\eqref{ma24.6} and \\eqref{ma24.7} that\n\\begin{align}\\label{ma25.1}\nP^0_{\\omega}\\left(\\exists k \\leq m_1: |\\bar Y^n_k| \\geq u^{1\/2}\\delta \\right) \n&\\leq \nm_1 \\sup_{k\\leq m_1} P^0_{\\omega}\\left( |\\bar Y^n_k| \\geq u^{1\/2}\\delta \\right) \\\\\n&\\leq \nm_1 \\sup_{k\\leq m_1} P^0_{\\omega}\\left( |\\bar Y^n_k| -E^0_{\\omega} |\\bar Y^n_k|\\geq u^{1\/2}\\delta - c_{12} \\beta_n \\right)\\nonumber \\\\\n&\\le m_1 \\frac {c_{11} \\beta_n^2}{ u \\delta^2 \/4} \n\\le m_1 \\frac{c_{11} \\beta_n^2}\n{(c_{14}^{-1}\\delta^{-3} m_1\\beta_n^2) \\delta^2} \\le c_{16} \\delta.\n\\nonumber\n\\end{align}\n\nWe use \\eqref{c1.1}, \\eqref{d29.22}, \\eqref{d29.23} and \\eqref{ma25.1} to obtain\n\\begin{align*\n&P^0_\\omega \\left(\\sup_{0\\le s \\le u} u^{-1\/2} |X^{n,2}_s| \\geq 2\\delta\\right)\\\\\n &\\le P^0_\\omega(V^n_{m_1 } \\le u) \n+ P^0_\\omega\\left(u^{-1\/2}\\left(\\left|\\sum_{k=0}^{m_1} Y^n_{k,1}\\right|+\\left|\\sum_{k=0}^{m_1}Y^n_{k,2}\\right|\\right) \\geq \\delta\/2\\right)\\\\\n& + P^0_\\omega\\left(\n\\sup_{1\\le m \\le m_1}\nu^{-1\/2}\\left(\\left|\\sum_{k=0}^{m} Y^n_{k,1}\\right|+\\left|\\sum_{k=0}^{m}Y^n_{k,2}\\right|\\right)\n\\geq \\delta,\\ u^{-1\/2}\\left(\\left|\\sum_{k=0}^{m_1} Y^n_{k,1}\\right|+\\left|\\sum_{k=0}^{m_1}Y^n_{k,2}\\right|\\right) \\le \\delta\/2\\right)\\\\\n& +\nP^0_{\\omega}\\left(\\exists k \\leq m_1: |\\bar Y^n_k| \\geq u^{1\/2}\\delta \\right) \\\\\n&\\le \\delta\/4 + c_{15} \\delta + c_{15} \\delta + c_{16}\\delta.\n\\end{align*}\nSince $\\delta>0$ is arbitrarily small, this implies that for every $\\delta>0$, some $n_{3}$ and all $n\\geq n_{3}$,\n\\begin{align*\nP^0_\\omega &\\left(\\sup_{0\\le s \\le u} u^{-1\/2} |X^{n,2}_s| \\geq \\delta\\right)\\le \\delta\/2.\n\\end{align*}\nThis and \\eqref{n8.5} yield the proposition.\n\\end{proof}\n\nRecall from \\eqref{e:bfPdef} the definition of the averaged measure ${\\bf P}$.\n\n\\begin{lemma}\\label{n9.1}\nFor every $\\delta>0$ there exists $n_1$ such that for all $n\\geq n_1$ and $u\\geq a_n^2$, \n\\begin{align}\\label{n9.2}\n{\\bf P}\n\\left( \\sigma^{n,2}_u \/ u \\le \\delta, \\sup_{0\\le s \\le u} u^{-1\/2} |X^{n,2}_s| \\le \\delta \\right) \\geq 1-\\delta.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nBy Proposition \\ref{d22.2} applied to $\\delta\/2$ in place of $\\delta$, for every $\\delta>0$ there exists $n_2$ such that for all $n\\geq n_2$, $u\\geq a_n^2$, and $\\omega$ such that $0 \\notin \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n$, \n\\begin{align}\\label{n9.3}\nP^0_\\omega\n\\left( \\sigma^{n,2}_u \/ u \\le \\delta, \\sup_{0\\le s \\le u} u^{-1\/2} |X^{n,2}_s| \\le \\delta \\right) \\geq 1-\\delta\/2.\n\\end{align}\n\nLet $|A|$ denote the cardinality of $A\\subset {\\mathbb Z}^2$. Since $|\\Gamma^1_n| \\leq 25 \\beta_n^2 \\leq 25 a_n^2 n^{-1\/2} = 25 n^{-1\/2} |B'_n|$, the definitions of $\\mathcal{O}_n$ and $\\Gamma^1_n$ imply that ${\\bf P}(0 \\in \\Gamma^1_n \\setminus \\partial_i \\Gamma^1_n) < \\delta\/2$ for some $n_3 \\geq n_2$ and all $n \\geq n_3$. 
This and \\eqref{n9.3} imply \\eqref{n9.2}.\n\\end{proof}\n\nIn the following lemma and its proof, when we write the Prokhorov distance between processes such as $\\{ (1\/a)X^{(n-1)}_{ta^2}, t\\in[ 0,1]\\}$, we always assume that they are distributed according to ${\\bf P}$.\n\n\\begin{lemma}\\label{d22.1}\nThere exists a function $\\rho^*: (0,\\infty) \\to (0,\\infty)$ with \n$ \\lim_{\\delta\\downarrow 0} \\rho^*(\\delta) = 0$ \nand a sequence $\\{a_n\\}$ with the following properties,\n\\begin{align}\\label{d19.3}\n&{d_P}(\\{ (1\/a)X^{(n-1)}_{ta^2}, t\\in[ 0,1]\\}, P_{\\text{BM}}) \\le 2^{-n},\\qquad a \\geq a_n.\n\\end{align}\nMoreover, suppose that for $\\delta<1\/2$ and all $u\\geq a_n^2$, \n\\begin{align}\\label{d19.2}\n{\\bf P} \\left( \\sigma^{n,2}_u \/ u \\le \\delta, \\sup_{0\\le s \\le u} u^{-1\/2} |X^{n,2}_s| \\le \\delta \\right) \\geq 1-\\delta.\n\\end{align}\nThen\n${d_P}( \\{(1\/a)X^{(n)}_{ta^2}, t\\in[ 0,1]\\}, P_{\\text{BM}}) \\le 2^{-n} + \\rho^*(\\delta)$, for all $a\\geq a_n$.\n\\end{lemma}\n\n\\begin{proof}\n\nFormula \\eqref{d19.3} is special case of \\eqref{e:PdistBM}.\n\nFix some $a\\geq a_n$. We will apply \\eqref{d19.2} with $u=a^2$. \nNote that on the event in \\eqref{d19.2} we have\n\\begin{align}\\label{n9.5}\n1- \\sigma^{n,1}_{a^2}\/a^2= u\/u- \\sigma^{n,1}_u\/u \n= \\sigma^{n,2}_u \/ u \\le \\delta.\n\\end{align}\nThe function $t\\to \\sigma^{n,1}_{ta^2}\/a^2$ is Lipschitz with the constant 1 and $\\sigma^{n,1}_{ta^2}\/a^2 \\leq t$ so \\eqref{n9.5} implies for $t\\in[0,1]$,\n\\begin{align}\\label{n9.6}\nt- \\sigma^{n,1}_{ta^2}\/a^2 \\leq 1- \\sigma^{n,1}_{a^2}\/a^2 \\leq \\delta.\n\\end{align}\n\nRecall the function $\\rho(\\delta)$ from the proof of Lemma \\ref{d21.1}, \nsuch that $P_{\\text{BM}}(\\Osc(W,\\delta) \\geq\\rho(\\delta) )<\\rho(\\delta)$ and \n$\\lim_{\\delta\\downarrow 0} \\rho(\\delta) = 0$.\nBy \\eqref{n9.6}, we can apply Lemma \\ref{d21.1} with $\\sigma_t = \\sigma^{n,1}_{ta^2}\/a^2$. Recall\nthat $W^*(t) = W(\\sigma_t)$.\nBy the definition of $\\widehat X^{n,1}$,\n\\begin{align}\n\\nonumber\n&{d_P}( \\{(1\/a) X^{n,1}_{ta^2}, t\\in[0,1]\\}, P_{\\text{BM}}) \\\\\n\\nonumber\n&\\le {d_P}( \\{(1\/a) X^{n,1}_{t\/a^2}, t\\in[0,1]\\}, \\{W^*_t, t\\in[0,1]\\})\n + {d_P}( \\{W^*_t, t\\in[0,1]\\}, P_{\\text{BM}}) \\\\ \n\\nonumber \n&\\le {d_P}( \\{(1\/a) X^{n,1}_{ta^2}, t\\in[0,1]\\}, \\{W^*_t, t\\in[0,1]\\})\n + \\rho(\\delta) + \\delta\\\\ \n&= {d_P}( \\{(1\/a)\\widehat X^{n,1}(\\sigma^{n,1}_{ta^2}), t\\in[0,1]\\}, \\{W(\\sigma^{n,1}_{ta^2}\/a^2), t\\in[0,1]\\})\n + \\rho(\\delta) + \\delta .\n\\end{align}\n\nRecall from \\eqref{e:Xhatdsn} that for a fixed $\\omega\\in \\Omega$, the distribution of $\\{\\widehat X^{n,1}_t, t\\geq 0\\}$ \nis the same as that of $\\{X^{n-1}_t, t\\geq 0\\}$.\nIn view of Theorem \\ref{T:eK}, we can make $a_n$ so large ($\\clubsuit$) that \n$\\Pp(\\Osc(\\widehat X^{n,1},\\delta) \\geq 2\\rho(\\delta) )< 2\\rho(\\delta)$. 
\nThis, Lemma \\ref{d21.2} and the definition of the Prokhorov distance imply that\n\\begin{align*}\n{d_P}( \\{(1\/a)\\widehat X^{n,1}&(\\sigma^{n,1}_{ta^2}), \\, t\\in[0,1]\\}, \\{W(\\sigma^{n,1}_{ta^2}\/a^2), t\\in[0,1]\\}) \\\\\n&\\le \n{d_P}( \\{(1\/a)\\widehat X^{n,1}_{ta^2}, t\\in[0,1]\\}, \\{W_{t}, t\\in[0,1]\\}) + 4\\rho(\\delta) \\\\\n&= {d_P}( \\{(1\/a) X^{(n-1)}_{ta^2}, t\\in[0,1]\\}, \\{W_{t}, t\\in[0,1]\\}) + 4\\rho(\\delta) \\\\\n&\\le 2^{-n} + 4\\rho(\\delta).\n\\end{align*}\nIn the final two lines line we used \\eqref{e:Xhatdsn} and \\eqref{d19.3}.\n\n\nCombining the estimates above, since\n$P^0_{\\omega} \\left( \\sup_{0\\le s \\le u} u^{-1\/2} |X^{n,2}_s| \\le \\delta \\right) \\geq 1-\\delta$ and \n$X^{(n)} = X^{n,1} + X^{n,2}$, Lemma \\ref{ma26.5} shows that\n\\begin{align*}\n&{d_P}( \\{(1\/a) X^{(n)}_{ta^2}, t\\in[0,1]\\}, P_{\\text{BM}}) \\\\\n&\\le\n{d_P}( \\{(1\/a) X^{(n)}_{ta^2}, t\\in[0,1]\\}, \\{(1\/a) X^{n,1}_{ta^2}, t\\in[0,1]\\}) \\\\\n&\\qquad + {d_P}( \\{(1\/a) X^{n,1}_{ta^2}, t\\in[0,1]\\}, P_{\\text{BM}}) \\\\\n&\\le \n\\delta + 2^{-n} + 5 \\rho(\\delta) + \\delta .\n\\end{align*}\nWe conclude that the lemma holds if we take $\\rho^*(\\delta) = 5\\rho(\\delta) + 2\\delta $.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{T:main}]\nChoose an arbitrarily small $\\varepsilon>0$. We will show that there exists $a_*$ such that for every $a\\geq a_*$, \n\\begin{align}\\label{d23.1}\n&{d_P}( \\{(1\/a) X_{ta^2}, t\\in[0,1]\\}, P_{\\text{BM}}) \\le \\varepsilon .\n\\end{align}\n\nRecall $\\rho^*$ from Lemma \\ref{d22.1}.\nLet $n_1 $ be such that $2^{-n_1}\\le \\varepsilon\/4$ and let $\\delta>0$ be so small that \n$2^{-n_1}+ \\rho^*(\\delta) < \\varepsilon\/2$. Let $n_2$ be defined as $n_1$ in Lemma \\ref{n9.1}, \nrelative to this $\\delta$. Then, according to Lemma \\ref{d22.1}, \n\\begin{align}\\label{d22.5}\n{d_P}( \\{(1\/a)X^{n}_{ta^2}, t\\in[ 0,1]\\}, P_{\\text{BM}}) \\le 2^{-n} + \\rho^*(\\delta) < \\varepsilon\/2,\n\\end{align}\nfor all $n\\geq n_3 := n_1\\lor n_2$ and $a\\geq a_n$.\n\nFor a set $K$ let \n$\\mathcal{B}(K,r) = \\{z: \\dist(z,K) < r\\}$ and recall the definition of $D_n$ given in \\eqref{ma26.1}. Let \n\\begin{align*}\nF_1 &= \\{0\\in \\mathcal{B}( D_{n+1}, a_{n+1}\/\\log (n+1))\\},\\\\\nF_2 &= \\{0\\notin \\mathcal{B}( D_{n+1}, a_{n+1}\/\\log (n+1))\\}\n\\cap\n\\{\\exists t\\in[0, a_{n+1}^2]: X^{(n)}_t \\in D_{n+1}\\},\\\\\nG_1^k &= \\{0\\in \\mathcal{B}( D_k, b_k\/k)\\},\n\\qquad k> n+1,\\\\\nG_2^k &= \\{0\\notin \\mathcal{B}( D_k, b_k\/k)\\}\n\\cap\n\\{\\exists t\\in[0, a_{n+1}^2]: X^{(n)}_t \\in D_k\\}, \\qquad k> n+1. \n\\end{align*}\n\nThe area of $\\mathcal{B}(D_{n+1}, a_{n+1}\/\\log (n+1))$ is bounded by $c_1 (a_{n+1}\/\\log (n+1))^2$ so \n\\begin{align}\\label{d23.10}\n\\Pp(F_1) \\le c_1 (a_{n+1}\/\\log (n+1))^2\/ a_{n+1}^2 = c_1 \/\\log^2 (n+1).\n\\end{align}\nWe choose $n_4 > n_3 $ such that \n$c_1 \/\\log^2 (n+1) < \\varepsilon\/8$ for $n \\geq n_4$.\n\nNote that $D_{n+1}$ is a subset of a square with side $4\\beta_{n+1} \\leq 4 a_{n+1} n^{-1\/4}$. 
This easily implies that\nthere exists $n_5 \\geq n_4$ such that for $n\\geq n_5$,\n\\begin{align*\nP_{\\text{BM}}\\left(\\exists t\\in[0, a_{n+1}^2]: W(t) \\in D_{n+1} \n\\mid \n0\\notin \\mathcal{B}( D_{n+1}, a_{n+1}\/\\log (n+1))\n\\right) \\le \\varepsilon\/16.\n\\end{align*}\nWe can assume ($\\clubsuit$) that $a_{n+1}\/a_n$ is so large that\nfor some $n_6 \\geq n_5$ and all $n\\geq n_6$, \n\\begin{align}\\label{d23.11}\n\\Pp(F_2) &\\le \\Pp\\left(\\exists t\\in[0, a_{n+1}^2]: X^{(n)}_t \\in D_{n+1} \n \\mid 0\\notin \\mathcal{B}( D_{n+1}, a_{n+1}\/\\log (n+1)) \\right) \\\\\n &\\le \\varepsilon\/8.\n\\end{align}\n\nThe area of $\\mathcal{B}( D_k, b_k\/k)$ is bounded by $c_2 b_k^2\/k$ so \n\\begin{align}\\label{d23.12}\n\\Pp(G_1^k) \\le (c_2 b_k^2\/k)\/ a_k^2 \\le c_3 ( b_k^2\/k)\/ (k b_k^2)= c_3 \/k^2.\n\\end{align}\nWe let $n_7 >n_6$ be so large that $\\sum_{k\\geq n_7} c_3 \/k^2 < \\varepsilon\/8$.\nFor all $k>n+1\\geq n_7+1$, we make $b_k\/k$ so large ($\\clubsuit$) that \n\\begin{align}\\label{d23.13}\n\\Pp(G_2^k) \\le \n\\Pp\\left(\\sup_{t\\in[0, a_{n+1}^2]} |X^{n}_t| \\geq b_k\/k\\right) \\le c_3\/k^2.\n\\end{align}\n\nWe combine \\eqref{d23.10}, \\eqref{d23.11}, \\eqref{d23.12} and \\eqref{d23.13} to see that for $n\\geq n_7$,\n\\begin{align}\\label{d23.14}\n\\Pp&(\\exists t\\in[0, a_{n+1}^2] \\ \\exists k\\geq n+1: X^{(n)}_t \\in D_k)\\\\\n&\\le\n\\Pp(F_1) + \\Pp(F_2) + \\sum_{k>n+1}\n\\Pp(G_1^k) + \\sum_{k>n+1}\n\\Pp(G_2^k) \\nonumber\\\\\n& \\le \\varepsilon\/8 + \\varepsilon\/8 + \\varepsilon\/8 + \\varepsilon\/8 = \\varepsilon\/2.\n\\nonumber\n\\end{align}\n\nLet $R_{n+1} = \\inf\\{t\\geq 0: X_t \\in \\bigcup_{k\\geq n+1} \\mathcal{D}_k\\}$. \nIt is standard to construct $X$ and $X^{(n)}$ on a common probability space so that $X_t = X_t^n$ for all $t\\in [0, R_{n+1})$. This and \\eqref{d23.14}\nimply that for $n\\geq n_7$ and all $a\\in[a_n, a_{n+1}]$ we have\n\\begin{align*\nP(\\exists t\\in[ 0,1]: (1\/a)X_{ta^2} \\ne (1\/a)X^{(n)}_{ta^2}) \\le \\varepsilon\/2.\n\\end{align*}\nWe combine this with \\eqref{d22.5} to see that for all $a\\geq a_{n_6}$, \n\\begin{align*\n{d_P}( \\{(1\/a)X_{ta^2}, t\\in[ 0,1]\\}, P_{\\text{BM}}) \\le \\varepsilon\/2 +\\varepsilon\/2 = \\varepsilon.\n\\end{align*}\nWe conclude that \\eqref{d23.1} holds with $a_* = a_{n_7}$. \n\nThis completes the proof of AFCLT. The WFCLT then follows from Theorem 2.13 of \\cite{BBT1}.\n\\end{proof}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{introduction}\n\nOne of the fundamental tasks of the Galactic studies is to\nestimate the structure parameters\nof the major structure components.\n\\citet{Bahcall1980} fit the observations with two structure components, namely a disk and a halo. \n\\citet{Gilmore1983} introduce a third component, namely a thick disk, \nconfirmed in the earliest Besancon Galaxy Model \\citet{Creze1983}.\nSince then, various methods and observations have been adopted to\nestimate parameters of the thin and thick disks and of\nthe halo of our Galaxy. As the quantity and quality of data available continue to improve over the years,\nthe model parameters derived have become more precise, numerically. \nIronically, those numerically more precise results do not converge (see Table~1 of \n\\citealt{Chang2011}, Table~2 of \\citealt{Lopez2014} and Sect.~5 and 6 of \\citealt{Bland2016} for a review). 
\nThe scatters in density law parameters, such as scale lengths, \nscale heights and local densities of these Galactic components, \nas reported in the literature, are rather large.\nAt least parts of the discrepancies are caused by degeneracy of \nmodel parameters, which in turn, can be traced back to the different \ndata sets adopted in the analyses. Those differing \ndata sets either probe different sky areas \n\\citep{Bilir2006a, Du2006, Cabrera2007, Ak2007, Yaz2010, Yaz2015}, \nare of different completeness magnitudes and therefore \nrefer to different limiting distances \\citep{Karaali2007},\nor of consist of stars of different populations of different absolute magnitudes \n\\citep{Karaali2004, Bilir2006b, Juric2008, Jia2014}. \nIt should be noted that the analysis of \\citet{Bovy2012}, using the SEGUE spectroscopic survey, \nhas given a new insight on the thin and thick disk structural parameters. This analysis provides estimate \nof their scale height and scale height as a function of metallicity and alpha abundance ratio. However, \nit relies on incomplete data (since it is spectroscopic) with relatively low range of Galactocentric radius as \nfor the thin disk is concerned. \n\nA wider and deeper sample than those employed hitherto may help break the degeneracy \ninherent in a multi-parameter analysis and yield a globally representative Galactic model. \nA single or a few fields are insufficient to break the degeneracy. \nThe resulted best-fit parameters, while sufficient for the description of \nthe lines of sight observed, may be unrepresentative of the entire Galaxy. For the latter purpose, \nsystematic surveys of deep limiting magnitude of all or a wide sky area, such as the \nTwo Micron All Sky Survey (2MASS; \\citealt{Skrutskie2006}), the Sloan Digital Sky Survey \n(SDSS; \\citealt{York2000}), the Panoramic Survey Telescope \\& Rapid Response System \n(Pan-Starrs; \\citealt{Kaiser2002}) and the GAIA mission \\citep{Perryman2001}, \nare always preferred.\n\nSeveral authors have studied the Galactic structure with 2MASS data at low \n\\citep{Lopez2002, Yaz2015} or high latitudes \\citep{Cabrera2005, Cabrera2007, Chang2011}. \n\\citet{Polido2013} uses the model from \\citet{Ortiz1993} and rederive the parameters of this model\nbased on the 2MASS star counts over the whole sky area. \nHowever, the survey depth of 2MASS is not quite enough to reach the outer disk and the halo.\nThe survey depth of SDSS is much deeper than that of the 2MASS. Many authors \n(e.g. \\citealt{Chen2001, Bilir2006a, Bilir2008, Jia2014, Lopez2014})\nhave previously used the SDSS data to constrain the Galactic parameters. \nThose authors have only made use of a portion of the surveyed fields, \nat intermediate or high Galactic latitudes. \n\\citet{Juric2008} obtain Galactic model parameters from the stellar \nnumber density distribution of 48 million \nstars detected by the SDSS that sample distances from 100\\,pc to 20\\,kpc and cover \n6500\\,deg$^2$ of sky. Their results are amongst those mostly quoted. \nHowever, in their analysis, they have avoided the Galactic plane. \nSo the constraints of their results on the disks, especially the\nthin disk, are weak. In their analysis, \\citet{Juric2008} have also \nadopted photometric parallaxes assuming that all stars of the same colour\nhave the same metallicity. 
Clearly, (disk) stars in different parts of the Galaxy have quite different \n\\citep{Ivezic2008, Xiang2015, Huang2015} metallicities, and these variations in metallicities may \nwell lead to biases in the model parameters derived. \n\n \n In order to provide a quality input catalog for the LAMOST Spectroscopic Survey of the Galactic Anticentre\n(LSS-GAC; \\citealt{Liu2014,Liu2015, Yuan2015}), \na multi-band CCD photometric survey of the Galactic \nAnticentre with the Xuyi 1.04\/1.20m Schmidt Telescope \n(XSTPS-GAC; \\citealt{Zhang2013,Zhang2014,Liu2014}) \nhas been carried out. The XSTPS-GAC photometric catalog contains \nmore than 100 million stars in the direction of Galactic anticentre (GAC). It provides an excellent \ndata set to study the Galactic disk, its structures and substructures. \nIn this paper, we take the effort to constrain the Galactic model \nparameters by combining photometric \ndata from the XSTPS-GAC and SDSS surveys.\nThis is the third paper of a series on the Milky Way study based on the XSTPS-GAC data. In \n\\citet{Chen2014}, we present a three dimensional extinction map in $r$ band. The map has a spatial \nangular resolution, depending on latitude, between 3 and 9\\,arcmin and covers the entire XSTPS-GAC \nsurvey area of over 6,000 deg$^2$ for Galactic longitude 140 $< l <$220\\,deg and latitude 40 $< b <$40\\,deg. \nIn \\citet{Chen2015}, we investigate the correlation between the extinction and the $\\rm H~{\\scriptstyle I}$~ and CO emission at intermediate \nand high Galactic latitudes ($|b| >$ 10\\degr) within the footprint of the XSTPS-GAC, on small and large scales. \nIn the current work we are interested in the global, smooth structure of the Galaxy. \n\nFor the Galactic structure, in addition to the global, smooth major components,\nmany more (sub-)structures have been discovered, including the inner bars near the Galactic centre\n\\citep{Alves2000, Hammersley2000, vanLoon2003, Nishiyama2005, Cabrera2008, Robin2012},\nflares and warps of the (outer) disk \\citep{Lopez2002, Robin2003, Momany2006, Reyle2009, Lopez2014},\nand various overdensities in the halo and the outer disk, such as the Sagittarius Stream\n\\citep{Majewski2003}, the Triangulum-Andromeda \\citep{Rocha2004, Majewski2004} and\nVirgo \\citep{Juric2008} overdensities, the Monoceros ring \\citep{Newberg2002,Rocha2003}\nand the Anti-Center Stream \\citep{Rocha2003,Crane2003, Frinchaboy2004}.\nThey show the complexity of the Milky Way. \nRecently, \\citet{Widrow2012} and \\citet{Yanny2013} have \nfound evidence for a significant Galactic North-South \nasymmetry in the stellar number density distribution, exhibiting some \nwavelike perturbations that seem to be intrinsic to the disk.\n\\citet{Xu2015} show that in the anticentre regions\nthere is an oscillating asymmetry in the main-sequence star counts \non either sides of the Galactic plane, in support of the prediction of \n\\citet{Ibata2003}. The asymmetry oscillates in the sense that there are more \nstars in the north, then in the south, then back in the north,\nand then back in the south at distances of about 2, 4 -- 6, 8 -- 10 and 12 -- 16\\,kpc \nfrom the Sun, respectively.\n\nThe paper is structured as follows. The data are introduced in Section~2.\nWe describe our model and the analysis method in Section~3. Section~4 \npresents the results and discussions. In Section~5 we discuss the large \nscale excess\/deficiency of star counts that reflect the substructures in the halo and disk. 
\nFinally we give a summary in Section~6.\n\n\\section{Data}\n\n\\begin{table}\n \\centering\n \\caption{Data sets.}\n \\begin{tabular}{lcccc}\n \\hline\n \\hline\n & area & field size & $N_{\\rm fields}$& $r$ ranges \\\\\n & (deg$^2$) & (deg $\\times$ deg)& & (mag) \\\\\n \\hline\nXSTPS-GAC & $\\sim$3392 & 2.5$\\times$2.5 & 574 &12--18 \\\\\nXSTPS-M31\/M33 & $\\sim$588 & 2.5$\\times$2.5 & 108 & 12--18 \\\\\nSDSS & $\\sim$6871 & 3.0$\\times$3.0 & 1592 &15--21 \\\\\n \\hline\n\\end{tabular}\\\\\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{xstpsarmap.eps}\n \\includegraphics[width=0.48\\textwidth]{sdssdenmap.eps}\n \\includegraphics[width=0.48\\textwidth]{allfields.eps}\n \\caption{{\\it Upper panel}: Extinction map of the GAC and M31\/M33 areas within the footprint of \nXSTPS-GAC (\\citealt{Chen2015} map for GAC area and \\citealt{Schlegel1998} map for M31\/M33 area). \nThe selected fields for GAC area and M31\/M33 area are marked as red and blue pluses,\nrespectively. The red star symbols mark the central positions of M31 and M33, respectively. {\\it Middle panel}: \nSDSS DR12 density map of stars in a magnitude bin of $r$ = 15.5 to 16.5\\,mag at a \nresolution of 0.1\\degr. The selected fields from SDSS are marked as red pluses. {\\it Bottom panel}: \nLocation of the 682 fields selected from the XSTPS-GAC (red) and 1592 fields selected from the SDSS (blue) in \nGalactic coordinates.}\n \\label{data}\n\\end{figure}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{selcorr.eps}\n \\caption{Colour-magnitude distributions of stars in all selected subfields from \nthe Sample\\,C, the re-weighted Sample\\,C (see Equation~(1) \nand related discussion), and the XSTPS-GAC.\nThe bottom three panels show the grey-scaled number densities distributed in the $g-i$ vs. $r$ space\nrespectively for the XSTPS-GAC (left), the re-weighted Sample\\,C (middle), and the XSTPS-GAC (right).\nThe upper three panels show the number distribution contours in the $g-i$ vs. $r$ space\n(left) as well as number distributions \nrespectively in $r$ (middle) and $g-i$ (right) for each sample.\nThe black contours and histograms show the density of all \ntargets in the XSTPS-GAC, the red ones represent the distributions of stars in \nSample\\,C and the blue ones display the \ndistributions for the re-weighted Sample\\,C. 
The contours labeled with `a', `b' and `c'\nin the left-upper panel represent the contour levels of star number \nof 6\\,000, 24\\,000 and 48\\,000, respectively.\nThe re-weighted Sample\\,C perfectly reproduces the colour-magnitude\nsampling provided by the XSTPS-GAC.}\n \\label{sel}\n\\end{figure*}\n\n\\subsection{The XSTPS-GAC Data}\n\nThe XSTPS-GAC started collecting data in the fall of 2009 and completed in the spring of 2011.\nIt was carried out in order to provide input catalogue for the LSS-GAC.\nThe survey was performed in the SDSS $g$, $r$ and $i$ bands using the \nXuyi 1.04\/1.20\\,m Schmidt Telescope equipped with a 4k$\\times$4k CCD camera, \noperated by the Near Earth Objects Research Group of the Purple Mountain Observatory.\nThe CCD offers a field of view (FoV) of 1.94\\degr $\\times$ 1.94\\degr, with a pixel scale of 1.705\\,arcsec.\nIn total, the XSTPS-GAC archives approximately 100 million stars down to a\nlimiting magnitude of about 19 in $r$ band ($\\sim$ 10$\\sigma$) \nwith an astrometric accuracy about 0.1\\,arcsec\n and a global photometric accuracy of about 2\\% \\citep{Liu2014}.\n The total survey area of XSTPS-GAC is close to 7,000\\,deg$^2$,\ncovering an area of $\\sim$ 5,400\\,deg$^2$ centered on the GAC,\nfrom RA $\\sim$ 3 to 9\\,h and Dec $\\sim~-$10\\degr to $+60\\degr$, plus\nan extension of about 900\\,deg$^2$ to the M31\/M33 area and the bridging fields \nconnecting the two areas.\n\n\\subsubsection{GAC area}\n\nIn the direction of GAC, the $r$-band extinction exceeds 1\\,mag over a significant fraction of the sky \n(see Fig.~\\ref{data}). To correct the extinction of stars in high extinguish area \nusing extinction maps integrated over lines of sight, \nsuch as \\citet{Schlegel1998}, will introduce over corrections.\nIt will make stars too bright and blue. We select a subsample, the\nso-called ``Sample\\,C'' in \\citet{Chen2014}, from XSTPS-GAC.\nExtinction of all stars in Sample\\,C were calculated by the spectral energy distribution (SED) \nfitting to the multi-band data, including the photometric data from the optical ($g,~r,~i$ from \nXSTPS-GAC) to the near-infrared ($J,~H,~K_S$ from 2MASS and $W1, ~W2$ from the Wide-field \nInfrared Survey Explorer, WISE, \\citealt{Wright2010}). \nThe extinction of targets in the subsample, Sample\\,C, \nis highly reliable, all having minimum SED fitting \n$\\chi ^2 _{min} ~<$ 2.0 (see \\citealt{Chen2014} for more details). \nWe correct the extinction of stars in Sample\\,C using the SED fitting extinction\nand the extinction law from \\citet{Yuan2013}. \nThere are more than 13 million stars in Sample\\,C. We divide them \ninto small subfields of roughly 2.5\\degr $\\times$ 2.5\\degr. The\nwidth ($\\Delta l$) and height ($\\Delta b$) of each subfield are always exactly 2.5\\degr.\nEach subfield is not exactly 6.25\\,deg$^2$ but varies with Galactic latitude $b$. \nBecause of the heavy extinction or poor observational conditions \n(large photometric errors), some subfields have \nobviously small amount of stars, comparing to most normal \nneighboring fields and thus be excluded. \nAs a result, 574 subfields, covering about 3392\\,deg$^2$, \nare selected. 
The locations of these subfields are shown in the top panel of \nFig.~\\ref{data}, with the grey-scale background image \nillustrating the 4\\,kpc extinction map from \\citet{Chen2015}.\n\nFor each subfield, Sample\\,C does not contain all stars in XSTPS-GAC.\nTo connect the distribution of targets in Sample\\,C\nto the underlying distribution of all stars, it is necessary to correct for the effects of the selection \n(often referred to as selection biases). Generally, the selection effects of Sample\\,C \nare due to the following two reasons: (1) the procedure by which we cross-match the photometric \ncatalogue of the XSTPS-GAC with those of 2MASS and WISE, and (2) the $\\chi^2$ cut when we\ndefine the sample with highly reliable extinction estimates. For the first part, we lose about 15\\,per\\,cent \nobjects, mainly due to the limiting depths of 2MASS and WISE, \nespecially at low Galactic latitudes (see Fig.~1 of \\citealt{Chen2014}). \nFor the second part, we lose more than half of the objects, because of the large photometric errors, \nhigh extinction effects, or the special targets contamination, \nsuch as blended or binaries which are not well fitted by the standard SED \nlibrary in \\citet{Chen2014}. Our model for the selection function of Sample\\,C \ncan thus be expressed as the function of the positions ($l$, $b$), colour ($g-i$) \nand magnitude ($r$) of stars, given by,\\\\\n\\begin{equation}\n S(l,b,g-i,r) = \\frac{N_{\\rm SC}(l,b,g-i,r)}{N_{\\rm XSTPS}(l,b,g-i,r)},\n\\end{equation}\nwhere $N_{\\rm SC}(l,b,g-i,r)$ and $N_{\\rm XSTPS}(l,b,g-i,r)$ are the number of stars \nin the Sample\\,C and the XSTPS-GAC, respectively. \nThe numbers of objects are evaluated within each subfield with area of $\\sim$ 6.25\\,deg$^2$, \neach colour ($g-i$) bin ranging from 0 to 3.0\\,mag with a bin-size of 0.1\\,mag, \nand each $r$-band magnitude bin ranging from 12 to 18.5\\,mag with a bin-size of 0.1\\,mag. \n\nThe number distributions in colour $(g-i)$ and magnitude $r$ for the stars\nin all selected subfields in the Sample\\,C, the Sample\\,C\nre-weighted by the selection effect, as well as the XSTPS-GAC,\nare shown as the density grey-scales and \ndensity contours and histograms in Fig.~\\ref{sel}. \nIt is clear that our correction of selection effect leads to perfect agreement between the \ncomplete XSTPS-GAC photometric sample and the re-weighted Sample\\,C.\n\n\\subsubsection{M31\/M33 area}\n\nThe dust extinction in the M31 and M33 area is much smaller, \ncompared with the GAC area (see the top panel of Fig.~\\ref{data}). \nWe adopt the extinction map from \\citet{Schlegel1998} \nand the extinction law from \\citet{Yuan2013} \nto correct the extinction of stars in M31\/M33 area. \nSimilarly as in the GAC area, all stars in M31\/M33 area are divided into small subfields, \nwhich have width ($\\Delta l$) and height ($\\Delta b$) always of 2.5\\degr. \nWe exclude the subfields which have maximum $E(B-V)$ larger \nthan 0.15\\,mag (i.e. $A_r$ = 0.4\\,mag, according to the extinction law from \\citealt{Yuan2013}), \nto avoid the relatively large uncertainties caused \nby the high extinction in the highly extinguished regions. The subfields that cover M31 are also \nexcluded. As a result, there are 108 subfields in the\nM31\/M33 area, covering about 588\\,deg$^2$. The locations of these subfields \nare also plotted in the top panel of Fig.~\\ref{data}, with \nthe grey-scale background image illustrating the extinction map from \\citet{Schlegel1998}. 
\nConsidering the limiting magnitude of XSTPS-GAC ($r$ $\\sim$ 19\\,mag), \nwe claim that the data in the M31\/M33 area from XSTPS-GAC is complete in the magnitude \nrange $12 < r_0 < 18$\\,mag. \n\n\\subsection{The SDSS Data}\n\nAs the survey area of XSTPS-GAC mainly locate around the low Galactic latitudes, \nwe also use the photometric data from SDSS, for constraining better the outer disk and \nthe halo. We use the photometric data from SDSS data release 12 (DR12, \\citealt{Alam2015}). \nThe SDSS surveys mainly for high Galactic latitudes, with only a few stripes \ncrossing the Galactic plane. It complements one another with the XSTPS-GAC. We cut the \nSDSS data with Galactic latitude $|b| >$ 30\\degr, where the influence of the dust \nextinction is small. The dust extinction are corrected using the extinction \nmap from \\citet{Schlegel1998} and the extinction law from \\citet{Yuan2013}.\nThe SDSS data are divided into subfields with\nwidth ($\\Delta l$) and height ($\\Delta b$) always of 3\\degr. \nTo make sure that each subfield is fully sampled by the SDSS survey, we further\ndivide each subfield into smaller pixels (of size 0.1\\degr $\\times$ 0.1\\degr) and \nexclude the subfield which has no stars detected in at least one of the smaller pixels. \nAs a result we have obtained 1592 subfields, covering a sky area of \nabout 6871\\,deg$^2$. In the middle panel of Fig.~\\ref{data}, we show the \nspatial distributions of these subfields,\nwith grey-scale background image illustrating the number density of the SDSS data.\nTo remove the contaminations of hot white dwarfs, low-redshift quasars and \nwhite dwarf\/red dwarf unresolved binaries from the SDSS sample, we\nreject objects at distances larger than 0.3 mag from the $(r-i)_0$ vs. $(g-r)_0$ \nstellar loci \\citep{Juric2008, Chen2014}.\nThe 95 per\\,cent completeness limits of the SDSS images are $u$, $g$, $r$, $i$ and $z$ $=$\n 22.0, 22.2, 22.2, 21.3 and 20.5\\,mag, respectively \\citep{Abazajian2004}. Thus \nthe SDSS data is complete in the magnitude range of $15 < r_0 < 21$\\,mag. \n\nA brief summary of the data selection in the current work is given in Table~1. In total, \nthere are 2274 subfields, covering nearly 11,000\\,deg$^2$,\nwhich is more than a quarter of the whole sky area. \nThe positions of all the subfields, from both the XSTPS-GAC and the\nSDSS, are plotted in the bottom panel of \nFig.~\\ref{data}. They cover the whole range of Galactic latitudes.\nGenerally, the XSTPS-GAC provides nice constraints of the Galactic disk(s), \nespecially for the thin disk, while the SDSS provides us a good opportunity to \nrefine the structure of Galactic halo, as well as the outer disk. \n \n\\section{The Method}\n\n\\subsection{The Galactic model}\n\nWe adopt a three-components model for the smooth stellar distribution of the\nMilky Way. It comprises two exponential disks (the thin disk and the thick disk) \nand a two-axial power-law ellipsoid halo \\citep{Bahcall1980, Gilmore1983}. 
\nThus the overall stellar density $n(R,Z)$ at a location $(R,Z)$ can be decomposed\nby the sum of the thin disk, the thick disk and the halo,\n\\begin{equation}\n n(R,Z)=D_1(R,Z)+D_2(R,Z)+H(R,Z),\n\\end{equation}\nwhere $R$ is the Galactocentric distance in the Galactic plane, $Z$ is the\ndistance from the Galactic mid-plane.\n$D_1$ and $D_2$ are stellar densities of the thin disk and the thick disk,\n\\begin{equation}\n D_i(R,Z)=f_{i}\\,n_{0}\\exp\\left[-\\,{(R-R_\\odot)\\over L_{i}}-\\,{(|Z|-Z_\\odot)\\over H_{i}}\\right],\n\\end{equation}\nwhere the suffix $i=1$ and $2$ stands for the thin disk and thick disk, respectively. \n$R_\\odot$ is the radial distance of the Sun to the Galactic centre on the plane, \n$Z_\\odot$ is the vertical distance of the Sun from the plane, \n$n_0$ is the local stellar number density of the thin disk at ($R_\\odot$, $Z_\\odot$),\n$f_i$ is the density ratio to the thin disk ($f_1$=1),\n$L_{i}$ is the scale-length and $H_{i}$ is the scale-height. \nWe adopt $R_\\odot=8$\\,kpc \\citep{Reid1993} and\n$Z_\\odot = 25$\\,pc \\citep{Juric2008} in the current work.\n$H$ is the stellar density of the halo,\n\\begin{equation}\n H(R,Z)=f_{h}\\,n_{0}\\left[R^2+(Z\/\\kappa)^2\\over R_\\odot^2+(Z_\\odot\/\\kappa)^2\\right]^{-p\/2},\n\\end{equation}\nwhere $\\kappa$ is the axis ratio, $p$ is the power index and $f_h$ is the halo normalization \nrelative to the thin disk.\n\n\\subsection{Halo fit}\n\n\\begin{table}\n \\centering\n \\caption{The parameter space and results of the halo fit}\n \\begin{tabular}{lcccc}\n \\hline\n \\hline\nParameters & Range & Grid size & Best value & Uncertainty \\\\\n \\hline\n$\\kappa$ & 0.1--1.0 & 0.01 & 0.65 & 0.05 \\\\\n$p$ & 2.3--3.3 & 0.01 & 2.79 & 0.17 \\\\\n \\hline\n\\end{tabular}\\\\\n\\end{table}\n\nWe fit the component of the halo first.\nThe metallicity distribution of the halo stars can be described as a single \nGaussian component, with a median halo metallicity of $\\mu_{\\rm H}$=$-$1.46\\,dex and spatially\ninvariant of $\\sigma_{\\rm H}$=0.30\\,dex \\citep{Ivezic2008}. \nWe assume the metallicity of all halo stars as [Fe\/H]$=-1.46$\\,dex and adopt the\nphotometric parallax relation from \\citet{Ivezic2008},\n\\begin{equation}\n\\begin{split}\nM_{r} = & 4.50-1.11{\\rm [Fe\/H]}-0.18{\\rm [Fe\/H]}^2 \\\\\n & -5.06+14.32(g-i)_0-12.97(g-i)_0^2 \\\\\n & + 6.127(g-i)_0^3 - 1.267(g-i)_0^4+0.0967 (g-i)_0 ^5.\n\\end{split}\n\\end{equation}\nThe distances of the halo stars can thus be calculated from the standard relation, \n\\begin{equation}\n d=10^{0.2(r_0-M_r)+1}.\n\\end{equation}\n\nStar in a blue colour bin $0.5 \\le g-i < 0.6$ are selected. They do not suffer \nfrom the giant star contamination and probe larger distances to constrain the halo.\nWe calculate their distance using Equations~(5) and (6). The distances of the disk stars will be\nunderestimated because they are more metal-rich. To exclude the contamination of the disk stars, \nwe use stars with absolute distance to the Galactic plane \n$|Z| >$ 4\\,kpc. For each subfield, we divide all halo stars\ninto suitable numbers of logarithmic distance bins and then count the \nnumber for each bin. 
This number can be modelled as,\n\\begin{equation}\nN_{\\rm H}(d)=H(d)\\Delta V(d),\n\\end{equation}\nwhere $H(d)$ is the halo stellar density given by Equation~(4) and \n$\\Delta V(d)$ is the volume, given by,\n\\begin{equation}\n\\Delta V(d) =\\frac{\\omega}{3}(\\frac{\\pi}{180})^2(d^3_2-d^3_1), \n\\end{equation}\nwhere $\\omega$ denotes the area of the field (unit in deg$^2$), $d_1$ and $d_2$ are\nthe lower distance limit and upper distance limit of the bin, respectively. \n\nWe fit the halo model parameters $p$ and $\\kappa$ to the data.\nAs we explicitly exclude the disk, we cannot fit for \nthe halo-to-thin disk normalization $f_h$.\nA maximum likelihood technique is adopted to explore the best \nvalues of those halo model parameters.\nIn Table~2, we list the searching parameter space and the grid size. \nFor each set of parameters, a reduced likelihood is computed between the simulated \ndata (star counts in bins of distances) and the observations,\ngiven by \\citet{Bienayme1987} and \\citet{Robin2014},\n\\begin{equation}\n Lr=\\sum_{i=1}^{N} q_i \\times (1-R_i+{\\rm ln}(R_i)),\n\\end{equation}\nwhere $Lr$ is the reduced likelihood for a binomial statistics, \n$i$ is the index of each distance bin, $f_i$ and $q_i$ are the \nnumber of stars in the $i$th bin for the model \nand the data, respectively and $R_i=f_i\/q_i$. \nThe uncertainties of the halo parameters are estimated similarly as those in \\citet{Chang2011}.\nWe calculate the likelihood for 1000 times using the observed data and the \nsimulations of the best-fit model adding with the Poisson noises. \nThe resulted likelihood range defines the confidence level and thus the uncertainties. \n\n\\subsection{Disk fit}\n\nThe metallicity distribution of the disk is more complicated than that of the halo. \nThus we fit the disk model parameters through a different way.\nWe compare the $r$-band differential star counts in different colour bins and compare \nthem to the simulations to search for the best disk model parameters\n($n_0, ~L_1,~H_1,~f_2,~L_2$ and $H_2$), as well \nas the halo-to-thin disk normalization $f_h$.\n\nTowards a subfield of galactic coordinates ($l,~b$) \nand solid angle $\\omega$, the $r$-band differential star counts $N_{\\rm sim} (r^k_0)$ \n($k$ is the index of each magnitude bin) in a given colour bin $(g-i)^j_{0}$ \n($j$ is the index of each colour bin) can be simulated as follows:\n\\begin{enumerate}\n\\item The line of sight is divided into many small distance bins. For a given distance \nbin with centre distance of $d_i$ ($i$ is the index of each distance bin),\nthe $r$-band apparent magnitude of a star is given by \n\\begin{equation}\n r_0(d_i) = M_r ((g-i)^j_0, {\\rm [Fe\/H]}|l,b,d_i)+\\mu,\n\\end{equation}\nwhere $\\mu$ is the distance modulus [$\\mu = 5{\\rm log}_{10}(d_i)-5$] and $M_r$\nis the $r$-band absolute magnitude of the star given by Equation (5). The metallicities \nof halo stars are again assumed to be $-$1.46\\,dex and those of disk stars are \ngiven as a function of positions, which is fitted using the metallicities of \nmain sequence turn off stars from LSS-GAC \\citep{Xiang2015},\n\\begin{equation}\n {\\rm [Fe\/H]} = -0.61+0.51 \\cdot {\\rm exp}{(-|Z|\/1.57)}.\n\\end{equation}\n\n\\item The number of stars in each distance bin can be calculated by,\n\\begin{equation}\n N(d_i)=n(R,Z|l,b,d_i)V(d_i), \n\\end{equation}\nwhere $V(d_i)$ is the volume given by Equation~(8) and $n(R,Z|l,b,d_i)$\nis the stellar number density given by Equation~(2, 3 and 4). 
The halo model parameters,\n$\\kappa$ and $p$, resulted from the halo fit are adopted and settled to be not changeable here.\n\n\\item Combining all distance bins, \nwe can obtain the modeled $r$-band star counts $N(r^k_0)$, by\n\\begin{equation}\n N(r^k_0) = \\Sigma N(d_i) ~ {\\rm where} ~ r^k_0 - \\frac{\\rm rbin}{2} < r_0(d_i) < r^k_0 + \\frac{\\rm rbin}{2},\n\\end{equation}\nwhere rbin is the bin size of $r$-band magnitude (we adopt rbin=1\\,mag in the current work).\n$N(r^k_0)$ is the underlying star counts. When comparing to the observations, we \nneed to apply the selection function, by\n\\begin{equation}\n N_{\\rm sim}(r^k_0) = N(r^k_0)S(l,b,g-i,r) C,\n\\end{equation}\nwhere $S(l,b,g-i,r)$ is the selection function, calculated by Equation~(1) for \nXSTPS-GAC subfields in GAC area and equals to one for \nXSTPS-GAC subfields in M31\/M33 area and all the SDSS subfields. Besides,\n \\begin{equation}\n C =\n\\begin{cases}\n1 & {\\rm for~} d_{\\rm min} < d_i < d_{\\rm max}; \\\\\n0\t\t& {\\rm otherwise}; \\\\\n\\end{cases}\n\\end{equation}\n\\begin{eqnarray}\n d_{\\rm min} &=& 10^{0.2(r_{\\rm min}-A_r(d_i)-M_r((g-i)_0,{\\rm [Fe\/H]}))+1},\\\\\n d_{\\rm max}&=&10^{0.2(r_{\\rm max}-A_r(d_i)-M_r((g-i)_0,{\\rm [Fe\/H]}))+1},\n\\end{eqnarray}\nwhere $r_{\\rm min}$ and $r_{\\rm max}$ are the \nmagnitude limits of each subfield.\nWe adopt $r_{\\rm min} = 12$ and $r_{\\rm max} = 18$ for all XSTPS-GAC subfields,\nand $r_{\\rm min} = 15$ and $r_{\\rm max} = 21$ for all SDSS subfields. \n$A_r(d_i)$ is the extinction in $r$-band at distance of $d_i$. We adopt the 3D extinction map from\n\\citet{Chen2014} for XSTPS-GAC subfields in GAC area and \n2D extinction map from \\citet{Schlegel1998} for\nXSTPS-GAC subfields in M31\/M33 area and all SDSS subfields. As the size of each subfield is quite\nlarge ($\\sim$ 4\\,deg$^2$), the extinction $A_r(d)$ varies within a subfield. We thus adopt the maximum \nvalues to make sure that our data are complete.\n\\end{enumerate}\n\nThe photometric parallax relation of Equation~(5) is only valid for the single \nstars. A large fraction of stars in the Milky Way are\nactually binaries (e.g. \\citealt{Yuan2015b}). In the current work we adopt the binary fraction \nresulted from \\citet{Yuan2015b} and assume that 40\\,per\\,cent of\nthe stars are binaries. The absolute magnitudes $M_r$ of the binaries are calculated as the same way\nas in \\citet{Yuan2015b}. \n\nWe also consider the effects of photometric errors, \nthe dispersion of disk star metallicities and the errors due to\nthe photometric parallax relation of \\citet{Ivezic2008}. The $r$-band photometric errors\nof most stars in the XSTPS-GAC and the SDSS are smaller than 0.05\\,mag \n(\\citealt{Chen2014} for the XSTPS-GAC and \\citealt{Sesar2006} for the SDSS).\nWhen we fit the metallicities of disk stars as a function of positions [Equation~(11)], we \nfind a dispersion of the residuals of about 0.05\\,dex. \nAccording to Equation~(5), this dispersion \nwould introduce an offset of about 0.05\\,mag for \nthe absolute magnitude when [Fe\/H] = $-$0.2\\,dex. \nAs a result, the effect of the photometric errors and the\ndisk stars metallicities dispersions would \nintroduce a distance errors of smaller than 5\\,per\\,cent.\nCombining with the systematic error of the photometric parallax \nrelation, which is claimed to be smaller than 10\\,per\\,cent \\citep{Ivezic2008},\nwe assume a total error of distance of 15\\,per\\,cent. 
This distance error is added \nwhen we model the $r$-band\nmagnitude of stars in a given distance bin [Equation~(10)].\n\nWe select three different colour bins for the disk fit. \nTwo of them correspond to G-type stars with \n$0.5 \\le (g-i)_0 < 0.6$\\,mag and $0.6 \\le (g-i)_0 < 0.7$\\,mag,\nand the other one corresponds to late K-type stars with $1.5 \\le (g-i)_0 < 1.6$\\,mag. \nThe giant and sub-giant contaminations for the first two G-type star bins are very small. \nFor the late K-type stars, we exclude stars with $r$-band magnitude $r_0 < 15$\\,mag \nto avoid the giant contaminations. For each colour bin, we count the differential\n$r$-band star counts with a binsize of $\\Delta r=1$\\,mag and then \ncompare them to the simulations to search for the \nbest disk model parameters, i.e $n_0,~L_1,~H_1,~f_2,~L_2,$ and $H_2$ and\nthe halo-to-thin disk normalization $f_h$. \nSimilarly as in \\citet{Robin2014}, \nan ABC-MCMC algorithm is implemented using the reduced likelihood calculated by Equation~(9) \nin the Metropolis-Hastings algorithm acceptance ratio \n\\citep{Metropolis1953, Hastings1970}. \nWe note that the 68\\,per\\,cent probability intervals of the \nmarginalised probability distribution functions (PDFs) of each parameter, given\nby the accepted values after post-burn period in the MCMC chain are only the fitting uncertainties \nwhich do not include systematic uncertainties.\nA detailed analysis of errors of the scale parameters will be given in Sect.~4.2.\n\nThe stellar flare is becoming significant at $R \\ge$15\\,kpc \\citep{Lopez2014} \nwhile the limiting magnitude we adopt for XSTPS-GAC is $r=18$\\,mag, which corresponds\nto $R ~\\sim$ 13\\,kpc for early G-type dwarfs. On the other hand, the disk warp is \na second order effect on the star counts and the XSTPS-GAC centre around the \nGAC, with $l$ around 180\\degr. The effect of the disk warp is thus\nnegligible \\citep{Lopez2002}. So in the current work we ignore the influences of the disk warps and flares. \nIn order to minimise the effects coming from other irregular structures (overdensities)\nof the Galactic disk and halo (e.g., Virgo overdensity, etc. ), we iterate our fitting \nprocedure to automatically and gradually \nremove pixels contaminated by unidentified irregular structures, similarly as in \\citet{Juric2008}. \nThe model is initially fitted using all the data points.\nThe resulted best-fit model is then used to define the outlying data, \nwhich have ratios of residuals (data minus the best-fit model) to the best-fit model\nhigher than a given value, i.e. $(N_{\\rm obs}-N_{\\rm mod})\/N_{\\rm mod} > a_1$. \nThe model is then refitted with the outliers excluded. \nThe newly derived best-fit model is again compared to all the data points. \nNew outliers with $(N_{\\rm obs}-N_{\\rm mod})\/N_{\\rm mod} > a_2$\nare excluded for the next fit. We repeat this procedure with a sequence of values\n$a_i$ = 0.5, 0.4, and 0.3. The iteration, which gradually reject about 1, 5 and 15\\,per\\,cent of\nthe irregular data points of smaller and smaller significance, will make our model-fitting algorithm to \nconverge toward a robust solution which describes the smooth background best. \n\n\\section{The Results and Discussion}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{halogrid.eps}\n \\caption{Reduced likelihood surface of the halo parameters $p$ and $\\kappa$ \n space (see Table~2). The best-fitted values and uncertainties are marked \n as a red plus with error bars. 
\n The red contour ellipse shows the likelihood ranges used for estimating \n the uncertainties.}\n \\label{halog}\n\\end{figure}\n\n\n\\begin{table*}\n \\centering\n \\caption{The best-fit values of the disk fit}\n \\begin{tabular}{lcccccccc}\n \\hline\n \\hline\n Bin & $n_1$ & $L_1$ & $H_1$ & $f_2$ & $L_2$ & $H_2$ & $f_H$ & $Lr$ \\\\\n & $10^{-3}$stars\\,$pc^{-3}$ & pc & pc & per\\,cent & pc & pc & per\\,cent & \\\\\n\\hline\nJoint fit\\\\\n\\hline\n $0.5 \\le (g-i)_0 < 0.6$ & 1.25 & 2343 & 322 & 11 & 3638 & 794 & 0.16 & $-$86769 \\\\\n $0.6 \\le (g-i)_0 < 0.7$ & 1.20 & & & & & & & \\\\\n $1.5 \\le (g-i)_0 < 1.6$ & 0.54 & & & & & & & \\\\ \n \\hline \n Individual fit\\\\\n \\hline\n $0.5 \\le (g-i)_0 < 0.6$ & 1.31 & 1737 & 321 & 14 \n & 3581& 731 & 0.16 & $-$43699 \\\\ \n $0.6 \\le (g-i)_0< 0.7$ & 1.65 & 2350 & 284 & 7 \n & 3699 & 798 & 0.12 & $-$35774 \\\\ \n $1.5 \\le (g-i)_0 < 1.6$ & 0.41 & 2780 & 359 & 8 \n & 2926 & 1014 & 0.50 & $-$4028 \\\\\n \\hline\n stddev & & 429 & 31 & 3 & 360 & 124 & 0.02 & \\\\\n \\hline \n\\end{tabular}\n\\end{table*} \n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{cross2d.eps}\n \\caption{\n Two-dimensional marginalized PDFs for the disk model parameters,\n$L_1,~H_1,~f_2,~L_2,$ and $H_2$\nand the halo-to-thin disk normalization $f_h$, obtained from the MCMC analysis. \nHistograms on top of each column show the one-dimensional marginalized PDFs \nof each parameter labeled at the bottom of the column. \nRed pluses and lines indicate the best solutions. \nThe dash lines give the 16th and 84th percentiles, \nwhich denotes only the fitting uncertainties.}\n \\label{cross2d}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.68\\textwidth]{rntestmodel.eps}\n \\caption{Star count (per deg$^2$) for the colour bin $0.5 \\le (g-i)_0 <0.6$\\,mag and magnitude bins,\n $r_0$ = 15 (left) and 16\\,mag (right), of both the XSTPS-GAC\n (red pluses) and the SDSS (blue pluses) data as a function of the Galactic latitude\nfor example subfields with Galactic longitude 177\\degr $ N_{mod}$, except for \na few subfields such as those located at ($l,b$)=(170\\degr, 0\\degr). The extinction in\nthese fields are large \\citep{Chen2014}. It is very difficult to distinguish that whether it is a real `hole' \nor it is caused by the selection effects or extinction correction errors. \nFor the overdensities, we find three large scale structures,\nwhich are located at different positions on the sky and appear at\ndifferent magnitudes. We describe them as follows.\n\nThe first large region where star counts are in excess is located at \n240\\degr\\ $$} occurs when the communities are close to each other, in fact\\textcolor{orange}{, in fact,} they're together to form a shared structure for the entire \\textcolor{red}{body}.\n\\tabularnewline \\hline \\hline\n\\textbf{Generated Translation - MQT}\\tabularnewline\n\\hline \n(a) For me, it means to spend time to think, to talk about the poor people, who have a \\textcolor{blue}{difficult situation}, about who there is \\textcolor{blue}{no opportunity} to go to TED. 
\\\\\n(b) A time of \\textcolor{red}{communication} happens when communities are close to each other, in fact, they together form a \\textcolor{blue}{formula} that is shared for the whole \\textcolor{blue}{colony} on a single \\textcolor{blue}{array} of DNA.\n\\tabularnewline \\hline\n\\end{tabularx}\n\\label{vi-en:translations}\n\\end{table}\n\nWe perform dropout regularization of the trained models, with a dropout rate equal to 0.2. We minimize $\\pazocal{L}(\\phi)$ by employing the Adam \\cite{adam} optimizer with its default settings for En$\\leftrightarrow$Vi and simple stochastic gradient descent (SGD) for En$\\leftrightarrow$Ro\\footnote{En$\\leftrightarrow$Vi models are trained for $\\sim$12 epochs; En$\\leftrightarrow$Ro for$\\sim$12 epochs for structured attention models and $\\sim$4 for the rest.}. We preserve homogeneity throughout the trained architectures as follows.\nBoth the encoders and the decoders of all the evaluated models are presented with 256-dimensional \\emph{trainable} word embeddings. The maximum inference length is set to 50. We utilize 2-layer BiLSTM encoders, and 2-layer LSTM decoders; all comprise 256-dimensional hidden states on each layer. For the remainder of hyper-parameters, we adopt the default settings used in the code\\footnote{https:\/\/github.com\/harvardnlp\/struct-attn.} provided by the authors in \\cite{structuredAttention} for structured attention models and the code in \\cite{luong17} for the rest. Except specified otherwise, the default settings used by the latter for En$\\leftrightarrow$Vi also apply to En$\\leftrightarrow$Ro.\n\n\n\n\\subsection{Results} \n\\begin{table}[t]\n\\caption{Ro$\\rightarrow$En, dev set - Examples (a) 5 and (b) 182.}\n\\begin{tabularx}{\\textwidth} {|X|} \\hline \n\\textbf{Reference Translation}\\tabularnewline \\hline \n(a) Dirceu is the most senior member of the ruling Workers' Party to be taken into custody in connection with the scheme.\n\\\\\n(b) With one voice the lobbyists talked about a hoped-for ability in Turnbull to make the public argument, to cut the political deal and get tough things done.\n\\tabularnewline \\hline \\hline\n\\textbf{Generated Translation - Baseline}\\tabularnewline \\hline\n(a) He is the oldest member of the \\textcolor{orange}{Dutch People's Party} on \\textcolor{orange}{Human Rights} in custody for \\textcolor{orange}{the} \\textcolor{blue}{links} with this scheme.\n\\\\\n(b) The representatives of \\textcolor{blue}{lobbyists} have spoken about their hope in the ability of \\textcolor{orange}{Turngl} to \\textcolor{blue}{satisfy} the public interest, to reach a political agreement and to do things well.\n\\tabularnewline \\hline \\hline\n\\textbf{Generated Translation - Structured Attention}\\tabularnewline \\hline\n(a) \\textcolor{orange}{It} is the oldest member of the \\textcolor{orange}{Mandi} \\textcolor{orange}{of the Massi} in \\textcolor{orange}{the} government \\textcolor{red}{in the government}.\n\\\\\n(b) The representatives of the \\textcolor{blue}{interest groups} have spoken \\textcolor{red}{in mind} about their hope to \\textcolor{blue}{meet} the public interest, to achieve a political and good thing.\n\\tabularnewline \\hline\n\\textbf{Generated Translation - MQT}\\tabularnewline \\hline \n(a) \\textcolor{orange}{Dirre} is the oldest member of the \\textcolor{orange}{People's Party} in government \\textcolor{blue}{held} in custody for \\textcolor{blue}{ties} with this scheme.\n\\\\\n(b) The representatives of \\textcolor{blue}{interest groups} have spoken 
\\textcolor{orange}{to} \\textcolor{blue}{unison} about their hope in \\textcolor{orange}{Turkey's} ability to \\textcolor{blue}{meet} the public interest, to reach a political agreement and to do things well.\n\\tabularnewline \\hline\n\\end{tabularx}\n\\label{ro-en:translations}\n\\end{table}\n\nTable \\ref{translation:results} shows superior performance for our \\emph{multiplicative} approach. In addition, note that despite their extended training requirements, structured attention models demonstrate an inability to properly capture long-temporal information, both score and output-wise, as presented in Tables \\ref{vi-en:translations} and \\ref{ro-en:translations}. These showcase some characteristic examples of generated translations for a hands-on inspection of model outputs. We annotate deviations from the reference translation with orange and red, for minor\nand major deviations\nrespectively. Synonyms are highlighted with blue. We also indicate missing tokens, such as verbs, articles and adjectives, by adding the \\textbf{[$<$token$>$]} identifier.\n\n\n\n\n\n\n\n\n\n\\section{Discussion} \\label{discuss}\n\nIn this section, we want to further explore how and why our proposed approach can enable\nbetter utilization of infrequent words through second-order interactions. Furthermore, we outline technical augmentations that we consider as future research directives.\n\n\\subsection{Model uncertainty} \n\n\\begin{table*}[t]\n\\caption{Rare word mean reference frequency deviation}\n\\small\n\\centering{}%\n\\begin{tabu}{|c|c|c|c|c|}\n\\hline \n\\multirow{3}{*}{Language Pair} & & \\multicolumn{3}{c|}{Deviation (\\%)}\\tabularnewline\n\\cline{3-5} \n & \\makecell{Mean reference \\\\ frequency (\\%)} & Baseline & \\makecell{Structured \\\\ Attention} & MQT\\tabularnewline\n\\hline\n\\multirow{1}{*}{En$\\rightarrow$Vi} & \\multirow{1}{*}{3.59} & \\multirow{1}{*}{-4.82} & \\multirow{1}{*}{11.57} & \\multirow{1}{*}{\\textit{\\textbf{0.28}}}\n\\tabularnewline\n\\cline{2-5}\n\\multirow{1}{*}{\\makecell{Vi$\\rightarrow$En}} & \\multirow{1}{*}{8.91} & \\multirow{1}{*}{-10.77} & \\multirow{1}{*}{-20.85} & \\multirow{1}{*}{\\textit{\\textbf{-8.68}}}\n\\tabularnewline\n\\cline{2-5}\n\\multirow{1}{*}{En$\\rightarrow$Ro} & \\multirow{1}{*}{7.00} & \\multirow{1}{*}{\\textbf{4.59}} & \\multirow{1}{*}{\\textit{14.61}} & \\multirow{1}{*}{-7.87}\n\\tabularnewline\n\\cline{2-5}\n\\multirow{1}{*}{Ro$\\rightarrow$En} & \\multirow{1}{*}{6.75} & \\multirow{1}{*}{34.23} & \\multirow{1}{*}{37.47} & \\multirow{1}{*}{\\textbf{\\textit{23.17}}}\n\\tabularnewline\n\\hline\n\\end{tabu}\n\\label{rare-words-table}\n\\end{table*}\n\n\nA first case study is inspired from the interesting work of \\cite{ott2018analyzing}. Therein, the major claim is that if model and data distributions match, then samples drawn from both should also match. As a broader extension to our evaluation, we present deviation from unigram word distributions (across 1 - 30\\% frequency groups). Table \\ref{rare-words-table} reveals how well our evaluated models estimate said frequencies. Words are split into groups based on their appearance in their respective training datasets. We present this against development set frequencies. Note that evaluation criteria include not only \\textit{deviation} but also \\textit{least over-representation} (favored to any magnitude of under-representation); these are presented in bold and italics, respectively. 
This is because swapping a frequent for a rare word would not be as harmful as the reverse, in terms of translation quality\nIn 3 out of 4 cases, our approach achieves close representation of target frequencies.\n\n\n\n\\subsection{High rank matrix approximation}\nThe capacity of most natural language models is crippled by their inability to cope in highly context-dependent settings \\cite{yang2017breaking}. Admittedly, this shortcoming is due to their limited capacity to capture complex hierarchical dependencies. To address this issue, we need to devise computationally efficient ways of capturing higher-order dynamics. A first step towards this goal is offered by our approach. However, our approach is limited to second-order interactions; in addition, to allow for computational efficiency, we have resorted to a mean-pooling solution in the computation of the final context vector.\nA more general solution would be to resort to spectral decomposition, which would require performing eigen\/tensor decomposition.\nHowever, differentiating this operation during back-propagation may lead to numerical instability issues \\cite{dang2018eigendecomposition}, rendering it non-differentiable. \nWe aim to examine solutions to these issues in our future work.\n\\section{Conclusions}\n\nIn this work, we introduced a novel regard towards formulating attention layers in \\emph{seq2seq}-type models. Our work was inspired from the quest of a more expressive way of computing dependencies between input and output sequences. Specifically, our aim was to enable capturing of second-order dependencies between the source sequence encodings and the generated output sequences. \n\nTo effect this goal, for the first time in the literature, we leveraged concepts from the field of quantum statistics. We cast the operation of the attention layer into the computation of the \\emph{Attention Density Matrix}, which expresses how pairs of source sequence elements correlated with each other, and jointly with the generated output sequence. Our formulation of the ADM was based on \\emph{Density Matrix} theory; it is an attempt to encapsulate the core concepts of the Density Matrix in the context of attention networks, without adhering though to the exact definition and properties of density matrices.\n\nWe exhibited the merits of our approach on \\emph{seq2seq} architectures addressing competitive MT tasks.\nWe have showed that the unique modeling capacity of our approach translates into better handling of (rare) words in the model outputs. Hence, this finding offers a quite plausible explanation of the obtained improvement in the achieved BLEU scores.\nFinally, we emphasize inference using our method entails minor computational overhead compared to conventional SA, with only a single extra forward-propagation computation.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn recent years, entanglement has been regarded as a quantum\nresource for many novel tasks such as quantum computation, quantum\ncryptography, quantum teleportation and so on \\cite{Nielsen}.\nThese quantum-information tasks cannot be carried out by classical\nresources and they rely on the entangled states. Although the\nmixed entangled states are directly used in some\nquantum-information tasks \\cite{Murao}, most of them require the\npure entangled states of bipartite or multipartite system to be\nthe crucial elements. 
However in a lab, it turned out that the\npure entangled states always become mixed by the decoherence due\nto the coupling with the environment. A central topic in quantum\ninformation theory is thus how to extract pure entangled states\nfrom mixed states \\cite{Horodecki1}.\n\n\nAn entangled state $\\rho$ is distillable if one can asymptotically\nor explicitly extract some pure entangled state from infinitely\nmany copies of $\\rho$ by using only local operations and classical\ncommunication (LOCC). It has been proved that the entangled\n2-qubit states are always distillable\n\\cite{Bennett1,Bennett2,Horodecki2}. Nevertheless there exist\nbound entangled (BE) states which are not distillable under LOCC\n\\cite{Horodecki4}. Concretely, a bipartite entangled state $\\rho$\nin the Hilbert space $H_A\\otimes H_B$ is BE if it has positive\npartial transpose (PPT) with respect to system $A$ (or $B$),\nnamely $\\rho^{T_A}(\\mbox{or}\\ \\rho^{T_B})\\geq0$. Such states are\ncalled PPT BE states and usually it cannot be used for\nquantum-information tasks under LOCC \\cite{Murao,Eggeling}.\n\n\nA more formidable challenge is that whether a bipartite state\n$\\rho_{AB}$ having non-positive partial transpose (NPT) with\nrespect to system $A$ (or $B$) is always distillable. This class\nof states are always entangled due to the celebrated\nPeres-Horodecki criterion \\cite{Peres}. It was pointed out by\n\\cite{Horodecki5} that any NPT state can be converted into some\nNPT Werner state under LOCC. Much efforts have been devoted to\ndistilling this kind of states and there has been a common belief\nthat NPT BE Werner states indeed exists\n\\cite{DiVincenzo,Vianna,Pankowski,Hiroshima,Bandyopadhyay,Kraus,Clarisse}.\nIn addition, it has been proved that the NPT states in $2\\times N$\nspace are distillable \\cite{Horodecki2,Kraus2} and the rank two\nNPT states of bipartite systems are also distillable\n\\cite{Horodecki6}. However, the situation becomes more complex\nwhen we distill the entangled state whose subsystems have higher\ndimensions or that has a higher rank.\n\nIn this paper we show that the rank three bipartite entangled\nstates are distillable under LOCC. We give the concrete method of\ndistilling this class of states. It helps infer the analytical\ncalculation of distillable entanglement \\cite{Rains,Devetak}. A\nrank three state is entangled if and only if (iff) it is NPT,\nnamely there is no PPT BE state of rank three \\cite{Lewenstein}.\nSo we also obtain that there are no rank-three NPT BE states and\nall of them can be used for quantum-information tasks. It is\nsimilar to the case of rank two states and we conclude: a rank two\nor three state is distillable iff it is entangled. This conclusion\ndoes not hold for the bipartite entangled states with higher\nranks, e.g., there have been the rank four PPT BE states\nconstructed by the unextendible product bases (UPB) \\cite{Mor}.\n\nMoreover, we will investigate the NPT states of rank four and find\nout some families of states that are distillable. This helps\ndistill the NPT states which have more complex structure. In\naddition, we will show that locally converting the Werner state\ninto the rank three entangled state is difficult, so our result is\nindependent of the expectant fact that there exists NPT BE Werner\nstate.\n\nThe rest of this paper is organized as follows. In Sec. II we\nprove our main result on rank three states and then we use it to\ndistill the rank four NPT states. 
We also discuss the relationship\nbetween the result in this paper and the Werner state. We conclude\nin Sec. III.\n\n\n\\section{distillation of rank three and four bipartite states}\n\nThroughout this paper we will use the following notations. The\nrank of a bipartite state $\\rho_{AB}$ is referred to as\n$r(\\rho_{AB})$, and the reduced density operator of it as\n$\\rho_A=\\mbox{Tr}_B\\rho_{AB},\\rho_B=\\mbox{Tr}_A\\rho_{AB}$. The\nrange of the density operator $\\rho_{AB}$ is referred to as\n$R(\\rho_{AB})$. Another useful tool is the so-called invertible\nlocal operator (ILO) (or the local filter) \\cite{Dur}, namely the\nnonsingular matrix. Physically, it can be probabilistically\nrealized through the positive operator valued measure (POVM)\n\\cite{Nielsen}, so we can use it when distilling the NPT states.\n\n\nWe first consider the NPT states of rank three. Before proving our\nmain theorem, we recall a useful lemma that was proved in\n\\cite{Horodecki6}.\n\n\\textit{Lemma 1}. If\n$r(\\rho_{AB})<\\mbox{max}[r(\\rho_{A}),r(\\rho_{B})]$, then the\nbipartite state $\\rho_{AB}$ is distillable.\n\\hspace*{\\fill}$\\blacksquare$\n\nThe lemma has been used to show that there is no rank two BE state\n\\cite{Horodecki6}. It was proven by using the reduction criterion\n\\cite{Horodecki5}, i.e., a state is distillable when the reduction\ncriterion is violated (See Eq. (6) in \\cite{Horodecki6}). It\nfollows from lemma 1 that any rank three state in $M\\times N$\nspace with $\\mbox{max}[M,N]>3$ is distillable. Since an NPT state\nin $2\\times2$ or $2\\times3$ space is also distillable\n\\cite{Bennett1,Bennett2,Horodecki2}, it suffices to consider the\nrank three NPT states $\\rho_{AB}$ in $3\\times3$ space. Moreover,\nwe can perform some ILO on the subsystem $B$ such that\n$\\rho_B=\\frac13I$. Then only the state having the following form\ndoes not violate the reduction criterion ( up to local unitary\ntransformations )\n\\begin{equation}\n\\sigma_{AB}\\equiv\\frac13|\\psi_0\\rangle\\langle\\psi_0|+\\frac13|\\psi_1\\rangle\\langle\\psi_1|\n+\\frac13|\\psi_2\\rangle\\langle\\psi_2|,\\sigma_B=\\frac13I\n\\end{equation}\nwhere the three eigenvectors satisfy\n$\\langle\\psi_i|\\psi_j\\rangle=\\delta_{ij}$ and\n\\begin{eqnarray}\n|\\psi_0\\rangle&=&\\cos\\theta|00\\rangle+\\sin\\theta|11\\rangle,\\\\\n|\\psi_1\\rangle&=&\\sum\\nolimits^{2}_{i,j=0}b_{ij}|ij\\rangle,\\\\\n|\\psi_2\\rangle&=&\\sum\\nolimits^{2}_{i,j=0}c_{ij}|ij\\rangle.\n\\end{eqnarray}\nNotice that there is always at least a Schmidt rank two state by\nlinear combination of the eigenvectors. In addition, any spectral\ndecomposition of the state $\\sigma_{AB}$ have the form in Eq. (1)\n(in which the state $|\\psi_0\\rangle$ has a more general form,\ne.g., $|\\psi_0\\rangle=\\sum\\nolimits^{2}_{i,j=0}a_{ij}|ij\\rangle$).\n\nIn what follows we will concentrate on the NPT state $\\sigma_{AB}$\nin Eq. (1) because any rank three NPT state in $3\\times3$ space\ncan be locally converted into $\\sigma_{AB}$, otherwise it is\ndistillable in terms of the reduction criterion. There is a simple\nsituation we can treat easily as follows.\n\n\\textit{Lemma 2}. The state $\\sigma_{AB}$ is distillable when\nthere is a product state in its range.\n\n\\textit{Proof.} Without loss of generality, we consider the state\n$\\sigma_{AB}$ with $\\theta=0$. Then its coefficients\n$b_{i0},c_{i0},i=0,1,2$ equal zero because of the condition\n$\\sigma_B=\\frac13I$. 
We project the state $\\sigma_{AB}$ by using\nthe local projector\n$I_A\\otimes(|1\\rangle\\langle1|+|2\\rangle\\langle2|)_B$ and obtain\nthe resulting state\n$\\frac12|\\psi_1\\rangle\\langle\\psi_1|+\\frac12|\\psi_2\\rangle\\langle\\psi_2|$.\nIt's a rank two NPT state and hence distillable. It implies the\nstate $\\sigma_{AB}$ is also distillable.\n\\hspace*{\\fill}$\\blacksquare$\n\nLemma 2 has given a criterion that tells whether a rank three NPT\nstate is distillable. We will generalize it to the case of rank\nfour states later. It is also useful for the distillation of\ngeneral rank three NPT state as shown below. Let us consider the\nstate $\\sigma_{AB}$ whose range has no product state. We take the\nprojector $P_{AB}$ onto the $2\\times3$ subspace spanned by\n$\\{|00\\rangle,|01\\rangle,|02\\rangle,|10\\rangle,|11\\rangle,|12\\rangle\\}$\nand obtain the state\n\\begin{equation}\n\\sigma^1_{AB}=|\\psi^1_0\\rangle\\langle\\psi^1_0|+|\\psi^1_1\\rangle\\langle\\psi^1_1|\n+|\\psi^1_2\\rangle\\langle\\psi^1_2|,\n\\end{equation}\nwhich is not normalized for convenience. The resulting states\n$|\\psi^1_i\\rangle$ equal $P_{AB}|\\psi_i\\rangle$, respectively. We\nwill follow this notation below, e.g.,\n$|\\psi^2_i\\rangle=V_A\\otimes V_B|\\psi^1_i\\rangle$, etc.\n\nThe state $\\sigma^1_{AB}$ is distillable if it is entangled since\nit is in $2\\times2$ or $2\\times3$ space. Let us consider the case\nin which $\\sigma^1_{AB}$ is separable. First, the state\n$\\sigma^1_{AB}$ is in $2\\times2$ space iff\n$b_{i2}=c_{i2}=0,i=0,1$. In this case, the condition\n$\\sigma_B=\\frac13I$ leads to\n$b_{2i}b^*_{22}+c_{2i}c^*_{22}=0,i=0,1$ and\n$|b_{22}|^2+|c_{22}|^2=1$. When $b_{22}c_{22}=0$, either the state\n$|\\psi_1\\rangle$ or $|\\psi_2\\rangle$ becomes a product state and\nhence $\\sigma_{AB}$ is distillable in terms of lemma 2; When\n$b_{22}c_{22}\\neq0$, we can remove the coefficients\n$b_{2i},c_{2i},i=0,1$ by using linear combination of the\neigenvectors $|\\psi_i\\rangle,i=0,1,2$. It is then easy to see that\n$R(\\sigma_{AB})$ contains a product state and thus $\\sigma_{AB}$\nis distillable.\n\n\nSecond, we investigate the state $\\sigma^1_{AB}$ in $2\\times3$\nspace. Notice the rank of $\\sigma^1_{AB}$ remains three, otherwise\nthere will be a product state in $R(\\sigma_{AB})$ and it is\ndistillable. We can always write a rank three separable state\n$\\rho$ in $2\\times3$ space as the sum of three product states\n\\cite{Wootters,Werner}. To prove it, suppose the state has the\nform\n\\begin{equation}\n\\rho=\\sum\\nolimits^{d-1}_{i=0}|\\phi_i\\rangle|\\omega_i\\rangle\\langle\\phi_i|\\langle\\omega_i|,d>3.\n\\end{equation}\nWithout loss of generality we choose the first three product\nstates as a set of linearly independent vectors, so any other\nproduct state can be written as\n$|\\phi_j\\rangle|\\omega_j\\rangle=\\sum\\nolimits^2_{i=0}k_{ij}|\\phi_i\\rangle|\\omega_i\\rangle,j=3,...$\nNotice the vectors $|\\omega_i\\rangle,i=0,1,2$, and two vectors in\n$|\\phi_i\\rangle,i=0,1,2$ are linearly independent, respectively.\nSo the product state $|\\phi_j\\rangle|\\omega_j\\rangle,j>3$ equals\neither one of the first three product states, or\n$|\\phi_j\\rangle|\\omega_j\\rangle=\\sum\\nolimits^1_{i=0}k_{ij}|\\phi_i\\rangle|\\omega_i\\rangle$\nin which $|\\phi_0\\rangle$ is proportional to $|\\phi_1\\rangle$. 
In\nthis case it is easy to write the state $\\rho$ as the sum of three\nproduct states.\n\nUsing the above conclusion, we can express the state $\\sigma_{AB}$\nby means of eigenvectors\n$|\\psi_i\\rangle=(a_{i0}|0\\rangle+a_{i1}|1\\rangle)|\\phi_{i1}\\rangle+|2\\rangle|\\phi_{i2}\\rangle,i=0,1,2.$\nMoreover, the vectors $|\\phi_{i1}\\rangle$'s are linearly\nindependent, while $|\\phi_{i2}\\rangle$'s linearly dependent. We\nperform some ILO's on the state $\\sigma_{AB}$ and remove two\ncoefficients $a_{00}$ and $a_{11}$. The resulting state\n$\\sigma^2_{AB}$ still has the form in Eq. (1), otherwise it is\ndistillable.\n\nFor the state $\\sigma^2_{AB}$ when the condition $a_{20}a_{21}=0$\nis satisfied, we find that $R(\\sigma^2_{AB})$ contains a product\nstate because of the orthogonal conditions\n$\\langle\\psi^2_i|\\psi^2_j\\rangle=\\delta_{ij}$. So the state\n$\\sigma_{AB}$ is distillable. Let us move to investigate the state\n$\\sigma^2_{AB}$ satisfying the condition $a_{20}a_{21}\\neq0$. By\nperforming ILO's on $\\sigma^2_{AB}$ we greatly simplify its form\nsuch that\n\\begin{equation}\n\\sigma^3_{AB}=|\\psi^3_0\\rangle\\langle\\psi^3_0|+|\\psi^3_1\\rangle\\langle\\psi^3_1|\n+|\\psi^3_2\\rangle\\langle\\psi^3_2|,\n\\end{equation}\nwhere\n\\begin{eqnarray}\n|\\psi^3_0\\rangle&=&|00\\rangle+|2\\rangle|\\psi\\rangle,\\\\\n|\\psi^3_1\\rangle&=&|11\\rangle+|2\\rangle|\\phi\\rangle,\\\\\n|\\psi^3_2\\rangle&=&(|0\\rangle+|1\\rangle)|2\\rangle+\n|2\\rangle(\\alpha|\\psi\\rangle+\\beta|\\phi\\rangle),\\\\\n|\\psi\\rangle&=&x_0|0\\rangle+x_1|1\\rangle+x_2|2\\rangle,\\\\\n|\\phi\\rangle&=&y_0|0\\rangle+y_1|1\\rangle+y_2|2\\rangle.\n\\end{eqnarray}\nNotice the state is not normalized and the condition\n$\\sigma^3_B=\\frac13I$ is also not required. We project\n$\\sigma^3_{AB}$ by the projector\n$[|0\\rangle(\\langle0|+a\\langle1|)+|2\\rangle\\langle2|]_A\\otimes\nI_B,a\\in R$ and obtain the state $\\sigma^4_{AB}$ in $2\\times3$\nspace. It is entangled and thus distillable when its partial\ntranspose is not positive \\cite{Peres}. Nevertheless, there may be\nsome cases in which the coefficients $x_i,y_i,i=0,1,2$ make that\n$(\\sigma^4_{AB})^{T_A}\\geq0$. We are going to find out such\ncoefficients by calculating several average values\n$\\mbox{Tr}[(\\sigma^4_{AB})^{T_A}|\\omega_i\\rangle\\langle\\omega_i|]$,\nwhere\n$|\\omega_0\\rangle=|00\\rangle+b|22\\rangle,|\\omega_1\\rangle=|00\\rangle+b|21\\rangle\n,|\\omega_2\\rangle=|01\\rangle+b|20\\rangle,|\\omega_3\\rangle=|02\\rangle+b|20\\rangle,b\\in\nC.$ To keep the average value always positive, we find that it is\nnecessary that $x_1=x_2=0$. However, this means the state\n$|\\psi^3_0\\rangle$ is of product form and hence the state\n$\\sigma^3_{AB}$ is distillable. As it can be converted into the\nstate $\\sigma_{AB}$ by ILO's, the latter is also distillable. Now\nwe reach our main theorem in this paper.\n\n\\textit{Theorem.} The rank three NPT states are distillable under\nLOCC. \\hspace*{\\fill}$\\blacksquare$\n\nSo the rank three entangled states can be used for\nquantum-information tasks. In fact, we have proposed the method of\ndistilling $\\sigma_{AB}$ in the proof of the theorem. First, when\nthe given state contains a product state in its range, it can be\nprojected onto a rank two entangled state. According to the\nreduction criterion, we can distill it by the procedure similar to\nthe famous BBPSSW protocol \\cite{Bennett2,Horodecki5}. 
The same procedure also distills the rank three entangled states that\ncannot be converted into $\\sigma_{AB}$. Second, when the given\nstate $\\rho$ contains no product state in $R(\\rho)$, we project it\nby the projector $(|0\\rangle\\langle0|+|1\\rangle\\langle1|)_A\\otimes\nI_B$. The resulting state is entangled and thus distillable;\notherwise, we should project the initial state $\\rho$ by the\nprojector\n$[|0\\rangle(\\langle0|+a\\langle1|)+|2\\rangle\\langle2|]_A\\otimes\nI_B$ after performing some ILOs on $\\rho$. There will be a\nsuitable parameter $a$ making the resulting state entangled and\nthus distillable.\n\nThe rank three entangled states are a rather special class of\nstates. Since PPT BE states exist at any higher rank (e.g., the\nrank four PPT BE states constructed from UPBs \\cite{Mor}), we have indeed\nidentified the lowest rank at which there is no BE\nstate. It also implies that any state that can be locally projected\nonto some rank three NPT state is distillable. This\nopens up new methods of distilling quantum states with more\ncomplex structure. We will illustrate this below by distilling the rank\nfour NPT states. One may also find other ways to distill\nentangled states based on the theorem. For example, tensor\nproducts of rank three entangled states are certainly entangled as\nwell.\n\nOn the other hand, the analytical calculation of distillable\nentanglement is also an important issue in quantum information\ntheory. The problem is very difficult and there have been some\noptimal bounds on distillable entanglement \\cite{Rains,Devetak}.\nIn particular, the bound is saturated if we can find a way to distill\nthe state and obtain the same amount of pure entanglement as the\nbound. In this case we get the analytical value of the distillable\nentanglement. As our method of distilling rank three NPT states makes it\npossible to check whether the bound is saturated for them, we indeed\nprovide a new way to calculate the\ndistillable entanglement.\n\nThird, our result is also independent of the expected fact that\nthere exist NPT Werner states $\\rho_w$ which are not distillable. We do not know whether an\nNPT state $\\rho$ is distillable, even if it can be converted into\nsome Werner state which is conjectured to be not distillable under\nLOCC. One can easily see this by locally mapping some rank\nthree NPT state into $\\rho_w$, while the latter is expected to be\nnot distillable. Conversely, it is difficult to convert the Werner\nstate into the state $\\sigma_{AB}$, so we still do not know\nwhether the latter is distillable. To see this, recall the Werner\nstate in an $N\\times N$ space \\cite{Werner}\n\\begin{eqnarray}\n\\rho_w&=&(a+b)\\sum\\nolimits^{N-1}_{i,j=0}|ij\\rangle\\langle\nij|\\nonumber\\\\\n&-&2b\\sum\\nolimits^{N-1}_{i<j}|\\psi^-_{ij}\\rangle\\langle\\psi^-_{ij}|\\,,\n\\end{eqnarray}\nwhere $|\\psi^-_{ij}\\rangle=\\frac{1}{\\sqrt2}(|ij\\rangle-|ji\\rangle)$ and\n$a>0,b<0$ are two parameters satisfying $a+b\\geq0$. The most\ngeneral local transformation on a quantum state $\\rho$ has the\nform $\\Lambda(\\rho)=\\sum\\nolimits_i A_i\\otimes B_i\\rho\nA^{\\dag}_i\\otimes B^{\\dag}_i$ \\cite{Vedral}. Because the resulting\nstate is entangled, there must be at least one pair of Kraus\noperators $A_i,B_i$ each having rank at least two. In\nthis case, the state $\\Lambda(\\rho)$ has rank no less\nthan four when $a+b>0$, which means a rank three NPT state cannot\nbe output by this local channel. 
The only exception happens when\n$a+b=0$, but it is difficult to judge whether the state\n$\\Lambda(\\rho)$ is of rank three and entangled.\n\nLet us investigate further the problem of distilling rank four\nstates by using the theorem in this paper. Different from the case\nof rank three state, it is well-known that there indeed exist PPT\nBE states of rank four even in the $3\\times3$ space. It is easy to\nshow that the NPT BE states of rank four possibly exist only in\nthree kinds of spaces, $4\\times4,3\\times4,3\\times3$ in terms of\nlemma 1. One will meet lots of difficulties when applying the\ntechnique in this paper to distill the rank four NPT state $\\rho$,\ne.g., the resulting state from $\\rho$ by projection can be\n$3\\times3$ and it may be PPT BE. Besides, the Peres-Horodecki\ncriterion is no more a sufficient condition for the separability\nof state in $2\\times4$ space, etc. Nevertheless, we still can\nobtain some useful results on this problem when the target state\nhas a special form.\n\n\\textit{Lemma 3}. For a rank four NPT state in $4\\times4$ or\n$3\\times4$ space, it is distillable when there is a product state\nin its range.\n\n\\textit{Proof.} By employing similar deduction for the state\n$\\sigma_{AB}$, only the state having the following form does not\nviolate the reduction criterion\n\\begin{equation}\n\\rho_{AB}=\\frac14\\sum\\nolimits^3_{i=0}|\\psi_i\\rangle\\langle\\psi_i|,\\rho_B=\\frac14I,\n\\end{equation}\nwhere the four eigenvectors satisfy\n$\\langle\\psi_i|\\psi_j\\rangle=\\delta_{ij}$. Up to the local unitary\ntransformations we have $|\\psi_0\\rangle=|00\\rangle$. Next, we\nproject the state $\\rho_{AB}$ by the projector\n$I_A\\otimes(|1\\rangle\\langle1|+|2\\rangle\\langle2|+|3\\rangle\\langle3|)_B$\nand obtain the NPT state\n$\\rho^{\\prime}_{AB}=\\frac13\\sum\\nolimits^3_{i=1}|\\psi_i\\rangle\\langle\\psi_i|$\nin $2\\times3$, or $3\\times3$, or $4\\times3$ space. By means of the\nBBPSSW and Horodeckis' protocol, our theorem and the reduction\ncriterion, respectively, the state $\\rho^{\\prime}_{AB}$ and hence\n$\\rho_{AB}$ is always distillable. \\hspace*{\\fill}$\\blacksquare$\n\nSo we have generalized lemma 1 to the case of rank four NPT\nstates. Moreover, we hope that it always holds for the NPT states\nwhose rank equal to its maximal dimension of subsystems. However,\nit does not hold when the rank of a state is larger, e.g, the PPT\nBE state in $3\\times3$ space constructed in \\cite{Horodecki4}\ncontains infinitely many product states in its range, but its rank\nequals eight. It is also unclear that whether the rank four NPT\nstates $\\rho$ in this space are distillable. Solving this problem\nis more difficult since we cannot rely on the reduction criterion.\nHowever, $\\rho$ is distillable when we can project it onto a rank\nthree NPT state in terms of our theorem.\n\nFor example, the following $3\\times3$ rank four NPT state is\ndistillable\n\\begin{eqnarray}\n\\rho_{AB}&=&\\lambda_0|00\\rangle\\langle00|+\\lambda_1|01\\rangle\\langle01|+\n\\lambda_2|\\psi_2\\rangle\\langle\\psi_2|+\\lambda_3|\\psi_3\\rangle\\langle\\psi_3|,\\nonumber\\\\\n|\\psi_2\\rangle&=&\\sum\\nolimits^{2}_{i,j=0}c_{ij}|ij\\rangle,\\nonumber\\\\\n|\\psi_3\\rangle&=&\\sum\\nolimits^{2}_{i,j=0}d_{ij}|ij\\rangle,\\lambda_0,\\lambda_1,\\lambda_2,\\lambda_3>0.\n\\end{eqnarray}\nTo prove it, we project the state $\\rho_{AB}$ by the projector\n$(|1\\rangle\\langle1|+|2\\rangle\\langle2|)_A\\otimes I_B$. 
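As a purely illustrative aside, the two elementary operations used in this
argument, a local projection and the NPT test via the partial transpose, can be
checked numerically. In the following minimal sketch the weights $\\lambda_i$ and the
coefficients of $|\\psi_2\\rangle$ and $|\\psi_3\\rangle$ are an arbitrary sample,
not the ones singled out by the proof:
\\begin{verbatim}
# Sketch: apply (|1><1|+|2><2|)_A (x) I_B and test NPT via the partial transpose.
import numpy as np

d = 3
def ket(*amps):                              # |ij> amplitudes -> normalized vector
    v = np.zeros(d * d, dtype=complex)
    for (i, j), a in amps:
        v[i * d + j] = a
    return v / np.linalg.norm(v)

vecs = [ket(((0, 0), 1)),                    # |00>
        ket(((0, 1), 1)),                    # |01>
        ket(((1, 0), 1), ((2, 1), 1)),       # sample |psi_2>
        ket(((1, 1), 1), ((2, 2), 1))]       # sample |psi_3>
rho = sum(0.25 * np.outer(v, v.conj()) for v in vecs)

P = np.kron(np.diag([0., 1., 1.]), np.eye(d))    # (|1><1| + |2><2|)_A (x) I_B
tau = P @ rho @ P
tau /= np.trace(tau).real

def pt_A(m):                                 # partial transpose on subsystem A
    return m.reshape(d, d, d, d).transpose(2, 1, 0, 3).reshape(d * d, d * d)

print(np.linalg.eigvalsh(pt_A(tau)).min())   # negative => NPT
\\end{verbatim}
For this particular sample the smallest eigenvalue of the partial transpose is
negative, so the projected state is NPT and hence distillable; the argument that
follows handles the case in which the projected state is instead separable.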
When the\nresulting state\n$\\rho^{1}_{AB}=\\lambda_2|\\psi^1_2\\rangle\\langle\\psi^1_2|+\\lambda_3|\\psi^1_3\\rangle\\langle\\psi^1_3|$\nis entangled, it is also distillable. On the other hand when\n$\\rho^{1}_{AB}$ is separable, we can write it as the sum of two\nproduct states since it is in a space not larger than $2\\times3.$\nBesides, the rank of $\\rho^{1}_{AB}$ must be two because of\n$r(\\rho_A)=3$. By performing some ILOs on the state $\\rho_{AB}$\nand linear combination of $|\\psi_2\\rangle$ and $|\\psi_3\\rangle$,\nwe can convert them into\n$|\\psi^2_2\\rangle=|0\\rangle|\\phi_0\\rangle+|1\\rangle|\\phi_1\\rangle$\nand\n$|\\psi^2_3\\rangle=|0\\rangle|\\omega_0\\rangle+|2\\rangle|\\omega_2\\rangle$,\nand keep the other two terms $|00\\rangle$ and $|01\\rangle$\nunchanged.\n\nWhen either of the states $|\\psi^2_2\\rangle$ and\n$|\\psi^2_3\\rangle$ is of product form, we easily project the state\n$\\rho^2_{AB}$ onto a $2\\times3$ subspace, the resulting state is\nstill entangled and distillable. On the other hand when both the\nstates $|\\psi^2_2\\rangle$ and $|\\psi^2_3\\rangle$ are entangled, we\nproject the state $\\rho^2_{AB}$ by the projector\n$[|0\\rangle(\\langle0|+a\\langle1|)+|2\\rangle\\langle2|]_A\\otimes\nI_B$. The obtained state $\\rho^3_{AB}$ is $2\\times3$ and its rank\nis four by choosing suitable parameter $a$. This state is\nseparable iff it has the decomposition\n$\\rho^3_{AB}=|\\psi\\rangle_A\\langle\\psi|\\otimes|\\omega_2\\rangle_B\\langle\\omega_2|\n+|0\\rangle_A\\langle0|\\otimes\\rho^3_B$ with $r(\\rho^3_B)=3.$\nHowever it is impossible, since it requires a $4\\times4$\ncoefficient unitary matrix $[a_{ij}]$ in which $a_{i3}=0,i=1,2,3,$\nand $a_{0i},i=0,1,2$ cannot be zero simultaneously. Hence the\nstate $\\rho^3_{AB}$ is entangled and thus distillable. This also\ncompletes the proof showing that the state $\\rho_{AB}$ in Eq. (15)\nis distillable.\n\n\nAs above we have given several families of states that can be\ndistilled by means of the fact that the rank three NPT states are\ndistillable. The main difficulty in entanglement distillation is\nthe great amount of parameters that cannot be removed during the\nfiltering process. For example, it is unknown that whether the\nrank four NPT states are distillable. All in all, more efforts are\nrequired to distill other classes of rank four NPT states.\n\n\n\n\n\n\n\\section{conclusions}\n\nWe have proved that the bipartite rank three NPT states and some\nfamilies of rank four NPT states are distillable. So they are\nindeed available resource for quantum-information tasks. An open\nproblem is that whether all rank four NPT states are distillable.\nOur result also gives an insight into the relationship between the\nlow rank states and the Werner states.\n\n\n\nThe work was partly supported by the NNSF of China Grant\nNo.90503009, No.10775116, and 973 Program Grant No.2005CB724508.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzancl b/data_all_eng_slimpj/shuffled/split2/finalzzancl new file mode 100644 index 0000000000000000000000000000000000000000..2b24e422a01a52279a5f8ad9e4de9ac125e29789 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzancl @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe idea of cosmological inflation is capable to address some\nproblems of the standard big bang theory, such as the horizon,\nflatness and monopole problems. 
Also, it can provide a reliable\nmechanism for the generation of the density perturbations responsible for\nstructure formation, and therefore for the temperature anisotropies in the Cosmic\nMicrowave Background (CMB) spectrum [1-8]. There is a wide variety\nof cosmological inflation models, whose acceptability is judged by comparing their\npredictions with observations (see for instance [9]).\nThe simplest inflationary model is a single scalar field scenario, in\nwhich inflation is driven by a scalar field called the inflaton, and which\npredicts adiabatic, Gaussian and scale-invariant fluctuations [10].\nRecently, however, observational data have revealed some degree of\nscale dependence in the primordial density perturbations. Also, the\nPlanck team has obtained constraints on the primordial\nnon-Gaussianity [11-13]. Therefore, it seems that extended models of\ninflation which can explain or address this scale dependence and\nnon-Gaussianity of the perturbations are more desirable. There are many\nstudies in this direction; see, e.g., Refs.\n[14-19] and references therein. Among various inflationary models,\nthe non-minimal models have attracted much attention. A non-minimal\ncoupling of the inflaton field to the gravitational sector is\ninevitable once renormalizability of the corresponding field\ntheory is required (see for instance [20]). Cosmological inflation driven by a\nscalar field non-minimally coupled to gravity is studied, for\ninstance, in Refs. [21-28]. There were some issues regarding unitarity\nviolation with non-minimal coupling (see for instance Refs.\n[29-31]), which have led researchers to consider couplings\nof the derivatives of the scalar field to geometry [32]. In fact,\nit has been shown that a model with non-minimal coupling between the\nkinetic terms of the inflaton (derivatives of the scalar field) and\nthe Einstein tensor preserves the unitarity bound during inflation\n[33]. Also, the presence of a non-minimal derivative coupling is a\npowerful tool to increase the friction of an inflaton rolling down\nits own potential [33]. Some authors have considered the model with\nthis coupling term and have studied the early-time accelerating\nexpansion of the universe as well as the late-time dynamics [34-36].\nIn this paper we extend non-minimal inflation models to the case\nin which a canonical inflaton field is coupled non-minimally to the\ngravitational sector while, at the same time, the derivatives of the\nfield are also coupled to the background geometry (the Einstein\ntensor). This model provides a more realistic framework for treating\ncosmological inflation. We study in detail the\ncosmological perturbations and the possible non-Gaussianity of their\ndistribution in this non-minimal inflation.\nWe expand the action of the model up to the third order and compare\nour results with observational data from Planck2015 to assess the\nviability of this extended model. In this manner we are able to\nconstrain the parameter space of the model against\nobservation.\n\n\n\\section{Field Equations}\n\nWe consider an inflationary model where both a canonical scalar field and its\nderivatives are coupled non-minimally to gravity. 
The four-dimensional action for\nthis model is given by the following expression:\n\n\\begin{equation}\nS=\\frac{1}{2}\\int\nd^{4}x\\sqrt{-g}\\Bigg[M_{p}^{2}f(\\phi)R+\\frac{1}{\\widetilde{M}^{2}}G_{\\mu\\nu}\\partial^{\\mu}\\phi\\partial^{\\nu}\\phi-2V(\\phi)\\Bigg]\\,,\n\\end{equation}\nwhere $M_{p}$ is a reduced planck mass, $\\phi$ is a canonical scalar field,\n$f(\\phi)$ is a general function of the scalar field and\n$\\widetilde{M}$ is a mass parameter. The energy-momentum tensor is obtained from action (1) as follows\n\n\\vspace{0.5cm}\n\n$T_{\\mu\\nu}=\\frac{1}{2\\widetilde{M}^{2}}\\bigg[\\nabla_{\\mu}\\nabla_{\\nu}(\\nabla^{\\alpha}\\phi\\nabla_{\\alpha}\\phi)\n-g_{\\mu\\nu}\\Box(\\nabla^{\\alpha}\\phi\\nabla_{\\alpha}\\phi)\n+g_{\\mu\\nu}g^{\\alpha\\rho}g^{\\beta\\lambda}\\nabla_{\\rho}\\nabla_{\\lambda}(\\nabla_{\\alpha}\\phi\\nabla_{\\beta}\\phi)$\n\\begin{equation}\n+\\Box(\\nabla_{\\mu}\\phi\n\\nabla_{\\nu}\\phi)\\bigg]-\\frac{g^{\\alpha\\beta}}{\\widetilde{M}^{2}}\n\\nabla_{\\beta}\\nabla_{\\mu}(\\nabla_{\\alpha}\\phi\n\\nabla_{\\nu}\\phi)-M_{p}^{2}\\nabla_{\\mu}\\nabla_{\\nu}f(\\phi)+M_{p}^{2}g_{\\mu\\nu}\\Box\nf(\\phi)+g_{\\mu\\nu}V(\\phi)\\,.\n\\end{equation}\n\nOn the other hand, variation of the action (1) with respect to the\nscalar field gives the scalar field equation of motion as\n\n\\begin{equation}\n\\frac{1}{2}M_{p}^{2}Rf'(\\phi)-\\frac{1}{\\widetilde{M}^{2}}G^{\\mu\\nu}\\nabla_{\\mu}\\nabla_{\\nu}\\phi-V'(\\phi)=0\\,,\n\\end{equation}\nwhere a prime denotes derivative with respect to the scalar field. We consider a spatially flat\nFriedmann-Robertson-Walker (FRW) line element as\n\n\\begin{equation}\nds^{2}=-dt^{2}+a^{2}(t)\\delta_{ij}dx^{i}dx^{j}\\,,\n\\end{equation}\nwhere $a(t)$ is scale factor. Now, let's assume that $f(\\phi)=\\frac{1}{2}\\phi^{2}$. In this\nframework, $T_{\\mu\\nu}$ leads to the following energy density and\npressure for this model respectively\n\n\\begin{equation}\n\\rho=\\frac{9H^{2}}{2\\widetilde{M}^{2}}\\dot{\\phi}^{2}-\\frac{3}{2}M_{p}^{2}H\\phi(2\\dot{\\phi}+H\\phi)+V(\\phi)\n\\end{equation}\n\n$$p=-\\frac{3}{2}\\frac{H^{2}\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}-\\frac{\\dot{\\phi}^{2}\\dot{H}}{\\widetilde{M}^{2}}\n-\\frac{2H}{\\widetilde{M}^{2}}\\dot{\\phi}\\ddot{\\phi}$$\n\\begin{equation}\n+\\frac{1}{2}M_{p}^{2}\\Bigg[2\\dot{H}\\phi^{2}+3H^{2}\\phi^{2}\n+4H\\phi\\dot{\\phi}+2\\phi\\ddot{\\phi}+2\\dot{\\phi}\\Bigg]-V(\\phi)\\,,\n\\end{equation}\nwhere a dot refers to derivative with respect to the cosmic time. 
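For orientation (this is only a rearrangement of the expressions above, not an
additional assumption), inserting the energy density $\\rho$ into the Friedmann
constraint $3M_{p}^{2}H^{2}=\\rho$ of Eq. (7) below and collecting the $H^{2}$
terms gives
$$H^{2}\\Big[3M_{p}^{2}\\Big(1+\\frac{\\phi^{2}}{2}\\Big)-\\frac{9}{2\\widetilde{M}^{2}}\\dot{\\phi}^{2}\\Big]+3M_{p}^{2}H\\phi\\dot{\\phi}=V(\\phi)\\,,$$
so that, once the slow-roll conditions suppress the $\\dot{\\phi}$ terms,
$H^{2}\\simeq V(\\phi)/[3M_{p}^{2}(1+\\phi^{2}/2)]$; this is just Eq. (13) below
solved for $H^{2}$.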
The equations of motion following from action (1) are\n\n\\begin{equation}\nH^{2}=\\frac{1}{3M_{p}^{2}}\\Bigg[-\\frac{3}{2}M_{p}^{2}H\\phi(2\\dot{\\phi}+H\\phi)+\\frac{9H^{2}}{2\\widetilde{M}^{2}}\\dot{\\phi}^{2}+V(\\phi)\\Bigg]\\,,\n\\end{equation}\n\n$$\\dot{H}=-\\frac{1}{2M_{p}^{2}}\\Bigg[\\dot{\\phi}^{2}\\bigg(\\frac{3H^{2}}{\\widetilde{M}^{2}}-\\frac{\\dot{H}}{\\widetilde{M}^{2}}\\bigg)\n-\\frac{2H}{\\widetilde{M}^{2}}\\dot{\\phi}\\ddot{\\phi}-\\frac{3}{2}M_{p}^{2}H\\phi(2\\dot{\\phi}+H\\phi)$$\n\\begin{equation}+\\frac{1}{2}M_{p}^{2}\\bigg((2\\dot{H}+3H^{2})\\phi^{2}+4H\\phi\\dot{\\phi}+2\\phi\\ddot{\\phi}+2\\dot{\\phi}^{2}\\bigg)\\Bigg]\n\\end{equation}\n\n\\begin{equation}\n-3M_{p}^{2}(2H^{2}+\\dot{H})\\phi+\\frac{3H^{2}}{\\widetilde{M}^{2}}\\ddot{\\phi}+3H\\bigg(\\frac{3H^{2}}{\\widetilde{M}^{2}}\n+\\frac{2\\dot{H}}{\\widetilde{M}^{2}}\\bigg)\\dot{\\phi}+V'(\\phi)=0\\,.\n\\end{equation}\n\nThe slow-roll parameters in this model are defined as\n\\begin{equation}\n\\epsilon\\equiv-\\frac{\\dot{H}}{H^{2}}\\,\\,\\,\\,,\\,\\,\\,\\,\\eta\\equiv-\\frac{1}{H}\\frac{\\ddot{H}}{\\dot{H}}\\,.\n\\end{equation}\nTo have inflationary phase, $\\epsilon$ and $\\eta$ should satisfy slow-roll conditions($\\epsilon\\ll1$ , $\\eta\\ll1$). In our setup, we\nfind the following result\n\\begin{equation}\n\\epsilon=\\bigg[1+\\frac{\\phi^{2}}{2}-\\frac{\\dot{\\phi^{2}}}{2\\widetilde{M}^{2}M_{p}^{2}}\\bigg]^{-1}\n\\bigg[\\frac{3\\dot{\\phi}^{2}}{2\\widetilde{M}^{2}M_{p}^{2}}+\\frac{\\phi\\dot{\\phi}}{2H}\n+\\frac{\\ddot{\\phi}}{H\\dot{\\phi}}\\bigg(\\frac{\\phi\\dot{\\phi}}{2H}\n-\\frac{\\dot{\\phi^{2}}}{\\widetilde{M}^{2}M_{p}^{2}}\\bigg)\\bigg]\n\\end{equation}\nand\n\\begin{equation}\n\\eta=-2\\epsilon-\\frac{\\dot{\\epsilon}}{H\\epsilon}\\,.\n\\end{equation}\n\nWithin the slow-roll approximation, equations (7),(8) and (9) can be\nwritten respectively as\n\\begin{equation}\nH^{2}\\simeq\\frac{1}{3M_{p}^{2}}\\Bigg[-\\frac{3}{2}M_{p}^{2}H^{2}\\phi^{2}+V(\\phi)\\Bigg]\\,,\n\\end{equation}\n\n\\begin{equation}\n\\dot{H}\\simeq-\\frac{1}{2M_{p}^{2}}\\Bigg[\\frac{3H^{2}\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}-M_{p}^{2}H\\phi\\dot{\\phi}+M_{p}^{2}\\dot{H}\\phi^{2}\\Bigg]\\,,\n\\end{equation}\nand\n\\begin{equation}\n-6M_{p}^{2}H^{2}\\phi+\\frac{9H^{3}\\dot{\\phi}}{\\widetilde{M}^{2}}+V'(\\phi)\\simeq0\\,.\n\\end{equation}\nThe number of e-folds during inflation is defined as\n\\begin{equation}\n{\\cal N}=\\int_{t_{hc}}^{t_{e}}H\\,dt\\,,\n\\end{equation}\nwhere $t_{hc}$ and $t_{e}$ are time of horizon crossing and end of\ninflation respectively. The number of e-folds in the slow-roll approximation in our setup can be\nexpressed as follows\n\n\\begin{equation}\n{\\cal N}\\simeq\\int_{\\phi_{hc}}^{\\phi_{e}}\\frac{V(\\phi)d\\phi}{M_{p}^{2}\\bigg(1\n+\\frac{1}{2}\\phi^{2}\\bigg)\\Bigg[2M_{p}^{2}\\widetilde{M}^{2}\\phi\n-M_{p}^{2}\\widetilde{M}^{2}\\frac{V'(\\phi)}{V(\\phi)}\\bigg(1+\\frac{1}{2}\\phi^{2}\\bigg)\\Bigg]}\\,.\n\\end{equation}\nAfter providing the basic setup of the model, for testing cosmological viability\nof this extended model we treat the perturbations in comparison with observation.\n\n\n\\section{Second-Order Action: Linear Perturbations}\n\nIn this section, we study linear perturbations around the\nhomogeneous background solution. To this end, the first step is\nexpanding the action (1) up to the second order in small fluctuations. 
It\nis convenient to work in the ADM formalism given by [37]\n\\begin{equation}\nds^{2}=-N^{2}dt^{2}+h_{ij}(N^{i}dt+dx^{i})(N^{j}dt+dx^{j})\\,,\n\\end{equation}\nwhere $N^{i}$ is the shift vector and $N$ is the lapse function.\nWe expand the lapse function and shift vector to $N=1+2\\Phi$ and\n$N^{i}=\\delta^{ij}\\partial_{j}\\Upsilon$ respectively, where $\\Phi$\nand $\\Upsilon$ are three-scalars. Also,\n$h_{ij}=a^{2}(t)[(1+2\\Psi)\\delta_{ij}+\\gamma_{ij}]$, where $\\Psi$ is\nspatial curvature perturbation and $\\gamma_{ij}$ is shear\nthree-tensor which is traceless and symmetric. In the rest of our study, we\nchoose $\\delta\\Phi=0$ and $\\gamma_{ij}=0$. By taking into account\nthe scalar perturbations in linear-order, the metric (18) is written\nas (see for instance [38])\n\\begin{equation}\nds^{2}=-(1+2\\Phi)dt^{2}+2\\partial_{i}\\Upsilon\ndtdx^{i}+a^{2}(t)(1+2\\Psi)\\delta_{ij}dx^{i}dx^{j}\\,.\n\\end{equation}\n\nNow by replacing metric (19) in action (1) and expanding the action up to the\nsecond-order in perturbations, we find (see for instance [39,40])\n\n$$S^{(2)}=\\int dt dx^{3}a^{3}\\Bigg[-\\frac{3}{2}(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\dot{\\Psi}^{2}\n+\\frac{1}{a^{2}}((M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\dot{\\Psi}$$\n$$-(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\Phi)\\partial^{2}\\Upsilon\n-\\frac{1}{a^{2}}(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\Phi\\partial^{2}\\Psi$$\n$$+3(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\Phi\\dot{\\Psi}\n+3H(-\\frac{1}{2}M_{p}^{2}H\\phi^{2}-M_{p}^{2}\\phi\\dot{\\phi}$$\n\\begin{equation}\n+\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\Phi^{2}\n+\\frac{1}{2a^{2}}(M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})(\\partial\\Psi)^{2}\\Bigg]\\,.\n\\end{equation}\n\nBy variation of action (20) with respect to $N$ and $N^{i}$ we find\n\\begin{equation}\n\\Phi=\\frac{M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}}{M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}}\\dot{\\Psi}\\,,\n\\end{equation}\n\n$$\\partial^{2}\\Upsilon=\\frac{2a^{2}}{3}\\frac{(-\\frac{9}{2}M_{p}^{2}H^{2}\\phi^{2}-9M_{p}^{2}H\\phi\\dot{\\phi}\n+\\frac{27H^{2}\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})}{(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})}$$\n\\begin{equation}\n+3\\dot{\\Psi}a^{2}-\\frac{M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}}{M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}}\\dot{\\Psi}\\,. 
\\end{equation}\nFinally, the second-order action can be rewritten as follows\n\\begin{equation}\nS^{(2)}=\\int dt\ndx^{3}a^{3}\\vartheta_{s}\\bigg[\\dot{\\Psi}^{2}-\\frac{c_{s}^{2}}{a^{2}}(\\partial\\Psi)^{2}\\bigg]\n\\end{equation}\nwhere by definition\n\\begin{equation}\n\\vartheta_{s}\\equiv6\\frac{(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})^{2}(-\\frac{1}{2}M_{p}^{2}H^{2}\\phi^{2}-M_{p}^{2}H\\phi\\dot{\\phi}+\\frac{3}{\\widetilde{M}^{2}}\nH^{2}\\dot{\\phi}^{2})}{(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3}{\\widetilde{M}^{2}}H\\dot{\\phi}^{2})^{2}}+\n3(\\frac{1}{2}M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{2\\widetilde{M}^{2}})\n\\end{equation}\nand\n$$c_{s}^{2}\\equiv\\frac{3}{2}\\bigg\\{(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})^{2}\n(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})H$$\n$$-(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})^{2}(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})$$\n$$+4(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})(M_{p}^{2}\\phi\\dot{\\phi}-\\frac{\\dot{\\phi}\\ddot{\\phi}}{\\widetilde{M}^{2}})\n(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})$$\n$$-(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})^{2}(M_{p}^{2}\\dot{H}\\phi^{2}+2M_{p}^{2}H\\phi\\dot{\\phi}+M_{p}^{2}\\dot{\\phi}^{2}+M_{p}^{2}\\phi\\ddot{\\phi}\n-\\frac{3\\dot{H}\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}-\\frac{6}{\\widetilde{M}^{2}}H\\dot{\\phi}\\ddot{\\phi})\\bigg\\}$$\n$$\\bigg\\{9[\\frac{1}{2}M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{2\\widetilde{M}^{2}}][4(\\frac{1}{2}M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{2\\widetilde{M}^{2}})\n(-\\frac{1}{2}M_{p}^{2}H^{2}\\phi^{2}-M_{p}^{2}H\\phi\\dot{\\phi}+\\frac{3}{\\widetilde{M}^{2}}H^{2}\\dot{\\phi}^{2})$$\n\\begin{equation}\n+(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})^{2}]\\bigg\\}^{-1}\\,.\n\\end{equation}\n\nIn order to obtain the quantum perturbation $\\Psi$, we find the\nequation of motion of the curvature perturbation by varying the action (23),\nwhich reads\n\\begin{equation}\n\\ddot{\\Psi}+\\bigg(3H+\\frac{\\dot{\\vartheta_{s}}}{\\vartheta_{s}}\\bigg)\\dot{\\Psi}+\\frac{c_{s}^{2}k^{2}}{a^{2}}\\Psi=0\\,.\n\\end{equation}\nBy solving the above equation up to the lowest order in the slow-roll approximation,\nwe find\n\\begin{equation}\n\\Psi=\\frac{iH\\exp(-ic_{s}k\\tau)}{2c_{s}^{\\frac{3}{2}}\\sqrt{k^{3}}\\sqrt{\\vartheta_{s}}}(1+ic_{s}k\\tau)\\,.\n\\end{equation}\nBy using the two-point correlation function we can study the power\nspectrum of the curvature perturbation in this setup. We obtain the two-point correlation\nfunction by evaluating the vacuum expectation value at the end of inflation,\nand define the power spectrum $P_{s}$ as\n\\begin{equation}\n\\langle0|\\Psi(0,\\textbf{k}_{1})\\Psi(0,\\textbf{k}_{2})|0\\rangle=\\frac{2\\pi^{2}}{k^{3}}P_{s}(2\\pi)^{3}\\delta^{3}(\\textbf{k}_{1}+\\textbf{k}_{2})\\,,\n\\end{equation}\nwhere\n\\begin{equation}\nP_{s}=\\frac{H^{2}}{8\\pi^{2}\\vartheta_{s} c_{s}^{3}}\\,.\n\\end{equation}\n\nThe spectral index of scalar perturbations is given by (see Refs. 
[41-43] for more details on the cosmological perturbations in generalized gravity theories and also inflationary spectral index in these theories.)\n\n\\begin{equation}\nn_{s}-1=\\frac{d\\ln P_{s}}{d\\ln\nk}|_{c_{s}k=aH}=-2\\epsilon-\\delta_{F}-\\eta_{s}-S\n\\end{equation}\nwhere by definition\n\\begin{equation}\n\\delta_{F}=\\frac{\\dot{f}}{H(1+f)}\\,\\,\\,\\,,\\,\\,\\,\\,\\eta_{s}=\\frac{\\dot{\\epsilon_{s}}}{H\\epsilon_{s}}\\,\\,\\,\\,,\\,\\,\\,\\,S=\\frac{\\dot{c_{s}}}{Hc_{s}}\n\\end{equation}\nalso\n\\begin{equation}\n\\epsilon_{s}=\\frac{\\vartheta_{s}c_{s}^{2}}{M_{pl}^{2}(1+f)}.\n\\end{equation}\n\n\nwe obtain finally\n\n\\begin{equation}\nn_{s}-1=-2\\epsilon-\\frac{1}{H}\\frac{d\\ln c_{s}}{dt}\n-\\frac{1}{H}\\frac{d\\ln[2H(1+\\frac{\\phi^{2}}{2})\\epsilon+\\phi\\dot{\\phi}]}{dt}\\,,\n\\end{equation}\nwhich shows the scale dependence of perturbations due to deviation of $n_{s}$\nfrom $1$.\n\nNow we study tensor perturbations in this setup. To this end, we write the metric as follows\n\\begin{equation}\nds^{2}=-dt^{2}+a(t)^{2}(\\delta_{ij}+T_{ij})dx^{i}dx^{j}\\,,\n\\end{equation}\nwhere $T_{ij}$ is a spatial shear 3-tensor which is transverse and\ntraceless. It is convenient to write $T_{ij}$ in terms of two\npolarization modes, as follows\n\\begin{equation}\nT_{ij}=T_{+}e^{+}_{ij}+T^{\\times}e^{\\times}_{ij}\\,,\n\\end{equation}\nwhere $e^{+}_{ij}$ and $e^{\\times}_{ij}$ are the polarization tensors. In this case the second order action for the tensor mode can de\nwritten as\n\\begin{equation}\nS_{T}=\\int dt dx^{3}\na^{3}\\vartheta_{T}\\bigg[\\dot{T}_{(+,\\times)}^{2}-\\frac{c_{T}^{2}}{a^{2}}(\\partial\nT_{(+,\\times)})^{2}\\bigg]\\,,\n\\end{equation}\nwhere by definition\n\\begin{equation}\n\\vartheta_{T}\\equiv\\frac{1}{8}(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\n\\end{equation}\nand\n\\begin{equation}\nc_{T}^{2}\\equiv\\frac{\\widetilde{M}^{2}M_{p}^{2}\\phi^{2}+\\dot{\\phi}^{2}}{\\widetilde{M}^{2}M_{p}^{2}\\phi^{2}-\\dot{\\phi}^{2}}\\,.\n\\end{equation}\n\nNow, the amplitude of tensor perturbations is given by\n\\begin{equation}\nP_{T}=\\frac{H^{2}}{2\\pi^{2}\\vartheta_{T}c_{T}^{3}}\\,,\n\\end{equation}\nwhere we have defined the tensor spectral index as\n\\begin{equation}\nn_{T}\\equiv\\frac{d\\ln P_{T}}{d\\ln\nk}|_{c_{T}k=aH}\\,=-2\\epsilon-\\delta_{F}.\n\\end{equation}\nBy using above equations we get finally\n\\begin{equation}\nn_{T}=-2\\epsilon-\\frac{\\phi\\dot{\\phi}}{H(1+\\frac{\\phi^{2}}{2})}\\,.\n\\end{equation}\n\nThe tensor-to-scalar ratio as an important observational quantity in our setup is given by\n\\begin{equation}\nr=\\frac{P_{T}}{P_{s}}=16c_{s}\\bigg(\\epsilon+\\frac{\\phi\\dot{\\phi}}{2H(1+\\frac{\\phi^{2}}{2})}+O(\\epsilon^{2})\\bigg)\\simeq-8c_{s}n_{T}\n\\end{equation}\nwhich yields the standard consistency relation.\n\n\\section{Third-Order Action: Non-Gaussianity}\n\nSince a two-point correlation function of the scalar perturbations\ngives no information about possible non-Gaussian feature of distribution, we study\nhigher-order correlation functions. A three-point correlation function is capable to give the required information. For this\npurpose, we should expand action (1) up to the third order in small\nfluctuations around the homogeneous background solutions. 
In this respect we obtain\n\n\\vspace{0.5cm} $S^{(3)}=\\int\ndtdx^{3}a^{3}\\bigg\\{3\\Phi^{3}[M_{p}^{2}H^{2}(1+\\frac{\\phi^{2}}{2})+M_{p}^{2}H\\phi\\dot{\\phi}-\\frac{5}{\\widetilde{M^{2}}}H^{2}\\dot{\\phi^{2}}]\n+\\Phi^{2}[9\\Psi(-\\frac{1}{2}M_{p}^{2}\\phi^{2}-M_{p}^{2}H\\phi\\dot{\\phi}$\n\\vspace{0.5cm} $+\\frac{3}{\\widetilde{M}^{}}H^{2}\\dot{\\phi}^{2})\n+6\\dot{\\Psi}(-M_{p}^{2}H(1+\\frac{\\phi^{2}}{2})-\\frac{1}{2}M_{p}^{2}\\phi\\dot{\\phi}\\frac{3}{\\widetilde{M}^{2}}H\\dot{\\phi}^{2})\n-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}a^{2}}\\partial^{2}\\Psi\n-\\frac{2}{a^{2}}\\partial^{2}\\Upsilon(-M_{p}^{2}H$\\vspace{0.05cm}\n$(1+\\frac{\\phi^{2}}{2})-\\frac{1}{2}M_{p}^{2}\\phi\\dot{\\phi}\\frac{3}{\\widetilde{M}^{2}}H\\dot{\\phi}^{2})]\n+\\Phi[\\frac{1}{a^{2}}(-M_{p}^{2}H\\phi^{2}-M_{p}^{2}\\phi\\dot{\\phi}+\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\partial_{i}\\Psi\\partial_{i}\\Upsilon\n-9(-M_{p}^{2}H\\phi^{2}-M_{p}^{2}\\phi\\dot{\\phi}+$ \\vspace{0.5cm}\n$\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\dot{\\Psi}\\Psi+\\frac{1}{2a^{4}}(M_{p}^{2}(1+\\frac{\\phi^{2}}{2})+\\frac{3}{2}\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\n(\\partial_{i}\\partial_{j}\\Upsilon\\partial_{i}\\partial_{j}\\Upsilon-\\partial^{2}\\Upsilon\\partial^{2}\\Upsilon)\n+\\frac{1}{a^{2}}(-M_{p}^{2}H\\phi^{2}-M_{p}^{2}\\phi\\dot{\\phi}+\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\Psi\\partial^{2}\\Upsilon\n$ \\vspace{0.5cm}\n$+\\frac{4}{2a^{2}}(M_{p}^{2}(1+\\frac{\\phi^{2}}{2})+\\frac{3}{2}\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\n\\dot{\\Psi}\\partial^{2}\\Upsilon+\\frac{1}{a^{2}}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}}{\\widetilde{M}^{2}})\n\\Psi\\partial^{2}\\Psi\n+\\frac{1}{2a^{2}}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}}{\\widetilde{M}^{2}})(\\partial\\Psi)^{2}\n-6(M_{p}^{2}(1+\\frac{\\phi^{2}}{2})+$ \\vspace{0.5cm}\n$\\frac{3}{2}\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\dot{\\Psi}^{2}]+\\frac{1}{2a^{2}}(M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\n\\Psi(\\partial\\Psi)^{2}\n+\\frac{9}{2}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}}{\\widetilde{M}^{2}})\\dot{\\Psi^{2}}\\Psi\n-\\frac{1}{a^{2}}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}}{\\widetilde{M}^{2}})\\dot{\\Psi}\\partial_{i}\\Psi\\partial_{i}\\Upsilon\n-\\frac{1}{a^{2}}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}}{\\widetilde{M}^{2}})$\n\\begin{equation}\n\\dot{\\Psi}\\Psi\\partial^{2}\\Upsilon-\\frac{3}{4a^{4}}\\Psi(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}}{\\widetilde{M}^{2}})\n(\\partial_{i}\\partial_{j}\\Upsilon\\partial_{i}\\partial_{j}\\Upsilon-\\partial^{2}\\Upsilon\\partial^{2}\\Upsilon)\n+\\frac{1}{a^{4}}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}}{\\widetilde{M}^{2}})\\partial_{i}\\Psi\\partial_{i}\\Upsilon\\partial^{2}\\Upsilon\\bigg\\}\n\\end{equation}\n\nWe use Eqs. (21) and (22) for eliminating $\\Phi$ and $\\Upsilon$ in this relation. 
For this end, we introduce the quantity $\\chi$ as follows\n\n\\begin{equation}\n\\Upsilon=\\frac{M_{p}^{2}\\widetilde{M}^{2}\\phi^{2}-\\dot{\\phi}^{2}}{\\widetilde{M}^{2}M_{p}^{2}(H\\phi^{2}+\\phi\\dot{\\phi})-3H\\dot{\\phi}^{2}}\\Psi\n+\\frac{2\\widetilde{M^{2}}a^{2}\\chi}{M_{p}^{2}\\widetilde{M}^{2}\\phi^{2}-\\dot{\\phi}^{2}}\\,,\n\\end{equation}\nwhere\n\n\\begin{equation}\n\\partial^{2}\\chi=\\vartheta_{s}\\dot{\\Psi}\\,.\n\\end{equation}\n\nNow the third order action (43) takes the following form\n\n\\vspace{0.5cm}\n\n$S^{(3)}=\\int dt\\,\ndx^{3}a^{3}\\bigg\\{[-3M_{p}^{2}c_{s}^{-2}\\Psi\\dot{\\Psi^{2}}\n+M_{p}^{2}a^{-2}\\Psi(\\partial\\Psi)^{2}+M_{p}^{2}c_{s}^{-2}H^{-1}\\dot{\\Psi}^{3}]$\n\\begin{equation}\n\\bigg[(1+\\frac{1}{4}\\phi^{2})\\epsilon+\\frac{5}{8}\\frac{\\phi\\dot{\\phi}}{H}\\bigg]\n-2(1+\\frac{1}{4}\\phi^{2})^{-1}(\\frac{5}{8}\\frac{\\phi\\dot{\\phi}}{c_{s}^{2}H})\\dot{\\Psi}\\partial_{i}\\Psi\\partial_{i}\\chi\\bigg\\}\\,.\n\\end{equation}\n\nBy calculating the three-point correlation function we can study\nnon-Gaussianity feature of the primordial perturbations. For the\npresent model, we use the interaction picture in which the\ninteraction Hamiltonian, $H_{int}$, is equal to the Lagrangian third\norder action. The vacuum expectation value of curvature\nperturbations at $\\tau=\\tau_{f}$ is\n\n\\begin{equation}\n\\langle\\Psi(\\textbf{k}_{1})\\Psi(\\textbf{k}_{2})\\Psi(\\textbf{k}_{3})\\rangle=-i\\int_{\\tau_{i}}^{\\tau_{f}}d\\tau\n\\langle0|[\\Psi(\\tau_{f},\\textbf{k}_{1})\\Psi(\\tau_{f},\\textbf{k}_{2})\\Psi(\\tau_{f},\\textbf{k}_{3}),H_{int}(\\tau)]|0\\rangle\\,.\n\\end{equation}\n\nBy solving the above integral in Fourier space, we find\n\\begin{equation}\n\\langle\\Psi(\\textbf{k}_{1})\\Psi(\\textbf{k}_{2})\\Psi(\\textbf{k}_{3})\\rangle=(2\\pi)^{3}\\delta^{3}(\\textbf{k}_{1}+\\textbf{k}_{2}+\\textbf{k}_{3})\nP_{s}^{2}F_{\\Psi}(\\textbf{k}_{1},\\textbf{k}_{2},\\textbf{k}_{3})\\,,\n\\end{equation}\nwhere\n\\begin{equation}\nF_{\\Psi}(\\textbf{k}_{1},\\textbf{k}_{2},\\textbf{k}_{3})=\\frac{(2\\pi)^{2}}{\\prod_{i=1}^{3}k_{i}^{3}}G_{\\Psi}\\,,\n\\end{equation}\n\n\\vspace{0.5cm}\n\n$G_{\\Psi}=\\bigg[\\frac{3}{4}\\bigg(\\frac{2}{K}\\Sigma_{i>j}k_{i}^{2}k_{j}^{2}-\\frac{1}{K^{2}}\\Sigma_{i\\neq\nj}k_{i}^{2}k_{j}^{3}\\bigg)+\\frac{1}{4}\\bigg(\\frac{1}{2}\\Sigma_{i}k_{i}^{3}+\\frac{2}{K}\\Sigma_{i>j}k_{i}^{2}k_{j}^{2}\n-\\frac{1}{K^{2}}\\Sigma_{i\\neq j} k_{i}^{2}k_{j}^{3}\\bigg)$\n\\begin{equation}\n-\\frac{3}{2}\\bigg(\\frac{(k_{1}k_{2}k_{3})^{2}}{K^{3}}\\bigg)\\bigg]\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\,,\n\\end{equation}\n\nand $K=\\sum_{i}k_{i}$. Finally the non-linear parameter $f_{NL}$ is defined as follows\n\n\\begin{equation}\nf_{NL}=\\frac{10}{3}\\frac{G_{\\Psi}}{\\sum_{i=1}^{3}k_{i}}\\,.\n\\end{equation}\n\nHere we study non-Gaussianity in the orthogonal and the equilateral\nconfigurations [44,45]. Firstly we should account $G_{\\Psi}$ in\nthese configurations. To this end, we follow Refs. [46-48] to\nintroduce a shape $\\zeta_{\\ast}^{equi}$ as\n$\\zeta_{\\ast}^{equi}=-\\frac{12}{13}(3\\zeta_{1}-\\zeta_{2})$. In this\nmanner we define the following shape which is orthogonal to\n$\\zeta_{\\ast}^{equi}$\n\\begin{equation}\n\\zeta_{\\ast}^{ortho}=-\\frac{12}{14-13\\beta}[\\beta(3\\zeta_{1}-\\zeta_{2})+3\\zeta_{1}-\\zeta_{2}]\\,,\n\\end{equation}\nwhere $\\beta\\simeq1.1967996$. 
Finally, bispectrum (48) can be\nwritten in terms of $\\zeta_{\\ast}^{equi}$ and $\\zeta_{\\ast}^{ortho}$\nas follows\n\\begin{equation}\nG_{\\Psi}=G_{1}\\zeta_{\\ast}^{equi}+G_{2}\\zeta_{\\ast}^{ortho}\\,,\n\\end{equation}\nwhere\n\\begin{equation}\nG_{1}=\\frac{13}{12}\\bigg[\\frac{1}{24}\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\bigg](2+3\\beta)\n\\end{equation}\nand\n\\begin{equation}\nG_{2}=\\frac{14-13\\beta}{12}\\bigg[\\frac{1}{8}\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\bigg]\\,.\n\\end{equation}\n\nNow, by using equations (50-55) we obtain the amplitude of\nnon-Gaussianity in the orthogonal and equilateral configurations\nrespectively as\n\\begin{equation}\nf_{NL}^{equi}=\\frac{130}{36\\sum_{i=1}^{3}k_{i}^{3}}\\bigg[\\frac{1}{24}\n\\bigg(\\frac{1}{1-c_{s}^{2}}\\bigg)\\bigg](2+3\\beta)\\zeta_{\\ast}^{equi}\\,,\n\\end{equation}\nand\n\\begin{equation}\nf_{NL}^{ortho}=\\frac{140-130\\beta}{36\\sum_{i=1}^{3}k_{i}^{3}}\\bigg[\\frac{1}{8}\n\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\bigg]\\zeta_{\\ast}^{ortho}\\,.\n\\end{equation}\n\nThe equilateral and the orthogonal shape have a negative and a positive peak in $k_{1}=k_{2}=k_{3}$ limit, respectively [49].\nThus, we can rewrite the above equations in this limit as\n\\begin{equation}\nf_{NL}^{equi}=\\frac{325}{18}\\bigg[\\frac{1}{24}\\bigg(\\frac{1}{c_{s}^{2}}-1\\bigg)\\bigg](2+3\\beta)\\,,\n\\end{equation}\nand\n\\begin{equation}\nf_{NL}^{ortho}=\\frac{10}{9}\\bigg[\\frac{1}{8}\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\bigg](\\frac{7}{6}+\\frac{65}{4}\\beta)\\,,\n\\end{equation}\nrespectively.\n\n\n\\section{Confronting with Observation}\n\nThe previous sections were devoted to the theoretical framework of\nthis extended model. In this section we compare our model with\nobservational data to find some observational constraints on the\nmodel parameter space. In this regard, we introduce a suitable\ncandidate for potential term in the action. We adopt\\footnote{Note that in general\n$\\lambda$ has dimension related to the Planck mass. This can be seen easily by considering the normalization of $\\phi$ via\n$V(\\phi)=\\frac{1}{n}\\lambda(\\frac{\\phi}{\\phi_{0}})^{n}$ which indicates that $\\lambda$ cannot be dimensionless in general. When we consider some numerical values for $\\lambda$ in our numerical analysis, these values are in \\emph{``appropriate units\"}.}\n$V(\\phi)=\\frac{1}{n}\\lambda\\phi^{n}$ which contains some interesting\ninflation models such as chaotic inflation. To be more specified, we\nconsider a quartic potential with $n=4$. Firstly we substitute this\npotential into equation (11) and then by adopting $\\epsilon=1$ we\nfind the inflaton field's value at the end of inflation. Then by\nsolving the integral (17), we find the inflaton field's value at the\nhorizon crossing in terms of number of e-folds, $N$. Then we\nsubstitute $\\phi_{hc}$ into Eqs. (33), (42), (58) and (59). The\nresulting relations are the basis of our numerical analysis on the\nparameter space of the model at hand. To proceed with numerical\nanalysis, we study the behavior of the tensor-to-scalar ratio versus\nthe scalar spectral index. In figure (1), we have plotted the\ntensor-to-scalar ratio versus the scalar spectral index for $N=60$\nin the background of Planck2015 data. 
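To make the numerical procedure just outlined concrete, the following schematic
sketch inverts the e-fold integral of Eq. (17) for the quartic potential; the
parameter values and the value of $\\phi_{e}$ are placeholders standing in for the
root of $\\epsilon=1$ obtained from Eq. (11), not values constrained by the data.
\\begin{verbatim}
# Schematic only: invert the e-fold integral of Eq. (17) for V = lam*phi^4/4
# to get phi_hc at N = 60.  Mp, Mt (tilde M), lam and phi_end are placeholders.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Mp, Mt, lam = 1.0, 1.0, 1.0

def V(phi):  return 0.25 * lam * phi**4
def dV(phi): return lam * phi**3

def integrand(phi):                      # integrand of Eq. (17)
    pref = 1.0 + 0.5 * phi**2
    return V(phi) / (Mp**2 * pref * (2 * Mp**2 * Mt**2 * phi
                     - Mp**2 * Mt**2 * (dV(phi) / V(phi)) * pref))

def efolds(phi_hc, phi_end):             # N as a function of phi_hc
    return quad(integrand, phi_hc, phi_end)[0]

phi_end  = 1.0                           # placeholder root of epsilon(phi) = 1
target_N = 60.0
phi_hc = brentq(lambda p: efolds(p, phi_end) - target_N, 1.05 * phi_end, 50.0)
print(phi_hc)   # this value is then fed into Eqs. (33), (42), (58) and (59)
\\end{verbatim}
With $\\phi_{hc}$ in hand, the spectral index, the tensor-to-scalar ratio and the
non-Gaussianity amplitudes follow from the expressions quoted above.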
The trajectory of result in\nthis extended non-minimal inflationary model lies well in the confidence levels\nof Planck2015 observational data for viable spectral index and $r$.\nThe amplitude of orthogonal configuration of non-Gaussianity versus\nthe amplitude of equilateral configuration is depicted in figure 2\nfor $N=60$. We see that this extended non-minimal model, in some\nranges of the parameter $\\lambda$, is consistent with observation.\nIf we restrict the spectral index to the observationally viable\ninterval $0.95 ''\n\\end{quote}\n\\end{mdframed}\n\n\n\\descrit{2) News Reporting:} it reports the headline of a misleading news article or another tweet, without any additional commentary and text from the tweet's author.\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``BREAKING: Unofficial: Trump trailing Biden by only 4,202! There is a ballot count upload glitch in Arizona. Reports saying over 6,000 False Biden Votes Discovered .''\n\n\n\\end{quote}\n\\end{mdframed}\n\n\\descrit{3) Counter Claim:} it attempts to question and\/or debunk the misleading information.\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``Misleading claims that Trump ballots in Arizona were thrown out because Sharpie pens were provided to voters are untrue. A ballot that cannot be read by the machine would be re-examined by hand and not invalidated if it was marked with a Sharpie.\\#Election2020 .''\n\n\\end{quote}\n\\end{mdframed}\n\n\n\\descrit{4) Satire:} it discusses the false claim in a satirical way.\n\n\\smallskip\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``@\\textless USER\\textgreater He was allegedly slain by Soros, who then had Chavez's personal army of false voters cram him inside a Dominion voting machine before loading him into an RV with Hunter Biden's second laptop and Hillary's server.''\n\n\\end{quote}\n\\end{mdframed}\n\\descrit{5) Discussion:} it prompts discussion of the details of the misleading claim by adding commentary.\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``@\\textless USER\\textgreater What happened to all the votes cast for Trump that were destroyed? How are those tallied? Detroit-based Democratic Party activist a local: boasts On FB: I threw out every Trump ballot I saw while working for Wayne County, Michigan. They number in the tens of thousands, as did all of my coworkers.''\n\n\\end{quote}\n\\end{mdframed}\n\\descrit{6) Inquiry:} it inquires about the details of events related to the misleading claim and does not attempt to either support or deny the claim under question. \n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``Has anyone got a compelling justification for this? In accordance with a tweet I saw from @\\textless USER\\textgreater: A \"James Bradley\" born in 1900 has recently been entered into the Michigan Voter Information Center. James apparently submitted an absentee ballot on October 25. For a 120-year-old, not bad! ''\n\\end{quote}\n\\end{mdframed}\n\n\\descrit{7) Irrelevant:} it is irrelevant to the misleading claim. These are considered false positives.\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``\\#LOSANGELES: Our truck will be at the @HollywoodBowl voting location till 7pm. 
Use your voting rights and reward yourself with some.''\n\\end{quote}\n\\end{mdframed}\n\n\n\n\\begin{table}[t]\n\\centering\n\\small\n\t\\setlength{\\tabcolsep}{4pt}\n\\begin{tabular}{lrr}\n\\toprule\n\\textbf{Category} & \\textbf{Candidates} & \\textbf{Moderated (\\%)} \\\\ \\midrule\nAmplifying & 1,198 & 241 (20.11\\%) \\\\\nReporting & 922 & 222 (24.07\\%) \\\\\nCounter & 122 & 4 (3.27\\%) \\\\\nSatire & 15 & 0 (0.00\\%) \\\\\nDiscussion & 646 & 83 (12.84\\%) \\\\\nInquiry & 84 & 22 (26.19\\%) \\\\\nIrrelevant & 59 & 1 (1.69\\%) \\\\ \\bottomrule\n\\end{tabular}\n\t%\n\\caption{Categories of candidate tweets and number\/percentage receiving soft moderation by Twitter.}\n\\label{tab:candidate_category}\n\\end{table}\n\n\nTable~\\ref{tab:candidate_category}, reports the number of candidate tweets in each category. %\nThe vast majority falls in the amplification category with 1,198 out of 1,500 tweets (79.86\\%), followed by tweets reporting about the false claim with 922 tweets (61.46\\%).\n43\\% of the tweets add further discussion to the misleading claim under question rather than simply sharing the headline of a news article, and 8\\% of them try to debunk it.\nFinally, 59 (3.93\\%) of the tweets flagged by \\textsc{Lambretta}\\xspace are irrelevant to the claim under study and can therefore be considered false positives. \nAs mentioned, the goal of \\textsc{Lambretta}\\xspace is to flag tweets that are related to a claim that the platform wants to moderate, but human moderators should still make the final decision about applying labels to the candidates flagged by our system. \nWe further discuss the implications of running \\textsc{Lambretta}\\xspace in the wild in Section~\\ref{sec:discussion}.\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.95\\columnwidth]{Plots\/coverage.pdf}\n\\caption{Moderation coverage of misleading tweets flagged by \\textsc{Lambretta}\\xspace per claim.}\n\\label{fig:coverage}\n\\end{figure}\n\n\\descr{False Negatives.} \nTo evaluate the False Negatives of \\textsc{Lambretta}\\xspace, we first evaluate the false negative of each of its two phases separately using \\done~ for the first phase, and \\dthree~ for the second phase.\nIn the claim structure extraction module, the Proposition Extractor component fails to extract 2.77\\% of the propositions that are claim span.\nAfter the propositions are extracted, \\textsc{Lambretta}\\xspace misclassifies 3.46\\% of the propositions that contain a claim, implying the missed claim structure would not be processed further in the second phase.\nIn the second phase, we quantify the proportion of tweets missed by the keywords identified through the LTR component of \\textsc{Lambretta}\\xspace. \nThe keywords produced by LTR identify 8,748 of the 10,776 tweets in the ground truth; this yields an 18.81\\% false negative rate from \\textsc{Lambretta}\\xspace's keyword extraction phase.\nThis is much lower than the false negative rate of the second best state-of-the-art approach, YAKE, which is 32.45\\%.\n\n\n\\noindent{\\bf Comparison to Twitter's soft moderation.}\nAfter determining that the recommendations made by \\textsc{Lambretta}\\xspace are accurate, we check if the tweets recommended by our approach were also soft-moderated by Twitter.\nFor every claim from the Claim Extraction Module, we retrieve the relevant set of tweets guided by the best set of keywords from our LTR component. 
\nWe then follow~\\cite{zannettou2021won} and extract metadata of soft moderation interventions for each tweet (i.e., if the tweet received a soft moderation and the corresponding warning label).\nWe perform this experiment on \\dthree~.\n\nOut of the 101,353 tweets flagged by \\textsc{Lambretta}\\xspace as candidates for moderation, we find that only 4,330 (4.31\\%) were soft moderated by Twitter.\nNote that we could not check the existence of warning labels for 993 tweets as they were inaccessible, with either the tweets having been deleted or the accounts that posted them being deleted or suspended.\nThis experiment highlights the limitations of Twitter's soft moderation approach, suggesting that the platform would benefit from an automated system like \\textsc{Lambretta}\\xspace to aid content moderation.\nIn Section~\\ref{sec:twitter}, we further investigate whether we can identify a specific strategy followed by Twitter in moderating content.\n\n %\n\n\n\n\\subsection{What drives Twitter moderation?}\n\\label{sec:twitter}\n\nThe analysis from the previous sections shows that Twitter only moderates a small fraction of tweets that should be moderated.\nIn this section, we aim to better understand how these moderation decisions are made.\n\nWe start by examining whether certain claims are moderated more aggressively than others and whether the type of message in a tweet affects its chances of being moderated. \nWe then analyze the text and the URLs in moderated and unmoderated tweets, aiming to ascertain: 1) whether Twitter uses text similarity to identify moderation candidates and 2) whether Twitter automatically moderates all tweets linking to a known misleading news article.\nNext, we look at the account characteristics of the users who posted moderated and unmoderated tweets, and engagement metrics (i.e., likes and retweets), aiming to understand if Twitter prioritizes moderating tweets by popular accounts or viral content.\n\n\n\\descr{Coverage by claim.}\nIn Figure~\\ref{fig:coverage}, we plot the Cumulative Distribution Function (CDF) of the percentage of tweets moderated by Twitter for each of our 900 claims, out of the total candidate set flagged by \\textsc{Lambretta}\\xspace.\nApproximately 80\\% of the claims have less than 10\\% of the tweets moderated, whereas 95\\% of claims have close to 20\\% of the tweets moderated.\nVery few claims (5) have at least half of the tweets moderated.\nThe misleading claim with the highest coverage is ``\\textbf{\\textit{Russ Ramsland file affidavit showing physical impossibility of election result in Michigan}}'' with 159 out of 309 (51\\%) candidate tweets receiving moderation labels by Twitter.\nOn the other hand, the claim ``\\textbf{\\textit{Chinese Communists Used Computer Fraud and Mail Ballot Fraud to Interfere with Our National Election}}'' only has 1 out of 236 tweets (0.42\\%) with warning labels.\nThis shows that, while the fraction of pertinent tweets moderated by Twitter is generally low, the platform seems to moderate certain claims more aggressively than others. 
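As a concrete illustration of how this per-claim coverage is computed, the sketch
below assumes a data frame with one row per candidate tweet, a claim identifier,
and a flag recording whether Twitter attached a warning label; the column names
and toy values are ours and not part of \\textsc{Lambretta}\\xspace.
\\begin{verbatim}
# Sketch: percentage of Lambretta-flagged tweets per claim that carry a warning
# label, and the empirical CDF of that per-claim coverage.
import numpy as np
import pandas as pd

flagged = pd.DataFrame({                 # one row per candidate tweet (toy data)
    "claim_id":  [1, 1, 1, 2, 2, 3, 3, 3],
    "moderated": [1, 0, 0, 0, 0, 1, 1, 0],
})

coverage = flagged.groupby("claim_id")["moderated"].mean() * 100   # % per claim

x = np.sort(coverage.to_numpy())         # empirical CDF of per-claim coverage
cdf = np.arange(1, len(x) + 1) / len(x)
for xi, ci in zip(x, cdf):
    print(f"<= {xi:5.1f}% of candidate tweets moderated for {ci:.0%} of claims")
\\end{verbatim}
On the real data this distribution is heavily skewed toward low coverage, as
reported above.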
\n\n\\descr{Coverage by tweet type.}\nIn Section~\\ref{sec:validation}, we list seven categories of tweets discussing misleading claims.\nWe now set out to understand whether Twitter moderates certain types of tweets more than others.\nTable~\\ref{tab:candidate_category} shows the fraction of tweets in our sample set of 1,500 manually analyzed tweets that did receive soft moderation by Twitter, broken down by category.\nTweets raising questions, reporting, or amplifying false claims are more likely to be moderated (with 26.19\\%, 24.07\\%, and 20.11\\% of their tweets being moderated, respectively).\nSatire tweets never received moderation labels, while tweets debunking false claims were only moderated in 3.27\\% of the cases.\nThis indicates that Twitter considers the stance of a tweet mentioning a false claim, perhaps as part of a manual moderation effort.\n\n\\descr{Content analysis.}\nNext, we investigate whether Twitter looks at near identical tweets when applying soft moderation decisions.\nWe take all tweets flagged as candidates by \\textsc{Lambretta}\\xspace, and group together those with a high Jaccard similarity of their words.\nWe remove all the links, user mentions, and lemmatize the tweet tokens by using the \\textbf{\\textit{ekphrasis}} tokenizer~\\cite{ekphrasis}.\nWe consider two tweets to be near identical if their Jaccard similarity is in the range 0.75--0.9 (out of 1.0).\nWe do so to extract tweet pairs that are not exactly the same, but have some variation in the content while discussing the same misleading claim.\nWe exclude retweets, and only consider the tweets originally authored by the users.\n\n\\begin{figure*}[t]\n\\centering\n \\begin{subfigure}{0.375\\linewidth}\\includegraphics[width=\\linewidth]{Plots\/ccdf_followings.pdf}\n \\caption{Following} \n\t\\end{subfigure}\n\t~\n\t \\begin{subfigure}{0.375\\linewidth}\\includegraphics[width=\\linewidth]{Plots\/ccdf_followers.pdf} \n \\caption{Followers} \n\t\\end{subfigure}\n\t \\begin{subfigure}{0.375\\linewidth}\\includegraphics[width=\\linewidth]{Plots\/ccdf_tweets.pdf}\n \\caption{Tweet Counts} \n\t\\end{subfigure}\n\t~\n\t \\begin{subfigure}{0.375\\linewidth}\\includegraphics[width=\\linewidth]{Plots\/ccdf_accountlife.pdf}\n \\caption{Account Age} \n\t\\end{subfigure}\n\t \n\\caption{Cumulative Distribution Functions (CDF) of various user metrics for moderated and unmoderated tweets.}\n\\label{fig:ccdf_useranalysis}\n\\end{figure*}\n\n\nWe extract 17,241 pairs of tweets (out of 438,986 possible pairs), where at least one of the two was moderated by Twitter.\nOnly 3,857 pairs have both tweets moderated.\nNote that \\textsc{Lambretta}\\xspace effectively identifies {\\em all} the 17,241 pairs of tweets as moderation candidates. %\nHere is an example of a very similar pair of tweets, for which Twitter did not add labels to one of them:\n\\smallskip\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n\t\\textbf{Moderated}: ``RudyGiuliani in Trump campaign news conference: \"\"Joe Biden said a few weeks ago that his voting fraud crew was the best in the world. 
They were excellent, but we got them!\"''\n\t\n\t\\textbf{Unmoderated}: ``Joe Biden said a few weeks ago that his crew was the greatest in the world at catching voter fraud, but we caught them.''\n\\end{quote}\n\\end{mdframed}\n\nThese findings indicate that the decision by Twitter to add soft moderation to a tweet does not seem to be driven by the lexical similarity of tweets.\n\n\\descr{URL analysis.}\nAnother potential indicator used by Twitter when deciding which tweets to moderate is whether they include links to known disinformation news articles.\nFirst, we expand all the links in the body of candidate tweets identified by \\textsc{Lambretta}\\xspace to get rid of URL shorteners~\\cite{maggi2013two}.\nThis yields 13,108 distinct URLs.\nNext, we group candidate tweets by URLs and check what fraction of tweets sharing the URL are moderated by Twitter.\n\n\n\n\\begin{table}[t]\n\\centering\n\\small\n\t\\setlength{\\tabcolsep}{4pt}\n\\begin{tabular}{lrr}\n\\toprule\n\\textbf{URL news story} & \\textbf{Candidates} & \\textbf{Moderated} \\\\ \\midrule\nUSPS whistleblower & 315 & 7 (2.2\\%) \\\\\nChina manipulating election & 252 & 4 (1.5\\%) \\\\\nMichigan ballot dump & 215 & 44 (20\\%) \\\\\n\\#Suitcasegate related FB video & 208 & 3 (1.4\\%) \\\\\nDominion remote machine control & 135 & 15 (11\\%) \\\\ \\bottomrule\n\\end{tabular}%\n\\caption{Examples of URLs in candidate tweets and those being moderated by Twitter.}\n\\label{tab:urlmoderation}\n\\end{table}\n\n\n\nTable~\\ref{tab:urlmoderation} shows the five most common URLs (abstracted to the topic of the news articles) in our dataset, with the fraction of tweets including those URLs moderated by Twitter.\nAll these news stories, excluding one Facebook video, originate from known low-credibility websites like TheGatewayPundit and DC Dirty Laundry, which promote election misinformation.\nTwitter moderates tweets containing those URLs in an inconsistent manner. %\nAlso note that \\textsc{Lambretta}\\xspace can help identify 4,598 additional moderation candidate tweets compared to those on which Twitter intervened.\n\n\\descr{User analysis.}\nWe examine the differences in the social capital (e.g., number of followers) of the authors of tweets moderated by Twitter, compared to the authors of tweets our system recommends for moderation but on which Twitter did not intervene.\nFigure~\\ref{fig:ccdf_useranalysis} reports the CDF of followers, followings, tweet counts, and account age of accounts that posted moderated and unmoderated tweets.\nWe find that the authors of tweets with warning labels have far fewer followers and followings, lower account activity, and younger accounts than the authors of tweets without warning labels.\nWe also conduct two-sample Kolmogorov-Smirnov tests for each user metric, finding that the differences are statistically significant for followers and account age ($p < 0.01$) as well as following count and status count ($p < 0.05$).\nThis goes against the notion that popular accounts are more likely to have their content moderated.\n\nWe also check if the accounts with moderated tweets were suspended for violating Twitter Rules~\\cite{twitter_rules}. %\nWe find that only 33 out of 3,397 users were suspended by Twitter; this gives us strong grounds to rule out the possibility that tweet moderation is driven by the ``legitimacy'' of the accounts themselves.\n\n\\descr{Engagement analysis.}\nFinally, we analyze engagement metrics. 
%\nFigure~\\ref{fig:ccdf_tweetanalysis} reports the CDF of retweets and likes categorized by moderation status of the 101,353 candidate tweets \\textsc{Lambretta}\\xspace flags for moderation from \\dthree~, compared to the ones flagged by Twitter.\nSimilar to the user analysis, we find that unmoderated tweets have more engagement. %\nWhen we check for statistical significance of difference in distributions of the retweet count using Kolmogorov-Smirnov tests, we find that it is statistically significant ($p < 0.01$), while we cannot reject the null hypothesis for the likes.\nNote, however, that these results have to be taken with a grain of salt, as we do not have a timeline of when exactly moderation was applied, and whether the soft interventions hampered the virality of online content. \n\n\\begin{figure}\n\\centering\n \\begin{subfigure}{0.375\\textwidth}\\includegraphics[width=\\linewidth]{Plots\/tweets_rt.pdf} \n \\caption{Retweets} \n\t\\end{subfigure}\n\t \\begin{subfigure}{0.375\\textwidth}\\includegraphics[width=\\linewidth]{Plots\/tweets_favs.pdf} \n \\caption{Likes} \n\t\\end{subfigure}\n\t \n\\caption{CDFs of engagement metrics for moderated and unmoderated tweets.}\n\\label{fig:ccdf_tweetanalysis}\n\\end{figure}\n\n\\descr{Takeaways.} Our analysis paints a puzzling picture of soft moderation on Twitter. \nWe find that certain claims are moderated more aggressively.\nStill, Twitter does not seem to have a system in place to identify similar tweets discussing the same false narrative, nor flagging tweets that link to the same debunked news article.\nWe also find that Twitter does not appear to focus on the tweets posted by popular accounts for moderation, but rather that tweets posted by accounts with more followers, friends, activity, and a longer lifespan are more likely to go unmoderated.\nThis confirms the need for a system like \\textsc{Lambretta}\\xspace. %\n\n\\section{Related Work}\nIn this section, we review relevant work on soft moderation, security warnings, and keyword extraction in the context of disinformation.\n\n\\descr{Soft Moderation during the 2020 Elections.}\nAs part of the Civic Integrity Policy efforts surrounding the 2020 US elections, Twitter applied warning labels on ``misleading information.''\nEmpirical analysis~\\cite{zannettou2021won} reports 12 different types of warning messages occurring on a sample of 2,244 tweets with warning labels.\nStatistical assessment of the impact of Twitter labels on Donald Trump's false claims during the 2020 US Presidential election finds that warning labels did not result in any statistically significant increase or decrease in the spread of misinformation~\\cite{papakyriakopoulos2022impact}.\nTwitter later reported that approximately 74\\% of the tweet viewership happened post-moderation and, more importantly, that the warnings yielded an estimated 29\\% decrease in users quoting the labeled tweets~\\cite{twitter_update_2020}. 
%\n\n\n\\descr{Security Warnings for Disinformation.}\nThe warning labels adopted by Twitter as soft moderation intervention can be broadly categorized as a type of security warning.\nSecurity warnings can be classified into two types: contextual and interstitial.\nThe former passively inform the users about misinformation through UI elements that appear alongside social media posts.\nThe latter prompt the user to engage before taking action with the potential piece of disinformation (e.g., retweeting or sharing).\nA recent study~\\cite{kaiser2021adapting} shows that interstitial warnings may be more effective, with a lower clickthrough rate of misleading articles.\nAdditionally, interstitial warnings are more effective design-wise because they capture attention and provide a window of opportunity for users to think about their actions.\nEfforts to study warning labels on countering disinformation have thus far been mostly focused on Facebook~\\cite{pennycook2018prior,ross2018fake,pennycook2020implied}, where warning labels were limited to ``disputed'' or ``rated false'', and the approach was deemed to be of limited utility by Facebook~\\cite{smith2017designing}.\nRecently, other platforms like Twitter~\\cite{alba2020twitter}, Google~\\cite{googlelabel}, and Bing~\\cite{bing_label} also used some form of fact-check warnings to counter disinformation.\n\n\\descr{Tools for automated content moderation.}\nThe sheer scale of content being produced on modern social media platforms (Facebook, Reddit, YouTube etc.) have motivated the need to adopt tools for automated content moderation~\\cite{fb_guidelines,youtube_guidelines}.\nHowever, due to the nuanced and context-sensitive nature of content moderation, it is a complex socio-technical problem~\\cite{jhaver2019human,seering2019moderator}.\nMost of the work in this space of automated content moderation are focused on Reddit, aiming to identify submissions that violate community-specific policies and norms ranging from hate speech to other types of problematic content~\\cite{seering2020reconsidering}.\nThe most popular solution to automated content moderation in Reddit, AutoModerator~\\cite{jhaver2019human} allows community moderators to set up simple rules based on regular expressions and metadata of users for automated moderation. 
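As a rough illustration of this style of rule-based filtering, a rule can combine a regular expression with simple account metadata; the sketch below is a generic example and does not reproduce AutoModerator's actual configuration syntax (pattern and thresholds are hypothetical).
\begin{verbatim}
# Generic sketch of a regex-plus-metadata moderation rule.
import re
from datetime import datetime, timedelta

RULE_PATTERN = re.compile(r"\b(miracle cure|guaranteed win)\b", re.IGNORECASE)
MIN_ACCOUNT_AGE = timedelta(days=7)

def flag_for_review(post_text: str, account_created: datetime) -> bool:
    """Flag a post that matches the pattern and comes from a young account."""
    too_young = datetime.utcnow() - account_created < MIN_ACCOUNT_AGE
    return bool(RULE_PATTERN.search(post_text)) and too_young
\end{verbatim}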
\nOn YouTube, FilterBuddy~\\cite{jhaver2022designing} is available as a tool for creator-led content moderation by designing filters for keywords and key phrases.\nSimilarly, Twitch offers an automated moderation tool called Automod to allow creators to moderate four categories of content (discriminations and slurs, sexual content, hostility, and profanity) on the platform~\\cite{twitch_automod}.\nAnother tool, called CrossMod~\\cite{chandrasekharan2019crossmod} uses an ensemble of models learned via cross-community learning from empirical moderation decisions made on two subreddits of over 10M subscribers each.\n\n\n\\descr{Keyword Extraction for Disinformation.}\nResearchers have used an array of methods to detect disinformation, ranging from modeling user interactions~\\cite{shu2019role,tschiatschek2018fake,qian2018neural}, leveraging semantic content~\\cite{ciampaglia2015computational,zhang2017constraint,esmaeilzadeh2019neural,pan2018content}, and graph based representations~\\cite{nguyen2020fang,gangireddy2020unsupervised,lu2020gcan}.\nThe foundation of our system lies in the keyword detection, which has been used before to study disinformation on social media.\nDisInfoNet, a toolbox presented in~\\cite{guarino2019beyond}, represents news stories through keyword-based queries to track news stories and reconstruct the prevalence of disinformation over time and space.\nSimilarly, the work in~\\cite{gaglani2020unsupervised} uses keyword extraction techniques as the base for semantic search to detect fake news on WhatsApp.\nThe work in~\\cite{choudhary2018neural} focuses on credibility assessment of textual claims on news articles with potentially false information, also using keyword extraction as a part of their multi-component module.\n\n\\descr{Learning To Rank for Keyword extraction.}\nThe closest applications of LTR to our work are the proposals in~\\citep{jiang2009ranking} for keyphrase extraction and in~\\citep{cai2017keyword} for keyword extraction in Chinese news articles.\nThe foundational work by~\\citep{jiang2009ranking} motivates the necessity of framing the problem of keyphrase extraction as a ranking task rather than a classification task while improving results on extracting keyphrases from academic research papers, and social tagging data.\nThe LTR approach for keyword extraction utilized by \\textsc{Lambretta}\\xspace is motivated by the premise set up by this work.\nSimilarly,~\\citep{cai2017keyword} use Learning To Rank to identify keywords from 1800 public Chinese new articles using TF-IDF, TextRank, and Latent Dirichlet Allocation (LDA) as the set of features for the ranking model.\n\n\\section{Discussion and Conclusion}\n\\label{sec:discussion}\n\nThis paper presented \\textsc{Lambretta}\\xspace, a system geared to automatically flag candidate tweets for soft moderation interventions.\nOur experiments demonstrate that Learning to Rank (LTR) techniques are effective, that \\textsc{Lambretta}\\xspace outperforms other approaches, produces accurate recommendations, and can increase the set of tweets that receive soft moderation by over 20 times compared to those flagged by Twitter during the 2020 US Presidential Election.\n\n\n\\descr{Implications for social media platforms.}\nAs discussed in Section~\\ref{sec:twitter}, soft moderation interventions applied by Twitter appear to be spotty and not following precise criteria.\nThis might be due to moderation being conducted mainly in an ad-hoc fashion, relying on user reports and the judgment of 
moderators.\n\\textsc{Lambretta}\\xspace can assist this human effort, working upstream of the content moderation process and presenting moderators with an optimal set of tweets that are candidates for moderation. \nBecause of the nuances of moderating false and misleading content, we envision \\textsc{Lambretta}\\xspace to be deployed as an aid to human moderation rather than an automated detection tool.\n\nNonetheless, the claim-specific design of \\textsc{Lambretta}\\xspace can also be used by moderators for other actions as per their policies, e.g., asking users to remove a given tweet or performing hard moderation by removing the tweets.\nThe choice between soft moderation and hard moderation can be made by moderators contextually after the moderation candidates are retrieved through \\textsc{Lambretta}\\xspace, either based on underlying claims or a case-by-case basis.\nE.g., platforms may decide to soft moderate posts that push a certain false narrative but not add warnings if posts inform users about falsehood.\nAlternatively, they might add warnings to posts about the false narrative, providing additional context to users and allowing them to make up their minds about it.\nPlatforms could also craft warning messages depending on the context in which a false claim is discussed or design these messages to be more effective based on the audience and risk levels of specific false claims.\nFor example, different type of warning messages can be applied by platforms to distinguish between different levels of risk associated with the misleading claims (e.g., high and low-level risks associated with COVID-19 misinformation)~\\cite{ling2022learn}.\nWe are confident that Human-Computer Interaction researchers will be able to address these challenges, which go beyond the scope of this paper.\n\n\\descr{Human effort required for adopting \\textsc{Lambretta}\\xspace.}\nWhen setting up \\textsc{Lambretta}\\xspace to work in a new context, platform moderators need to follow the steps highlighted in Sections~\\ref{sec:claimextraction} and~\\ref{sec:keyword}.\nFirst, they need a set of tweets with claims they identified as containing misleading information, together with a tuning dataset like \\dthree~.\nModerators can create a tuning dataset like \\dthree~ by using a broad set of keywords associated with the event or topic and querying the Twitter API to which they have full access (e.g. 
``COVID-19,'' ``coronavirus,'' etc., in the case of the pandemic).\nThey then need to tune the threshold for the Claim Stopper API in the Claim Extraction component (see Section~\\ref{sec:claimextraction}).\nIn our experiments, this phase took us, on average, two minutes per claim.\nFinally, they need to create the training set for the LTR model by following the iterative process discussed in Section~\\ref{sec:keyword}.\nWhen performed by a single annotator, this process took, on average, 15 minutes per claim for the experiments discussed in this paper.\nTwitter could speed up these steps further by having multiple annotators work on the same task.\nAdditionally, the work required on each claim is independent of other claims; therefore this process can be easily parallelized within the organization or even through crowdsourcing campaigns~\\cite{founta2018large,lease2012crowdsourcing,oleson2011programmatic}.\n\n\n\\descr{Resilience to evasion.}\nAs with any adversarial problem, malicious actors are likely to try to evade being flagged by \\textsc{Lambretta}\\xspace.\nE.g., they might avoid using certain words to avoid detection and use synonyms or dog whistles instead~\\cite{gerrard2018beyond,tahmasbi2021go,zannettou2020quantitative,zhu2021self}.\nHowever, this would make the false messaging less accessible to the general public, who would need to first understand the alternative words used and ultimately be counterproductive for malicious actors by limiting the reach of false narratives.\n\n\n\n\n\\descr{Limitations.} \\textsc{Lambretta}\\xspace requires a seed of tweets to be moderated, making it inherently reactive.\nHowever, this is a problem common to all moderation approaches, including the work conducted by fact-checking organizations.\nAnother limitation is that we could only test \\textsc{Lambretta}\\xspace on one dataset related to the same major event (the 2020 US Presidential Election), as this is the only reliable dataset with soft moderation labels available to the research community.\n\nEven though Twitter applied warning labels on misinformation about COVID-19, previous research reported that these were unreliable and inconsistent~\\cite{lange_2020,lyons_2020}, which we independently confirmed in our preliminary analysis.\nMore recently, Twitter recently started applying warning labels to tweets in the context of the Russian invasion of Ukraine~\\cite{benson_twitter_russia}, but these labels are applied based on the account posting them (i.e., if the account belongs to Russian or Belarusian state-affiliated media) instead of being claim-specific as required by \\textsc{Lambretta}\\xspace.\nWhile the LTR model used by \\textsc{Lambretta}\\xspace is not specific to the actual keywords being searched, and therefore we expect that it should generalize across the entirety of Twitter, platform moderators using the tool should take further steps to validate it when used in contexts other than politics and elections.\n\n\n\\descr{Future work.} \nWe plan to extend \\textsc{Lambretta}\\xspace to additional platforms.\nSince our system only needs the text of posts as input, we expect it to generalize to other platforms, e.g., Facebook, Reddit, etc.\nWe will also investigate how claims automatically built by \\textsc{Lambretta}\\xspace can be incorporated into warning messages to provide more context to users and allow them to be better protected against disinformation.\n\n\\descr{Acknowledgments.} \nWe thank the anonymous reviewers for their comments that helped us improve the 
paper.\nOur work was supported by the NSF under grants CNS-1942610, IIS-2046590, CNS-2114407, IIP-1827700, and CNS-2114411, and by the UK's National Research Centre on Privacy, Harm Reduction, and Adversarial Influence Online (REPHRAIN, UKRI grant: EP\/V011189\/1).\n\n\\small\n\\bibliographystyle{abbrv}\n\\input{no_comments.bbl}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe occurence of important Solar Energetic Particle (SEP) events is one of the\nprominent planning considerations for manned and unmanned lunar and planetary\nmissions \\cite{posner2007up}.\nA high exposure to large solar particles events can deliver critical doses to\nhuman organs and may damage the instruments on board of satellites and the\nglobal positioning system (GPS) due to the risk of saturation. SEP\nevents usually happen 30 minutes after the occurrence of the X-ray flare, which\nleaves very little time for astronauts performing extra-vehicular activity on\nthe International Space Station or planetary surfaces to\ntake evasive actions \\cite{2016l}. Earlier warning of the SEP events will be a valuable tool\nfor mission controllers that need to take a prompt decision concerning the\natrounauts' safety and the mission completion.\nWhen a solar flare or a CME happens, the magnetic force that is exercised is\nmanifested through different effects. Some of the effects are listed in their\norder of occurrence: light, thermal, particle acceleration, and matter ejection\nin case of CMEs. The first effect of a solar flare is a flash of increased\nbrightness that is\nobserved near the Sun's surface which is due to the X-rays and UV radiation.\nThen, part of the magnetic energy is converted into thermal energy in the area\nwhere the eruption happened.\nSolar particles in the atmosphere are then accelerated with different speed\nspectra, that can reach up to 80\\% of the speed of light, depending on the\nintensity of the parent eruptive event.\n \\begin{figure}[h!]\n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/SolarParticleEvent.png}\n \\caption{Example of Sun-Earth magnetical connection and accelerated\n particles movement following the Parker's spiral before reaching\n Earth. (Drawing courtesy from Space Weather) \\cite{greendale}}\n \\label{fig1}\n\\end{figure}\n\nFinally, in the case of a CME, plasma and\nmagnetic field from the solar corona is released into the solar wind.\nThough most of the solar particles have the same composition, they are labeled\ndifferently depending on their energies starting from 1 keV, in the solar wind,\nto more than 500 MeV. SEP events are composed of particles, predominantly\nprotons and electrons, with at least 1 MeV energy that last between 2-20 days\nand have a range of fluxes of 5-6 orders of magnitude \\cite{gabriel1996power}.\nOnly $>$100 MeV particles are discussed herein. It is generally accepted that\nthere are two types of SEP events, one is associated with CMEs and the other is\nassociated with flares that are called respectively gradual and impulsive\n\\cite{reames1999particle}.\n\nIn this paper, we propose a novel method for predicting SEP events $>$100 MeV\nbased on the proton and X-ray correlations time series using interpretable\ndecision tree models.\nPredicting impulsive events is considered to be a more challenging problem than\npredicting the gradual events that happen progressively and leave a large window\nfor issuing SEP warnings. 
While we are mainly concerned with impulsive events,\nwe used the gradual events as well to test our model with. The accelerated\nimpulsive events may or may not reach Earth depending on the location of their\nparent event because their motion is confined by the magnetic field. More\nspecifically, in order for the accelerated particles to reach Earth, a Sun-Earth\nmagnetic connection needs to exist that allows the particles to flow to Earth\nvia the Parker spiral. Fig~.\\ref{fig1} shows a cartoon of a solar eruption that\nhappened in the Western limb of the Sun that happens to be magnetically\nconnected to Earth.\n\nSince SEP events are also part of solar activity, it may seem that\ntheir occurrence is dependent on the solar cycle and therefore the number of\nSunspots on the Sun's surface, which is the case for other solar eruptions.\nHowever, according to \\cite{gabriel1996power}, there is no correlation between the solar\ncycle and SEP event occurrence and fluences. In addition, there is no\nevidence the dependence of SEP events on the number of Sunspots\nthat are present during that snapshot in time \\cite{gabriel1996power}.\n\\par\n The rest\nof the paper is presented as follows. In Section 2 we provide background\nmaterial on the SEP predictive models and related works. Section 3\ndefines our dataset used in this study, and then in Section 4 we lay out our\nmethodology. Finally, Section 5 contains our experimental results, and we finish\nwith conclusions and future work in Section 6.\n\n\n\\section{Related Works}\nThere are a number of predictive models of SEP events that can be categorized\ninto two classes: physics-based models \\cite{p0, p1} and the precursor-based\nmodels \\cite{c0, c1}.\nThe first category of models includes the SOLar Particle Engineering Code\n(SOLPENCO) that can predict the flux and fluence of the gradual SEP events\noriginating from CMEs \\cite{aran2006solpenco}. However, such efforts mainly\nfocus on gradual events.\nOn the other hand, there are models that rely on historical observations to find\nSEP events associated precursors. One example of such systems is the proton\nprediction system (PPS), which is a program developed at the Air Force Research\nLaboratory (AFRL), that predicts low energy SEP events E$>$\\{5, 10, 50\\} MeV,\ncomposition, and intensities. PPS assumes that there is a relationship between\nthe proton flux and the parent flare event. PPS takes\nadvantage of the correlation between large SEP events observed by the\nInterplanetary Monitoring Platform (IMP) satellites as well as their correlated\nflare signatures captured by GOES proton, X-ray flux and H$\\alpha$ flare\nlocation \\cite{PPS}. Also, the Relativistic Electron Alert System for\nExploration (RELEASE), predicts the intensity of SEP events using relativistic\nnear light speed electrons \\cite{posner2007up}.\nRELEASE uses electron flux data from the SOHO\/COSTEP sensor of the range of\n0.3-1.2 MeV to forecast the proton intensity of the energy range 30-50 MeV.\nAnother example of precursor-based models appear in\n\\cite{laurenza2009technique}, that base their study on the \"big flare\nsyndrome\". This latter theory states that SEP events occurrence at 1 AU is\nhighly probable when the intensity of the parent flare is high. Following this\nassumption, the authors in \\cite{laurenza2009technique} issue SEP forecasts for\nimportant flares greater than M2. To this end, it uses type III radio burst\ndata, H$\\alpha$ data, and GOES soft X-ray data. 
Finally, $GLE Alert Plus$, is an\noperational system that uses a ground-based neutron\nmonitor (MNDB, www.nmdb.com) to issue alerts of SEP events of energies E$>$433\nMeV. Finally, the University of Malaga Solar Energetic Particles Prediction\n(UMASEP) is another system that first predicts whether a $>$10 MeV and $>$100\nMeV SEP will happen or not. To do so, it computes the correlation between the\nsoft X-ray and proton channels to assess if there is a magnetic connection\nbetween the Sun and Earth at the time of the X-ray event.\n Then, in case of existence of magnetic connection, UMASEP gives an\nestimation on the time when the proton flux is expected to surpass the SWPC\nthreshold of J(E $>$ 10MeV)= 10$pfu$ and J(E $>$ 100MeV)= 1$pfu$ (1$pfu$ = $pr\ncm^{-2} sr^{-1} s^{-1}$) and for the case of UMASEP-100, the intensity of the\nfirst three hours after the event onset time.\n\n \\begin{figure} \n \\centering\n \\includegraphics[width=0.85\\linewidth]{figs\/GOESchart.pdf}\n \\caption{Primary (bold lines) and secondary (thin lines) GOES satellites\n for XRS data since 1986 (the primary and secondary satellites designation is\n unknown prior to 1986) (Figure from NOAA instrument report)}\n \\label{fig2}\n\\end{figure} \n\n \\begin{figure} \n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/ChartGOES.pdf}\n \\caption{Catalogs used to make the x-ray-parent event mapping. X-ray and\n CME catalogs for detecting the parent event report for flare and CME\n respectively.}\n \\label{cats}\n\\end{figure} \n\nWhile most of the SEP predictive systems either focus on the CME associated\nevents or low energy SEP events, with the exception of $GLE Alert Plus$ and\nUMASEP, in this present work, we focus on higher energy SEP events that can be\nmore disruptive than lower energy events. In this work, we study the\nGOES cross-channel correlations that can give an early insight on whether there\nexist a magnetic connection or not.\n\nWe aim to provide an interpretable decision tree models using a\nbalanced dataset of SEP and non-SEP events.\nThe highest SEP energy band of $>$500 MeV or higher\nthat are measurable from the ground is out of the scope of this study.\nSimilarly, the lower SEP energy band of $<$100 MeV is not considered in this\nstudy.\n\n\\section{Data}\nOur dataset is composed of multivariate time series of X-ray, integral proton flux and fluences spectra that were measured on board of\nSpace Environment Monitor (SEM) instruments package of the Geostationary\nOperational Earth Satellites (GOES).\nIn particular, we consider both the short and long X-ray channel data recorded\nby the X-ray Sensor (XRS). 
For the proton channels, we consider channels P6 and\nP7 recorded by the Energetic Particle Sensor (EPS) and proton channels P8, P9,\nP10, and P11 recorded by the High Energy Proton and Alpha Detector (HEPAD).\nTable~\\ref{goesinstruments} summarizes the instruments onboard the GOES satellites\nand their corresponding data channels that we used.\n\n\\begin{figure*}\n\\centering \n\\begin{tabular}{ |c|c|c|}\nFast-Rising & Slow-Rising & Lack of SEP\\\\\n\\hline \n\\includegraphics[width=0.315\\linewidth]{figs\/g1.pdf}&\n\\includegraphics[width=0.32\\linewidth]{figs\/g2.pdf}&\n\\includegraphics[width=0.32\\linewidth]{figs\/g3.pdf}\n\\\\\n\\hline\n\\includegraphics[width=0.175\\linewidth]{figs\/impulsive.pdf} &\n\\includegraphics[width=0.17\\linewidth]{figs\/CME1.PNG}&\n\\includegraphics[width=0.17\\linewidth]{figs\/FL3.PNG}\n\\\\\n \\hline \n \\end{tabular}\n \\caption{Example of an (a) Impulsive SEP event\n that started on the 2001-04-15 14:05:00 as a result of a flare\n that occurred in the 2011-04-15 13:15:00 shown in the SOHO EIT instument and\n a (b) gradual SEP event whose nearest temporal flare happened on 2001-04-02 21:30:03A and occurred as a\n result of a CME on the 2001-04-02 22:06:07 shown in the SOHO LASCO instrument\n and a (c) a flare that happened on 1999-01-16 12:00:00 that did not lead to any\n $>$ 100 MeV SEP event shown in the SOHO EIT\n instrument.\\label{table1impulsive}}\n\\end{figure*}\n\n The data we collected is made publically available by NOAA in the\nfollowing link:\n(\\href{https:\/\/satdat.ngdc.noaa.gov\/sem\/goes\/data\/new\\_avg\/}{\\textit{https:\/\/satdat.ngdc.noaa.gov\/sem\/goes\/data\/new\\_avg\/}})\nThe data is available in three different cadences. The full resolution data is captured\nevery three seconds from the GOES satellites, which is aggregated and made\navailable with one and five minute cadences. In this paper we use the\naggregated five minute data which is the one usually cited in the literature\n\\cite{neal2001predicting} \\cite{nunez2015real} \\cite{nunez2011predicting}.\nIn most cases, there are a couple a co-existing GOES satellites whose data is\ncaptured by more than one GOES satellite at a time. In this study, we always\nconsider the data reported by the primary GOES satellite that is designated by\nthe NOAA, as illustrated in Fig~.\\ref{fig2}. 
The latter figure shows the primary GOES\nsatellite with a bold line and the other co-existing GOES for every year.\nGOES-13 measurements were unstable for many years, but have been stable since\n2014.\n\n\n\\begin{table*} \n\\centering\n\\caption{GOES X-ray and Proton instuments and Channels.\\label{goesinstruments}}\n\\begin{tabular}{ |c|c|c|c|}\n\\hline\nInstrument & Channels & Description \\\\\n\\hline\n\\multirow{2}{*}{XRS}\n & xs &Short wavelength channel irradiance (0.5 - 0.3 nm)\\\\\n & xl &Long wavelength channel irradiance (0.1-0.8 nm)\\\\\n \\hline\n\\multirow{4}{*}{HEPAD}\n & p8\\_flux &Proton Channel 350.0 - 420.0 MeV \\\\\n & p9\\_flux &Proton Channel 420.0 - 510.0 MeV \\\\\n & p10\\_flux &Proton Channel 510.0 - 700.0 MeV \\\\\n & p11\\_flux &Proton Channel $>$ 700.0 MeV \\\\\n \\hline\n \\multirow{2}{*}{EPS}\n\n & p6\\_flux &Proton Channel 80.0 - 165.0 MeV \\\\\n & p7\\_flux &Proton Channel 165.0 - 500.0 MeV \\\\\n \\hline\n \n \\end{tabular}\n\\end{table*}\n\n\n\\begin{table}\n \\caption{$>$ 100 MeV SEP Event List with their Parent Events\n(CME\/Flare)\\label{sepevents}}\n\\centering \n\\begin{threeparttable}\n\\begin{tabular}{ |c|c|c|}\n\\hline \nSEP Event ID & Onset Time of SEP Event & Parent X-ray Event \\\\ \\hline\n1 & 1997-11-04 05:52:00 & 1997-11-04 05:52:00 \\\\ \\hline\n2 & 1997-11-06 11:49:00 & 1997-11-06 11:49:00 \\\\ \\hline\n3\\tnote{*} & 1998-04-20 09:38:00 & 1998-04-20 09:38:00 \\\\ \\hline\n4 & 1998-05-02 13:31:00 & 1998-05-02 13:31:00 \\\\ \\hline\n5 & 1998-05-06 07:58:00 & 1998-05-06 07:58:00 \\\\ \\hline\n6 & 1998-08-24 21:50:00 & 1998-08-24 21:50:00 \\\\ \\hline\n7 & 1998-09-30 13:50:00 & 1998-09-30 13:50:00 \\\\ \\hline\n8 & 1998-11-14 05:15:00 & 1998-11-14 06:05:00 \\\\ \\hline\n9 & 2000-06-10 16:40:00 & 2000-06-10 16:40:00 \\\\ \\hline\n10 & 2000-07-14 10:03:00 & 2000-07-14 10:03:00 \\\\ \\hline\n11 & 2000-11-08 22:42:00 & 2000-11-08 22:42:00 \\\\ \\hline\n12 & 2000-11-24 14:51:00 & 2000-11-24 14:51:00 \\\\ \\hline\n13\\tnote{*} & 2000-11-26 16:34:00 & 2000-11-26 16:34:00 \\\\ \\hline\n14\\tnote{*} & 2001-04-02 21:32:00 & 2001-04-02 21:32:00 \\\\ \\hline\n15 & 2001-04-12 09:39:00 & 2001-04-12 09:39:00 \\\\ \\hline\n16 & 2001-04-15 13:19:00 & 2001-04-15 13:19:00 \\\\ \\hline\n17 & 2001-04-17 21:18:00 & 2001-04-18 02:05:00 \\\\ \\hline\n18 & 2001-08-15 12:38:00 & 2001-08-16 23:30:00 \\\\ \\hline\n19\\tnote{*} & 2001-09-24 09:32:00 & 2001-09-24 09:32:00 \\\\ \\hline\n20 & 2001-11-04 16:03:00 & 2001-11-04 16:03:00 \\\\ \\hline\n21 & 2001-11-22 22:32:00 & 2001-11-22 19:45:00 \\\\ \\hline\n22 & 2001-12-26 04:32:00 & 2001-12-26 04:32:00 \\\\ \\hline\n23 & 2002-04-21 00:43:00 & 2002-04-21 00:43:00 \\\\ \\hline\n24 & 2002-08-22 01:47:00 & 2002-08-22 01:47:00 \\\\ \\hline\n25 & 2002-08-24 00:49:00 & 2002-08-24 00:49:00 \\\\ \\hline\n26 & 2003-10-28 09:51:00 & 2003-10-28 09:51:00 \\\\ \\hline\n27 & 2003-11-02 17:03:00 & 2003-11-02 17:03:00 \\\\ \\hline\n28\\tnote{*} & 2003-11-05 02:37:00 & 2003-11-05 02:37:00 \\\\ \\hline\n29\\tnote{+} & 2004-11-01 03:04:00 & 2004-11-01 03:04:00 \\\\ \\hline\n30\\tnote{+} & 2004-11-10 01:59:00 & 2004-11-10 01:59:00 \\\\ \\hline\n31\\tnote{+} & 2005-01-16 21:55:00 & 2005-01-17 08:00:00 \\\\ \\hline\n32\\tnote{+} & 2005-01-20 06:36:00 & 2005-01-20 06:36:00 \\\\ \\hline\n33\\tnote{+} & 2005-06-16 20:01:00 & 2005-06-16 20:01:00 \\\\ \\hline\n34\\tnote{+} \\tnote{*} & 2005-09-07 17:17:00 & 2005-09-07 17:17:00 \\\\ \\hline\n35\\tnote{+} \\tnote{*} & 2006-12-06 18:29:00 & 2006-12-06 18:29:00 \\\\ \\hline\n36\\tnote{+} & 2006-12-13 
02:14:00 & 2006-12-13 02:14:00 \\\\ \\hline\n37\\tnote{+} & 2006-12-14 21:07:00 & 2006-12-14 21:07:00 \\\\ \\hline\n38 & 2011-06-07 06:16:00 & 2011-06-07 06:16:00 \\\\ \\hline\n39 & 2011-08-04 03:41:00 & 2011-08-04 03:41:00 \\\\ \\hline\n40 & 2011-08-09 07:48:00 & 2011-08-09 07:48:00 \\\\ \\hline\n41 & 2012-01-23 03:38:00 & 2012-01-23 03:38:00 \\\\ \\hline\n42\\tnote{*} & 2012-01-27 17:37:00 & 2012-01-27 17:37:00 \\\\ \\hline\n43 & 2012-03-07 01:05:00 & 2012-03-07 01:05:00 \\\\ \\hline\n44 & 2012-03-13 17:12:00 & 2012-03-13 17:12:00 \\\\ \\hline\n45 & 2012-05-17 01:25:00 & 2012-05-17 01:25:00 \\\\ \\hline\n46\\tnote{*} & 2013-04-11 06:55:00 & 2013-04-11 06:55:00 \\\\ \\hline\n47 & 2013-05-22 13:08:00 & 2013-05-22 13:08:00 \\\\ \\hline\n \\end{tabular}\n \\begin{tablenotes}\n \\item[*] Gradual Events.\n \\item[+] Missing Data in P6 and P7.\n \\end{tablenotes}\n \\end{threeparttable} \n\\end{table}\n\n\n\\begin{table}\n \\caption{Non SEP Event List \\label{ns}}\n\\centering \n\\begin{threeparttable}\n\\begin{tabular}{|c|c|c|}\n\\hline\nNon SEP Event ID & X-ray Event & Class \\\\ \\hline\n1 & 1997-09-24 02:43:00 & M59 \\\\ \\hline\n2 & 1997-11-27 12:59:00 & X26 \\\\ \\hline\n3 & 1997-11-28 04:53:00 & M68 \\\\ \\hline\n4 & 1997-11-29 22:28:00 & M64 \\\\ \\hline\n5 & 1998-07-14 12:51:00 & M46 \\\\ \\hline\n6 & 1998-08-18 08:14:00 & X28 \\\\ \\hline\n7 & 1998-08-18 22:10:00 & X49 \\\\ \\hline\n8 & 1998-08-19 21:35:00 & X39 \\\\ \\hline\n9 & 1998-11-28 04:54:00 & X33 \\\\ \\hline\n10 & 1999-01-16 12:02:00 & M36 \\\\ \\hline\n11 & 1999-04-03 22:56:00 & M43 \\\\ \\hline\n12 & 1999-04-04 05:15:00 & M54 \\\\ \\hline\n13 & 1999-05-03 05:36:00 & M44 \\\\ \\hline\n14 & 1999-07-19 08:16:00 & M58 \\\\ \\hline\n15 & 1999-07-29 19:31:00 & M51 \\\\ \\hline\n16 & 1999-08-20 23:03:00 & M98 \\\\ \\hline\n17 & 1999-08-21 16:30:00 & M37 \\\\ \\hline\n18 & 1999-08-21 22:10:00 & M59 \\\\ \\hline\n19 & 1999-08-25 01:32:00 & M36 \\\\ \\hline\n20 & 1999-10-14 08:54:00 & X18 \\\\ \\hline\n21 & 1999-11-14 07:54:00 & M80 \\\\ \\hline\n22 & 1999-11-16 02:36:00 & M38 \\\\ \\hline\n23 & 1999-11-17 09:47:00 & M74 \\\\ \\hline\n24 & 1999-12-22 18:52:00 & M53 \\\\ \\hline\n25 & 2000-01-18 17:07:00 & M39 \\\\ \\hline\n26 & 2000-02-05 19:17:00 & X12 \\\\ \\hline\n27 & 2000-03-12 23:30:00 & M36 \\\\ \\hline\n28 & 2000-03-31 10:13:00 & M41 \\\\ \\hline\n29 & 2000-04-15 10:09:00 & M43 \\\\ \\hline\n30 & 2000-06-02 06:52:00 & M41 \\\\ \\hline\n31 & 2000-06-02 18:48:00 & M76 \\\\ \\hline\n32 & 2000-10-29 01:28:00 & M44 \\\\ \\hline\n33 & 2000-12-27 15:30:00 & M43 \\\\ \\hline\n34 & 2001-01-20 21:06:00 & M77 \\\\ \\hline\n35 & 2001-03-28 11:21:00 & M43 \\\\ \\hline\n36 & 2001-06-13 11:22:00 & M78 \\\\ \\hline\n37 & 2001-06-23 00:10:00 & M56 \\\\ \\hline\n38 & 2001-06-23 04:02:00 & X12 \\\\ \\hline\n39\\tnote{+} & 2004-12-30 22:02:00 & M42 \\\\ \\hline\n40\\tnote{+} & 2004-01-07 03:43:00 & M45 \\\\ \\hline\n41\\tnote{+} & 2004-09-12 00:04:00 & M48 \\\\ \\hline\n42\\tnote{+} & 2004-01-17 17:35:00 & M50 \\\\ \\hline\n43\\tnote{+} & 2005-07-27 04:33:00 & M37 \\\\ \\hline\n44\\tnote{+} & 2005-11-14 14:16:00 & M39 \\\\ \\hline\n45\\tnote{+} & 2005-08-02 18:22:00 & M42 \\\\ \\hline\n46\\tnote{+} & 2005-07-28 21:39:00 & M48 \\\\ \\hline\n47\\tnote{+} & 2006-04-27 15:22:00 & M79 \\\\ \\hline\n \\end{tabular}\n \\begin{tablenotes}\n \\item[+] Missing Data in P6 and P7.\n \\end{tablenotes}\n \\end{threeparttable} \n\\end{table}\n\n\n\n\nOnly a portion of the collected data is used to train and test our\nclassifier. 
The positive class in this study is composed of X-Ray and proton\nchannels time series that led to $>$100 MeV SEP impulsive or gradual events. On\nthe other hand, the negative class is composed of X-Ray and proton channels\ntime series that did not lead to any $>$100 MeV SEP events. In order to\nselect such events we used a number of catalogs. For the positive class events\nwe used the same catalog of SEP events $>$100 MeV in\n\\cite{nunez2011predicting} that covers the events that happened between 1997 and\n2013.\n\n\\par \n\nOur positive class is composed of the 47 X-Ray parent events of their\ncorresponding $>$100 MeV SEP events that appear in \\cite{nunez2011predicting}\nand shown in Table~\\ref{sepevents}.\nWe use the X-Ray catalog\n(\\href{https:\/\/www.ngdc.noaa.gov\/stp\/space-weather\/solar-data\/solar-features\/solar-flares\/x-rays\/goes\/xrs\/}{\\textit{https:\/\/www.ngdc.noaa.gov\/stp\/space-weather\/solar-data\/solar-features\/solar-flares\/x-rays\/goes\/xrs\/}})\nas well as the CME catalog\n(\\href{https:\/\/cdaw.gsfc.nasa.gov\/CME\\_list\/}{\\textit{https:\/\/cdaw.gsfc.nasa.gov\/CME\\_list\/}})\nfrom the SOlar Heliospheric Observatory (SOHO) to derive the parent\nevents of the $>$100 MeV SEP events. There was an exception of two SEP events\nthat happened in August and September 1998 that we believe are gradual events but\ncould not map to any CME report due to the missing data during the SOHO mission\ninterruption. This latter happened because of the major loss of altitude\nexperienced by the spacecraft due to the failure to adequately monitor the\nspacecraft status, and an erroneous decision which disabled part of the on-board\nautonomous failure detection \\cite{nunez2011predicting}.\nIt is worth to note that we consulted the NOAA-prepared\nSEP events catalog along with their parent flare\/CME events\n(\\href{ftp:\/\/ftp.swpc.noaa.gov\/pub\/indices\/SPE.txt}{\\textit{ftp:\/\/ftp.swpc.noaa.gov\/pub\/indices\/SPE.txt}}).\nFor the case of events that are missing the NOAA catalog, we made our own\nflare\/CME-SEP events mapping. Fig.~\\ref{cats} shows the three external catalogs\nthat we used to produce our own catalog from which we generate our SEP\ndataset. \nTo obtain a balanced dataset, we selected another 47 X-ray events that did not\nproduce any SEP events that is, shown in Table~\\ref{ns}. We noticed\nthat there are nine SEP events (refer Table~\\ref{sepevents}\nID:29-37) that happened during the period when only GOES-12\nwas operational as can be seen in Fig.~\\ref{fig2}. At that period, channels P6\nand P7 failed and there was no secondary GOES.\nTo make sure not to create any biased classifier that relies on the missing data to\nmake the prediction, we made sure to choose nine events from the negative class\nas well that did not produce any SEP event (see Table~\\ref{ns}\nID:39-47).\n\nIn this paper we make a clear distinction\nbetween the two different classes of SEP events: gradual and impulsive. We\nassume that an SEP event is flare accelerated, and therefore impulsive, if the lag\nbetween the flare occurrence and the SEP onset time is very small and the peak\nflux intensity has reached a global peak few minutes to an hour after the onset\ntime. On the other hand, a gradual event shows a progressive increase in the\nproton flux trend that does not reach a global peak; instead, the peak is\nmaintained steadily before dropping again progressively. 
Finally, a non-SEP\nevent happens when there is an X-ray event of minimum intensity $M3.5$ that is\nnot followed by any significant proton flux increase in one of the P6-P11\nchannels. \n\n\\section{Methodology}\n\nThis section introduces a novel approach in predicting the occurrence of $>$ 100\nMeV SEP events based on interpretable decision tree models. We considered the\nX-ray and proton channels as multivariate time series that entail some\ncorrelations which may be precursors to the occurrence of an event. While \\cite\n{nunez2011predicting} considers the correlation between the X-ray and proton\nchannels only, we extended the correlation study into all the channels,\nincluding correlations that happen across different proton channels. We\napproached the problem from a multivariate time series classification\nperspective. The classification task being whether the observed time series\nwindows will lead to an SEP event or not. There are two ways of performing a\ntime series classification. The first approach, which first appeared in\n\\cite{xi2006fast}, is to use the raw time series data and find the\nK-nearest-neighbor with a similarity measures such as Euclidean distance,\nand dynamic time warping. This approach is effective when the time series of\nthe same label shows a distinguishable similar shape pattern. In this problem,\nthe time series that we are working with are direct instruments readings that\nshow a jitter effect, which is common in electromechanical device readings\n\\cite{scargle1982studies}.\nAn example of the jitter effect is shown in P10, and P11 in\nFigure.~\\ref{table1impulsive}-b and Figure.~\\ref{table1impulsive}-c. Time series\njitter makes it hard for distance measures, including elastic measures, to\ncapture similar shape patterns.\nTherefore, we explored the second time series classification approach that\nrelies on extracting features from the raw time series before feeding it to a\nmodel. In the next subsections, we will talk about the time series data\nextraction, the feature generation and data pre-processing.\n\n\\subsection{Data Extraction}\nOur approach starts from the assumption that a $>$100 MeV impulsive event may\noccur if the parent X-ray event peak is at least $M3.5$ as was suggested\nin \\cite{nunez2011predicting}. Therefore we carefully picked the negative class\nan X-ray event whose peak intensity is at least $M3.5$ but did not lead to any\nSEP event (refer column 3 in Table~\\ref{ns}). We extracted different\nobservation windows of data that we call a span. A span is defined as the number\nof hours that constitute the observation period prior to an X-ray event. A total\nof 94 (47*2) X-ray events (shown in column 3 and column 2 of\nTable~.\\ref{sepevents} and Table~.\\ref{ns} respectively) were extracted\nwith different span windows. The span concept is illustrated in the yellow\nshaded area in Figure.~\\ref{table1impulsive}. The span window, in this case is\n10 hours and stops exactly at the start time of the X-ray event. As we\nconsidered the five minutes as the cadence between reports, a 10-hour span\nwindow represents a 120-length multivariate X-ray and proton time series.\n\n\\subsection{Feature Generation}\n\nTo express the X-ray and proton cross-channel correlations we used a Vector\nAutoregression Model (VAR) which is a stochastic process model used to capture\nthe linear interdependencies among multiple time series. VAR is the extension of\nthe univariate autoregressive model to multivariate time series. 
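To make the two steps concrete -- extracting a span window of five-minute records ahead of an X-ray onset, and turning it into VAR coefficients -- a minimal sketch using the statsmodels VAR estimator is shown here. The DataFrame layout and column names are assumptions for illustration, not necessarily the exact pipeline used in this work.
\begin{verbatim}
# Sketch: span-window extraction and VAR-coefficient features.
# Assumes `goes` is a pandas DataFrame of 5-minute averages indexed by time,
# with columns ['xs', 'xl', 'p6', 'p7', 'p8', 'p9', 'p10', 'p11'].
import pandas as pd
from statsmodels.tsa.api import VAR

PROTON_COLS = ['p6', 'p7', 'p8', 'p9', 'p10', 'p11']

def span_window(goes, xray_onset, span_hours):
    """Multivariate window of `span_hours` hours ending at the X-ray onset."""
    start = pd.Timestamp(xray_onset) - pd.Timedelta(hours=span_hours)
    return goes.loc[start:xray_onset]   # 12 records per hour at 5-min cadence

def var_coefficient_features(window, lag):
    """Fit VAR(lag) and keep only the coefficients of the proton equations."""
    results = VAR(window).fit(maxlags=lag, ic=None)
    # results.params: rows are the regressors (const, L1.xs, ..., L<lag>.p11),
    # columns are the fitted equations; only the proton equations are retained.
    return results.params[PROTON_COLS].to_numpy().ravel()
\end{verbatim}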
The VAR model\nis useful for describing the behavior of economic, financial time series and for\nforecasting \\cite{zivot2006vector}. The VAR model permits us to express each\ntime series window as a linear function of past lags (values in the past) of\nitself and of past lags of the other time series. The lag $l$ signifies the\nfactor by which we multiply a value of a time series\nto produce its previous value in time.\nTheoretically, if there exists a magnetic connection between the Sun and Earth\nthrough the Parker spiral, the X-ray fluctuation precedes its corresponding\nproton fluctuation. Therefore, we do not express the X-ray channels in terms of\nthe other time series, but, we focus on expressing the proton channels with\nrespect to the past lags of\nthemselves and with past lags of the X-ray channels (xs and xl). The VAR model\nof order one, denoted as VAR(1) in our setting can be expressed\nby Equations.(\\ref{eq1})-(\\ref{eq6}).\n \nThere is a total of eight time series that represent the proton channels. Every\nequation highlights the relationship between the dependent variable and the\nother protons and X-ray variables, which are independent variables. The higher\nthe dependence of a proton channel on an independent variable, the higher is\nthe magnitude of the coefficient $||\\phi_{dependent\\_{independent}}||$.\nWe used the coefficients of the proton equations\nas a feature vector representing a data sample. The feature vector representing\na data point using the VAR(n) model is expressed in Equation.\\ref{vec}.\n \nSince the lag parameter $l$ determines the number of coefficients involved in the\nequation, the number of features in the feature vector varies. More\nspecifically, the total number of features are 8 (independent variables) * 6 \n(dependent variables).\n\\begin{table*}\n\\begin{equation}\\label{eq1}\nP6_{t,1} = \\phi_{P6\\_{xs,1}}*P6_{t-1,1} + \\phi_{P6\\_{xl,1}}*P6_{t-1,1} +\n\\phi_{P6\\_{P6,1}}*P6_{t-1,1} + \\ldots +\n\\phi_{P6\\_{P11,1}}*P6_{t-1,1} + \\alpha_{P6_{t,1}}\n \\end{equation}\n \\begin{equation}\nP7_{t,1} = \\phi_{P7\\_{xs,1}}*P7_{t-1,1} + \\phi_{P7\\_{xl,1}}*P7_{t-1,1} +\n\\phi_{P7\\_{P6,1}}*P7_{t-1,1} + \\ldots +\n\\phi_{P7\\_{P11,1}}*P7_{t-1,1} + \\alpha_{P7_{t,1}}\n \\end{equation}\n \\begin{equation}\nP8_{t,1} = \\phi_{P8\\_{xs,1}}*P8_{t-1,1} + \\phi_{P8\\_{xl,1}}*P8_{t-1,1} +\n\\phi_{P8\\_{P6,1}}*P8_{t-1,1} + \\ldots +\n\\phi_{P8\\_{P11,1}}*P8_{t-1,1} + \\alpha_{P8_{t,1}}\n\\end{equation}\n\\centering \\vdots\n\\begin{equation}\\tag{6}\\label{eq6}\nP11_{t,1} = \\phi_{P11\\_{xs,1}}*P11_{t-1,1} + \\phi_{P11\\_{xl,1}}*P11_{t-1,1} +\n\\phi_{P11\\_{P6,1}}*P11_{t-1,1} + \\ldots +\n\\phi_{P11\\_{P11,1}}*P11_{t-1,1} + \\alpha_{P11_{t,1}}\n\\end{equation}\n\\end{table*}\n\n\\begin{align}\\tag{7}\n x &= \\begin{bmatrix}\n \\phi_{P6\\_{xs,1}} \\\\\n \\phi_{P6\\_{xl,1}} \\\\\n \\phi_{P6\\_{P6,1}} \\\\\n \\phi_{P6\\_{P7,1}} \\\\\n \\vdots \\\\\n \\phi_{P11\\_{P8,n}}\\\\ \n \\phi_{P11\\_{P9,n}}\\\\ \n \\phi_{P11\\_{P10,n}}\\\\ \n \\phi_{P11\\_{P11,n}}\\\\ \n \\end{bmatrix} \\label{vec}\n \\end{align} \n\n\\subsection{Data Preprocessing}\nBefore feeding the data to a classifier we cleaned the data from empty values\nthat appear in the generated features. To do so, we used the 3-nearest neighbors\nclass-level imputation technique. The method finds the 3 nearest neighbors\nthat have the same label of the sample with the missing feature. 
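A brief sketch of this class-conditional imputation, assuming a feature matrix X with NaN entries and a label vector y, is given below. It uses scikit-learn's KNNImputer, which fills each missing entry from the k nearest same-label samples; it is a close analogue of the procedure described here rather than its exact implementation.
\begin{verbatim}
# Sketch of class-level k-nearest-neighbour imputation: each class (SEP /
# non-SEP) is imputed separately, so missing values are filled only from
# samples carrying the same label.
import numpy as np
from sklearn.impute import KNNImputer

def impute_per_class(X, y, k=3):
    X_imputed = np.asarray(X, dtype=float).copy()
    y = np.asarray(y)
    for label in np.unique(y):
        mask = (y == label)
        imputer = KNNImputer(n_neighbors=k)  # distances use non-missing features
        X_imputed[mask] = imputer.fit_transform(X_imputed[mask])
    return X_imputed
\end{verbatim}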
Nearest\nneighbors imputation weights the samples using the mean squared difference on\nfeatures based on the other non-missing features. Then it imputes the missing\nvalue with the nearest neighbor sample. The reason why\nthe imputation is done on a class level basis is that features may behave\ndifferently across the two classes (SEP and non-SEP), therefore; it is\nimportant to impute the missing data with the same class values.\n\n\\section{Experimental Evaluation}\n\nIn this section we explain the decision tree model that we will be using as well\nas the sampling methodology. We will also provide a rationale for the choice of\nparameters ($l$ and $span$). Finally we will zoom in the best model\nwith the most promising performance levels.\n\n\n\\subsection{Decision Tree Model}\n\nA decision tree is a hierarchical tree structure used to determine classes based\non a series of rules\/questions about the attribute values of the data points\n\\cite{safavian1991survey}.\nEvery non-leaf node represents an attribute split (question) and all the leaf\nnodes represent the classification result. In short, given a set of features with\ntheir corresponding classes a decision tree produces a sequence of\nquestions that can be used to recognize the class of a data sample.\nIn this paper, the data attributes are the VAR($l$) coefficients\n$[\\phi_{p6\\_{xs, 1}}, \\phi_{p6\\_{xl, 1}}, ..., \\phi_{p6\\_{xs, l}}]$ and the\nclasses are binary: SEP and non-SEP.\n\nThe decision tree classification model first starts by finding the variable that\nmaximizes the separation between classes.\nDifferent algorithms use different metrics, also called purity measures, for\nmeasuring the feature that maximizes the split. Some splitting criteria include\nGini impurity, information gain, and variance reduction. The commonality between\nthese metrics is that they all measure the homogeneity of a given feature with\nrespect to the classes. The metric is applied to each candidate feature to\nmeasure the quality of the split, and the best feature is used. In this paper we\nused the CART decision tree algorithm, as appeared in\n\\cite{loh2011classification} and \\cite{steinberg2009cart}, with Gini and\ninformation gain as the splitting criteria.\n\n\\subsection{Parameter Choice}\n\n \\begin{figure*} \n \\centering\n \\includegraphics[width=0.8\\linewidth]{figs\/SpanAcc.pdf}\n \\caption{Decision tree accuracy with respect to the span window and the\n lag parameters using Gini and information gain splitting criteria. The\n dotted line shows a linear fit to the accuracy curve. }\n \\label{spanacc}\n\\end{figure*} \n\nOur approach relies heavily on the choice of parameters, namely, the span window\nand the VAR model lag parameter. The span is the number of observation hours\nthat precede the occurrence of an X-ray event. The latter determines the length\nof the multivariate time series to be extracted. On the other hand, the lag\n($l$) determines the size of the feature space that will be used as well as the\nlength of the dependence of the time series with each other in the past.\nAs mentioned previously, with a one-step increment of the lag parameter the size of the\nfeature space almost doubles $features\\_number$ = 8*(independent variables)*6\n(equations)*$l$+6*(equations). In order to determine the optimal parameters to\nbe used, we run a decision tree model on a set of values for both the span and\nlag parameters. More specifically, we used the range [3-30] for the span window\nand the set \\{1,3,5,7,9\\} for $l$. 
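The evaluation over this parameter grid can be sketched as follows; build_dataset is a hypothetical helper that returns the VAR-coefficient features and class labels for one (span, lag) combination, and the sketch is illustrative rather than the exact experiment code.
\begin{verbatim}
# Sketch of the (span, lag) grid evaluation with a CART decision tree and
# stratified 10-fold cross-validation. `build_dataset` is a hypothetical
# helper returning (features, labels) for one parameter combination.
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

def grid_accuracies(build_dataset, spans=range(3, 31), lags=(1, 3, 5, 7, 9)):
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = {}
    for span in spans:
        for lag in lags:
            X, y = build_dataset(span, lag)
            for criterion in ('gini', 'entropy'):   # Gini vs. information gain
                tree = DecisionTreeClassifier(criterion=criterion, random_state=0)
                folds = cross_val_score(tree, X, y, cv=cv, scoring='accuracy')
                scores[(span, lag, criterion)] = folds.mean()
    return scores
\end{verbatim}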
Since we have a balanced dataset we used a\nstratified Ten-fold cross validation as the sampling methodology. A stratified\nsampling always ensures a balance in the number of positive and negative samples for\nboth the training and testing data samples. Ten-fold cross-validation randomly\nsplits the data into 10 subsets, models are trained with nine of the\nfolds (90\\% of the dataset), and test it with one fold (10\\% of the dataset).\nEvery fold is used once for testing and nine times for training. In our\nexperiments, we report the average accuracy on the 10 folds.\n Fig.~\\ref{spanacc} illustrates the accuracy curves with respect to the span\n windows for the five lags that we considered. We reported the accuracies of the\n decision tree model using both gini and information gain splitting criteria. In\n order to better capture the model behavior with the increasing span we\n plotted a linear fit to the accuracy curves of each lag. The first observation\n that can be made is that the slopes of the linear fit for $l$=1 and $l$=3 are\n relatively small in comparison to the other lags ($l>$3). This signifies that\n the model does not show any increasing or decreasing accuracy trend with the\n increase of the span window. Therefore we conclude that $l$=1 and $l$=3 are\n too small to discover any relationship between the proton and X-ray channels.\n Having the lag parameter set to $l$=1 and $l$=3 corresponds to expressing the\n time series (independent variable) going back in time up to five minute and 15\n minutes respectively. These latter times are small, especially for $l$=1 (5\n minutes), which theoretically is not possible since the\n protons can at most reach the speed of light that corresponds to a lag of at least 8.5\n minutes. For the other lags ($l>3$) there is noticeable increase in steepness in\n the accuracy linear fit which suggests that the accuracy increases with\n the increasing span window. The second observation is that for all the $l>3$\n datasets the best accuracy was achieved in the last four span window (i.e span\n $\\in$ \\{27,28,29,30\\}). Therefore, we filtered the initial range of\n parameter values to \\{5,7,9\\} for $l$ and \\{27,28,29,30\\} for the span. In the\n next subsection we will zoom in into every classifier within the parameter\n grid.\n \n \\subsection{Learning Curves}\nTo be able to discriminate decision tree models that show similar\naccuracies we use the model learning curves, also called experience curves, to\nhave an insight in how the accuracy changes as we feed the model with more\ntraining examples.\nLearning curves are particularly useful for comparing different\nalgorithms \\cite{madhavan1997new} and choosing optimal model parameters during\nthe design \\cite{pedregosa2011scikit}. It is also a good tool for visually\ninspecting the sanity of the model in case of overtraining or undertraining.\nFigs.~\\ref{lcgini} and \\ref{lcentropy} show the learning curves of the\ndecision tree model using gini and information gain as the splitting criteria\nrespectively. The red line represents the training accuracy which evaluates the\nmodel on the newly trained data. The green line shows the testing\naccuracy which evaluates the model on the the never-seen testing data. The\nshaded area represents the standard deviation of the accuracies after running\nthe model multiple times with the same number of training data. It is noticeable\nthat the standard deviation becomes higher as the lag is increased. 
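Curves of this kind can be produced, for instance, with scikit-learn's learning_curve utility; the sketch below is an illustration of the procedure, not necessarily the exact implementation used here.
\begin{verbatim}
# Sketch: mean and standard deviation of training and cross-validation
# accuracy as a function of the number of training examples.
import numpy as np
from sklearn.model_selection import StratifiedKFold, learning_curve
from sklearn.tree import DecisionTreeClassifier

def tree_learning_curve(X, y, criterion='gini'):
    sizes, train_scores, test_scores = learning_curve(
        DecisionTreeClassifier(criterion=criterion, random_state=0), X, y,
        cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
        train_sizes=np.linspace(0.2, 1.0, 5), scoring='accuracy')
    return (sizes,
            train_scores.mean(axis=1), train_scores.std(axis=1),
            test_scores.mean(axis=1), test_scores.std(axis=1))
\end{verbatim}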
Also, it can\nbe seen that the best average accuracies, that appeared in Fig.~\\ref{spanacc}, are\nnot always the ones that have the best learning curves. For example from\nFig.~\\ref{spanacc}, the best accuracy that has been reached appears to be in\n$l$=7 and $span=27,29$; however, the learning curves corresponding to that span\nand lag show that the standard deviation is not very smooth as compared to\n$l$=5. Therefore the experiments show that using $l$=5 results in relatively\nstable models with lower variance. Therefore, we will\nzoom in $l$=5 for all the spans $\\in \\{27,28,29,30\\}$ that we previously\nfiltered.\n\\begin{figure*} \n \\centering \n \\begin{tabular}{ c c c c c }\n\n & Span 27 & Span 28 & Span 29 & Span 30 \\\\\n \n \\begin{turn}{90} \\hspace{10mm} Lag 5 \\end{turn}&\n \\includegraphics[width=0.221000001\\linewidth]{figs\/5s27_gini_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/5s28_gini_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/5s29_gini_m.png}&\n \\includegraphics[width=0.22\\linewidth]{figs\/5s30_gini_m.png}\n \\\\\n \\begin{turn}{90} \\hspace{10mm} Lag 7 \\end{turn}&\n \\includegraphics[width=0.221000001\\linewidth]{figs\/7s27_gini_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/7s28_gini_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/7s29_gini_m.png}&\n \\includegraphics[width=0.22\\linewidth]{figs\/7s30_gini_m.png}\n \\\\\n \n \\begin{turn}{90} \\hspace{10mm} Lag 9 \\end{turn}&\n \\includegraphics[width=0.221000001\\linewidth]{figs\/9s27_gini_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/9s28_gini_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/9s29_gini_m.png}&\n \\includegraphics[width=0.22\\linewidth]{figs\/9s30_gini_m.png}\n \\\\\n \\end{tabular}\n\n \t\\caption{Learning curve of CART Decision Tree Models with Gini splitting\n criterion,spans $\\in$ \\{27,28,29,30\\} and lag $\\in$ \\{5,7,9\\}\\label{lcgini}}\n \n\\end{figure*}\n\n\\begin{figure*} \n\n \\centering \n \\begin{tabular}{ c c c c c }\n\n & Span 27 & Span 28 & Span 29 & Span 30 \\\\\n \n \\begin{turn}{90} \\hspace{10mm} Lag 5 \\end{turn}&\n \\includegraphics[width=0.221000001\\linewidth]{figs\/5s27_entropy_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/5s28_entropy_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/5s29_entropy_m.png}&\n \\includegraphics[width=0.22\\linewidth]{figs\/5s30_entropy_m.png}\n \\\\\n \\begin{turn}{90} \\hspace{10mm} Lag 7 \\end{turn}&\n \\includegraphics[width=0.221000001\\linewidth]{figs\/7s27_entropy_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/7s28_entropy_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/7s29_entropy_m.png}&\n \\includegraphics[width=0.22\\linewidth]{figs\/7s30_entropy_m.png}\n \\\\\n \n \\begin{turn}{90} \\hspace{10mm} Lag 9 \\end{turn}&\n \\includegraphics[width=0.221000001\\linewidth]{figs\/9s27_entropy_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/9s28_entropy_m.png} &\n \\includegraphics[width=0.22\\linewidth]{figs\/9s29_gini_m.png}&\n \\includegraphics[width=0.22\\linewidth]{figs\/9s30_entropy_m.png}\n \\\\\n \n \\end{tabular}\n \\caption{Learning curve of CART Decision Tree Models with information gain\n splitting criterion,spans $\\in$ \\{27,28,29,30\\} and lag $\\in$ \\{5,7,9\\}\\label{lcentropy} }\n\\end{figure*}\n\n\n\\begin{table}[]\n\\centering\n\\caption{Decision Tree model evaluation for gini and information gain 
splitting\ncriteria\\label{eval}}\n\\label{my-label}\n\\begin{tabular}{l|l|l|l|l|l|l|l|l|}\n\\cline{2-9}\n & \\multicolumn{4}{c|}{Gini} & \\multicolumn{4}{c|}{Information Gain} \\\\ \\hline\n\\multicolumn{1}{|l|}{Span} & 27 & 28 & 29 & 30 & 27 & 28 & 29 & 30 \\\\ \\hline\n\\multicolumn{1}{|l|}{Accuracy} & 0.64 & 0.74 & 0.73 & \\textbf{0.74} & 0.77 &0.70 & 0.67 & \\textbf{0.78} \\\\ \\hline\n\\multicolumn{1}{|l|}{Recall} & 0.69 & 0.70 & 0.74 & \\textbf{0.70} & 0.76 &0.70 & 0.70 & \\textbf{0.73} \\\\ \\hline\n \\multicolumn{1}{|l|}{Precision} & 0.62 & 0.75 & 0.75 & \\textbf{0.76} & 0.78 & 0.72 & 0.72 & \\textbf{0.86} \\\\ \\hline\n \\multicolumn{1}{|l|}{F1} & 0.65 & 0.75 & 0.75 & \\textbf{0.74} & 0.79 &0.71 & 0.69 & \\textbf{0.82} \\\\ \\hline\n \\multicolumn{1}{|l|}{AUC} & 0.65& 0.72 & 0.74 & \\textbf{0.73} & 0.76 & 0.70 & 0.69 & \\textbf{0.77} \\\\ \\hline\n\n\\end{tabular}\n\\end{table}\n\n\n \n\n \\begin{figure*} \n \\centering\n \\includegraphics[width=0.8\\linewidth]{figs\/PCA.pdf}\n \\caption{First 3 PCA components derived from (a) all the original 254 features, (b)\n the data sub-space containing only 4 parameters selected as the most\n relevant by the Gini index (as shown in the tree presented in Fig. 9),\n and (c) another data sub-space containing 4 different parameters (with 1\n repetition) selected as the most relevant by the Entropy measure. The\n PCA-based visualizations represent (sub-)spaces of the same data set (as\n shown in the tree presented in Fig. 10), with lag=5, and span=30.}\n \n \\label{pca}\n\\end{figure*} \n\n\n \\begin{figure} \n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/GiniDT.pdf}\n \\caption{Decision Tree with Gini splitting criteria ($span$=30, $l$=5) }\n \\label{giniDT}\n\\end{figure} \n\n \\begin{figure} \n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/EntropyDT.pdf}\n \\caption{Decision Tree with information gain splitting criteria\n ($span$=30, $l$=5) }\n \\label{entropyDT}\n\\end{figure}\n\nTo determine the best behaving model we choose six evaluation metrics that will\nassess the models' performance from different aspects. Accuracy is the most\nstandard evaluation measure used to assess the quality of a classifier by\ncounting the ratio of correct classification over all the classifications.\nIn this context the accuracy measure is particularly useful because our\ntraining and testing data is balanced. The data balance ensures that if the\nclassifier is highly biased toward a given class it will be reflected on the\naccuracy measure. Recall is the second evaluation measure we considered, also\nknown as the probability of detection, which characterizes the\nability of the classifier to find all of the positive cases. Precision is\nused to evaluate the model with respect to the false alarms. In fact, precision\nis 1 - false alarm ratio. Precision and recall are usually anti-correlated;\ntherefore, a useful quantity to compute is their harmonic mean, the F1 score.\nThe last evaluation measure that we consider in the Area Under Curve (AUC) of the Receiver\nOperating Characteristic curve (ROC) curve. The intuition behind this measure is\nthat AUC equals the probability that a randomly chosen positive example ranks\nabove (is deemed to have a higher probability of being positive than) a randomly\nchosen negative example. It has been claimed in \\cite{auccp} that the AUC is\nstatistically consistent and more discriminating than accuracy.\n\n\n\nTable~.\\ref{eval} shows the aforementioned evaluations on the $l$=5 datasets. 
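For reference, the five measures reported in Table~\ref{eval} can be obtained from the true labels, the predicted labels, and the predicted scores of the positive (SEP) class as in the sketch below; scikit-learn is used here for illustration only.
\begin{verbatim}
# Sketch of the evaluation measures: accuracy, recall, precision, F1, AUC.
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, y_pred, y_score):
    return {
        'Accuracy':  accuracy_score(y_true, y_pred),
        'Recall':    recall_score(y_true, y_pred),     # probability of detection
        'Precision': precision_score(y_true, y_pred),  # 1 - false alarm ratio
        'F1':        f1_score(y_true, y_pred),         # harmonic mean of P and R
        'AUC':       roc_auc_score(y_true, y_score),   # area under the ROC curve
    }
\end{verbatim}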
It\nis noticeable that span=30 achieves the best performance levels for both\nsplitting criteria. The decision tree models corresponding to those settings\nusing gini and information gain are shown in Fig.~\\ref{giniDT} and\nFig.~\\ref{entropyDT} respectively. For the purpose of visualization we used PCA\ndimensionality reduction technique to plot the full feature space with the 254 dimensions of\nthe lag 5 and span 30 in Fig.~\\ref{pca}-a, as well as the reduced feature space\nwith only the selected features from the gini measure in Fig.~\\ref{pca}-b and\nentropy measure in Fig.~\\ref{pca}-c \\cite{PCA}.\nIt is clearly visible that the SEP and non-SEP classes are almost\nindistinguishable when all the dimensions are used. When the decision tree\nfeature selection is applied, the data points become more scattered in space and\ntherefore easier for the classifier to distinguish. We also note that both\ndecision tree classifiers have as a root a proton x-ray correlation parameter\n($P6\\_xl\\_l2$). Some of the intermediate and leaf nodes have features that show\ncorrelations between proton channels is their conditions. This suggests that\ncross-correlations in proton channels are equally important to X-ray and proton\nchannels correlations that appeared in \\cite{nunez2011predicting}. Our best\nmodel shows a descent accuracy that is comparable (3\\% better) to the UMASEP\nsystem that uses the same catalog. We also made sure that our model is not\nbiased towards the missing data of the lower energy channels P6 and P7 of\nGOES-12 by choosing the same number of samples of positive and negative class\nthat happened during the GOES-12 coverage period.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\nIn this paper we designed a new model to predict $>$100 MeV SEP events based on\nGOES satellite X-ray and proton data. This is the first effort that explores\nnot only the dependencies between the X-ray and proton channels but also the\nauto-correlations and cross-correlations within the proton channels. We have\nfound that proton channel cross-correlations based on a lag time (prior point\nin time) can be an important precursor as to whether an SEP event may happen or\nnot. In particular, we started finding patterns starting from lag 5 and our best\nmodels shows both that the correlation between proton channel $P6$ and X-ray\nchannel $xl$ is an important precursor to SEP events.\nBecause of the missing data due to the failure of the P6 and P7 proton\nchannels onboard GOES-12 we made sure that our dataset uses the same number of\npositive and negative examples coming from GOES-12. To our knowledge this is\nthe first study that explores proton cross-channels correlations in order to\npredict SEP events. As a future extension of this study we are interested in\ndoing ternary classification by further splitting the SEP class into impulsive\nand gradual. We are also interested in real-time SEP event predictions for\npractical applications of this research.\n\n\n\n\n\\section*{Acknowledgment}\nWe thank all those involved with the GOES missions as well as the SOHO\nmission.\nWe also acknowledge all the efforts of NOAA in making the catalogs and X-ray\nreports available.\nThis work was supported in part by two NASA Grant Awards (No.\nNNX11AM13A, and No. NNX15AF39G), and one NSF Grant (No. AC1443061). 
The\nNSF Grant has been supported by funding from the Division of Advanced\nCyberinfrastructure within the Directorate for Computer and Information Science\nand Engineering, the Division of Astronomical Sciences within the Directorate\nfor Mathematical and Physical Sciences, and the Division of Atmospheric and\nGeospace Sciences within the Directorate for Geosciences.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nFor studies of the hadronic final state in high-energy collisions, \nversatile programs for the calculation of QCD corrections are required.\nThe extraction of scale-dependent \nphysical quantities such as the running strong coupling\nconstant $\\alpha_s\\left(Q^2\\right)$ and parton densities\n$f_i\\left(\\xi,Q^2\\right)$ requires precise predictions \nin next-to-leading order of QCD perturbation theory.\nAt the electron--proton collider HERA at DESY in Hamburg, \nthe strong coupling constant has been measured via jet rates \n\\cite{1,2}. There is also a direct fit of the gluon density \n$f_g(\\xi, Q^2)$ \\cite{3} based on a Mellin transform\nmethod \\cite{4,5}. Calculations for jet cross sections \nin deeply inelastic scattering for the case of the modified JADE \nscheme\nhave been performed \\cite{6,7,8,9,10}\\footnote{\nIn these calculations, terms of the form $c\\log c$, \n$c$ being the jet cut, have been neglected. This implies in particular\na certain insensitivity with respect to the jet recombination scheme.\nThe set-up of the calculations \\cite{6,7,9} is such that\na jet consisting of two partons is always mapped onto a massless jet. \nTherefore the jet definition scheme which is used on experimental data\nshould be a ``massless'' scheme (this excludes, for example, the E-scheme).\nThe variation of jet cross sections within the possible massless schemes\ncannot be modelled by that calculation.}\nand implemented in the \ntwo programs {\\tt PROJET} \\cite{11} and {\\tt DISJET} \\cite{12}.\n\nIn the meantime,\ncalculations for arbitrary infrared-safe observables in \ndeeply inelastic scattering have become available \\cite{13,14}.\nIn the last few years, the technology for the calculation\nof QCD corrections in next-to-leading order\nhas developed considerably. \nThe main problem in higher-order QCD calculations is the occurrence of\nsevere\ninfrared singularities (they ultimately cancel for infrared-safe\nobservables, or are absorbed into process-independent, physical\ndistribution functions such as parton densities and fragmentation functions).\nThere are explicit algorithms available\nwhich permit the calculation to be done in a ``universal'' way: the \ninfrared singularities are subtracted such that arbitrary \ninfrared-safe observables can be calculated numerically. In principle, \nall existing algorithms are variations on a common theme, namely the\ninterplay of the factorization theorems of perturbative QCD and the \ninfrared-safety of the observables under consideration.\nThere are two different ways to achieve the separation of\ndivergent and finite contributions:\nthe phase-space-slicing method \\cite{15} and\nthe subtraction method \\cite{16}. \nBoth methods have their merits and drawbacks.\n\\begin{description}\n\\item[{\\rm\\unboldmath ($\\alpha$)}] \nThe phase-space-slicing method relies on a separation of\nsingular phase-space regions from non-singular ones by means of a \nsmall slicing parameter $s\\rightarrow 0$. 
The divergent parts are evaluated \nunder the assumption that terms of ${\\cal O}(s (\\log s)^n)$ can be dropped.\nThe analytically evaluated phase-space integrals yield terms of the form\n$(\\log s)^m$, which explicitly cancel against equivalent terms of opposite\nsign from a numerically performed phase-space integration.\nThe simplicity of this scheme is obvious.\nThe main problem is the residual dependence on the technical cut \nparameter~$s$\n(in practice the limit $s\\rightarrow 0$ is not checked for every observable, \nbut it is assumed that a fixed small value will be sufficient).\nMoreover, the numerical cancellation of the logarithmic terms by means\nof a Monte-Carlo integration is delicate.\nThere is a calculational scheme available for the determination of \nthe explicit phase space integrals over the singular regions \\cite{17}. \nFor initial and final-state hadrons this scheme moreover \nrequires the introduction\nof so-called {\\it crossing functions} \\cite{18}, \nto be evaluated for every parton density parametrization.\nFor deeply-inelastic lepton--nucleon scattering, an implementation \nof this calculational scheme is provided by Mirkes and Zeppenfeld\nin {\\tt MEPJET} \\cite{19}.\n\\item[{\\rm\\unboldmath($\\beta$)}] \nThe subtraction method is technically more involved, \nsince the infrared singularities are cancelled point-by-point in \nphase space. The subtraction terms have, owing to the factorization\ntheorems of perturbative QCD, a simple form. The problem is \nto arrange the subtractions in such a way that in the numerical evaluation\nno spurious singularities appear. A general framework, using a specific\nphase space mapping besides the factorization theorems, is given by Catani\nand Seymour in Ref.~\\cite{20}, and implemented in {\\tt DISENT} \n\\cite{21}.\n\nThe approach of the present \npaper is to use a generalized partial fractions\nformula to separate the singularities \\cite{22}. The method is\nbriefly explained in Section~\\ref{algorithm}. We will describe in some\ndetail the implementation {\\tt DISASTER++}\\footnote{\nThis is an acronym for ``Deeply Inelastic Scattering: All Subtraction Through\nEvaluated Residues''.}\nin the form of a {\\tt C++} class library.\n\nThere are two reasons for a new calculation.\n(a) The existing\nprograms have the restriction that the number of flavours is fixed \n($N_f=5$ in the case of {\\tt MEPJET}\nand $N_f$ fixed, but arbitrary for {\\tt DISENT}).\nFor studies of the scale-dependence it is\nnecessary to\nhave a variable number of flavours,\nin order to be consistent with the scale evolution\nof the strong coupling constant and the parton densities.\n{\\tt DISASTER++} makes the $N_f$ dependence explicit in the ``user routine''\non an event-by-event basis,\nand thus results for several renormalization and factorization scales\ncan be calculated simultaneously.\n(b) {\\tt DISASTER++}\nis already set up such that the extension to one-particle-inclusive \nprocesses will be possible without the necessity of re-coding\nthe contributions which are already present for\nthe jet-type observables. This option will be made available\nin future versions of the program, as soon as the remaining contributions\nfor one-particle-inclusive processes are implemented.\n\n\\end{description}\n\nThe outline of this paper is as follows. In Section~\\ref{algorithm}\nwe briefly review the algorithm employed in the present calculation.\nIn Section~\\ref{structure} the {\\tt FORTRAN} interface\nto the {\\tt C++} class library is described. 
\nSome remarks concerning the installation of the package are made\nin Section~\\ref{installation}.\nA comparison of the available universal programs \n{\\tt DISASTER++} (Version 1.0), {\\tt MEPJET} (Version 2.0) \nand {\\tt DISENT} (Version 0.1)\nis presented in \nSection~\\ref{comparison}.\nIn a previous version of this paper, we have drawn \nthe conclusion that we find an overall, but not completely satisfactory\nagreement of {\\tt DISASTER++} and {\\tt MEPJET}, and \nthat there are large deviations when comparing\n{\\tt DISASTER++} and {\\tt DISENT}.\nOne of the purposes of this paper is to present the results of a comparison\nof {\\tt DISASTER++} and a new, corrected version (0.1) \nof {\\tt DISENT}. We now find\ngood agreement of the two programs.\nWe also give a few more results for {\\tt MEPJET}, in particular\nfor the dependence on the technical cut~$s$. It turns out that\neven for very small values of~$s$ \nwe do not achieve agreement with the\n{\\tt DISASTER++}~\/ {\\tt DISENT} results\nfor several cases under\nconsideration\\footnote{\nIn a very recent paper \\cite{23}, E.~Mirkes quotes the results of the\ncomparison of the three programs as performed in the \nprevious version of this paper \\cite{24} as resulting in \na ``so far satisfactory agreement''. This is a \nmisquotation. The formulation in Ref.~\\cite{24} was that \nfor {\\tt MEPJET} and {\\tt DISASTER++} we find an ``overall, though not \ncompletely satisfactory agreement'', and that the results of {\\tt DISENT}\n(Version 0.0) ``differ considerably''. Moreover, in the summary\nof Ref.~\\cite{24} we mention that a few deviations of {\\tt MEPJET} and\n{\\tt DISASTER++} are present. We wish to stress that there is a certain \nsemantic\ngap between the expression \n``satisfactory agreement'' and the results just quoted.\n}.\nThe paper closes with a summary.\nThe contents of this paper are mainly technical. The details of the calculation\nand phenomenological applications will be described in a forthcoming \npublication.\n\n\\section{The Algorithm}\n\\label{algorithm}\nThe calculation is based on the subtraction method. A simple example\nto illustrate this method in general, and a comparison \nwith the phase-space-slicing\nmethod, is given in Ref.~\\cite{25}.\nFor a more detailed exposition of the contents of this section, \nsee Ref.~\\cite{22}.\n\nThe subtraction method is one of the solutions for the problem of \nhow to \ncalculate \nnumerically \ninfrared-safe observables without having \nto modify the calculation for every observable under consideration.\nIn QCD calculations, infrared singularities cancel for sufficiently \ninclusive observables. 
\nThe factorization theorems of perturbative\nQCD (see Ref.~\\cite{26} and references therein) \ntogether with the infrared-safety of the observable under consideration\nguarantee that \nthe structure of the limit of the convolution of the parton-level cross \nsection with the observable in soft and collinear regions of phase space\nis well-defined and factorizes in the form of a product of a kernel \nand the Born term.\nThis property allows, for the real corrections, the definition of a subtraction \nterm for every phase-space point.\nFormally:\n\\begin{eqnarray}\n\\int\\mbox{dPS}^{(n)}\\,\\sigma\\,{\\cal O}\n&=& \\sum_A \\int \\mbox{dPS}_{i_A} \\,k_A \\left(\n\\int \\mbox{dPS}^{(n-1)} \\tau_A -\n \\left[\n \\int \\mbox{dPS}^{(n-1)} \\tau_A\n \\right]_{\\mbox{\\scriptsize soft\/coll.~limit}}\n\\right)\\nonumber\\\\\n&+& \\sum_A \\int \\mbox{dPS}_{i_A} \\,k_A \\left[\n \\int \\mbox{dPS}^{(n-1)} \\tau_A\n \\right]_{\\mbox{\\scriptsize soft\/coll.~limit}},\n\\end{eqnarray}\nwhere $\\sigma$ is the parton-level cross section, $\\cal O$ is the\ninfrared-safe observable, $k_A$ is a singular kernel, \nand $\\tau_A$ is the non-singular part of the product $\\sigma\\,\\cal O$.\nThe index~$A$ runs over all possible soft, collinear and \nsimultaneously soft and collinear singularities of~$\\sigma$.\nThe first integral is finite and can be calculated numerically. The second\nintegral contains all infrared singularities. The term in the square bracket\nhas a simple structure\nbecause of the factorization theorems of QCD, and the one-particle\nintegral over the kernel $k_A$ and the factorization contribution from the\nterm in the square brackets can be performed easily.\nThis subtraction formula works only if the subtraction terms do not\nintroduce spurious singularities for the individual terms that eventually\ncancel in the sum. This is achieved by a separation of all singularities\nby means of a general partial fractions formula\n\\begin{equation}\n\\label{pfid}\n\\frac{1}{x_1\\,x_2\\cdots x_n}\n=\\sum_{\\sigma\\in S_n}\n\\frac{1}{x_{\\sigma_1}\\,(x_{\\sigma_1}+x_{\\sigma_2})\\cdots\n (x_{\\sigma_1}+\\ldots+x_{\\sigma_n})},\n\\end{equation}\nwhere the sum runs over all $n!$ permutations of $n$~objects.\n\nIn {\\tt DISASTER++}, the processes for (1+1) and (2+1)-jet production \nfor one-photon exchange are implemented. The program itself, however, \nis set up in a much more general way. The implemented subtraction procedure \ncan handle arbitrary number of final-state partons, and zero or one incoming \npartons (an extension to two incoming partons is possible). The {\\tt C++}\nclass library is intended to provide a very general framework for \nnext-to-leading-order QCD calculations for arbitrary \ninfrared-safe observables. Of course, the explicit matrix \nelements (Born terms, virtual corrections and factorized real corrections)\nhave to be provided for every additional process to be included.\n\n\\section{Program Structure}\n\\label{structure}\nWe now describe the {\\tt FORTRAN} interface to the {\\tt C++}\nclass library. The {\\tt C++} user interface will be documented in a \nforthcoming extension of this manual.\n\nTo set the stage, let us first introduce some terminology.\nThe user has to provide several subroutines which are called by \n{\\tt DISASTER++} for every generated event. Each {\\bf event} \n$e_n$, $n=1\\ldots N$\nconsists of a set of\n{\\bf phase spaces} ${\\cal P}_{nr}$, $r=1\\ldots R_n$, \nand a set of {\\bf contributions} ${\\cal C}_{ni}$, \n$i=1\\ldots L_n$. 
Phase spaces $\\cal P$\nprovide a set of four-vectors of initial and final-state\nparticles, which are used to calculate observables\n${\\cal O}({\\cal P})$.\nContributions ${\\cal C}_{ni}$ consist of a list of \n{\\bf weights} $w_{nij}$, $j=1\\ldots K_{ni}$ (here: \n$K_{ni}=11$) which have to be multiplied\nby certain {\\bf flavour factors} $F_{nij}$. \nEvery contribution ${\\cal C}_{ni}$ has an associated\nphase space ${\\cal P}_{nr_{ni}}$; \nit is generally the case that particular phase spaces are \nused for different contributions. Flavour factors are products\nof parton densities, quark charges, powers of the strong coupling constant, \nand powers of the electromagnetic coupling constant.\n\nThe expectation value $\\langle {\\cal O} \\rangle$ \nof a particular observable is given by the following \nsum:\n\\begin{equation}\n\\label{exval}\n\\langle {\\cal O} \\rangle = \n \\sum_{n=1}^N\n \\sum_{i=1}^{L_n}\n {\\cal O}({\\cal P}_{nr_{ni}}) \n \\sum_{j=1}^{K_{ni}}\n w_{nij} F_{nij}.\n\\end{equation}\nThe first sum is the main loop of the Monte Carlo integration.\n\n\\noindent\nThe user has to provide a subroutine\n{\\tt user1} and\na function \n{\\tt user2}.\nThe subroutine\n{\\tt user1(iaction)} is called from {\\tt DISASTER++} in the following cases:\n\\begin{description}\n\\item{\\quad{\\tt iaction=1}:} {\\ }\\\\after start-up of {\\tt DISASTER++}\n\\item{\\quad{\\tt iaction=2}:} {\\ }\\\\before the end of {\\tt DISASTER++}\n\\item{\\quad{\\tt iaction=3}:} {\\ }\\\\before the start of the grid-definition \nrun of the adaptive Monte-Carlo routine, or before the final run of\nthe adaptive integration, in case that there is no grid-definition run\n\\item{\\quad{\\tt iaction=4}:} {\\ }\\\\before the start of the final\nrun of the adaptive Monte-Carlo routine\n\\item{\\quad{\\tt iaction=5}:} {\\ }\\\\after the final\nrun of the adaptive Monte-Carlo routine\n\\item{\\quad{\\tt iaction=6}:} {\\ }\\\\once for every event (to initialize intermediate \nweight sums, etc.)\n\\item{\\quad{\\tt iaction=7}:} {\\ }\\\\signals that the event has to be dropped\nfor technical reasons\n\\end{description}\n\n\\noindent\nThe function {\\tt user2(...)} is called from {\\tt DISASTER++}\nafter an event has been constructed.\nIt has the following arguments (in an obvious\nnotation):\n\\begin{verbatim}\n double precision function\n & user2(\n & integer nr, \n & integer nl,\n & double precision fvect(0..3, -10..10, 1..30),\n & integer npartons(1..30),\n & double precision xb(1..30),\n & double precision q2(1..30),\n & double precision xi(1..30),\n & double precision weight(1..11, 1..50),\n & integer irps(1..50),\n & integer ialphas(1..50),\n & integer ialphaem(1..50),\n & integer lognf(1..50)\n & )\n\\end{verbatim}\n\nHere {\\tt nr} stands for $R_n$, {\\tt nl} stands for $L_n$, \n{\\tt fvect(mu, iparticle, ir)} is the {\\tt mu}$^{\\mbox{th}}$ component\nof the four-vector of the particle \nwith label {\\tt iparticle} ({\\tt mu}=0 corresponds to the energy component)\nin units of [GeV]\nin the Breit frame \nfor the phase space {\\tt ir};\n{\\tt npartons(ir)} is the number of final-state partons,\n{\\tt q2(ir)} is the value of $Q^2$, and {\\tt xi(ir)} is the momentum fraction\nof the incident parton.\nThe particle labels {\\tt iparticle} are given by\n\\begin{description}\n\\item[\\quad{\\tt iparticle=-8:}] proton remnant\n\\item[\\quad{\\tt iparticle=-7:}] incident proton\n\\item[\\quad{\\tt iparticle=-5:}] outgoing electron\n\\item[\\quad{\\tt iparticle=-4:}] incident electron\n\\item[\\quad{\\tt iparticle=-1:}] 
incident parton\n\\item[\\quad{\\tt iparticle=0..(npartons-1):}] outgoing partons\n\\end{description}\n\nThe array {\\tt weight(j, i)} is a list of the weights for contribution\n{\\tt i} in units of [pb], \n{\\tt irps(i)} gives the index of the phase space for this particular \ncontribution,\n{\\tt ialphas(i)} and {\\tt ialphaem(i)} are the powers of the strong \nand electromagnetic coupling constant, respectively, and {\\tt lognf(i)}\nis an index that specifies whether the weights have to be multiplied \nby a factor $\\lambda$ consisting of a product of\na logarithm of a scale and\/or a factor of $N_f$:\n\\begin{description}\n\\item[\\quad{\\tt lognf=0}:] $\\lambda=1$\n\\item[\\quad{\\tt lognf=1}:] $\\lambda=\\ln\\left(\\mu_r^2\/Q^2\\right)$\n\\item[\\quad{\\tt lognf=2}:] $\\lambda=N_f \\ln\\left(\\mu_r^2\/Q^2\\right)$\n\\item[\\quad{\\tt lognf=3}:] $\\lambda=\\ln\\left(\\mu_f^2\/Q^2\\right)$\n\\item[\\quad{\\tt lognf=4}:] $\\lambda=N_f \\ln\\left(\\mu_f^2\/Q^2\\right)$\n\\end{description}\nHere $\\mu_r$ and $\\mu_f$ are the renormalization and factorization scales, \nrespectively.\nThe total flavour factor for contribution $i$ is given by \n\\begin{equation}\nF_{nij} = \n \\lambda \\, \n \\alpha_s^{\\mbox{\\tt ialphas($i$)}} \\,\n \\alpha^{\\mbox{\\tt ialphaem($i$)}} \\,\n \\rho_{ij},\n\\end{equation}\nwhere \nthe quantity $\\rho_{ij}$ is a product of squares of quark charges $Q_\\alpha$ \nin units of $e$ and parton densities.\nIn particular:\n\\begin{description}\n\\item \\quad$\\rho_{i1}\n = \\sum\\limits_{\\alpha=1}^{N_f} Q_\\alpha^2 \\, f_\\alpha$\n\\item \\quad$\\rho_{i2} \n = \\sum\\limits_{\\alpha=1}^{N_f} Q_\\alpha^2 \\, \n f_{\\overline{\\alpha}}$\n\\item \\quad$\\rho_{i3} \n = \\sum\\limits_{\\alpha=1}^{N_f} Q_\\alpha^2 \\, f_g$\n\\item \\quad$\\rho_{i4} = \\rho_{i1}$\n\\item \\quad$\\rho_{i5} = \\rho_{i2}$\n\\item \\quad$\\rho_{i6} = \\rho_{i1}\\,(N_f-1)$\n\\item \\quad$\\rho_{i7} = \\rho_{i2}\\,(N_f-1)$\n\\item \\quad$\\rho_{i8}\n = \\sum\\limits_{\\alpha=1}^{N_f} f_\\alpha \\,\n \\sum\\limits_{\\beta=1,\\, \\beta \\neq \\alpha}^{N_f} Q_\\beta^2$\n\\item \\quad$\\rho_{i9}\n = \\sum\\limits_{\\alpha=1}^{N_f} f_{\\overline{\\alpha}} \\,\n \\sum\\limits_{\\beta=1,\\, \\beta \\neq \\alpha}^{N_f} Q_\\beta^2$\n\\item \\quad$\\rho_{i10}\n = \\sum\\limits_{\\alpha=1}^{N_f} f_\\alpha Q_\\alpha \\,\n \\sum\\limits_{\\beta=1,\\, \\beta \\neq \\alpha}^{N_f} Q_\\beta$\n\\item \\quad$\\rho_{i11}\n = \\sum\\limits_{\\alpha=1}^{N_f} f_{\\overline{\\alpha}} Q_\\alpha \\,\n \\sum\\limits_{\\beta=1,\\, \\beta \\neq \\alpha}^{N_f} Q_\\beta$\n\\end{description}\nThe $f_\\alpha$ are parton densities evaluated for \nmomentum fractions $\\mbox{\\tt xi(irps($i$))}$ and factorization scale\n$\\mu_f$,\nand $f_{\\overline{\\alpha}}$ stands for the parton density of the anti-flavour\nof the flavour $\\alpha$. Both the renormalization scheme and the factorization\nscheme are $\\overline{\\mbox{MS}}$. The correction terms for the \nDIS factorization \nscheme will be implemented in the near future.\n\nWe wish to note that the product of the weights, the flavour factors and the \nvalues of the observable is normalized in such a way that\nthe sum yields the expectation value in units of [pb]. 
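As an illustration (with the quantities defined above), a contribution with\n{\\tt ialphas($i$)}=2, {\\tt ialphaem($i$)}=2 and {\\tt lognf($i$)}=1 enters the sum of\nEq.~\\ref{exval} with the total flavour factor\n\\begin{equation}\nF_{nij} = \\ln\\left(\\mu_r^2\/Q^2\\right) \\, \\alpha_s^2 \\, \\alpha^2 \\, \\rho_{ij},\n\\end{equation}\nso that for the scale choice $\\mu_r=Q$ this particular contribution vanishes.\n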
No additional \nfactor such as $1\/N$, $N$ being the total number of generated events,\nhas to be applied\nin Eq.~\\ref{exval}.\n\nSince phase spaces are used several times for different contributions,\nit is a good strategy to first evaluate the observable(s) under consideration\nfor every phase space and to store the corresponding results.\nThe next loop should then run over the various contributions (the second sum).\nThe innermost loop is the one over the flavour factors.\n\nThe Monte Carlo integration itself employs the program {\\tt VEGAS}\n\\cite{27,28}. \n{\\tt VEGAS} is an adaptive multi-dimensional integration routine.\nIntegrations proceed in two steps. \nThe first step is an adaptation step in order\nto set up a grid in the integration variables\nwhich then steers the final integration step.\nThe adaptation step itself refines\nthe grid in a sequence of several iterations.\n{\\tt VEGAS} requires, as parameters, the number of Monte Carlo points \nto be used in the first and second step, respectively, \nand the number of iterations to refine the grid. \nIn the framework of {\\tt DISASTER++}, {\\tt VEGAS} can be used in three different\nways: \n\\begin{itemize}\n\\item As an adaptive integration routine.\nThe routine {\\tt user2} should return a value. This value is handed over\nto {\\tt VEGAS} as \nthe value of the integrand at the particular phase space point, \nand summed up. The final integral quoted by {\\tt VEGAS}\nis the sum of these values\nfor the final integration.\nThis is the best choice if just one observable, \nfor example a jet cross section, is to be evaluated.\n\\item As a routine that merely supplies random numbers for \nthe events.\nIf the number of iterations is set to zero, then {\\tt VEGAS} just performs\nthe final integration run. The user is then responsible for the correct\nsummation of the weights, and for the determination of the \nstatistical error. It should be noted that, since all weights are\navailable individually in the user routine, an arbitrary number of \nobservables can be evaluated in a single run. In particular, since the\ndependence on the renormalization and factorization scales and on $N_f$\nis fully explicit, the study of the scale dependence of observables\ncan be done in a very convenient way. For example, all plots from \nRef.~\\cite{22} \nhave been obtained in a single run of {\\tt DISASTER++}.\n\\item As a combination of the two preceding alternatives. Here the adaptation \nsteps are included. A ``typical'' infrared-safe observable, \nin the following called the {\\it adaptation variable}, is evaluated, and\nits value is returned to {\\tt VEGAS}. This observable serves to optimize the\ndistribution of points over phase space. A convenient observable of this\nkind is provided by {\\tt DISASTER++} (see below).\nThe ``real'' observables under consideration are evaluated as in the \nsecond alternative in the final integration step.\n\\end{itemize}\n\n\\noindent\n{\\tt DISASTER++} is initialized by a call of \nthe subroutine {\\tt disaster\\_ca()}. 
It is recommended \nto end a {\\tt DISASTER++}\nrun by a call of the subroutine \n{\\tt disaster\\_cb()} in order to free\ndynamically allocated memory.\n\n\\noindent\nParameters can be set or commands be executed by means of three routines:\n\\begin{description}\n\\item {\\quad\\tt disaster\\_ci(str, i)} {\\ }\\\\ \n sets the integer parameter denoted by \n the character string {\\tt str} to the value {\\tt i}\n\\item {\\quad\\tt disaster\\_cd(str, d)} {\\ }\\\\\n sets the double precision parameter denoted by \n the character string {\\tt str} to the value {\\tt d}\n\\item {\\quad\\tt disaster\\_cc(str)} {\\ }\\\\ executes the command \ngiven by the character string {\\tt str}\n\\end{description}\nThe following parameters are available (there are a few more to optimize the\ngeneration of the phase space points; they will be documented in forthcoming\nversions of this manual):\n\\begin{description}\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt ECM:}}]{\\ }\\\\\n the centre-of-mass energy in [GeV]\n\n\\item[\\quad{\\tt LEPTON\\_INTEGRATION}:]{\\ }\\\\\n {\\tt 1:} integration over $x_B$ and $y$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt XBMIN:}}]{\\ }\\\\\n minimum value of $x_B$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt XBMAX:}}]{\\ }\\\\\n maximum value of $x_B$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt YMIN:}}]{\\ }\\\\\n minimum value of $y$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt YMAX:}}]{\\ }\\\\\n maximum value of $y$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt QMIN:}}]{\\ }\\\\\n minimum value of $Q$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt QMAX:}}]{\\ }\\\\\n maximum value of $Q$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt WMIN:}}]{\\ }\\\\\n minimum value of $W$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt WMAX:}}]{\\ }\\\\\n maximum value of $W$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt PROCESS\\_INDEX:}}]{\\ }\\\\\n {\\tt 1:} leading order\\\\\n {\\tt 2:} next-to-leading order\n \n\\item[\\quad{\\tt NUMBER\\_OF\\_FINAL\\_STATE\\_PARTONS\\_IN\\_BORN\\_TERM:}]{\\ }\\\\\n {\\tt 1}, {\\tt 2}, {\\tt 3} for the process under consideration;\\\\\n {\\tt 1:} (1+1)-jet-type observables (leading and next-to-leading order)\\\\\n {\\tt 2:} (2+1)-jet-type observables (leading and next-to-leading order)\\\\\n {\\tt 3:} (3+1)-jet-type observables (leading order only)\n \n\\item[\\quad{\\makebox[3cm][l]{\\tt POINTS:}}]{\\ }\\\\\n {\\tt POINTS * (FACT\\_PREP + FACT\\_FINAL)} is the \n number of generated points in the Monte Carlo integration\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt FACT\\_PREP:}}]{\\ }\\\\\n the number of points for the grid-definition run is given by\n {\\tt FACT\\_PREP * POINTS}\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt FACT\\_FINAL:}}]{\\ }\\\\\n the number of points for the final integration step is given by\n {\\tt FACT\\_FINAL * POINTS}\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt RUN\\_MC:}}]{\\ }\\\\\n to start the Monte-Carlo integration\n\n\\end{description}\n\n\\noindent\nA convenient adaptation observable can be evaluated by a call of\nthe following function:\n\\begin{verbatim}\n double precision function disaster_cao(\n & integer ipdf_collection,\n & integer ipdf_parametrization,\n & integer ipdf_set,\n & integer ialphas_variant,\n & integer ialphas_order,\n & double precision dalphas_lambdaqcd4,\n & integer ialphaem_variant\n & )\n\\end{verbatim}\nThe arguments of the function call are:\n\\begin{description}\n\n\\item[\\quad{\\tt ipdf\\_collection:}]{\\ }\\\\\nthe collection of parton densities; \\\\\n{\\tt 1:} {\\tt PDFLIB} \\cite{29}\n\n\\item[\\quad{\\tt ipdf\\_parametrization:}]{\\ 
}\\\\\nparametrization of parton densities (cf.\\ {\\tt PDFLIB})\n\n\\item[\\quad{\\tt ipdf\\_set:}]{\\ }\\\\\nset of parton densities (cf.\\ {\\tt PDFLIB})\n\n\\item[\\quad{\\tt ialphas\\_variant:}]{\\ }\\\\\nfunction which is used to evaluate the strong coupling constant;\\\\\n{\\tt 1:} running coupling $\\alpha_s(Q^2)$ with \nflavour thresholds at the single heavy quark masses\n\n\\item[\\quad{\\tt ialphas\\_order:}]{\\ }\\\\\n{\\tt 1:} one-loop formula\\\\\n{\\tt 2:} two-loop formula\\\\\nfor the running strong\ncoupling constant\n\n\\item[\\quad{\\tt dalphas\\_lambdaqcd4:}]{\\ }\\\\\nthe QCD parameter $\\Lambda_{\\mbox{\\scriptsize QCD}}^{(4)}$\nfor four flavours\n\n\\item[\\quad{\\tt ialphaem\\_variant:}]{\\ }\\\\\nfunction which is used to evaluate the electromagnetic coupling constant;\\\\\n{\\tt 1:} fine structure constant \\\\\n{\\tt 2:} 1\/137\\\\\n(an implementation of the running electromagnetic \ncoupling constant is in preparation)\n\n\\end{description}\n\n\\noindent\nTo simplify the calculation of the flavour factors, \na {\\tt DISASTER++} routine can be called which returns the\nrequired coupling constants and the combinations of parton densities\nand quark charges:\n\\begin{verbatim}\n subroutine disaster_cff(\n & integer ipdf_collection,\n & integer ipdf_parametrization,\n & integer ipdf_set,\n & integer ialphas_variant,\n & integer ialphas_order,\n & double precision dalphas_lambdaqcd4,\n & integer ialphaem_variant,\n & integer nf,\n & double precision ffactin(4),\n & double precision ffactout(13)\n & )\n\\end{verbatim}\nThe arguments of the function call are the same as in the case of the\nroutine {\\tt disaster\\_cao} (see above), except for the following:\n\\begin{description}\n\n\\item[\\quad{\\tt nf:}]{\\ }\\\\\nthe number of flavours $N_f$\n\n\\item[\\quad{\\tt ffactin:}]{\\ }\\\\\ninput parameters;\n\\begin{description}\n\\item {\\tt ffactin(1):} the momentum fraction variable $\\xi$\n\\item {\\tt ffactin(2):} the factorization scale in [GeV] \n(i.e.\\ the scale argument of the parton densities)\n\\item {\\tt ffactin(3):} the renormalization scale in [GeV] \n(i.e.\\ the scale argument of the running strong coupling constant)\n\\item {\\tt ffactin(4):} the scale argument of the running electromagnetic\ncoupling constant\n\\end{description}\n\n\\item[\\quad{\\tt ffactout:}]{\\ }\\\\\noutput parameters;\n\\begin{description}\n\\item {\\tt ffactout(1..11):} the quantities $\\rho_{i1}$ \\ldots\n$\\rho_{i11}$,\n\\item {\\tt ffactout(12):} the running strong coupling constant\n\\item {\\tt ffactout(13):} the electromagnetic\ncoupling constant\n\\end{description}\n\n\\end{description}\n\nIt is strongly recommended to use this routine, since it uses \na cache that stores a few of the most recent values temporarily, such that\nthe sums $\\rho_{ij}$ and the parton densities do not have to be reevaluated.\nThis routine is supplied for the convenience of the user. The weights\nand events generated by {\\tt DISASTER++} do not depend on this routine.\n\nThe description of the program structure just given may sound\ncomplicated. 
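To give a first impression of how the pieces fit together, a purely\nschematic sketch of a {\\tt user2} routine for the adaptive mode is shown\nbelow; the observable routine {\\tt myobs}, the PDFLIB group and set numbers,\nthe value of $\\Lambda_{\\mbox{\\scriptsize QCD}}^{(4)}$ and the scale choice\n$\\mu_r=\\mu_f=Q$ are placeholders and examples only, not part of the package:\n\\begin{verbatim}\nc --- schematic user2 (sketch only; user1 and myobs are not shown)\n      double precision function user2(nr, nl, fvect, npartons,\n     &     xb, q2, xi, weight, irps, ialphas, ialphaem, lognf)\n      implicit none\n      integer nr, nl\n      double precision fvect(0:3,-10:10,30)\n      integer npartons(30)\n      double precision xb(30), q2(30), xi(30)\n      double precision weight(11,50)\n      integer irps(50), ialphas(50), ialphaem(50), lognf(50)\n      integer i, j, ir, nf\n      double precision obs(30), total, lambda\n      double precision ffactin(4), ffactout(13)\n      double precision myobs\n      external myobs\nc --- evaluate the observable once for every phase space\n      do i = 1, nr\n         obs(i) = myobs(fvect(0,-10,i), npartons(i), xb(i), q2(i))\n      end do\nc --- loop over contributions; innermost loop over the flavour factors\n      nf = 5\n      total = 0d0\n      do i = 1, nl\n         ir = irps(i)\n         ffactin(1) = xi(ir)\n         ffactin(2) = sqrt(q2(ir))\n         ffactin(3) = sqrt(q2(ir))\n         ffactin(4) = sqrt(q2(ir))\nc ---    parton densities, alpha_s and alpha_em from the cache routine\nc ---    (PDFLIB group and set numbers, Lambda_QCD: placeholders)\n         call disaster_cff(1, 3, 34, 1, 2, 0.23d0, 2, nf,\n     &        ffactin, ffactout)\nc ---    lambda of lognf(i); the logarithms vanish for mu_r = mu_f = Q\n         lambda = 1d0\n         if (lognf(i) .ge. 1) lambda = 0d0\n         do j = 1, 11\n            total = total + obs(ir) * weight(j,i) * lambda\n     &           * ffactout(12)**ialphas(i)\n     &           * ffactout(13)**ialphaem(i)\n     &           * ffactout(j)\n         end do\n      end do\nc --- the return value is summed up by VEGAS in the adaptive mode\n      user2 = total\n      end\n\\end{verbatim}\nNote that the observable is evaluated only once per phase space and reused\nfor all contributions referring to it, as recommended above.\n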
It is actually quite simple to use the program; an example \nfor the calculation of the (2+1)-jet cross section for the JADE algorithm\nin the E-scheme is given in the files {\\tt disaster\\_f.f}\nand {\\tt clust.f}, as described in Section~\\ref{installation}.\n\n\\section{Program Installation}\n\\label{installation}\n\n\\begin{description}\n\\item[Source code:]\nThe source code of the class library is available on the World Wide Web:\n\\begin{verbatim}\n http:\/\/wwwcn.cern.ch\/~graudenz\/disaster.html\n\\end{verbatim}\n\n\\item[Files:]\nThe package consists of a number of files. To facilitate the installation,\nand to enable the {\\tt C++} compiler to perform certain optimizations,\nthe complete {\\tt C++} part of the program is provided as one file\n{\\tt onefile\\_n.cc} (the individual files are available on request). \nAn example for the {\\tt FORTRAN} interface is\ngiven in the file {\\tt disaster\\_f.f} (calculation of the (2+1) jet \ncross section for the JADE algorithm in the E-scheme), \ntogether with a simple cluster\nroutine in the file {\\tt clust.f}. \nThe number of Monte Carlo events in the example is set to \na tiny number (100) in order to terminate the program after a few seconds.\nRealistic values for the parameter {\\tt POINTS} are of the order of \n$10^6$.\nAn example ``make file'' is given in {\\tt makedisaster}. \n\\item[Mixed Language Programming:]\n{\\tt DISASTER++} is mainly written in the {\\tt C++} programming language.\nThe reasons for the choice of this language are twofold:\nObject-oriented programming allows for programs that are easily \nmaintained and extended\\footnote{It could even be said that object-oriented\nprogramming is a kind of applied ontology: the central categories of this \napproach are given by {\\it objects} and {\\it methods} that define their\nrelationships.\n}, and in high-energy physics there is a trend in the experimental \ndomain for a transition from {\\tt FORTRAN} to {\\tt C++}.\nAlthough the goal has been to write a self-contained {\\tt C++}\npackage, \na few parts of the program are still coded in \n{\\tt FORTRAN}. Moreover, the standard parton density parametrizations\nare only \navailable as {\\tt FORTRAN} libraries. This means that the {\\tt DISASTER++}\npackage cannot be run as a stand-alone {\\tt C++} program. In most cases,\nusers may wish to interface the program to their existing {\\tt FORTRAN}\nroutines. An elegant and machine-independent \nway for {\\it mixed language programming} for the case\nof {\\tt C}, {\\tt C++} and {\\tt FORTRAN} is supported by the \n{\\tt cfortran.h} package described in Ref.~\\cite{30}. \nFor every {\\tt FORTRAN} routine to be called by a {\\tt C++} method, \nan {\\tt extern \"C\"} routine has to be defined as an interface, \nand vice versa. The explicit calls are then generated by means of macros \nfrom {\\tt cfortran.h}. The most convenient way is, after compilation, \nto link the {\\tt FORTRAN} and {\\tt C++} parts via the standard\n\\begin{verbatim}\n f77 -o disaster onefile_n.o ...\n\\end{verbatim}\ncommand\\footnote{The procedure is described here for the {\\tt UNIX}\noperating system.}, \nsuch that the {\\tt FORTRAN} part supplies the entry point.\nThe required {\\tt C++} libraries have to be stated explicitly\nvia the {\\tt -L} and {\\tt -l} options. 
The library paths can be obtained\nby compiling and linking a trivial program {\\tt hw.cc} of the type\n\\begin{verbatim}\n #include <stdio.h>\n main() { printf(\"Hello world!\\n\"); }\n\\end{verbatim}\nwith\n\\begin{verbatim}\n gcc -v hw.cc\n\\end{verbatim}\n(for the {\\tt GNU C++} compiler). \nAn example for the required libraries can be found in the \nprototype ``make file'' {\\tt makedisaster}. Some machine-specific information\nis mentioned in the manual of {\\tt cfortran.h}.\n\nIn the {\\tt DISASTER++} package, the explicit {\\tt FORTRAN} interface, \nas described in Section~\\ref{structure},\nis already provided. Thus\nfrom the outside the {\\tt C++}\nkernel is transparent and hidden behind {\\tt FORTRAN} subroutines\nand functions.\n\n\\item[Template instantiation:]\nIn {\\tt DISASTER++}, heavy use is made of {\\it templates}. At present, there\nis not yet a universally accepted scheme for template instantiations.\nThe solution adopted here is the explicit instantiation\nof all templates. This requires\nthat the compiler itself does not instantiate templates automatically.\nThis is achieved for the {\\tt GNU} compiler by means of the switch\n\\begin{verbatim}\n -fno-external-templates\n\\end{verbatim}\n\n\\item[Output files:]\nThere is a small problem with the output from the {\\tt C++} and {\\tt FORTRAN}\nparts of {\\tt DISASTER++}. It seems to be the case that generally {\\tt C++} \n({\\tt FILE* stdout} and {\\tt ostream cout}) \nand {\\tt FORTRAN} ({\\tt UNIT=6}) keep different\nfile buffers. This is no problem when the output is written to a terminal, \nsince then the file buffers are immediately flushed\nafter each line-feed character. When writing to \na file (as is usually the case for batch jobs), the file buffers are not \nimmediately flushed, and this leads to the problem that the output \non the file is mixed in non-chronological order. This problem will be solved\nby the introduction of a particular stream class which hands over the output\nto a {\\tt FORTRAN} routine.\n\n\\item[Miscellaneous:]\n{\\tt DISASTER++} employs the {\\tt ANSI C} {\\tt signal} facility to \ncatch interrupts caused by floating point arithmetic. \nIf the signal {\\tt SIGFPE} is raised, a flag in {\\tt DISASTER++} is set, \nwhich eventually leads to the requirement that the event has to be \ndropped (via a call of {\\tt user1(7)}). A non-zero value of\nthe variable {\\tt errno} of the {\\tt ANSI C} {\\tt errno} facility\nis treated similarly. The signal handler is also active \nwhen the user routine is executed, which leads to the effect that\nin the case of a floating point exception the program does not crash, but\ncontinues under the assumption that the event has been dropped.\nForthcoming versions of {\\tt DISASTER++} will make a flag available\nthat can be used to access the status of the signal handler in \nthe user routines. \nMoreover, it is checked whether the weight returned to {\\tt DISASTER++} via\n{\\tt user2} fulfills the criterion for {\\tt IEEE} {\\tt NaN} (``not a number''). \nIf this is the case, it is also requested that the event be dropped.\n\n\\end{description}\n\n\\section{Comparison of Available Programs}\n\\label{comparison}\n\nIn this section, we compare the three available programs {\\tt MEPJET}\n(Version 2.0)\\footnote{\nFor the very-high statistics runs the default random number\ngenerator (generating a Sobol sequence of pseudo-random numbers) \nof {\\tt MEPJET} ran out of random numbers. 
We therefore had to modify the\nprogram such that it uses another generator which is also part of\nthe {\\tt MEPJET} package. --- The crossing functions for the ``artificial''\nparton densities have been obtained by means of a modification of the program\n{\\tt make\\_str\\_pdf1.f}.\n}, \n{\\tt DISENT} (Version 0.1)\\footnote{\nAn earlier version of this paper \\cite{24}\nreported results of a comparison \nwith {\\tt DISENT} Version 0.0. We found large discrepancies for \nsome choices of the parton density parametrization. In the meantime,\nan error in {\\tt DISENT} has been fixed, and the results \nof {\\tt DISENT} and {\\tt DISASTER++} are now\nin good agreement, see below.\n} and {\\tt DISASTER++}\n(Version 1.0) numerically for various bins of \n$x_B$ and $y$ as defined in Table~\\ref{tab1}, \nand for various choices of the parton density parametrizations.\n\n\\begin{table}[htb]\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|}\n\\cline{2-4}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\makebox[4.1cm]{$0.01 < y < 0.03 $} \n & \\makebox[4.1cm]{$0.03 < y < 0.1 $} \n & \\makebox[4.1cm]{$0.1 < y < 0.3 $} \n\\\\ \\hline\n$0.005 < x_B < 0.01$ \\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[1.2cm]{Bin 1}($Q^2 > 4.6\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 2}($Q^2 > 13.5\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 3}($Q^2 > 45.0\\,\\mbox{GeV}^2$)\n\\\\ \\hline\n$0.05 < x_B < 0.1$ \\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[1.2cm]{Bin 4} ($Q^2 > 45\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 5} ($Q^2 > 135\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 6} ($Q^2 > 450\\,\\mbox{GeV}^2$)\n\\\\ \\hline\n$0.2 < x_B < 0.4$ \\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[1.2cm]{Bin 7} ($Q^2 > 180\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 8} ($Q^2 > 540\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 9} ($Q^2 > 1800\\,\\mbox{GeV}^2$)\n\\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption[tab1]\n{\n\\label{tab1}\n{\\it\nBins in $x_B$ and $y$. The values in parentheses give the resulting \nlower bounds on $Q^2$.\n}\n}\n\\end{table}\n\nThe centre-of-mass energy is set to 300\\,GeV. To facilitate the\ncomparison, the strong coupling constant is set to a fixed value of\n$\\alpha_s=0.1$, \nand the number of flavours is set to $N_f=5$, even below the bottom \nthreshold ($N_f=5$ is hard-wired into {\\tt MEPJET}). \nThe electromagnetic coupling constant \nis chosen to be $\\alpha=1\/137$ (the value\nis hard-wired into {\\tt DISENT}, but this could be changed trivially, \nin principle). 
The factorization and renormalization schemes of the\nhard scattering cross sections are $\\overline{\\mbox{MS}}$, and the\nfactorization and renormalization scales $\\mu_f$ and $\\mu_r$, respectively, \nare set to $Q$.\n\nThe quantity under consideration is the (2+1)-jet cross section, \nshown in Tables~2--8 in Appendix~\\ref{capp}.\nFor simplicity we consider the modified JADE clustering scheme,\nin which particle pairs with $S_{ij} < c W^2$ are recombined, together with\nthe E~recombination scheme, \nwhere\n$S_{ij} = (p_i + p_j)^2$, $W$ is the total hadronic energy,\nand $c=0.02$ is the jet resolution parameter.\nWe require, in the laboratory frame ($E_e=27.439$\\,GeV, \n$E_P=820$\\,GeV), a minimum transverse momentum of 1\\,GeV and a pseudo-rapidity\nof $-3.5<\\eta<3.5$ for all jets\\footnote{These cuts in $p_T$ and $\\eta$ \nare employed in order to facilitate event generation with {\\tt MEPJET}; \nthe phase space generator implemented in that program is reminiscent of a \ngenerator for pp~collider physics where $p_T$ and $\\eta$ cuts\nin the laboratory frame are\na standard experimental procedure. It is thus complicated to generate \nevents with {\\tt MEPJET} \nin the full phase space of the laboratory system, as usually required \nfor eP scattering, where ``natural'' cuts in transverse momentum and\npseudo-rapidity would be performed in the hadronic centre-of-mass frame\nor in the Breit frame.}.\n\nThe parton density parametrizations employed in the comparison are:\n\\begin{description}\n\\item[{\\makebox[1cm][l]{(a)}}] \n \\makebox[6cm][l]{the MRSD$_-^\\prime$ parton densities \n \\cite{31}} (Table 2),\n\\item[{\\makebox[1cm][l]{(b)}}] \n \\makebox[3cm][l]{$q(\\xi)=(1-\\xi)^5$,}\\makebox[3cm][l]{$g(\\xi)=0$}\n (Table 3),\n\\item[{\\makebox[1cm][l]{(c)}}] \n \\makebox[3cm][l]{$q(\\xi)=0$,}\\makebox[3cm][l]{$g(\\xi)=(1-\\xi)^5$} \n (Table 4),\n\\item[{\\makebox[1cm][l]{(d)}}] \n \\makebox[3cm][l]{$q(\\xi)=(1-\\xi)^2$,}\\makebox[3cm][l]{$g(\\xi)=0$}\n (Table 5),\n\\item[{\\makebox[1cm][l]{(e)}}] \n \\makebox[3cm][l]{$q(\\xi)=0$,}\\makebox[3cm][l]{$g(\\xi)=(1-\\xi)^2$} \n (Table 6),\n\\item[{\\makebox[1cm][l]{(f)}}] \n \\makebox[3cm][l]{$q(\\xi)=(1-\\xi)$,}\\makebox[3cm][l]{$g(\\xi)=0$}\n (Table 7),\n\\item[{\\makebox[1cm][l]{(g)}}] \n \\makebox[3cm][l]{$q(\\xi)=0$,}\\makebox[3cm][l]{$g(\\xi)=(1-\\xi)$} \n (Table 8).\n\\end{description}\nHere $q(\\xi)$ generically stands for valence and sea distributions\\footnote{\nThis means that $u_v(\\xi)$, $d_v(\\xi)$, $u_s(\\xi)$, $d_s(\\xi)$, \n$s_s(\\xi)$, $c_s(\\xi)$, \n$b_s(\\xi)$ have been set to $q(\\xi)$.\n}, \nand $g(\\xi)$ is the gluon distribution.\nWe wish to point out that the comparison involving the ``artificial''\nparton densities is not just of academic interest. On the contrary, \nfor the extraction \nof, for instance, the gluon density\nfrom jet data \nit is convenient\nto replace the parton densities by simple functions \nwith special properties (such as powers\nof the momentum fraction variable $\\xi$ or functions of an orthonormal \nbasis system), \nin order to achieve a fast fit. These functions usually do not have\nthe shape of physical parton densities, in particular they do not\nhave to fall off rapidly for $\\xi\\rightarrow 1$.\nMoreover, next-to-leading-order calculations yield unique and well-defined\nresults for the hard scattering cross sections to be convoluted with \nobservables and parton densities. 
We employ the ``artificial''\nparton densities also in order to have a stricter test of the \nhard scattering cross sections.\n\nThe leading-order results of all three programs are in excellent agreement. \nThe next-to-leading-order results of {\\tt DISASTER++} and {\\tt DISENT} are\nin good agreement within about two to (sometimes) \nthree standard deviations\\footnote{\nWe wish to note that the error estimates quoted by the programs are usually not\nrigorous estimates because of the non-Gaussian distribution of the\nMonte-Carlo weights. Therefore, in principle, it is not possible to \ninfer probabilities for the consistency of data samples produced by two\nprograms based on these estimates. \nA more precise, but in general unfeasible, way to obtain an estimate of the\nMonte Carlo error would be to run the programs a number of times \nwith different random number seeds and to analyze the spread of the \nquoted results around their central value.\nSuch a study has recently been done by M.~Seymour\nfor {\\tt DISENT} with the result that the \nindividual error estimates are quite reliable \\cite{32}.\n}\nof the larger of the two errors quoted by the two programs. An exception\nis bin~7 for $g(\\xi) = (1-\\xi)^2$. A run of {\\tt DISENT} with higher statistics \nyields a value of $0.1836 \\pm 0.0025$, which is within two standard deviations\nof the {\\tt DISASTER++} result, indicating that there was indeed a statistical\nfluctuation in the original result.\n\nThe comparison of the next-to-leading-order results \nof {\\tt MEPJET} and {\\tt DISASTER++} requires a more detailed discussion:\n\\begin{itemize}\n\n\\item For the MRSD$_-^\\prime$ parton densities, the results for\nbins 3--9 are\ncompatible within about two standard deviations of the statistical error\nof the Monte-Carlo integrations.\nThe results for bins~1 and~2 differ considerably. \nRuns with a smaller value\nof the internal {\\tt MEPJET} cut-off variable~$s$, which is set by default\nto $s=0.1\\,$GeV$^2$, yield\nthe following results for bin 1:\n$580.6 \\pm 6.7$\\,pb ($s=0.01\\,$GeV$^2$), \n$564.8 \\pm 10.5$\\,pb ($s=0.001\\,$GeV$^2$) and\n$575.4 \\pm 13.0$\\,pb ($s=0.0001\\,$GeV$^2$).\nThe statistical error is increased for decreased~$s$ because the integration\nvolume\nof the (3+1) parton contributions is extended into the singular domain.\nBecause of the increased statistical error, we also performed \nhigh-statistics runs with $\\sim 4\\cdot10^9$ (!) Monte Carlo events\nof {\\tt MEPJET} \nfor this bin. \nFor $s=0.001\\,$GeV$^2$ we obtain \n$576.3 \\pm 6.7$\\,pb\nand for $s=0.0001\\,$GeV$^2$\nthe result is\n$583.2 \\pm 7.4$\\,pb.\nThese values from {\\tt MEPJET} \nare compatible with the {\\tt DISASTER++} and {\\tt DISENT} results\\footnote{\nThese results underline that, for the phase space slicing method, results\ngenerally have to be validated {\\it ex post} by a cross-check with a \nsmaller technical cut~$s$ and much higher statistics. It may be argued that\nthere are jet algorithms (the $k_T$~algorithm, for example)\nwhich show a better convergence for $s\\rightarrow 0$.\nHowever, the point here is that one does not know in advance whether this\nis the case for the observable under consideration. --- In Ref.~\\cite{23}\nwe find the statement that $s$-independence in {\\tt MEPJET} is achieved for \n$s=0.1\\,$GeV$^2$. 
Our study shows that this is generally not the case, \nand that extremely small values of~$s$, possibly of the order of\n$s=0.0001\\,$GeV$^2$, might be necessary.\n}.\n\\item For the parton density parametrization (b) (quarks only, with a steeply\nfalling distribution $q(\\xi)$ for $\\xi \\rightarrow 1$), \n{\\tt DISASTER++} and {\\tt MEPJET}\nare in good agreement.\n\n\\item The results for parametrization (c) (steeply falling\ngluon parametrization)\nare in good agreement, except for bin 1.\n\n\\item For parametrization (d), \n{\\tt DISASTER++} and {\\tt MEPJET} are in agreement except for bins 1 and 4.\nRuns with a smaller value\nof the {\\tt MEPJET} cut-off variable~$s$\nyield\nthe following results for bin 1:\n$59.6 \\pm 1.8$\\,pb ($s=0.01\\,$GeV$^2$), \n$56.7 \\pm 5.8$\\,pb ($s=0.001\\,$GeV$^2$) and\n$54.9 \\pm 10.4$\\,pb ($s=0.0001\\,$GeV$^2$).\nA high-statistics run ($\\sim 4\\cdot10^9$ events) of {\\tt MEPJET} \nfor bin 1 with $s=0.0001\\,$GeV$^2$ gives the \ncross section $60.0 \\pm 1.9$\\,pb.\nContrary to the observation in case (a), for small~$s$ \nwe do not get agreement of \nthe {\\tt MEPJET} result with the {\\tt DISASTER++} \/ {\\tt DISENT} result\nof about $48$--$49$\\,pb.\n\n\\item The {\\tt MEPJET} results for parametrization (e) \n($g(\\xi) = (1-\\xi)^2$)\ndeviate considerably from the {\\tt DISASTER++}\nresults in bins~1, 2, 4 and 7.\n\n\\item For parametrization (f),\n{\\tt DISASTER++} and {\\tt MEPJET} are incompatible\nfor bins 1, 2, 4, 6 and 7.\n\n\\item For parametrization (g), \n{\\tt MEPJET} and {\\tt DISASTER++} are compatible in bins\n3, 5, 8 and 9 only.\nA high-statistics run ($\\sim 4\\cdot10^9$ events) of {\\tt MEPJET} \nfor bin 4 with $s=0.0001\\,$GeV$^2$ yields the \ncross section $1.29 \\pm 0.02$\\,pb.\nThis value is different from the result for $s=0.1\\,$GeV$^2$, \nbut still inconsistent\nwith the {\\tt DISASTER++} \/ {\\tt DISENT} result of about $0.69$\\,pb.\n\n\\end{itemize}\n\nThe overall picture is thus: Out of the three programs, {\\tt DISASTER++} \nand {\\tt DISENT} (Version 0.1) are in good agreement within about\ntwo, sometimes three standard deviations of the quoted integration errors, \nboth for ``physical'' and ``artificial'' parton densities. This agreement\nis very encouraging, but not yet perfect, and much more detailed studies\ninvolving different sets of observables and differential distributions\nare required. For the two programs, a direct comparison of the\n``jet structure functions'' should also be feasible.\n\nFor several bins, in particular for the ``artificial'' parton distribution \nfunctions, the {\\tt MEPJET}\nresults for the default setting of the \ninternal parameters deviate considerably from the {\\tt DISASTER++}\nand {\\tt DISENT} results. \nFor one particular bin studied in more detail for\nthe MRSD$_-^\\prime$ parton densities,\nthe\ndiscrepancy disappears in the case of an extremely small internal technical \ncut~$s$ of {\\tt MEPJET}, for a substantial increase of the\nnumber of generated events to obtain a meaningful Monte Carlo error. \nA few {\\tt MEPJET} results employing ``artificial'' \nparton densities have been studied in more detail. We observed that \nin these cases a reduction of the~$s$ parameter does not lead to an\nimprovement of the situation. For lack of computer time, we could not study \nall bins with a smaller $s$~cut. The overall situation \nis thus still inconclusive and unclear. 
An independent cross check of the\n{\\tt MEPJET} results, in particular of those using the \nimplementation of the crossing functions for the ``artificial'' parton \ndensities, is highly desirable.\n\n\\section{Miscellaneous}\n\\begin{itemize}\n\n\\item If you intend to install and use {\\tt DISASTER++}, please send me \na short e-mail message, and I will put your name on a mailing list\nso that I can inform you when there is a new version of the package.\n\n\\item Suggestions for improvements and bug reports are welcome.\n\n\\item In case there are problems with the installation of the program, \nplease send me an e-mail message.\n\n\\end{itemize}\n\n\\section{Summary}\n\nWe have presented the {\\tt C++} class library \n{\\tt DISASTER++} for the calculation \nof (1+1)- and (2+1)-jet-type observables in deeply inelastic scattering.\nThe program is based on the subtraction formalism and thus does not require\na technical cut-off for the separation of the infrared-singular from the\ninfrared-finite phase-space regions. \nA {\\tt FORTRAN} interface to the {\\tt C++} class library is available.\n{\\tt DISASTER++} is actually intended to be a general object-oriented\nframework for next-to-leading order QCD calculations. In particular, \nthe subtraction formalism is implemented in a very general way.\n\nWe have performed a comparison of the three available programs\n{\\tt MEPJET}, {\\tt DISENT} and {\\tt DISASTER++}\nover a wide range of the parameters for the lepton phase space.\nWe find good agreement of {\\tt DISASTER++} and the Catani-Seymour\nprogram {\\tt DISENT} (Version 0.1).\nThe comparison of {\\tt DISASTER++} and the Mirkes-Zeppenfeld program\n{\\tt MEPJET} (for the {\\tt MEPJET} \ndefault parameters) leads to several\ndiscrepancies, both for physical and for ``artificial'' parton densities.\nFor the MRSD$_-^\\prime$ parton densities a \nreduction of the internal {\\tt MEPJET} phase-space slicing cut-off \nvariable~$s$, the number of Monte Carlo events kept fixed, leads to a certain \nimprovement of the central values of the results, \naccompanied by a substantially increased statistical error and fluctuating\ncentral values. A considerable increase of the number of generated events\n(up to the order of several billion events) \neventually leads to an agreement of the {\\tt MEPJET} results with the\n{\\tt DISASTER++} \/ {\\tt DISENT} results for a particular bin of the lepton \nvariables which has been studied in detail.\nFor ``artificial'' parton densities and a selected set of bins of\nthe lepton variables, a reduction of the internal cut~$s$\ndoes not resolve the discrepancies.\nOther bins were not considered,\nfor lack of computer time for very-high-statistics runs.\nIt should be stressed that the present study is still limited in scope.\nAn independent cross check of the {\\tt MEPJET} results for the ``artificial''\nparton densities has to be done before a firm conclusion can be reached.\nMoreover, \nthe study has to be repeated for a wider range of observables and much higher\nMonte Carlo statistics. The $s$~dependence of the {\\tt MEPJET} results\nshould also be studied in more detail.\n\nCompared to the other two programs {\\tt MEPJET} and {\\tt DISENT},\n{\\tt DISASTER++} makes the full $N_f$ dependence and the dependence\non the renormalization and factorization scales available in the \nuser routine. 
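Schematically, apart from the explicit powers of $\\alpha_s(\\mu_r)$, the\naccumulated weights of an event depend on $\\mu_r$, $\\mu_f$ and $N_f$ only\nthrough the factors $\\lambda$ listed in Section~\\ref{structure}, i.e.\\ through\n$\\ln(\\mu_r^2\/Q^2)$ and $\\ln(\\mu_f^2\/Q^2)$ and $N_f$ times these logarithms,\nso that predictions for several scale choices and flavour numbers can be\nobtained from one and the same set of generated events.\n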
This is required for consistent studies of effects\nsuch as the scale dependence when the bottom threshold is crossed.\n\n\\section{Acknowledgements}\nI wish to thank M.~Seymour for sending me the numerical results for the new \n{\\tt DISENT} version. D.~Zeppenfeld made a few cross\nchecks of the results for the MRSD$_-^\\prime$ parton densities.\nJ.~Collins has provided me with the {\\tt FORTRAN} \nroutine to test the {\\tt IEEE NaN} condition.\nI am also grateful to Th.~Hadig for a few comments on the first version \nof this paper, and for suggestions for improvements of the program.\n\n\\clearpage\n\n\\begin{appendix}\n\n\\section{Numerical Results}\n\\label{capp}\n\nThis appendix contains the numerical results which are discussed in \nSection~\\ref{comparison}. The entries in the tables are the (2+1)-jet \ncross sections\nin units of [pb].\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{402.1}{1.13}\n & \\pmdg{399.9}{0.53}\n & \\pmdg{399.6}{1.1}\n & \\pmdg{585.0}{2.6}\n & \\pmdg{564.1}{1.9}\n & \\pmdg{578.4}{7.1}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{207.6}{0.59}\n & \\pmdg{207.5}{0.34}\n & \\pmdg{207.4}{0.15}\n & \\pmdg{364.8}{1.5}\n & \\pmdg{347.3}{2.4}\n & \\pmdg{361.1}{3.5}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{60.0}{0.16}\n & \\pmdg{59.9}{0.14}\n & \\pmdg{59.9}{0.15}\n & \\pmdg{119.1}{1.71}\n & \\pmdg{118.0}{1.05}\n & \\pmdg{120.1}{0.94}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{82.9}{0.16}\n & \\pmdg{82.9}{0.10}\n & \\pmdg{82.6}{0.21}\n & \\pmdg{98.1}{1.11}\n & \\pmdg{95.1}{0.61}\n & \\pmdg{95.4}{0.87}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{42.9}{0.08}\n & \\pmdg{42.9}{0.06}\n & \\pmdg{42.6}{0.28}\n & \\pmdg{55.3}{0.46}\n & \\pmdg{54.4}{0.49}\n & \\pmdg{54.9}{0.40}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{11.9}{0.02}\n & \\pmdg{11.9}{0.02}\n & \\pmdg{11.9}{0.08}\n & \\pmdg{17.5}{0.06}\n & \\pmdg{16.8}{0.22}\n & \\pmdg{17.3}{0.13}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{9.60}{0.03}\n & \\pmdg{9.58}{0.01}\n & \\pmdg{9.59}{0.04}\n & \\pmdg{12.1}{0.50}\n & \\pmdg{12.7}{0.07}\n & \\pmdg{12.3}{0.15}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{6.24}{0.01}\n & \\pmdg{6.23}{0.01}\n & \\pmdg{6.24}{0.02}\n & \\pmdg{8.61}{0.12}\n & \\pmdg{8.55}{0.15}\n & \\pmdg{8.52}{0.08}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.78}{0.003}\n & \\pmdg{1.78}{0.003}\n & \\pmdg{1.78}{0.06}\n & \\pmdg{2.65}{0.03}\n & \\pmdg{2.57}{0.06}\n & \\pmdg{2.63}{0.02}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 2: {\\it\nComparison for MRSD$_-^{\\,\\prime}$ parton densities.\n}\n\\end{center}\n\n\\clearpage\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & 
\\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{36.2}{0.09}\n & \\pmdg{36.3}{0.05}\n & \\pmdg{36.3}{0.12}\n & \\pmdg{39.1}{0.33}\n & \\pmdg{40.9}{0.89}\n & \\pmdg{38.2}{0.53}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{17.8}{0.04}\n & \\pmdg{17.8}{0.03}\n & \\pmdg{17.7}{0.05}\n & \\pmdg{23.2}{0.37}\n & \\pmdg{22.7}{0.41}\n & \\pmdg{22.6}{0.22}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{5.21}{0.01}\n & \\pmdg{5.21}{0.01}\n & \\pmdg{5.21}{0.02}\n & \\pmdg{8.24}{0.22}\n & \\pmdg{7.86}{0.12}\n & \\pmdg{8.14}{0.06}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{27.3}{0.06}\n & \\pmdg{27.3}{0.03}\n & \\pmdg{27.2}{0.09}\n & \\pmdg{28.0}{0.52}\n & \\pmdg{29.2}{0.18}\n & \\pmdg{30.0}{0.21}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{14.8}{0.03}\n & \\pmdg{14.8}{0.02}\n & \\pmdg{14.7}{0.04}\n & \\pmdg{17.4}{0.29}\n & \\pmdg{16.9}{0.10}\n & \\pmdg{17.0}{0.11}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{4.33}{0.008}\n & \\pmdg{4.32}{0.006}\n & \\pmdg{4.31}{0.01}\n & \\pmdg{5.62}{0.10}\n & \\pmdg{5.44}{0.05}\n & \\pmdg{5.54}{0.03}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{6.38}{0.02}\n & \\pmdg{6.37}{0.01}\n & \\pmdg{6.38}{0.03}\n & \\pmdg{8.49}{0.17}\n & \\pmdg{8.59}{0.10}\n & \\pmdg{8.37}{0.11}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{4.44}{0.01}\n & \\pmdg{4.43}{0.007}\n & \\pmdg{4.44}{0.02}\n & \\pmdg{6.11}{0.08}\n & \\pmdg{6.05}{0.07}\n & \\pmdg{6.07}{0.06}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.36}{0.002}\n & \\pmdg{1.36}{0.002}\n & \\pmdg{1.36}{0.05}\n & \\pmdg{2.02}{0.02}\n & \\pmdg{2.00}{0.05}\n & \\pmdg{2.01}{0.01}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 3: {\\it\nComparison for $q(\\xi) = (1-\\xi)^5$\n}\n\\end{center}\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{4.89}{0.017}\n & \\pmdg{4.89}{0.007}\n & \\pmdg{4.87}{0.01}\n & \\pmdg{5.38}{0.07}\n & \\pmdg{6.03}{0.06}\n & \\pmdg{5.22}{0.13}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{2.66}{0.009}\n & \\pmdg{2.66}{0.007}\n & \\pmdg{2.65}{0.007}\n & \\pmdg{3.67}{0.08}\n & \\pmdg{3.66}{0.04}\n & \\pmdg{3.58}{0.05}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.825}{0.003}\n & \\pmdg{0.826}{0.002}\n & \\pmdg{0.826}{0.002}\n & \\pmdg{1.44}{0.07}\n & \\pmdg{1.37}{0.03}\n & \\pmdg{1.39}{0.02}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.60}{0.005}\n & \\pmdg{1.60}{0.003}\n & \\pmdg{1.60}{0.003}\n & \\pmdg{1.20}{0.05}\n & \\pmdg{1.30}{0.01}\n & \\pmdg{1.12}{0.04}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.904}{0.003}\n & \\pmdg{0.900}{0.001}\n & \\pmdg{0.899}{0.002}\n & \\pmdg{0.833}{0.027}\n & \\pmdg{0.801}{0.008}\n & \\pmdg{0.764}{0.019}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.279}{0.001}\n & \\pmdg{0.278}{0.001}\n & \\pmdg{0.278}{0.001}\n & \\pmdg{0.314}{0.007}\n & \\pmdg{0.287}{0.004}\n & \\pmdg{0.299}{0.006}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.130}{0.001}\n & \\pmdg{0.131}{0.001}\n & \\pmdg{0.130}{0.001}\n & \\pmdg{0.119}{0.005}\n & \\pmdg{0.118}{0.002}\n & 
\\pmdg{0.110}{0.006}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.0981}{0.001}\n & \\pmdg{0.0980}{0.001}\n & \\pmdg{0.0981}{0.001}\n & \\pmdg{0.105}{0.002}\n & \\pmdg{0.096}{0.001}\n & \\pmdg{0.099}{0.004}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.0313}{0.0001}\n & \\pmdg{0.0310}{0.001}\n & \\pmdg{0.0313}{0.001}\n & \\pmdg{0.0396}{0.001}\n & \\pmdg{0.034}{0.001}\n & \\pmdg{0.0386}{0.001}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 4: {\\it\nComparison for $g(\\xi) = (1-\\xi)^5$\n}\n\\end{center}\n\n\\clearpage\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{46.1}{0.11}\n & \\pmdg{46.2}{0.07}\n & \\pmdg{46.2}{0.14}\n & \\pmdg{49.4}{0.67}\n & \\pmdg{58.8}{0.65}\n & \\pmdg{47.8}{1.2}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{23.8}{0.05}\n & \\pmdg{23.8}{0.09}\n & \\pmdg{23.8}{0.07}\n & \\pmdg{30.6}{0.33}\n & \\pmdg{31.4}{0.71}\n & \\pmdg{29.0}{0.54}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{7.28}{0.02}\n & \\pmdg{7.28}{0.02}\n & \\pmdg{7.29}{0.02}\n & \\pmdg{11.2}{0.21}\n & \\pmdg{11.0}{0.24}\n & \\pmdg{11.4}{0.14}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{42.4}{0.09}\n & \\pmdg{42.3}{0.06}\n & \\pmdg{42.3}{0.12}\n & \\pmdg{38.4}{0.30}\n & \\pmdg{41.9}{0.26}\n & \\pmdg{38.4}{0.31}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{23.9}{0.04}\n & \\pmdg{23.9}{0.03}\n & \\pmdg{23.8}{0.06}\n & \\pmdg{24.8}{0.46}\n & \\pmdg{24.2}{0.19}\n & \\pmdg{23.9}{0.16}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{7.31}{0.01}\n & \\pmdg{7.30}{0.01}\n & \\pmdg{7.27}{0.02}\n & \\pmdg{8.11}{0.19}\n & \\pmdg{8.04}{0.41}\n & \\pmdg{8.24}{0.05}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{20.3}{0.05}\n & \\pmdg{20.3}{0.08}\n & \\pmdg{20.3}{0.08}\n & \\pmdg{23.3}{0.64}\n & \\pmdg{25.1}{0.18}\n & \\pmdg{22.4}{0.24}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{15.4}{0.03}\n & \\pmdg{15.4}{0.02}\n & \\pmdg{15.4}{0.01}\n & \\pmdg{18.6}{0.36}\n & \\pmdg{18.3}{0.47}\n & \\pmdg{18.4}{0.15}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{4.87}{0.01}\n & \\pmdg{4.86}{0.01}\n & \\pmdg{4.87}{0.04}\n & \\pmdg{6.47}{0.08}\n & \\pmdg{6.38}{0.07}\n & \\pmdg{6.41}{0.05}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 5: {\\it\nComparison for $q(\\xi) = (1-\\xi)^2$\n}\n\\end{center}\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{6.24}{0.02}\n & \\pmdg{6.22}{0.01}\n & \\pmdg{6.21}{0.02}\n & \\pmdg{6.73}{0.13}\n & \\pmdg{8.94}{0.12}\n & \\pmdg{6.67}{0.24}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{3.59}{0.01}\n & \\pmdg{3.58}{0.01}\n & \\pmdg{3.57}{0.01}\n & \\pmdg{4.77}{0.06}\n & 
\\pmdg{5.24}{0.09}\n & \\pmdg{4.43}{0.11}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.18}{0.004}\n & \\pmdg{1.18}{0.004}\n & \\pmdg{1.18}{0.003}\n & \\pmdg{1.93}{0.04}\n & \\pmdg{1.89}{0.04}\n & \\pmdg{1.86}{0.03}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{2.65}{0.007}\n & \\pmdg{2.65}{0.003}\n & \\pmdg{2.65}{0.006}\n & \\pmdg{1.13}{0.03}\n & \\pmdg{1.66}{0.02}\n & \\pmdg{0.94}{0.07}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.62}{0.004}\n & \\pmdg{1.61}{0.002}\n & \\pmdg{1.61}{0.003}\n & \\pmdg{1.04}{0.04}\n & \\pmdg{1.09}{0.02}\n & \\pmdg{0.993}{0.03}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.535}{0.001}\n & \\pmdg{0.534}{0.001}\n & \\pmdg{0.533}{0.001}\n & \\pmdg{0.433}{0.018}\n & \\pmdg{0.412}{0.009}\n & \\pmdg{0.430}{0.010}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.452}{0.002}\n & \\pmdg{0.452}{0.001}\n & \\pmdg{0.451}{0.001}\n & \\pmdg{0.221}{0.026}\n & \\pmdg{0.292}{0.010}\n & \\pmdg{0.129}{0.02}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.398}{0.001}\n & \\pmdg{0.398}{0.001}\n & \\pmdg{0.397}{0.001}\n & \\pmdg{0.298}{0.01}\n & \\pmdg{0.271}{0.005}\n & \\pmdg{0.237}{0.01}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.136}{0.001}\n & \\pmdg{0.135}{0.001}\n & \\pmdg{0.135}{0.001}\n & \\pmdg{0.130}{0.003}\n & \\pmdg{0.109}{0.002}\n & \\pmdg{0.120}{0.004}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 6: {\\it\nComparison for $g(\\xi) = (1-\\xi)^2$\n}\n\\end{center}\n\n\\clearpage\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{50.6}{0.12}\n & \\pmdg{50.7}{0.13}\n & \\pmdg{50.7}{0.15}\n & \\pmdg{58.6}{1.29}\n & \\pmdg{72.9}{1.56}\n & \\pmdg{54.7}{2.1}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{27.1}{0.05}\n & \\pmdg{27.1}{0.16}\n & \\pmdg{27.0}{0.07}\n & \\pmdg{36.4}{0.57}\n & \\pmdg{40.0}{0.84}\n & \\pmdg{34.9}{1.0}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{8.51}{0.02}\n & \\pmdg{8.51}{0.02}\n & \\pmdg{8.52}{0.02}\n & \\pmdg{13.8}{0.35}\n & \\pmdg{13.3}{0.43}\n & \\pmdg{13.9}{0.2}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{49.8}{0.10}\n & \\pmdg{49.7}{0.05}\n & \\pmdg{49.6}{0.14}\n & \\pmdg{41.2}{0.55}\n & \\pmdg{47.2}{0.91}\n & \\pmdg{41.9}{0.38}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{29.0}{0.05}\n & \\pmdg{29.0}{0.03}\n & \\pmdg{28.8}{0.07}\n & \\pmdg{27.3}{0.52}\n & \\pmdg{28.2}{0.42}\n & \\pmdg{26.4}{0.19}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{9.09}{0.01}\n & \\pmdg{9.07}{0.01}\n & \\pmdg{9.04}{0.02}\n & \\pmdg{9.58}{0.06}\n & \\pmdg{9.16}{0.15}\n & \\pmdg{9.54}{0.06}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{30.6}{0.08}\n & \\pmdg{30.5}{0.04}\n & \\pmdg{30.5}{0.12}\n & \\pmdg{32.0}{0.34}\n & \\pmdg{36.3}{0.59}\n & \\pmdg{32.4}{0.52}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{24.3}{0.04}\n & \\pmdg{24.3}{0.03}\n & \\pmdg{24.3}{0.07}\n & \\pmdg{27.6}{0.56}\n & \\pmdg{28.4}{0.35}\n & \\pmdg{27.6}{0.21}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{7.88}{0.01}\n & \\pmdg{7.86}{0.01}\n & \\pmdg{7.87}{0.02}\n & \\pmdg{9.63}{0.21}\n & \\pmdg{9.50}{0.15}\n & 
\\pmdg{9.47}{0.06}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 7: {\\it\nComparison for $q(\\xi) = (1-\\xi)$\n}\n\\end{center}\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{6.84}{0.02}\n & \\pmdg{6.84}{0.01}\n & \\pmdg{6.82}{0.02}\n & \\pmdg{8.20}{0.25}\n & \\pmdg{11.6}{0.14}\n & \\pmdg{8.26}{0.45}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{4.09}{0.01}\n & \\pmdg{4.07}{0.01}\n & \\pmdg{4.07}{0.01}\n & \\pmdg{5.70}{0.11}\n & \\pmdg{6.69}{0.16}\n & \\pmdg{5.68}{0.17}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.39}{0.004}\n & \\pmdg{1.39}{0.005}\n & \\pmdg{1.39}{0.003}\n & \\pmdg{2.41}{0.07}\n & \\pmdg{2.33}{0.05}\n & \\pmdg{2.34}{0.05}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{3.19}{0.01}\n & \\pmdg{3.19}{0.01}\n & \\pmdg{3.19}{0.01}\n & \\pmdg{0.686}{0.09}\n & \\pmdg{1.65}{0.03}\n & \\pmdg{0.691}{0.10}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{2.06}{0.005}\n & \\pmdg{2.06}{0.002}\n & \\pmdg{2.05}{0.003}\n & \\pmdg{1.00}{0.08}\n & \\pmdg{1.14}{0.03}\n & \\pmdg{0.866}{0.05}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.711}{0.001}\n & \\pmdg{0.710}{0.001}\n & \\pmdg{0.709}{0.001}\n & \\pmdg{0.500}{0.006}\n & \\pmdg{0.471}{0.01}\n & \\pmdg{0.442}{0.017}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.712}{0.003}\n & \\pmdg{0.711}{0.001}\n & \\pmdg{0.710}{0.002}\n & \\pmdg{0.157}{0.026}\n & \\pmdg{0.373}{0.008}\n & \\pmdg{0.082}{0.038}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.692}{0.002}\n & \\pmdg{0.690}{0.001}\n & \\pmdg{0.690}{0.001}\n & \\pmdg{0.411}{0.020}\n & \\pmdg{0.408}{0.022}\n & \\pmdg{0.340}{0.023}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.245}{0.001}\n & \\pmdg{0.245}{0.001}\n & \\pmdg{0.245}{0.001}\n & \\pmdg{0.194}{0.012}\n & \\pmdg{0.172}{0.007}\n & \\pmdg{0.161}{0.008}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 8: {\\it\nComparison for $g(\\xi) = (1-\\xi)$\n}\n\\end{center}\n\n\\end{appendix}\n\n\\clearpage\n\n\\newcommand{\\bibitema}[1]{\\bibitem{#1}}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\n\\\n\n\n\n\n\nThe AdS\/conformal field theory (CFT) correspondence~\\cite{MaldacenaOriginal,GKP,Witten,WittenThermal}\nhas yielded many important insights into the dynamics of strongly\ncoupled gauge theories. Among numerous results obtained so far,\none of the most striking is the universality of the ratio of the\nshear viscosity $\\eta$ to the entropy density\n$s$~\\cite{Policastro:2001yc,Kovtun:2003wp,Buchel:2003tz,KSSbound}\n\\begin{equation}\n\\label{bound} \\frac{\\eta}{s} = \\frac{1}{4\\pi}\n\\end{equation}\nfor all gauge theories with an Einstein gravity dual in the limit\n$N \\to \\infty$ and ${\\lambda} \\to \\infty$. Here, $N$ is the number of\ncolors and ${\\lambda}$ is the 't Hooft coupling. It was further\nconjectured in~\\cite{KSSbound} that~(\\ref{bound}) is a universal\nlower bound [the Kovtun-Starinets-Son (KSS) bound] for all materials. 
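As a rough numerical aside, restoring $\\hbar$ and $k_B$ the conjectured bound~(\\ref{bound}) reads $\\eta\/s \\geq \\hbar\/(4\\pi k_B) \\simeq 6.1\\times 10^{-13}~{\\rm K\\,s}$. The short Python sketch below evaluates this number and compares it with liquid water near room temperature; the water viscosity, molar entropy and density used are approximate textbook values quoted only for illustration, not results of this paper.
\\begin{verbatim}
# Rough illustration of the KSS bound in SI units; the water data below are
# approximate textbook values (assumptions for illustration only).
import math

hbar = 1.054571817e-34          # J s
kB   = 1.380649e-23             # J / K
kss  = hbar / (4.0 * math.pi * kB)          # eta/s bound in K s
print("KSS bound : eta/s >= %.3e K s" % kss)

eta_water = 0.89e-3             # shear viscosity of water near 25 C, Pa s
s_molar   = 69.95               # molar entropy of liquid water, J/(mol K)
M, rho    = 18.015e-3, 997.0    # molar mass (kg/mol) and density (kg/m^3)
s_water   = s_molar / M * rho   # entropy density, J/(K m^3)
ratio     = (eta_water / s_water) / kss
print("water     : eta/s  = %.3e K s (about %.0f x the bound)"
      % (eta_water / s_water, ratio))
\\end{verbatim}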
So far, all known\nsubstances including water and liquid helium satisfy the bound.\nThe systems coming closest to the bound include the quark-gluon\nplasma created at Relativistic Heavy Ion Collider (RHIC)~\\cite{Teaney:2003kp,rr1,songHeinz,r2,Dusling:2007gi, Adare:2006nq}\nand certain cold atomic gases in the unitarity limit (see\ne.g.~\\cite{Schafer:2007ib}). $\\eta\/s$ for pure gluon QCD\nslightly above the deconfinement temperature has also been\ncalculated on the lattice recently~\\cite{Meyer:2007ic} and is\nabout $30 \\%$ larger than~(\\ref{bound}). See\nalso~\\cite{sakai}. See~\\cite{Cohen:2007qr,Cherman:2007fj,Chen:2007jq,Son:2007xw,Fouxon:2007pz}\nfor other discussions of the bound.\n\nNow, as stated above, the ratio~(\\ref{bound}) was obtained for a class of gauge theories whose holographic duals are dictated by classical Einstein gravity (coupled to matter). More generally, string theory (or any\nquantum theory of gravity) contains higher derivative corrections from stringy or quantum effects,\ninclusion of which will modify the ratio. In terms of gauge theories,\nsuch modifications correspond to $1\/{\\lambda}$ or $1\/N$ corrections. As a concrete example, let us take ${\\cal N}=4$ super-Yang-Mills theory, whose dual corresponds to type IIB\nstring theory on $AdS_5 \\times S^5$. The leading order correction in $1\/{\\lambda}$ arises from stringy corrections to the low-energy effective action of type IIB supergravity, schematically of the form ${\\alpha'}^3 R^4$.\nThe correction to $\\eta\/s$ due to such a term was calculated\nin~\\cite{BLS,Benincasa:2005qc}. It was found that the correction is positive, consistent with the\nconjectured bound.\n\nIn this paper, instead of limiting ourselves to\nspecific known string theory corrections, we explore the modification of $\\eta\/s$ due to\ngeneric higher derivative terms in the holographic gravity dual. The reason is partly pragmatic: other\nthan in a few maximally supersymmetric circumstances, very little\nis known about forms of higher derivative corrections generated in string theory. Given the vastness of the string\nlandscape~\\cite{landscape}, one expects that generic corrections do\noccur. Restricting to the gravity sector in $AdS_5$, the leading order\nhigher derivative corrections can be written as\\footnote{Our\nconventions are those of \\cite{Carroll}. In this section we suppress Gibbons-Hawking surface terms.}\n \\begin{equation} \\label{epr}\nI= {1 \\over 16 \\pi G_N} \\int d^5 x \\, \\sqrt{- g} \\left(R - 2 \\Lambda +\n\\lad^2 \\left({\\alpha}_1 R^2+ {\\alpha}_2 R_{\\mu \\nu} R^{\\mu \\nu}+{\\alpha}_3 R^{\\mu\n\\nu\\rho \\sigma} R_{\\mu \\nu \\rho\\sigma} \\right)\\right) \\ ,\n \\end{equation}\nwhere ${\\Lambda} = -{6 \\over \\lad^2}$ and for now we assume that ${\\alpha}_i \\sim\n{{\\alpha'} \\over \\lad^2} \\ll 1$. Other terms with additional derivatives\nor factors of $R$ are naturally suppressed by higher powers of ${{\\alpha'} \\over\n\\lad^2}$. String loop (quantum) corrections can also generate such terms,\nbut they are suppressed by powers of $g_s$ and we will consistently neglect them by taking $g_s \\rightarrow 0$ limit.\\footnote{Note that to calculate $g_s$ corrections, all the light fields must be taken into account. 
In addition, the calculation of $\\eta\/s$ could be more subtle once we begin to include quantum effects.} To lowest order\nin ${\\alpha}_i$ the correction to $\\eta\/s$ will be a linear\ncombination of ${\\alpha}_i$'s, and the viscosity bound\nis then violated for one side of the half-plane.\nSpecifically, we will find\n \\begin{equation}\n {\\eta \\over s} = {1 \\over 4 \\pi} \\left(1 - 8 {\\alpha}_3 \\right) + O({\\alpha}_i^2)\n \\end{equation}\nand hence the bound is violated for ${\\alpha}_3>0$. Note that the above\nexpression is independent of ${\\alpha}_1$ and ${\\alpha}_2$. This can be\ninferred from a field redefinition argument (see\nSec.\\ref{ap:fR}).\n\nHow do we interpret these violations? Possible scenarios are:\n\n\\begin{enumerate}\n\n\\item The bound can be violated. For example, this scenario would be realized if one explicitly finds a well-defined string theory on $AdS_5$ which generates a stringy correction with ${\\alpha}_3>0$. (See~\\cite{new} for a plausible counterexample to the KSS bound.)\n\n\\item The bound is correct (for example, if one can prove it using a field\ntheoretical method), and a bulk gravity theory with ${\\alpha}_3>0$ cannot have a well-defined boundary CFT dual.\n\n \\begin{enumerate}\n\n\\item The bulk theory is manifestly\ninconsistent as an effective theory. For example, it could\nviolate bulk causality or unitarity.\n\n\n\\item It is impossible to generate such a low-energy effective\nclassical action from a consistent quantum theory of gravity. In\nmodern language we say that the theory lies in the swampland of\nstring theory.\n\n \\end{enumerate}\n\n\\end{enumerate}\n\nAny of these alternatives, if realized, is interesting. Needless\nto say, possibility 1 would be interesting.\nGiven that recent analyses from RHIC\ndata~\\cite{rr1,songHeinz,r2,Dusling:2007gi,Adare:2006nq} indicate \nthe $\\eta\/s$ is close to (and could be even smaller than) the bound, \nthis further motivates\nto investigate the universality of the KSS bound in holographic\nmodels.\n\nPossibility 2(a) should help clarify the physical origin of the\nbound by correlating bulk pathologies and the violation of the\nbound. Possibility 2(b) could provide powerful tools for\nconstraining possible higher derivative corrections in the string\nlandscape. Note that while there are some nice no-go theorems\nwhich rule out classes of nongravitational effective field\ntheories \\cite{AADNR} (also see \\cite{AKS}), the generalization of\nthe arguments of~\\cite{AADNR} to gravitational theories is subtle\nand difficult. Thus, constraints from AdS\/CFT based on the\nconsistency of the boundary theory would be valuable.\n\nIn investigating the scenarios above, Gauss-Bonnet (GB) gravity will\nprovide a useful model. Gauss-Bonnet gravity, defined by the\nclassical action of the form~\\cite{Zwiebach}\n\\begin{equation}\n\\label{action} I = \\frac{1}{16\\pi G_N} \\mathop\\int{d^{5}x \\,\n\\sqrt{-g} \\, \\left[R-2\\Lambda+ {{\\lambda}_{GB} \\over 2} \\lad^2\n(R^2-4R_{\\mu\\nu}R^{\\mu\\nu}+R_{\\mu\\nu\\rho\\sigma}R^{\\mu\\nu\\rho\\sigma})\n\\right]} \\ ,\n\\end{equation}\nhas many nice properties that are absent for theories with more general ratios of the ${\\alpha}_i$'s. For example, expanding around flat Minkowski space, the metric fluctuations\nhave exactly the same quadratic kinetic terms as those in\nEinstein gravity. All higher derivative terms\ncancel~\\cite{Zwiebach}. 
Similarly, expanding around the\nAdS black brane geometry, which will be the main focus of the\npaper, there are also only second derivatives on the\nmetric fluctuations. Thus small metric fluctuations can be\nquantized for finite values of the parameter\n${\\lambda}_{GB}$.\\footnote{Generic theories in~(\\ref{epr}) contain four\nderivatives and a consistent quantization is not possible other\nthan treating higher derivative terms as perturbations.}\nFurthermore, crucial for our investigation is its remarkable feature of solvability: sets of\nexact solutions to the classical equation of motion have been\nobtained \\cite{BD,Cai} and the exact form of the Gibbons-Hawking\nsurface term is known \\cite{Myers}.\n\nGiven these nice features of Gauss-Bonnet gravity, we will venture outside the regime of the perturbatively corrected Einstein gravity and study the theory with finite values of ${\\lambda}_{GB}$.\nTo physically motivate this, one could envision that somewhere in the string landscape ${\\lambda}_{GB}$ is large but all the other higher derivative corrections are small.\nOne of the main results of the paper is a value of $\\eta\/s$ for the CFT dual of Gauss-Bonnet gravity, \\emph{nonperturbative} in ${\\lambda}_{GB}$:\\footnote{We have also computed the value of $\\eta\/s$ for Gauss-Bonnet gravity\nfor any spacetime dimension $D$ and the expression is given\nin~(\\ref{final}).}\n\\begin{equation} \\label{advertise}\n\\frac{\\eta}{s}=\\frac{1}{4\\pi}[1- 4 {\\lambda}_{GB}].\n\\end{equation}\nWe emphasize that this is not just a linearly corrected value.\nIn particular, the viscosity bound is badly violated as ${\\lambda}_{GB} \\rightarrow \\frac{1}{4}$.\nAs we will discuss shortly, ${\\lambda}_{GB}$ is bounded above by $\\frac{1}{4}$ for the theory to have a boundary CFT, and $\\eta\/s$ never decreases beyond $0$.\n\nGiven the result~(\\ref{advertise}) for Gauss-Bonnet, if the possibility 2(a) were correct, we would expect that pathologies would become easier to discern in the limit where $\\eta\/s\\rightarrow 0$. We will investigate this line of thought in Sec.\\ref{gravitoncone}. On the other hand, thinking along the line of possibility 1, the Gauss-Bonnet theory with ${\\lambda}_{GB}$ arbitrarily close to $\\frac{1}{4}$ may have a concrete realization in the string landscape. In this case, there exists no lower bound for $\\eta\/s$, and investigating the CFT dual of Gauss-Bonnet theory should clarify how to evade the heuristic mean free path argument for the existence of the lower bound (presented in, e.g., \\cite{KSSbound}).\n\nThe plan of the paper is as follows. In Sec.\\ref{pre}, we review\nvarious properties of two-point correlation functions and outline\nthe real-time AdS\/CFT calculation of the shear viscosity. We then\nexplicitly calculate the shear viscosity for Gauss-Bonnet theory\nin Sec.\\ref{shear}. In Sec.\\ref{gravitoncone}, we seek possible\npathologies associated with theories violating the viscosity\nbound. There, we will find a curious new metastable state for\nlarge enough ${\\lambda}_{GB}$. Finally in Sec.\\ref{discussion}, we conclude\nwith various remarks and speculations. To make the paper fairly\nself-contained, various appendices are added. 
In particular,\nquasinormal mode calculations of the shear viscosity are\npresented in Appendix~\\ref{ap:so} and one using the membrane\nparadigm in Appendix~\\ref{junk}.\n\n\n\n\\section{Shear viscosity in $R^2$ theories: preliminaries} \\label{pre}\n\n\n\\subsection{Two-point correlation functions and viscosity} \\label{meds}\n\nLet us begin by collecting various properties of two-point\ncorrelation functions,\nfollowing~\\cite{Policastro:2002se,Policastro:2002tn,KovtunEV} (see\nalso~\\cite{Son:2007vk}). Consider retarded two-point correlation\nfunctions of the stress energy tensor $T_{\\mu \\nu}$ of a CFT in\n$3+1$-dimensional Minkowski space at a finite temperature $T$\n \\begin{equation} \\label{ttC}\n G_{\\mu\\nu,{\\alpha} \\beta} (\\omega,\\vec q) = - i\\int dt d\\vec x e^{ i\n\\omega t- i \\vec q\\cdot \\vec x} \\theta( t) \\vev{\\left[T_{\\mu\n\\nu}(t,\\vec x) , T_{{\\alpha} \\beta} (0,0)\\right]}.\n \\end{equation}\nThey describe linear responses of the system to small disturbances.\nIt turns out that various components of~(\\ref{ttC}) can be expressed in terms of\nthree independent scalar functions. For example, if we take spatial\nmomentum to be $\\vec q = (0,0 ,q)$, then\n \\begin{equation}\n G_{12,12}= {1\\over 2} G_3 (\\omega, q) , \\ \\ \\ \\ \\ G_{13,13} = {1\\over 2}{\\omega^2 \\over \\omega^2 -q^2}\n G_1 (\\omega, q), \\ \\ \\ \\ \\ G_{33,33} = {2 \\over 3} {\\omega^4 \\over (\\omega^2 -q^2)^2} G_2 (\\omega,\n q),\n \\end{equation}\nand so on. At $\\vec q=0$ all three function $G_{1,2,3} (\\omega,0)$\nare equal to one another as a consequence of rotational\nsymmetry.\n\nWhen $\\omega, |\\vec q| \\ll T$ one expects the CFT plasma to be described by hydrodynamics.\nThe scalar functions $G_{1,2,3}$ encode the hydrodynamic behavior of shear, sound, and transverse modes, respectively.\nMore explicitly, they have the following properties:\n \\begin{itemize}\n \\item $G_1$ has a simple diffusion pole at $\\omega= - i D q^2$, where\n \\begin{equation} \\label{shearA}\n D= {\\eta \\over {\\epsilon} + P} = {1 \\over T} { \\eta \\over s}\n \\end{equation}\n with ${\\epsilon}$ and $s$ being the energy and entropy density, and $P$ the pressure of the\n gauge theory plasma.\n \\item $G_2$ has a simple pole at $\\omega= \\pm c_s q - i \\Gamma_{s} q^2$, where $c_s$ is\n the speed of sound and $\\Gamma_{s}$ is the sound damping\n constant, given by (for conformal theories)\n \\begin{equation} \\label{sound}\n \\Gamma_s = {2 \\over 3T}{\\eta \\over s}\n \\end{equation}\n \\item $\\eta$ can also be obtained from $G_{1,2,3}$ at zero spatial momentum by\nthe Kubo formula, e.g.,\n \\begin{equation} \\label{rrp}\n \\eta= \\lim_{\\omega\\rightarrow 0} {1\\over \\omega} {\\rm Im} G_{12,12} (\\omega,0)\n \\end{equation}\n\n \\end{itemize}\nEquations (\\ref{shearA})--(\\ref{rrp}) provide three independent ways of extracting $\\eta\/s$.\nWe provide calculations utilizing the first two in Appendix~\\ref{ap:so}.\nA calculation utilizing the Kubo formula (\\ref{rrp}) is easier, and we will explicitly implement it for Gauss-Bonnet theory in Sec.\\ref{shear}.\nIn the next subsection, we outline how to obtain retarded two-point functions within the framework of the real-time AdS\/CFT correspondence.\n\n\n\\subsection{AdS\/CFT calculation of shear viscosity: Outline} \\label{adscft}\n\nThe stress tensor correlators for a boundary CFT described by\n(\\ref{epr}) or (\\ref{action}), can be computed from gravity as\nfollows. One first finds a black brane solution (i.e. 
a black hole\nwhose horizon is ${\\bf R}^3$) to the equations of motion of\n(\\ref{epr}) or (\\ref{action}). Such a solution describes the\nboundary theory on ${\\bf R}^{3,1}$ at a temperature $T$, which can\nbe identified with the Hawking temperature of the black brane. The\nentropy and energy density of the boundary theory are given by the\ncorresponding quantities of the black brane. The fluctuations of\nthe boundary theory stress tensor are described in the gravity\nlanguage by small metric fluctuations $h_{\\mu \\nu}$ around the\nblack brane solution. In particular, after taking into account of\nvarious symmetries and gauge degrees of freedom, the metric\nfluctuations can be combined into three independent scalar fields\n$\\phi_a, a=1,2,3$, which are dual to the three functions $G_a$ of the\nboundary theory.\n\nTo find $G_a$, one could first work out the bulk two-point\nretarded function for $\\phi_a$ and then take both points to the\nboundary of the black brane geometry. In practice it is often more\nconvenient to use the prescription proposed in~\\cite{SS}, which\ncan be derived from the real-time AdS\/CFT\ncorrespondence~\\cite{HS}. Let us briefly review it here:\n\n\\begin{enumerate}\n\n\\item Solve the linearized equation of motion for $\\phi_a (r; k)$ with the\nfollowing boundary conditions:\n\n\\begin{enumerate}\n\n\\item Impose the infalling boundary condition at the horizon. In other words, modes with timelike momenta should be\nfalling into the horizon and modes with spacelike momenta should\nbe regular.\n\n\\item Take $r$ to be the radial direction of the black brane geometry\nwith the boundary at $r=\\infty$. Require\n \\begin{equation} \\label{bdw}\n \\phi_a (r; k)|_{r= {1 \\over {\\epsilon}}} = J_a (k), \\qquad k = (\\omega, q),\n \\end{equation}\nwhere ${\\epsilon} \\to 0$ imposes an infrared cutoff near the infinity\nof the spacetime and $J_a (k)$ is an infinitesimal boundary source for the\nbulk field $\\phi_a(r; k)$.\n\n\\end{enumerate}\n\n\\item Plug in the above solution into the action, expanded to quadratic order in $\\phi_a (r; k)$.\nIt will reduce to pure surface contributions.\nThe prescription instructs us to pick up only the contribution from the boundary at $r={1 \\over {\\epsilon}}$.\nThe resulting action can be written as\n\\begin{equation}\\label{Sbd}\nS = - {1\\over 2} \\int\\!{d^4k\\over (2\\pi )^4}\\,\n J_a (-k) {\\cal F}_a (k,r) J_a (k) \\Big|_{r={1 \\over {\\epsilon}}}\\ .\n\\end{equation}\nFinally the retarded function $G_a (k)$ in momentum space for the\nboundary field dual to $\\phi_a$ is given by\n \\begin{equation} \\label{eo}\n G_a (k) = \\lim_{{\\epsilon} \\to 0} {\\cal F}_a (k,r)\\Big|_{r={1 \\over {\\epsilon}}} \\\n .\n \\end{equation}\n\n\\end{enumerate}\nUsing the Kubo formula~(\\ref{rrp}), we can get the shear viscosity by studying a mode $\\phi_3$ with $\\vec q=0$ in the low-frequency limit $\\omega \\rightarrow 0$. We will do so in the next section. Alternatively, using (\\ref{shearA}) or (\\ref{sound}), we can read off the viscosity from pole structures of retarded two-point functions. Such a calculation is a bit more involved and will be performed in Appendix~\\ref{ap:so}.\n\nThe above prescription for computing retarded functions in AdS\/CFT\nworks well if the bulk scalar field has only two derivatives as in\nGauss-Bonnet case~(\\ref{action}). If the bulk action contains more\nthan two derivatives, complications could arise even if one treats\nthe higher derivative parts as perturbations. 
For example, one\nneeds to add Gibbons-Hawking surface terms to ensure a\nwell-defined variational problem. A systematic prescription for\ndoing so is, however, not available at the moment beyond the\nlinear order. Thus there are potential ambiguities in implementing\n(\\ref{eo}).\\footnote{In~\\cite{BLS}, such additional terms do not\nappear to affect the calculation at the order under discussion\nthere.} Clearly these are important questions which should be\nexplored more systematically. At the $R^2$ level, as we describe\nbelow in Sec.\\ref{ap:fR}, all of our calculations can be reduced\nto the Gauss-Bonnet case in which these potential complications do\nnot arise.\n\n\n\\subsection{Field redefinitions in $R^2$ theories} \\label{ap:fR}\n\nWe now show that to linear order in ${\\alpha}_i$, $\\eta\/s$\nfor~(\\ref{epr}) is independent of ${\\alpha}_1$ and ${\\alpha}_2$. It is well\nknown that to linear order in ${\\alpha}_i$, one can make a field\nredefinition to remove the $R^2$ and $R_{\\mu \\nu}R^{\\mu\\nu}$ term\nin~(\\ref{epr}). More explicitly, in~(\\ref{epr}) set ${\\alpha}_3 =0$ and\ntake\n \\begin{equation} \\label{eep}\n g_{\\mu \\nu} = \\tilde g_{\\mu \\nu} + {\\alpha}_2 \\lad^2 \\tilde R_{\\mu \\nu} - {\\lad^2 \\over 3}\n ({\\alpha}_2\n+ 2 {\\alpha}_1 ) \\tilde g_{\\mu \\nu} \\tilde R,\n \\end{equation}\nwhere $\\tilde R$ denotes the Ricci scalar for $\\tilde g_{\\mu\\nu}$\nand so on. Then (\\ref{epr}) becomes\n \\begin{equation} \\label{newz}\n I = {1 \\over 16 \\pi G_N} \\int \\sqrt{- \\tilde g} ((1+ {\\cal K})\n \\tilde R - 2 \\Lambda ) + O({\\alpha}^2)\n = {1 + {\\cal K}\\over 16 \\pi G_N} \\int \\sqrt{- g} ( \\tilde R - 2 \\tilde \\Lambda ) +\n O({\\alpha}^2)\n \\end{equation}\nwith \\begin{equation} {\\cal K}= {2 {\\Lambda} \\lad^2 \\over 3} \\left(5 {\\alpha}_1 + {\\alpha}_2 \\right) , \\qquad\n \\tilde {\\Lambda} = {{\\Lambda} \\over 1 + {\\cal K}} \\ .\n\\end{equation} It follows from (\\ref{eep}) that a background solution\n$g^{(0)}$ to (\\ref{epr}) (with ${\\alpha}_3=0$) is related to a solution\n$\\tilde g^{(0)}$ to (\\ref{newz}) by\n \\begin{equation} \\label{sba}\n ds^2_0 = A^2 \\tilde{ds}^2_0, \\qquad A = 1- { {\\cal K} \\over 3} \\ .\n \\end{equation}\nThe scaling in (\\ref{sba}) does not change the background Hawking\ntemperature. The diffusion pole~(\\ref{shearA}) calculated using\n~(\\ref{newz}) around $\\tilde g^{(0)}$ then gives the standard\nresult $D = {1 \\over 4 \\pi T}$~\\cite{Policastro:2002se}.\n Thus we conclude that $\\eta\/s = {1 \\over\n4 \\pi}$ for (\\ref{epr}) with ${\\alpha}_3=0$. Then to linear order\nin ${\\alpha}_i$,\n $\\eta\/s$ can only\ndepend on ${\\alpha}_3$. To find this dependence, it is convenient to\nwork with the Gauss-Bonnet theory~(\\ref{action}). 
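As a quick cross-check of the normalizations above (a minimal aside, not part of the original derivation): comparing~(\\ref{action}) with~(\\ref{epr}) identifies ${\\alpha}_1={\\lambda}_{GB}\/2$, ${\\alpha}_2=-2{\\lambda}_{GB}$ and ${\\alpha}_3={\\lambda}_{GB}\/2$, so the linear-order formula $\\eta\/s=(1-8{\\alpha}_3)\/4\\pi$ quoted in the introduction reproduces the ${\\lambda}_{GB}$-linear part of~(\\ref{advertise}). The short {\\tt sympy} sketch below checks this and also verifies the scaling~(\\ref{sba}), $A=1-{\\cal K}\/3$, directly from~(\\ref{eep}), assuming the background is an Einstein space with $\\tilde R_{\\mu\\nu} = {2 \\tilde{\\Lambda} \\over 3} \\tilde g_{\\mu\\nu}$ and $\\tilde R = {10 \\over 3} \\tilde{\\Lambda}$, as holds for the leading-order AdS black brane.
\\begin{verbatim}
# Cross-check (illustrative): (i) Gauss-Bonnet values of alpha_i and the
# linear-order eta/s, (ii) the conformal factor A = 1 - K/3 of eq. (sba)
# on an Einstein-space background, to linear order in alpha_1, alpha_2.
import sympy as sp

lam, a1, a2, L = sp.symbols('lambda_GB alpha_1 alpha_2 ell', real=True)

# (i) comparing (action) with (epr): alpha_1 = lam/2, alpha_2 = -2 lam
alpha3 = lam/2
print(sp.simplify((1 - 8*alpha3)/(4*sp.pi) - (1 - 4*lam)/(4*sp.pi)))   # -> 0

# (ii) field redefinition (eep) in D = 5 with R_mn = (2 Lambda/3) g_mn,
#      R = (10/3) Lambda, Lambda = -6/ell^2:
Lam    = -6/L**2
factor = 1 + a2*L**2*(2*Lam/3) - (L**2/3)*(a2 + 2*a1)*(10*Lam/3)
K      = (2*Lam*L**2/3)*(5*a1 + a2)          # cal K of the text
print(sp.simplify(factor - (1 - 2*K/3)))     # -> 0, i.e. A^2 = 1 - 2K/3, A = 1 - K/3
\\end{verbatim}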
Gauss-Bonnet\ngravity is not only much simpler than~(\\ref{epr}) with generic\n${\\alpha}_3 \\neq 0$, but also contains only second derivative terms in\nthe equations of motion for $h_{\\mu \\nu}$, making the extraction\nof boundary correlators unambiguous.\n\n\n\n\\section{Shear Viscosity for Gauss-Bonnet Gravity}\n\\label{shear}\n\nIn this section, after briefly reviewing the thermodynamic properties of the black brane solution, we compute the shear viscosity for Gauss-Bonnet gravity~(\\ref{action}) nonperturbatively in ${\\lambda}_{GB}$.\nHere, we follow the outline presented in the previous section, with the Kubo formula (\\ref{rrp}) in mind.\nIn Appendix~\\ref{ap:so}, we extract $\\eta\/s$ from the shear channel~(\\ref{shearA}) and the sound channel~(\\ref{sound}) (perturbatively in ${\\lambda}_{GB}$).\nThere we also find that the sound velocity remains at the conformal value $c_s^2 = {1 \\over 3}$ as it should.\nIn Appendix~\\ref{junk}, we provide a membrane paradigm calculation, again nonperturbatively in ${\\lambda}_{GB}$.\nAll four methods give the same result.\n\n\n\n\\subsection{Black brane geometry and thermodynamics}\n\nExact solutions and thermodynamic properties of black objects in\nGauss-Bonnet gravity~(\\ref{action}) were discussed\nin~\\cite{Cai} (see also \\cite{Nojiri:2001aj, Cho:2002hq,Neupane:2002bf,Neupane:2003vz}). Here we summarize some features relevant for our\ndiscussion below. The black brane solution can be written as\n\\begin{equation}\n\\label{bba} ds^2=-f(r)N_{\\sharp}^2dt^2\n+\\frac{1}{f(r)}dr^2+\\frac{r^2}{\\lad^2}\n \\left(\\mathop\\sum_{i=1}^{3}dx_i^2 \\right),\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{perturb} f(r)=\\frac{r^2}{\\lad^2}\\frac{1}{2{\\lambda}_{GB}}\n\\left[1-\\sqrt{1-4{\\lambda}_{GB}\\left(1-\\frac{r_{+}^{\\,4}}{r^4}\\right)} \\right] \\ .\n\\end{equation}\nIn (\\ref{bba}), $N_{\\sharp}$ is an arbitrary constant which specifies the\nspeed of light of the boundary theory. Note that as $r \\to \\infty$,\n\\begin{equation} \\label{Ade}\n f(r) \\to {r^2 \\over a^2 \\lad^2}, \\qquad {\\rm with} \\qquad\n a^2 \\equiv\n {1\\over 2}\\left(1+\\sqrt{1-4{\\lambda}_{GB}}\\right)\\ .\n \\end{equation}\nIt is straightforward to see that the AdS curvature scale of these\ngeometries is $a \\lad$.\\footnote{Here we note that the Gauss-Bonnet\ntheory also admits another background with the curvature scale\n$\\tilde{a}\\,\\lad$ where $\\tilde{a}^2={1\\over 2}\\left(1-\\sqrt{1-4{\\lambda}_{GB}}\\right)$.\nEven though this remains an asymptotically AdS solution for ${\\lambda}_{GB}>0$,\nwe do not consider it here because this background is unstable and\ncontains ghosts \\cite{BD}.} If we choose $N_{\\sharp} = a$, then the boundary\nspeed of light is unity. However, we will leave it unspecified in\nthe following. We assume that ${\\lambda}_{GB}\\leq\\frac{1}{4}$. Beyond this\npoint, (\\ref{action}) does not admit a vacuum AdS solution, and\ncannot have a boundary CFT dual. In passing, we note that while the\ncurvature singularity occurs at $r=0$ for ${\\lambda}_{GB} \\geq 0$, it shifts to\n$r =r_+ \\left(1-{1 \\over 4 {\\lambda}_{GB}}\\right)^{-\\frac{1}{4}}$ for ${\\lambda}_{GB}<0$.\n\n\nThe horizon is located at $r=r_{+}$ and the Hawking temperature,\nentropy density, and energy density of the black brane are\n\\footnote{Note that for {\\it planar} black branes in Gauss-Bonnet\ntheory, the area law for entropy still holds \\cite{oldy}. 
This is\nnot the case for more general higher-derivative-corrected black\nobjects.}\n\\begin{equation}\n\\label{temperature} T =N_{\\sharp} \\frac{r_{+}}{\\pi \\lad^2},\n\\end{equation}\n\\begin{equation} \\label{entr}\ns =\\frac{1}{4G_{N}}\\left(\\frac{r_{+}}{\\lad}\\right)^3\n=\\frac{(\\pi \\lad)^3}{4G_{N}}\\frac{(T)^3}{N_{\\sharp}^3},\n\\qquad {{\\epsilon}} = {3 \\over 4} T s \\ .\n\\end{equation}\nIf we fix the boundary theory temperature $T$ and the speed of light to be unity (taking $N_{\\sharp}=a$), the entropy and energy density are monotonically\nincreasing functions of ${\\lambda}_{GB}$, reaching a maximum at ${\\lambda}_{GB}={1 \\over 4}$\nand going to zero as ${\\lambda}_{GB} \\to -\\infty$.\n\nTo make our discussion self-contained, in Appendix~\\ref{ap:ther}, we compute the free\nenergy of the black brane and derive the entropy density. In\nparticular, we show that the contribution from the Gibbons-Hawking\nsurface term to the free energy vanishes.\n\n\\subsection{Action and equation of motion for the scalar channel} \\label{scalar}\n\nTo compute the shear viscosity, we now study small metric fluctuations $\\phi = h^1_{\\ 2}$ around the black brane background of the form\n\\begin{equation}\nds^2=-f(r)N_{\\sharp}^2dt^2+\\frac{1}{f(r)}dr^2+\\frac{r^2}{\\lad^2}\n\\left(\\mathop\\sum_{i=1}^{3}dx_i^2 + 2 \\phi(t,\\vec x, r)dx_1 d x_2 \\right) \\ .\n\\end{equation}\nWe will take $\\phi$ to be independent of $x_1$ and $x_2$ and write\n\\begin{equation}\n\\phi(t, \\vec x, r)=\\mathop\\int\\frac{d\\omega dq}{(2\\pi)^{2}}\\, \\phi(r;k)\n\\, e^{-i\\omega t + i q x_3}, \\quad k = (\\omega, 0, 0, q) ,\\ \\ \\phi\n(r; -k)=\\phi^*(r; k) \\ .\n\\end{equation}\nFor notational convenience, let us introduce\n\\begin{equation} \\label{sDef}\nz=\\frac{r}{r_{+}},\\ \\ \\tilde{\\omega}=\\frac{\\lad^2}{r_{+}}\\omega,\\\n\\quad \\tilde{q}=\\frac{\\lad^2}{r_{+}}q, \\qquad\n\\tilde{f}=\\frac{\\lad^2}{r_{+}^2}f = {z^2 \\over 2 {\\lambda}_{GB}} \\left(1 -\n\\sqrt{1-4 {\\lambda}_{GB} + {4 {\\lambda}_{GB} \\over z^4}} \\right).\n\\end{equation}\nThen, at quadratic order, the action for $\\phi$ can be written as\n \\begin{eqnarray} \\label{pee}\n S&=&\\int{dk_1 dk_2 \\over (2 \\pi)^2}S(k_1, k_2) \\ \\ \\ {\\rm with} \\cr\n S(k_1=0, k_2=0)&=&-{1\\over 2} C \\int dz {\\frac{d\\omega dq}{(2\\pi)^{2}}} \\, \\left( K (\\partial_z \\phi)^2 - K_2 \\phi^2 +\n \\partial_z (K_3 \\phi^2) \\right),\n \\end{eqnarray}\nwhere\n \\begin{equation} \\label{vds}\n C = {1 \\over 16 \\pi G_N} \\left(N_{\\sharp} r_+^4\\over \\lad^5\\right), \\ \\ K= z^2 \\tilde{f} (z - {\\lambda}_{GB}\n \\partial_z\\tilde{f}), \\ \\ \\ K_2 = K {\\tilde \\omega^2 \\over N_{\\sharp}^2 \\tilde{f}^2} - \\tilde\n q^2 z \\left(1- {\\lambda}_{GB} \\partial_z^2\\tilde{f} \\right) \\ ,\n \\end{equation}\nand $\\phi^2$ should be understood as a shorthand notation for\n$\\phi(z;k) \\phi (z,-k)$.\nHere, $S$ is the sum of the bulk action (\\ref{action}) and the associated Gibbons-Hawking surface term \\cite{Myers}.\nThe explicit expression for $K_3$ will not be important for our subsequent discussion.\n\nThe equation of motion following from (\\ref{pee}) is\\footnote{An easy way to get the quadratic action~(\\ref{pee}) is to first obtain the linearized equation of motion and then read off $K$ and $K_2$ from it.}\n \\begin{equation} \\label{eom}\n K \\phi'' + K' \\phi' + K_2 \\phi =0 \\ ,\n \\end{equation}\nwhere primes indicate partial derivatives with respect to $z$.\nUsing the equation of motion, the action~(\\ref{pee}) reduces to the surface contributions 
as advertised in Sec.\\ref{adscft},\n \\begin{equation} \\label{rrk}\n S(k_1=0, k_2=0) = -{1\\over 2} C \\int {\\frac{d\\omega dq}{(2\\pi)^{2}}} \\, \\left(K \\phi' \\phi\n + K_3 \\phi^2 \\right)|_{{\\rm surface}} \\ .\n \\end{equation}\nThe prescription described in Sec.\\ref{adscft} instructs us to\npick up the contribution from the boundary at\n$z\\rightarrow\\infty$. Here, the term proportional to $K_3$ will\ngive rise to a real divergent contact term, which is discarded.\n\nA curious thing about (\\ref{pee}) is that for all values of $z$,\nboth $K$ and $K_2$ (but not $K_3$) are proportional to ${1 \\over 4}\n- {\\lambda}_{GB}$.\\footnote{This can be seen by using the following equation\nin $K$ and $K_2$ \\begin{equation} \\tilde{f}'(z) = {2 z (2z^2 -\\tilde{f}) \\over z^2 - 2 {\\lambda}_{GB}\n\\tilde{f}} \\ . \\end{equation} } Thus other than the boundary term the whole action\n(\\ref{pee}) vanishes identically at ${\\lambda}_{GB} = {1 \\over 4}$.\nNevertheless, the equation of motion (\\ref{eom}) remains\nnontrivial in the limit ${\\lambda}_{GB} \\to {1 \\over 4}$ as the ${1 \\over 4} - {\\lambda}_{GB}$ factor cancels\nout. Note that the correlation function does not necessarily go to\nzero in this limit since it also depends on\nthe behavior of the solution to~(\\ref{eom}) and the limiting\nprocedure~(\\ref{rrk}). As we will see momentarily, as least in the\nsmall frequency limit it does become zero with a vanishing shear\nviscosity.\n\n\n\\subsection{Low-frequency expansion and the viscosity}\n\\label{solution}\n\n\nGeneral solutions to the equation of motion~(\\ref{eom}) can be written as\n\\begin{equation}\n\\phi(z; k)=a_{in}(k)\\phi_{in}(z; k)+a_{out}(k)\\phi_{out}(z; k) \\ ,\n\\end{equation}\nwhere $\\phi_{in}$ and $\\phi_{out}$ satisfy infalling and outgoing boundary conditions at the horizon, respectively.\nThey are complex conjugates of each other, and we normalize them by requiring them to approach $1$ as $z\\to \\infty$.\nThen, the prescription of Sec.\\ref{adscft} corresponds to setting\n\\begin{equation} \\label{explicitBC}\na_{in}(k)=J(k)\\ , \\qquad a_{out}(k)=0 \\ ,\n\\end{equation}\nwhere $J(k)$ is an infinitesimal boundary source for the bulk field $\\phi$.\n\nMore explicitly, as $z \\to 1$, various functions in (\\ref{eom}) have the following behavior\n \\begin{equation}\n{K_2 \\over K} \\approx {\\tilde{\\omega}^2 \\over 16 N_{\\sharp}^2 (z-1)^2 } + O((z-1)^{-1})+O(\\tilde{q}^2),\n\\qquad {K' \\over K} = {1 \\over z-1} + O(1) \\ .\n \\end{equation}\nIt follows that near the horizon $z=1$, equation (\\ref{eom}) can\nbe solved by (for $\\vec q =0$)\n \\begin{equation}\n\\phi (z) \\sim (z-1)^{\\pm {i \\tilde{\\omega} \\over 4 N_{\\sharp}}} \\sim (z-1)^{\\pm {i \\omega\n\\over 4 \\pi T}}\n \\end{equation}\nwith the infalling boundary condition corresponding to the\nnegative sign. 
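As an illustrative aside, the near-horizon statements above are easy to confirm numerically: with the expression for $\\tilde{f}'$ from the footnote and the definitions of $K$ and $K_2$ in~(\\ref{vds}) at $\\tilde q=0$, the ratios $(z-1)K'\/K$ and $16 N_{\\sharp}^2 (z-1)^2 K_2\/(\\tilde{\\omega}^2 K)$ both approach $1$ as $z \\to 1$, which is what fixes the indicial exponents $\\pm i\\tilde{\\omega}\/(4 N_{\\sharp})$. A minimal sketch, with sample values ${\\lambda}_{GB}=0.1$, $N_{\\sharp}=a$ and $\\tilde{\\omega}=0.3$ chosen purely for illustration:
\\begin{verbatim}
# Numerical check of the near-horizon behaviour of K and K_2 at q~ = 0.
# Sample values: lambda_GB = 0.1, N_sharp = a, omega~ = 0.3 (illustrative).
import numpy as np

lam = 0.1
a2  = 0.5*(1.0 + np.sqrt(1.0 - 4.0*lam))    # a^2 of eq. (Ade)
Ns  = np.sqrt(a2)                           # N_sharp = a
w   = 0.3                                   # omega~

def ft(z):                                  # f~(z) of eq. (sDef)
    return z**2/(2.0*lam)*(1.0 - np.sqrt(1.0 - 4.0*lam + 4.0*lam/z**4))

def ftp(z):                                 # f~'(z), footnote relation
    return 2.0*z*(2.0*z**2 - ft(z))/(z**2 - 2.0*lam*ft(z))

def K(z):                                   # K of eq. (vds)
    return z**2*ft(z)*(z - lam*ftp(z))

def K2(z):                                  # K_2 of eq. (vds) at q~ = 0
    return K(z)*w**2/(Ns**2*ft(z)**2)

for eps in (1e-2, 1e-3, 1e-4):
    z, h = 1.0 + eps, eps/50.0
    Kp   = (K(z + h) - K(z - h))/(2.0*h)    # central finite difference
    print(eps, eps*Kp/K(z),                              # -> 1
               16.0*Ns**2*eps**2*K2(z)/(w**2*K(z)))      # -> 1
\\end{verbatim}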
To solve (\\ref{eom}) in the small frequency limit,\nit is convenient to write\n\\begin{equation} \\label{anse}\n\\phi_{in} (z; k)=e^{-i \\left({\\tilde{\\omega} \\over 4 N_{\\sharp}}\\right) {\\rm ln}\\left(\\frac{a^2\n\\tilde{f}}{z^2}\\right)} \\left(1-i\\frac{\\tilde{\\omega}} {4\nN_{\\sharp}}g_1(z)+O(\\tilde{\\omega}^2, \\tilde{q}^2)\\right),\n\\end{equation}\nwhere we require $g_1 (z)$ to be nonsingular at the horizon $z=1$.\nWe show in Appendix~\\ref{ap:solo} that $g_1$ is a nonsingular function with the large $z$\nexpansion\n \\begin{equation} \\label{lowEatCFT}\n g_1 (z) = {4 {\\lambda}_{GB} \\over \\sqrt{1-4 {\\lambda}_{GB}}} {a^2 \\over z^4} + O(z^{-8}) \\\n .\n \\end{equation}\nTherefore, with our boundary conditions (\\ref{explicitBC}), we find\n \\begin{equation} \\label{asu}\n \\phi (z;k) =J(k)\\left[ 1 + {i \\tilde{\\omega} \\over 4 N_{\\sharp}} a^2 \\sqrt{1-4 {\\lambda}_{GB}} \\left({1 \\over z^4}\n + O(z^{-8}) \\right) + O(\\tilde{\\omega}^2, \\tilde{q}^2)\\right].\n \\end{equation}\nThis is the right asymptotic behavior for the bulk field $\\phi$\ndescribing metric fluctuations since the CFT stress tensor has\nconformal dimension 4.\n\nPlugging~(\\ref{asu}) into (\\ref{rrk}) and using the expressions for\n$C$ and $K$ in (\\ref{vds}), the prescription described in Sec.\\ref{adscft} gives\n \\begin{equation}\n {\\rm Im} G_{12,12} (\\omega,0)=\\omega{1 \\over 16 \\pi G_N} \\left(r_+^3\\over \\lad^3\\right) (1-4\n {\\lambda}_{GB}) +O(\\omega^2).\n \\end{equation}\nThen, the Kubo formula~(\\ref{rrp}) yields\n \\begin{equation} \\label{ets}\n \\eta = {1 \\over 16 \\pi G_N} \\left(r_+^3\\over \\lad^3\\right) (1-4\n {\\lambda}_{GB}).\n \\end{equation}\nFinally, taking the ratio of (\\ref{ets}) and (\\ref{entr}) we find that\n \\begin{equation} \\label{ror}\n {\\eta \\over s} = {1 \\over 4 \\pi} (1-4\n {\\lambda}_{GB}).\n \\end{equation}\nThis is \\emph{nonperturbative} in ${\\lambda}_{GB}$. Especially,\nthe linear correction is the only nonvanishing term.\\footnote{It would be interesting to find an explanation for vanishing of higher order corrections.}\n\nWe now conclude this section with various remarks:\n\n\\begin{enumerate}\n\n\\item Based on the field redefinition argument presented in\nSec.\\ref{ap:fR}, one finds from (\\ref{ror}) that for (\\ref{epr}),\n \\begin{equation} \\label{oror}\n {\\eta \\over s} = {1 \\over 4 \\pi} \\left(1 - 8 {\\alpha}_3 \\right) + O({\\alpha}_i^2).\n \\end{equation}\nWe have also performed an independent calculation of $\\eta\/s$\n(without using field redefinitions) for~(\\ref{epr}) using all\nthree methods outlined in Sec.\\ref{meds} and\nconfirmed~(\\ref{oror}).\n\n\\item The ratio $\\eta\/s$ dips below the viscosity bound for ${\\lambda}_{GB}\n> 0$ in Gauss-Bonnet gravity and for ${\\alpha}_3 > 0$ in~(\\ref{epr}).\nIn particular, the shear viscosity approaches zero as ${\\lambda}_{GB} \\to {1 \\over 4}$ for Gauss-Bonnet.\n Note that the whole off-shell action becomes zero in this limit. It is\nlikely the on-shell action also vanishes, implying that the\ncorrelation function could become identically zero in this limit.\n\n\n\\item Fixing the temperature $T$ and the boundary speed of light\nto be unity, as we take ${\\lambda}_{GB} \\to -\\infty$, $\\eta \\sim (-{\\lambda}_{GB})^{1\n\\over 4} \\to \\infty$. 
In contrast the entropy density decreases as\n$s \\sim (-{\\lambda}_{GB})^{-{3 \\over 4}} \\to 0$.\n\n\\item The shear viscosity of the boundary conformal field theory\nis associated with absorption of transverse modes by the black\nbrane in the bulk. This is a natural picture since the shear\nviscosity measures the dissipation rate of those fluctuations: the\nquicker the black brane absorbs them, the higher the dissipation\nrate will be.\nFor example, as ${\\lambda}_{GB} \\rightarrow-\\infty$, $\\eta\/s$ approaches\ninfinity; this describes a situation where every bit of the black\nbrane horizon devours the transverse fluctuations very quickly.\nIn this limit the curvature singularity at $z = \\left(1-{1 \\over 4 {\\lambda}_{GB}}\\right)^{-\\frac{1}{4}}$ approaches the horizon and the tidal force near the horizon becomes strong.\nOn the other hand, as ${\\lambda}_{GB} \\to {1 \\over 4}$,\n$\\eta\/s\\rightarrow0$ and the black brane very slowly absorbs transverse modes.\\footnote{We note that for\n${\\lambda}_{GB}=\\frac{1}{4}$ in $4+1$ spacetime dimension, the radial direction of the background\ngeometry resembles a ${\\rm Ba\\tilde{n}ados}$-Teitelboim-Zanelli (BTZ) black brane.}\n\n\\item The calculation leading to (\\ref{ror}) can be generalized to\ngeneral $D$ spacetime dimensions and one finds for $D\\geq4+1$\\footnote{For general\ndimensions we use the convention\n \\begin{equation}\nS = \\frac{1}{16\\pi G_N} \\mathop\\int{d^{D}x \\, \\sqrt{-g} \\,\n\\left[R-2\\Lambda+ \\alpha_{GB} \\lad^2\n(R^2-4R_{\\mu\\nu}R^{\\mu\\nu}+R_{\\mu\\nu\\rho\\sigma}R^{\\mu\\nu\\rho\\sigma})\n\\right]} \\\n\\end{equation}\nwith ${\\Lambda} = - {(D-1) (D-2) \\over 2 \\lad^2}$ and ${\\lambda}_{GB} = (D-3)(D-4)\n\\alpha_{GB}$.}\n \\begin{equation}\n\\label{final} \\frac{\\eta}{s}=\\frac{1}{4\\pi}\\left[1-2\\frac{(D-1)}{(D-3)}{\\lambda}_{GB} \\right] \\ .\n\\end{equation}\nHere again ${\\lambda}_{GB}$ is bounded above by ${1 \\over 4}$. Thus for $D >\n4+1$, $\\eta$ never approaches zero within Gauss-Bonnet theory. For\n$D=3+1$ or $2+1$, in which case the Gauss-Bonnet term is\ntopological, there is no correction to $\\eta\/s$.\n\n\\item In Appendix~\\ref{junk}, we obtain the same result\n(\\ref{ror}) using the membrane paradigm \\cite{Kovtun:2003wp}. Thus\nwhen embedded into the AdS\/CFT correspondence, the membrane paradigm\ncorrectly captures the infrared (hydrodynamic) sector of the\nboundary thermal field theory. Further, we see something interesting\nin its derivation. There, the diffusion constant is expressed as the\nproduct of a factor evaluated at the horizon (\\ref{one}) and an\nintegral from the horizon to infinity (\\ref{two}). In the limit\n${\\lambda}_{GB}\\to{1 \\over 4}$, it is the former that approaches zero.\n\n\\end{enumerate}\n\n\\section{Causality in Bulk and on Boundary}\n\\label{gravitoncone}\n\nIn this section we investigate if there are causality\nproblems in the bound-violating theories discussed above. First we will discuss the bulk causal structure.\nThen we discuss a curious high-momentum metastable state in the\nbulk graviton wave equation that may have consequences for\nboundary causality. The analysis in this section is refined in~\\cite{newBLMSY} where we indeed see a precise signal of causality violation for ${\\lambda}_{GB}>\\frac{9}{100}$.\n\n\\subsection{Graviton cone tipping}\n\\label{conetip}\n\nAs a consequence of higher derivative terms in the gravity action,\ngraviton wave packets in general do not propagate on the\nlight cone of a given background geometry. 
For example, when ${\\lambda}_{GB}\n\\neq 0$, the equation (\\ref{eom}) for the propagation of a\ntransverse graviton differs from that of a minimally coupled\nmassless scalar field propagating in the same background geometry\n(\\ref{bba}). To make the discussion precise, let us write (we will\nconsider only $x_{1, 2}$-independent waves) \\begin{equation} \\label{envelope}\n\\phi(t, r, x_3)=e^{-i\\omega t +i k_r r+i q x_3}\\phi_{en}(t, r,\nx_3). \\end{equation} Here, $\\phi_{en}$ is a slowly-varying envelope function,\nand we take the limit $k=(\\omega, k_r, 0, 0, q)\\to \\infty$. In\nthis limit, the equation of motion (\\ref{eom}) reduces to \\begin{equation}\n\\label{eikonal} k^{\\mu}k^{\\nu}g^{\\rm eff}_{\\mu\\nu}\\approx 0, \\ \\end{equation}\nwhere\n \\begin{equation}\n\\label{effgeo} ds_{\\rm eff}^2=g^{\\rm eff}_{\\mu\\nu}dx^{\\mu}dx^{\\nu}\n=f(r)N_{\\sharp}^2 \\left(-dt^2 + {1 \\over c_g^2} dx_3^2 \\right)\n+\\frac{1}{f(r)}dr^2.\n \\end{equation}\nIn (\\ref{effgeo})\n \\begin{equation} \\label{Nse}\nc_g^2 (z) = {N_{\\sharp}^2 \\tilde f(z) \\over z^2} {1-{\\lambda}_{GB} \\tilde{f}'' \\over 1 -\n{{\\lambda}_{GB} \\tilde{f}' \\over z}} \\equiv c_b^2 {1-{\\lambda}_{GB} \\tilde{f}'' \\over 1 - {{\\lambda}_{GB} \\tilde{f}'\n\\over z}}\n \\end{equation}\ncan be interpreted as the local ``speed of graviton'' on a\nconstant $r$-hypersurface. $c_b^2 \\equiv {N_{\\sharp}^2 \\tilde f(z) \\over\nz^2} $ introduced in the second equality in~(\\ref{Nse}) is the\nlocal speed of light as defined by the background\nmetric~(\\ref{bba}). Thus the graviton cone in general does not coincide with the\nstandard null cone or light cone defined by the background metric.\\footnote{Note that\n \\begin{equation} \\label{Nsee}\n {c_g^2 \\over c_b^2} = {1-{\\lambda}_{GB} \\tilde{f}'' \\over 1 - {{\\lambda}_{GB} \\tilde{f}' \\over z}} =\n{1 - 4 {\\lambda}_{GB} + 12 {{\\lambda}_{GB} \\over z^4} \\over 1 - 4 {\\lambda}_{GB} + 4 {{\\lambda}_{GB} \\over z^4} }\n\\ , \\end{equation} and in particular the ratio is greater than $1$ for ${\\lambda}_{GB} >\n0$. Note that bulk causality and the existence of a well-posed\nCauchy problem do not crucially depend on reference metric\nlight cones and such tipping is not a definitive sign of\ncausality problems. Also for any value of ${\\lambda}_{GB}$, the graviton cone coincides with the\nlight cone in the radial direction. If not, we could have argued for\nthe violation of the second law of thermodynamics following\n\\cite{Dubovsky:2006vk,Eling:2007qd}. Further note that for ${\\lambda}_{GB} <\n-{1 \\over 8}$, there exists a region outside the horizon where $c_g^2\n< 0$ which will lead to the appearance of tachyonic modes, following\n\\cite{spectre}. We have not explored the full significance of this\ninstability here since it is not correlated with the viscosity bound.} A few more comments about graviton cone are found at the end of Appendix~\\ref{junk}.\n\n\\begin{figure}[t]\n\\includegraphics[scale=0.7,angle=0]{1Velocity.eps}\n\\caption{$c_g^2 (z)$ (vertical axis) as a function of $z$\n(horizontal axis) for ${\\lambda}_{GB} =0.08$ (left panel) and ${\\lambda}_{GB} =0.1$\n(right panel). For ${\\lambda}_{GB} < {9 \\over 100}$, $c_g^2$ is a monotonically\nincreasing function of $z$. When ${\\lambda}_{GB} > {9 \\over 100}$, as one\ndecreases $z$ from infinity, $c_g^2$ increases from $1$ to a\nmaximum value at some $z>1$ and then decreases to $0$ as $z \\to 1$\n(horizon). 
}\n \\label{velo}\n\\end{figure}\n\nIn the nongravitational boundary theory there is an invariant\nnotion of light cone and causality.\nAt a heuristic level, a graviton wave packet moving at speed $c_g (z)$ in the\nbulk should translate into disturbances of the stress tensor propagating with the same velocity in the boundary theory.\nIt is\nthus instructive to compare $c_g$ and $c_b$ with the boundary\nspeed of light, which we now set to unity by taking $N_{\\sharp} = a$~($a$\nwas defined in~(\\ref{Ade})). At the boundary ($z= \\infty$) one finds\nthat $c_g (z)= c_b (z)= 1$. In the bulk, the background local\nspeed of light $c_b$ is always smaller than $1$, which is related\nto the redshift of the black hole geometry. The local speed of graviton $c_g (z)$, however, can be greater than $1$ for\ncertain range of $z$ if ${\\lambda}_{GB}$ is sufficiently large. To see this,\nwe can examine the behavior of $c_g^2$ near $z = \\infty$,\n \\begin{equation} \\label{veS}\n c_g^2 (z) - 1 = { b_1 \\over z^4} + O(z^{-8}) , \\quad z\n \\to \\infty,\n \\qquad b_1({\\lambda}_{GB}) = - {1 + \\sqrt{1 - 4 {\\lambda}_{GB}} - 20 {\\lambda}_{GB} \\over 2 (1 - 4\n {\\lambda}_{GB})} \\ .\n \\end{equation}\n $b_1 ({\\lambda}_{GB})$ becomes positive and thus $c_g^2$ increases above $1$\n if ${\\lambda}_{GB} > {9 \\over 100}$. For such a ${\\lambda}_{GB}$, as we decrease $z$ from\n infinity, $c_g^2$ will increase from $1$ to a maximum at some value of\n $z$ and then decrease to zero at the horizon. See Fig.~\\ref{velo}\n for the plot of $c_g^2 (z)$ as a function of $z$ for two values of ${\\lambda}_{GB}$.\n When ${\\lambda}_{GB} = {9 \\over 100}$ one finds that\n the next order term in (\\ref{veS}) is negative and thus $c_g^2$\n does not go above $1$.\n Also note that ${\\lambda}_{GB} \\to {1 \\over 4}$,\n $b_1 ({\\lambda}_{GB})$ goes to plus infinity.\\footnote{In fact coefficients of\n all higher order terms in $1\/z$ expansion become divergent in this limit.}\nThus heuristically, in the boundary theory there is a potential for superluminal propagation of disturbances of the stress tensor.\n\nIn~\\cite{newBLMSY} we explore whether such bulk graviton cone\nbehavior can lead to boundary causality violation by studying the\nbehavior of graviton null geodesics in the effective geometry.\nThere, we indeed see causality violation for ${\\lambda}_{GB}>\\frac{9}{100}$.\n\n\n\\subsection{New metastable states at high momenta (${\\lambda}_{GB} > {9 \\over 100}$) }\n\nWe now study the behavior of the full graviton wave equation. Let us recast the equation (\\ref{eom}) in Schr\\\"{o}dinger form. For this purpose, we introduce\n \\begin{equation}\n{dy \\over dz} = {1 \\over N_{\\sharp} \\tilde{f}(z)} , \\qquad \\psi = B \\phi , \\qquad B\n= \\sqrt{K \\over \\tilde{f}} \\ .\n \\end{equation}\nThen (\\ref{eom}) becomes\n \\begin{equation} \\label{enr}\n - \\partial_y^2 \\psi + V(y) \\psi = \\tilde{\\omega}^2 \\psi\n \\end{equation}\nwith\n \\begin{equation} \\label{potential}\n V (y) = \\tilde{q}^2 c_g^2 (z) + V_1, \\qquad V_1 (y) = {\\partial_y^2 B\n\\over B} = {N_{\\sharp}^2 \\tilde{f}^2 \\over B} \\left(B'' + {\\tilde{f}' \\over \\tilde{f}} B' \\right) \\ ,\n \\end{equation}\n where $c_g^2 (z)$ was defined in~(\\ref{Nse}).\nThe advantage of using (\\ref{enr}) is that qualitative features of\nthe full graviton propagation (including the radial direction) can be\ninferred from the potential $V(y)$, since we have intuition for\nsolutions of the Schr\\\"{o}dinger equation. 
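Before analyzing $V$ further, the ${\\lambda}_{GB} = {9 \\over 100}$ threshold quoted above is easy to confirm (an illustrative aside): $b_1({\\lambda}_{GB})$ of~(\\ref{veS}) vanishes exactly at ${\\lambda}_{GB}=9\/100$ and changes sign there, and a direct scan of $c_g^2(z)$ built from~(\\ref{Nse}) and~(\\ref{Nsee}) with $N_{\\sharp}=a$ peaks above $1$ for ${\\lambda}_{GB}=0.1$ but stays below $1$ for ${\\lambda}_{GB}=0.08$, in agreement with Fig.~\\ref{velo}. It is this non-monotonicity of $c_g^2$ that lets the $\\tilde{q}^2 c_g^2$ piece of~(\\ref{potential}) produce a local minimum of $V$ at large $\\tilde{q}$, as discussed below. A minimal sketch:
\\begin{verbatim}
# Two quick checks of the lambda_GB = 9/100 threshold (illustrative only).
import numpy as np
import sympy as sp

# (i) b_1(lambda) of eq. (veS): exact zero at 9/100 and sign change around it
lamS = sp.symbols('lam', positive=True)
b1   = -(1 + sp.sqrt(1 - 4*lamS) - 20*lamS)/(2*(1 - 4*lamS))
print(sp.simplify(b1.subs(lamS, sp.Rational(9, 100))))        # -> 0
print([float(b1.subs(lamS, v)) for v in (0.08, 0.10)])        # [negative, positive]

# (ii) scan of c_g^2(z), eq. (Nse) with the ratio (Nsee), for N_sharp = a
def cg2(z, lam):
    a2  = 0.5*(1.0 + np.sqrt(1.0 - 4.0*lam))
    ft  = z**2/(2.0*lam)*(1.0 - np.sqrt(1.0 - 4.0*lam + 4.0*lam/z**4))
    cb2 = a2*ft/z**2                     # background speed of light squared
    return cb2*(1.0 - 4.0*lam + 12.0*lam/z**4)/(1.0 - 4.0*lam + 4.0*lam/z**4)

z = np.linspace(1.001, 50.0, 200000)
for lam in (0.08, 0.10):
    print(lam, cg2(z, lam).max())        # below 1 for 0.08, above 1 for 0.10
\\end{verbatim}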
Since $y$ is\na monotonic function of $z$, below we will use the two coordinates\ninterchangeably in describing the qualitative behavior of $V(y)$.\n\n\nOne can check that $V_1 (z)$ is a monotonically increasing\nfunction for any ${\\lambda}_{GB} > 0$ (note $V_1 (z) \\to + \\infty$ as $z \\to\n\\infty$). For ${\\lambda}_{GB} \\leq {9 \\over 100}$, $c_g^2 (z)$ is also a\nmonotonically increasing function as we discussed in the last\nsubsection and the whole $V(z)$ is monotonic. When ${\\lambda}_{GB} > {9 \\over\n100}$, there exists a range of $z$ where $c_g^2 (z)$ decreases with\nincreasing $z$ for sufficiently large $z$. Thus $V(z)$ can now\nhave a local minimum for sufficiently large $\\tilde{q}$. For\nillustration, see Fig.~\\ref{pote}\n for the plot of $V (z)$ as a function $z$ for two values of ${\\lambda}_{GB}$.\n\n\n\n\n\n\\begin{figure}[t]\n\\includegraphics[scale=0.65,angle=0]{2potential.eps}\n\\caption{$V(z)-q^2$ (vertical axis) as a function of $z$ (horizontal\naxis) for ${\\lambda}_{GB}=0.08$ and $\\tilde{q} =500$ (left panel) and for ${\\lambda}_{GB}=0.1$\nand $\\tilde{q} =500$ (right panel). $V(z)$ is a monotonically increasing function of\n$z$ for ${\\lambda}_{GB} \\leq {9 \\over 100}$, but develops a local minimum for\n${\\lambda}_{GB} > {9 \\over 100}$ with large enough $\\tilde{q}$.}\n \\label{pote}\n\\end{figure}\n\n\nGenerically, a graviton wave packet will fall into the black brane\nvery quickly, within the time scale of the inverse temperature ${1\n\\over T}$ (since this is the only scale in the boundary theory).\nHere, however, precisely when the local speed of graviton $c_g$ can exceed $1$ (i.e. for ${\\lambda}_{GB} > {9 \\over 100}$), $V(z)$\ndevelops a local minimum for large enough $\\tilde{q}$ and the\nSchr\\\"{o}dinger equation~(\\ref{enr}) can have metastable states\nliving around the minimum. Their lifetime is determined by the\ntunneling rate through the barrier which separates the minimum\nfrom the horizon. For very large $\\tilde{q}$ this barrier becomes very\nhigh and an associated metastable state has lifetime\nparametrically larger than the timescale set by the temperature.\nIn the boundary theory, these metastable states translate into\npoles of the retarded Green function for $T_{xy}$ in the lower\nhalf-plane. The imaginary part of such a pole is given by the\ntunneling rate of the corresponding metastable state. Thus for\n${\\lambda}_{GB} > {9 \\over 100}$, in boundary theory we find new\nquasiparticles at high momenta with a small imaginary\npart.\\footnote{A similar type of long-lived quasiparticles exist\nfor ${\\cal N}=4$ SYM theory on $S^3$~\\cite{guido}, but not on ${\\bf\nR^3}$.}\n\nIn~\\cite{newBLMSY}, we confirm that those long-lived quasiparticles give rise to causality violation for ${\\lambda}_{GB}>\\frac{9}{100}$.\n\n\\section{Discussion}\n\\label{discussion}\n\nIn this paper we have computed $\\eta\/s $ for Gauss-Bonnet gravity using a variety of techniques. We have found that the viscosity bound\nis violated for ${\\lambda}_{GB} >0$ and have looked for pathologies correlated to this violation. For small positive ${\\lambda}_{GB}$ we have not found any.\nThe violation of the bound becomes extreme as ${\\lambda}_{GB} \\rightarrow {1 \\over 4}$ where $\\eta$ vanishes. We\nhave focused our attention on this region to find what unusual properties of the boundary theory could yield a violation not only of the bound but also of the qualitative intuitions suggesting a lower bound on $\\eta\/s$. 
Above we also have discussed a novel quasiparticle excitation. In~\\cite{newBLMSY}, causality violation is firmly established for ${\\lambda}_{GB}>\\frac{9}{100}$.\n\nIt is also instructive to examine the\nbehavior of the zero temperature theory as ${\\lambda}_{GB} \\rightarrow {1 \\over 4}$. Basic parameters describing the boundary CFT are the coefficients of\nthe 4D Euler and Weyl densities called $a$ and $c$ respectively. These have been computed first in \\cite{Henningson:1998gx}, and for Gauss-Bonnet gravity in \\cite{Nojiri:1999mh}. Their results indicate that\n\\begin{eqnarray}\n\\label{ac} c &\\sim& (1 -4{\\lambda}_{GB})^{{1\\over 2}}, \\cr a &\\sim& (3\n(1-4{\\lambda}_{GB})^{{1\\over 2}}-2).\n\\end{eqnarray}\nThe parameter $c$ is related to the two-point function of a\nboundary stress tensor which is forced by unitarity to be\npositive. (\\ref{ac}) shows that $c$ vanishes at ${\\lambda}_{GB} ={1 \\over 4}$\ndemonstrating the sickness of this point.\\footnote{This can also\nbe seen from the derivations in Sec.\\ref{shear}.} For ${\\lambda}_{GB}$ a bit\nless than ${1 \\over 4}$ the stress tensor couples very weakly in a system with a large number of degrees of freedom. This is peculiar\nindeed. In the bulk it seems that gravity is becoming strongly coupled there.\n\nThe coefficient $a$ vanishes at ${\\lambda}_{GB}={5 \\over 36}$. The significance of this is unclear.\n\nMore generally, we believe it would be valuable to explore how\ngeneric higher derivative corrections modify various gauge theory\nobservables. This is important not only for seeing how reliable\nit is to use the infinite 't Hooft coupling approximation for\nquestions relevant to QCD, but also for achieving a more balanced\nconceptual picture of the strong coupling dynamics. Furthermore, this may\n generate new effective tools for separating the\nswampland from the landscape.\n\nAs a cautionary note we should mention that pathologies in the boundary theory in regions that violate the viscosity bound\nmay not be visible in gravitational correlators, at least when $g_s = 0$. As an example consider the ${\\alpha}'^3 R^4$ terms discussed\nin \\cite{BLS}. For positive ${\\alpha}'$, the physical case, the viscosity bound is preserved. But the bulk effective action can equally be\nstudied for ${\\alpha}'$ negative. Here gravitational correlators can be computed and will violate the viscosity bound. The only indication\nof trouble in the boundary theory at $g_s=0$ will come from correlators of string scale massive states, whose mass and CFT conformal weight\n$\\sim 1\/({\\alpha}')^{{1\\over 2}}$, an imaginary number!\n\n\n\n\n\n\n\n\n\n\\begin{acknowledgments}\nWe thank A.~Adams, N.~Arkani-Hamed, R-G.~Cai, A.~Dymarsky, Q.~Ejaz, T.~Faulkner, H.~Jockers,\nP.~Kovtun, J.~Liu, D.~Mateos, H.~Meyer, K.~Rajagopal, D.~T.~Son,\nA.~Starinets, L.~Susskind, B.~Zwiebach for discussions. HL also wishes to thank\nJ.~Liu for collaboration at the initial stages of the work. We would also like to thank Yevgeny Katz and Pavel Petrov for sharing a draft of their work \\cite{new}.\n\n\nMB and HL are partly supported by the U.S. Department of Energy\n(D.O.E) under cooperative research agreement \\#DE-FG02-05ER41360.\nHL is also supported in part by the A. P. Sloan Foundation and the\nU.S. Department of Energy (DOE) OJI program. HL is also supported\nin part by the Project of Knowledge Innovation Program (PKIP) of\nChinese Academy of Sciences. 
HL would like to thank KITPC\n(Beijing) for hospitality during the last stage of this project.\nResearch at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research \\& Innovation. RCM also acknowledges\nsupport from an NSERC Discovery grant and from the Canadian\nInstitute for Advanced Research. SS is supported by NSF grant\n9870115 and the Stanford Institute for Theoretical Physics. SY is\nsupported by an Albion Walter Hewlett Stanford Graduate Fellowship\nand the Stanford Institute for Theoretical Physics.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzatun b/data_all_eng_slimpj/shuffled/split2/finalzzatun new file mode 100644 index 0000000000000000000000000000000000000000..624928f69b751b427280582033b9c317d1079ca7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzatun @@ -0,0 +1,5 @@ +{"text":"\\section{INTRODUCTION}\n\n\\par Electrical power systems are complex engineering networks which are crucial to the present day infrastructure. A power network comprises of blocks consisting of numerous interconnected sub-systems, making them challenging to analyze and understand. With the continuous increase in electricity demand and the trend for more interconnections, an issue of concern is the mitigation and analysis of low-frequency interarea oscillations. Oscillations associated with individual generators in a power plant are called local mode oscillations typically ranging from 0.7-2.0Hz \\cite{rogers2012power, klein1991fundamental}. The stability of these oscillations characterized as intraarea (same area) and interarea (across areas) \\cite{rogers2012power, klein1991fundamental}. These oscillations between the generators which are inherent to power systems require appropriate mathematical models and techniques for their analysis.\n\\par Kuramoto-type models have been widely used to study the dynamics of a power system network through swing equations \\cite{filatrella2008analysis}. It must be noted though that a power system network has an added second order term due to generator inertia and are only similar to Kuramoto model. Power dissipation terms that arise in the swing equation model are absent in conventional Kuramoto model, which can be shown existent by few mathematical adjustments. In power systems, globally coupled phase oscillators of Kuramoto form have been viewed as electromechanical generators mutually coupled to deliver load power. A conventional second order Kuramoto-type oscillator can be written as follows,\n\\begin{equation}\nJ_i\\ddot{\\delta}_i+d_i\\dot{\\delta}_i=\\omega_i+\\sum_{j\\neq i,j=1}^{n}k_{ij}sin(\\delta_j-\\delta_i), \\ \\ \\ i \\in \\{1,\\ldots,n\\},\n\\label{kuramoto}\n\\end{equation}\nwhere $\\delta_i$ is angular position of rotor with respect to the synchronously rotating reference frame (for consistency, all the angles throughout the paper are in radians), $J_i$ inertia in kgm$^2$, $d_i$ damping and $\\omega_i$ is a natural frequency chosen from an appropriate distribution $g(\\omega)$ of $i$-th oscillator. $[k_{ij}]; i,j\\in \\{1,\\ldots,n\\}$ is the matrix of coupling constants and $n$ defines the number of oscillators. The standard Kuramoto-type equation assumes value of coupling constants $[k_{ij}]$ to be always positive and symmetric. 
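For readers who wish to experiment with \\eqref{kuramoto} directly, a minimal numerical sketch is given below, written in Python with SciPy. The network size, parameter values, initial conditions and tolerances are illustrative choices only and do not correspond to any particular grid.
\\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def kuramoto2_rhs(t, state, J, d, omega, K):
    # Second order Kuramoto-type model introduced above:
    #   J_i dd(delta_i) + d_i d(delta_i) = omega_i + sum_j K_ij sin(delta_j - delta_i)
    n = len(J)
    delta, deltadot = state[:n], state[n:]
    coupling = (K * np.sin(delta[None, :] - delta[:, None])).sum(axis=1)
    return np.concatenate([deltadot, (omega - d*deltadot + coupling) / J])

n = 4
J = np.full(n, 0.4)                      # inertias
d = np.full(n, 0.5)                      # damping coefficients
omega = np.array([1.0, 1.1, 0.9, 1.05])  # natural frequencies
K = 2.0*(np.ones((n, n)) - np.eye(n))    # positive, symmetric coupling
sol = solve_ivp(kuramoto2_rhs, (0.0, 200.0), np.zeros(2*n),
                args=(J, d, omega, K), rtol=1e-8)
print("spread of final frequencies:", np.ptp(sol.y[n:, -1]))  # ~0 if the machines lock
\\end{verbatim}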
In this paper, we explore the mapping between a power grid and Kuramoto oscillators.\n\\par Interarea oscillations emerge when two areas with independent sets of power generators experience a supply-demand imbalance. The generators in the individual areas are observed to beat against each other with frequencies in the range 0.1-0.8Hz, classified as low frequency interarea oscillations in a power grid. These oscillations can be visualized as two large generators trying to desynchronize each other once supply-demand balance is achieved in each individual area. The above phenomenon is analyzed using small-signal or modal analysis \\cite{klein1991fundamental}; however, it would additionally be advantageous to have a nonlinear (large-signal) model that captures the different behaviors and effects of these oscillations. We propose a novel `conformist-contrarian' (inspired by the first order framework discussed in \\cite{hong2011kuramoto}) second order Kuramoto-type model (henceforth referred to as CC-Kuramoto) which captures the in-phase (intraarea) and the anti-phase (interarea) oscillations in a power system.\n\\par The motivation behind developing such a model is to address some of the challenges related to modeling low frequency oscillations in power systems \\cite{kundur2002small}. Conventionally, small-signal analysis and damping control are used by power system engineers to assure system stability at the planning stage and through to execution. It should be noted, though, that while software packages for such design\/analysis have become computationally efficient in terms of execution time over the years, they still carry a significant computational cost for near real-time implementation. Some key challenges related to models for power systems, identified from the literature, are as follows:\n\\begin{enumerate}\n \\item The major problems related to power system oscillations concern perturbed damping of the overall system, which is conventionally regulated using power system stabilizers. These oscillations are identified using eigenvalue analysis, which is computationally costly.\n \\item When power transfer needs to be increased or decreased, the groups of individual generators on the source and sink sides are dispatched in order of the sensitivities of the critical modes with respect to the output of these generators. This in turn allows increased power levels without adding any further damping control actions. Computing these critical modes and the generator sensitivities to them in real time is difficult.\n \\item Modeling such behaviors is not an easy task, given the incoherency observed in the past between planning and actual implementation. Detailed representation of major equipment and the inclusion of newer loads, such as induction motors, is still not simple in small-signal models.\n \\item Along similar lines, a $-$ sync behavior in the system characteristics has a tendency to mitigate homogeneous power oscillations in an interarea setup \\cite{shim2017synchronization}, and the behavior of interarea oscillations has been analyzed in a nonlinear form by adding periodic disturbances to the major parameters that have significant impact \\cite{mao2008nonlinear}. It would be advantageous if a model could integrate these results through simple modifications.\n\\end{enumerate}\n\\par Apart from fabricating a perfect model that can overcome the above-mentioned complexities, a power system engineer looks for a model that can provide significant inferences.
With increasing vulnerability of modern power systems due to inclusion of various ancillary services it is important to study the settings that might lead to partial stability or instability. It must be noted though, that power grids are not simple physical network of transmission lines and are deeply impacted by its structural as well as dynamical interactions. Thus, a dynamic redesign\/modification of existent power network is not possible, as it can be a major limiting factor in optimizing synchronization. With these constraints as reference, we show occurrence of various stable, partially stable and unstable states via tuning of system parameters, and avoid fiddling with the structure. It is observed that these parameters beyond a certain threshold lead to randomization of steady-state equilibrium points thereby existence of a chaotic behaviour. The same power grid setup (and some other complex systems in nature) shows a state of partial stability by clustering themselves into islands of synchronised and de-synchronised oscillators, commonly referred to as chimera in literature \\cite{abrams2004chimera}. Hence, we emulate the existence of these chimera state behaviors and correlate them with blackouts with islanding commonly seen in power grids \\cite{nerc9209}.\n\\par Nonlinear modes associated with instabilities have been analysed and discussed in the literature \\cite{susuki2011nonlinear, susuki2009global, parrilo1999model} related to power grid synchronization. These provide an informative decomposition of nonlinear oscillations when the network looses synchrony. Sufficient conditions for synchronization, obtained via perturbation analysis for non-uniform Kuramoto oscillators are also widely studied \\cite{dorfler2012synchronization, dorfler2011critical}. It must be noted though, that the conditions attained in previous studies maintain homogeneity in system parameters, whereas we obtain conditions on various power grid parameters and hence heterogeneity. On the similar lines, effects of heterogeneity on power grid networks discussed in \\cite{motter2013spontaneous} inspire formulation of Kuramoto-type framework and extend it to practical blackout scenarios. Thus, in this work we take a standard example from power systems to show that chimera behaviors can lead to blackouts and can be correlated to a distributed grid. To summarize, major contributions of this work are as follows. We propose a nonlinear model for analysis of power systems in simplistic form to avoid discussed computational complexities. A practical example from power systems is used to correlate and showcase advantages of the proposed model. It has been shown, that the model not only provides information about the nominal states but also existence of chimera behaviors in power systems. This could help site engineers to take actions apriori or equip with necessary tools at the right time.\n\\par The manuscript is divided into two parts. \\emph{Part-\\uppercase\\expandafter{\\romannumeral 1}}: We first start with modeling a standard power grid in Kuramoto form. Then, gradually move towards addressing complexities discussed before and how it can be easily incorporated in a large scale (nonlinear model) using analogy to a standard physics example. Next, a standard power systems network to study interarea oscillations is considered and results are verified using computer simulations. 
\\emph{Part-\\uppercase\\expandafter{\\romannumeral 2}}: A detailed bifurcation analysis is performed on the proposed model parameters in order to analyse the system stability. Finally, we emulate chimera behavior \\cite{abrams2004chimera} commonly referred to in the literature and discuss its implications in a power network.\n\n\n\n\\section{Part - \\uppercase\\expandafter{\\romannumeral 1}: Power Network and Kuramoto Oscillators}\\label{mod1}\n\n\\subsection{Mathematical model of Power Grid}\\label{kura}\n\n\\par The basic elements of a power grid consists of active generators and passive machines\/loads. The generator converts some source of energy into electrical power which is produced by the prime mover of the generator with the frequency close to the standard or natural frequency $ \\Omega $ of an electrical system. All generators in a power grid can be looked upon as set of synchronous machines rotating at synchronous frequency $\\Omega$, with the stator windings of the generator delivering electrical power to the grid. Any power generator in a power system is described by a power balance equation of the form,\n\\begin{equation}\nP_{accumulated}+P_{dissipated}=P_{source}-P_{transmitted},\n\\label{sumpower}\n\\end{equation}\nwhere $P_{source}$ is the rate at which the energy is fed into the generator at frequency $\\Omega$ (i.e., $2\\pi\\Omega=50$Hz). Hence, the phase angle $\\theta_i$ at the output of the $i$-th generator in stationary frame is then given by,\n\\begin{equation}\n\\theta_i=\\Omega t +\\delta_i.\n\\label{theta_eq}\n\\end{equation}\n$P_{accumulated}$ is the rate at which kinetic energy is accumulated by the generator:\n\\begin{equation}\nP_{accumulated} = \\frac{1}{2} J_i \\frac{d}{dt}(\\dot{\\theta}_i)^2,\n\\label{pacc}\n\\end{equation}\nwhere $ J_i $ is the moment of inertia of the $i$-th generator in kgm$^{2}$. For the sake of simplicity, we assume identical machines (i.e., $J_i=J$). $P_{transmitted}$ is the power transmitted from generator $i$ to $j$ with phase difference, $\\Delta \\theta_{ij}=\\theta_j-\\theta_i \\neq 0$. \n\\begin{equation}\nP_{transmitted}=-P_{max}sin(\\Delta\\theta_{ij}).\n\\label{ptrans}\n\\end{equation}\n$P_{max}$ being maximum electrical power input in watts. The dissipated power ($P_{dissipated}$) with $K_D$ the dissipation constant of the prime mover in Ws$^2$\/rad$^2$, can be expressed as: \n\\begin{equation}\nP_{dissipated}=K_D (\\dot{\\theta}_i)^2.\n\\label{pdiss}\n\\end{equation}\nSince, all the generators share common frequencies $\\Omega$, $\\Delta \\theta_{ij} = \\Delta \\delta_{ij} = \\Phi_{ij}$. 
Substituting \\eqref{pacc}, \\eqref{ptrans} and \\eqref{pdiss} in \\eqref{sumpower}, following can be computed,\n\\begin{equation}\nP_{source} = J \\ddot{\\theta}_i\\dot{\\theta}_i + K_D(\\dot{\\theta}_i)^2 - P_{max}sin(\\Phi_{ij}).\n\\label{pgen}\n\\end{equation}\nDifferentiating \\eqref{theta_eq} with respect to time and further double differentiating it; thereby assuming perturbations around the synchronous frequency being very small, $i.e., \\dot{\\delta}_i \\ll \\Omega$, \\eqref{pgen} can be approximated as, \n\\begin{equation}\nP_{source} \\cong J\\Omega\\ddot{\\delta}_i+[J\\ddot{\\delta}_i+2K_D\\Omega]\\dot{\\delta}_i+K_D\\Omega^2-P_{max}sin(\\Phi_{ij}).\n\\label{pgen2}\n\\end{equation}\nUnder practically relevant assumptions, the coefficient of first derivative is constant and neglecting acceleration terms, as well as knowing that the rate at which the energy is stored in kinetic term is much lower as compared to rate at which energy is dissipated in friction, \\eqref{pgen2} is reduced to,\n\\begin{equation}\nJ\\Omega\\ddot{\\delta}_i=P_{source}-K_D\\Omega^2-2K_D\\Omega\\dot{\\delta}_i+P_{max}sin(\\Phi_{ij}).\n\\label{pgen3}\n\\end{equation}\nNow, using the fact that, $P_{max}=E_iE_j\\left|Y_{ij}\\right|$; $E_i$ being internal voltage of $i$-th generator, $Y_{ij}$ the Kron reduced admittance matrix denoting maximum power transferred between generators \\cite{dorfler2012synchronization} and choosing $P_{m,i}=P_{source}-K_D\\Omega^2$, where $P_{m,i}$ is the mechanical power input,\n\\begin{equation}\nJ\\Omega\\ddot{\\delta}_i=P_{m,i}-E^{2}_{i} \\Re(Y_{ii})-2K_D\\Omega\\dot{\\delta}_i\n+\\sum_{j\\neq i,j=1}^{n}E_iE_j \\left| Y_{ij}\\right|sin(\\Phi_{ij}).\n\\label{pgen5}\n\\end{equation}\nDividing both sides by $J\\Omega$,\n\\begin{equation}\n\\ddot{\\delta}_i=\\left[\\frac{P_{m,i}}{J\\Omega}-\\frac{E^{2}_{i} \\Re(Y_{ii})}{J\\Omega}\\right]-\\frac{2K_D}{J}\\dot{\\delta}_i\n+\\sum_{j\\neq i,j=1}^{n}\\frac{E_iE_j \\left| Y_{ij}\\right|}{J\\Omega}sin(\\Phi_{ij}).\n\\label{pgen6}\n\\end{equation}\n\\eqref{pgen6} can be rewritten as follows,\n\\begin{equation}\n\\ddot{\\delta}_i=\\omega_i-\\alpha \\dot{\\delta}_i+\\sum_{j\\neq i,j=1}^{n}k_{ij}sin(\\delta_j - \\delta_i),\n\\label{pkura}\n\\end{equation}\nwhere $\\alpha=\\frac{2K_D}{J}$ is the dissipation constant, coupling constant $[k_{ij}] = \\frac{E_iE_j \\left| Y_{ij}\\right|}{J\\Omega}$ and natural frequency $\\omega_i=\\left[\\frac{P_{m,i}}{J\\Omega}-\\frac{E^{2}_{i} \\Re(Y_{ii})}{J\\Omega}\\right]$. From a graph theoretic viewpoint $[k_{ij}]$ can be seen as a weighted laplacian matrix with $k_{ij}=0$ when generators are not connected to each other and $ k_{ij} \\geq \\frac{(\\omega_{max}-\\omega_{min})n}{(2(n-1))} $ otherwise (i.e., assumed to be greater than critical coupling, to ensure steady state synchronization \\cite{dorfler2012synchronization}). It can be observed that, \\eqref{pkura} has the same form as a second order Kuramoto oscillator model \\cite{dorfler2012synchronization}.\n\n\\subsection{CC-Kuramoto Model for Interarea Oscillations}\\label{model}\n\\par Further, we extend \\eqref{pkura} to spring coupled oscillators. In Kuramoto oscillators or spring coupled pendulums for that sake; the coupling term introduces restoring forces on the oscillators. Considering the special case, when there is no energy transfer between oscillators, either of in-phase or anti-phase steady state oscillations may exist (as observed in a spring coupled pendulum - Figure \\ref{fig:pend}). 
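For completeness, the parameter identifications appearing in \\eqref{pkura} can be collected into a small helper routine. This is a convenience wrapper written by us around the formulas above (the function and argument names are ours), with $Y$ the Kron-reduced admittance matrix.
\\begin{verbatim}
import numpy as np

def swing_to_kuramoto(P_m, E, Y, J, K_D, Omega):
    # alpha   = 2 K_D / J
    # k_ij    = E_i E_j |Y_ij| / (J Omega), with j != i
    # omega_i = (P_m,i - E_i^2 Re(Y_ii)) / (J Omega)
    alpha = 2.0*K_D / J
    K = np.outer(E, E)*np.abs(Y) / (J*Omega)
    np.fill_diagonal(K, 0.0)   # the sum excludes j = i
    omega = (P_m - E**2*np.real(np.diag(Y))) / (J*Omega)
    return alpha, omega, K
\\end{verbatim}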
For the in-phase oscillations, the restoring forces are zero implying absence of the coupling term, which is not a tangible explanation for a coupled system in practice because there would always be some sort of restoring forces present in a coupled system. On the other hand, in the case of anti-phase oscillations the spring keeps contributing restoring forces whilst the energy transfer is zero \\cite{feynman2011feynman}. This also explains Huygens observations \\cite{bennett2002huygens} and is a valid template for modeling interarea oscillations in power systems.\n\\begin{figure}[t!]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[height=4.0cm,width=3.9cm]{0phase_pend2.png}&\n\\includegraphics[height=4.0cm,width=4.5cm]{180phase_pend23.png}\\\\\n(a) $\\theta_1=\\theta_2$ &(b) $\\theta_1=-\\theta_2$\n\\end{tabular}\n\\caption{Pendulums coupled by a spring, oscillating in two equilibrium modes. (a) Oscillations in in-phase mode. (b) Oscillations in anti-phase mode.}\n\\label{fig:pend}\n\\end{figure}\n\\par Any oscillator of the form given in \\eqref{pkura}, assuming damping\/dissipation constant to be zero, with identical natural frequencies ($\\omega_i=\\omega $) and $H(\\Phi_{ij})=\\sum_{j\\neq i,j=1}^{n}k_{ij}sin(\\Phi_{ij})$ and $\\Phi_{ij}=(\\delta_j-\\delta_i)=(\\theta_j-\\theta_i)$, can be written as,\n\\begin{equation}\n\\ddot{\\delta}_i=\\omega+H(\\Phi_{ij}),\n\\label{axonal}\n\\end{equation}\nand thereby, following can be deduced,\n\\begin{equation}\n\\ddot{\\Phi}_{ij}=H(-\\Phi_{ij})-H(\\Phi_{ij})=-2H(\\Phi_{ij}).\n\\label{axonal2}\n\\end{equation}\nThe above \\eqref{axonal2} has fixed points $\\Phi_{ij}=\\beta(2\\pi)$ or $\\Phi_{ij}=(2\\beta-1)\\pi; \\forall \\beta \\in \\mathbb{Z}$ which are respectively the in-phase and anti-phase modes of the oscillator. Linearizing \\eqref{axonal2} about its fixed points,\n\\begin{equation}\n\\begin{split}\n\\ddot{\\Phi}_{ij}&\\approx \\left[ -2\\frac{\\partial H(\\Phi_{ij})}{\\partial \\Phi_{ij}}\\Bigr|_{\\substack{H(\\Phi_{ij})=0}} \\right] \\Phi_{ij},\\\\\n&\\approx \\left [-2k_{ij}cos(\\Phi_{ij})\\Bigr|_{\\substack{H(\\Phi_{ij})=0}} \\right ]\\Phi_{ij}.\n\\end{split}\n\\label{axonal3}\n\\end{equation}\nThus, from \\eqref{axonal3} it can be deduced that in-phase solution $\\Phi_{ij}=0$ is synchronizing and stable, whereas anti-phase solution $\\Phi_{ij}=\\pi$ is desynchronizing and unstable. These results are similar to small-signal stability analysis performed by linearizing the nonlinear power system dynamics \\cite{klein1991fundamental}. Hence, next we integrate these equilibrium\/critical modes directly in the nonlinear dynamics of Kuramoto model \\eqref{pkura}. \nThe in-phase or `conformist' model of Kuramoto oscillators can be given as follows,\n\\begin{equation}\n \\ddot{\\delta}_i=\\omega_i-\\alpha_i \\dot{\\delta}_i \\ \\mathcolorbox{yellow}{\\mathlarger{\\mathlarger{+}}}\\sum_{j\\neq i,j=1}^{n}k_{ij}sin(\\delta_j - \\delta_i),\n \\label{conf}\n\\end{equation}\nwhereas, an anti-phase or `contrarian' Kuramoto model can be obtained by replacing $H(\\Phi_{ij})$ with $-H(\\Phi_{ij})$ in \\eqref{axonal} to give,\n \\begin{equation}\n \\ddot{\\delta}_i=\\omega_i-\\alpha_i \\dot{\\delta}_i\\ \\mathcolorbox{green}{\\mathlarger{\\mathlarger{-}}}\\sum_{j\\neq i,j=1}^{n}k_{ij}sin(\\delta_j - \\delta_i).\n \\label{cont}\n\\end{equation}\n\\par Linearisation of the `contrarian' model \\eqref{cont} on the lines of \\eqref{axonal3} yields $\\ddot{\\Phi}_{ij}\\leq 0$ for anti-phase modes and otherwise for in-phase modes. 
Thus, in the `contrarian' model the anti-phase mode is stable and in-phase mode is unstable. All oscillations in physical systems in general and power systems in particular are weighted sum of in-phase and anti-phase modes. Hence, to study the oscillations in power systems, we propose the CC-Kuramoto model of coupled oscillators given as,\n \\begin{equation}\n \\begin{split}\n \\ddot{\\delta}^{a_1}_i=\\omega_i-\\alpha_i \\dot{\\delta}^{a_1}_i&\\ \\mathcolorbox{yellow}{\\mathlarger{\\mathlarger{+}}}\\sum_{j\\neq i,j=1}^{p}k_{ij}sin(\\delta_j^{a_1} - \\delta^{a_1}_i)\\\\\n &\\ \\mathcolorbox{green}{\\mathlarger{\\mathlarger{-}}}\\sum_{j=p+1}^{n}k_{ij}sin(\\delta^{a_2}_j - \\delta^{a_1}_i),\\\\\n \\ddot{\\delta}^{a_2}_i=\\omega_i-\\alpha_i \\dot{\\delta}^{a_2}_i&\\ \\mathcolorbox{yellow}{\\mathlarger{\\mathlarger{+}}}\\sum_{j\\neq i,j=p+1}^{n}k_{ij}sin(\\delta_j^{a_2} - \\delta^{a_2}_i)\\\\\n &\\ \\mathcolorbox{green}{\\mathlarger{\\mathlarger{-}}}\\sum_{j=1}^{p}k_{ij}sin(\\delta^{a_1}_j - \\delta^{a_2}_i),\\\\\n \\end{split}\n \\label{interarea}\n \\end{equation}\n where without loss of generality we assume $\\delta_i^{a_c} \\in \\big[ \\delta_1^{a_1},\\delta_2^{a_1},\\delta_3^{a_2},\\delta_4^{a_2}\\big]$, $ p $ set of generators in area 1 ($a_1$) and $ (n-p) $ generators in area 2 ($a_2$).\n \n\\par Further, a CC-Kuramoto model settles into one of the three type of states, depending upon the system parameters and initial conditions: Incoherent state - a state of complete desynchronization, $\\pi$-state - when two groups of coherent oscillators are separated by phase difference of $\\pi$ radians and the Travelling wave state - where the two coherent groups are apart by a phase difference less than $ \\pi $ radians. Not only do the above states exhibit rich dynamical behavior but other interesting outcomes arise in the process of transition between these states. The direct relation between CC-Kuramoto and the power system network facilitates the study of complex dynamics arising in power networks.\n\n\\subsection{Case study: Classical two-area four-machine system}\\label{study}\n\n\\par In order to validate the proposed model, we use a classical two-area four-machine power system commonly referred to for interarea oscillation analysis \\cite{klein1991fundamental}. The system is symmetric; consisting of two identical areas connected through a relatively weak tie ($J_i=J=0.4$kgm$^2$, $\\alpha_i=\\alpha=0.125$). Each area includes two synchronous generators with equal power output. The single line diagram of the system considered is as shown in Figure \\ref{fig:sld}, \n\\begin{figure}[t!]\n\\centering\n\\includegraphics[height=3.3cm,width=8cm]{inter_area_sld.png}\n\\caption{Single line diagram of a two-area four-machine power system.}\n\\label{fig:sld}\n\\end{figure}\n\\par All loads are represented as constant impedances. The tie-line impedance was varied by changing the number of tie circuits in service. Power transfer between two areas is emulated, either by an uneven distribution of generation between the areas, or by an uneven split of the total system loads. 
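The sign structure of \\eqref{interarea} is easy to explore numerically. The sketch below extends the earlier Python snippet to two areas, with attractive (`conformist') coupling inside each area and repulsive (`contrarian') coupling between the areas; the coupling strengths, natural frequencies and initial conditions are hypothetical placeholders and are not the case-study data discussed in this section.
\\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

area = np.array([0, 0, 1, 1])   # generators 1,2 in area 1; generators 3,4 in area 2
sign = np.where(area[:, None] == area[None, :], 1.0, -1.0)

def cc_rhs(t, state, omega, alpha, Ks):
    # Ks already carries the conformist/contrarian signs of the CC-Kuramoto model
    n = len(omega)
    delta, deltadot = state[:n], state[n:]
    coupling = (Ks * np.sin(delta[None, :] - delta[:, None])).sum(axis=1)
    return np.concatenate([deltadot, omega - alpha*deltadot + coupling])

n = 4
omega = np.array([17.5, 17.8, 17.6, 17.8])           # illustrative values
alpha = np.full(n, 0.125)
K = np.where(sign > 0, 2.0, 0.2)*(1.0 - np.eye(n))   # strong intra-, weak inter-area
sol = solve_ivp(cc_rhs, (0.0, 200.0),
                0.01*np.random.default_rng(0).standard_normal(2*n),
                args=(omega, alpha, sign*K), rtol=1e-8)
delta = sol.y[:n, -1]
print("intra-area gap:", np.angle(np.exp(1j*(delta[1] - delta[0]))))  # expected near 0
print("inter-area gap:", np.angle(np.exp(1j*(delta[2] - delta[0]))))  # expected near +/- pi
\\end{verbatim}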
The combinations for tie-line power flow are given in Table \\ref{data}.\n\\begin{table}[b!]\n\\centering\n\\caption{Load and Tie-Line Power of Test System}\n\\label{data}\n\\begin{tabular}{|l|l|l|l|l}\n\\cline{1-4}\n & \\multicolumn{2}{c|}{Generation\/Load (MW)} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Power flow from Area 1 \\\\ to Area 2 (MW)\\end{tabular}}} & \\\\ \\cline{1-3}\n & \\multicolumn{1}{c|}{Area 1} & \\multicolumn{1}{c|}{Area 2} & \\multicolumn{1}{c|}{} & \\\\ \\cline{1-4}\nCase 1 & 1400\/1367 & 1400\/1367 & \\multicolumn{1}{c|}{0} & \\\\ \\cline{1-4}\nCase 2 & 1400\/967 & 1450\/1767 & \\multicolumn{1}{c|}{400} & \\\\ \\cline{1-4}\n\\end{tabular}\n\\end{table}\n\n\n\n\\par The number of tie-line in service is two and the transfer level along the tie-line of the two areas varies from 0 to 400 MW due to the variation of load levels in the two areas. Case 1 relates to no power transfer between two areas. On the other hand, the event of power transfer between areas has been designated as Case 2.\n\\begin{align}\nK_{Case 1}=\\begin{bmatrix}0 & 1.9689 & 0.1766 & 0.1782\\\\ \n 1.9689 & 0 & 0.1782 & 0.1801\\\\\n 0.1766 & 0.1782 & 0 & 1.9363\\\\\n 0.1782 & 0.1801 & 1.9363 & 0 \\end{bmatrix}\n\\label{Kmatrix1}\n\\end{align}\n\\begin{align}\nK_{Case 2}=\\begin{bmatrix}0 & 2.5960 & 0.2130 & 0.2151\\\\ \n 2.5960 & 0 & 0.2151 & 0.2171\\\\\n 0.2130 & 0.2151 & 0 & 1.7214\\\\\n 0.2151 & 0.2171 & 1.7214 & 0 \\end{bmatrix}\n\\label{Kmatrix2}\n\\end{align}\nThe coupling matrix $[k_{ij}]$ is represented as (\\ref{Kmatrix1}) for Case 1 and (\\ref{Kmatrix2}) for Case 2. Elements of coupling matrix $[k_{ij}]$ are derived from $k_{ij}=\\frac{E_iE_j\\left| Y_{ij} \\right|}{J\\Omega} $. Natural frequencies are calculated using $\\omega_i=\\left[\\frac{P_{m,i}}{J\\Omega}-\\frac{E^{2}_{i} \\Re(Y_{ii})}{J\\Omega}\\right]$ and are shown in Table \\ref{freq_data}. For simplicity, we assume $\\Omega=1$Hz.\n\\begin{table}[b!]\n\\centering\n\\caption{Natural frequencies (in rad\/s)}\n\\label{freq_data}\n\\begin{tabular}{lllll}\n\\hline\n\\multicolumn{1}{|l|}{} & \\multicolumn{1}{c|}{$\\omega_1$ } & \\multicolumn{1}{c|}{$\\omega_2$ } & \\multicolumn{1}{c|}{$\\omega_3$ } & \\multicolumn{1}{c|}{$\\omega_4$ } \\\\ \\hline\n\\multicolumn{1}{|l|}{Case 1} & \\multicolumn{1}{c|}{17.5290} & \\multicolumn{1}{c|}{17.7923} & \\multicolumn{1}{c|}{17.5640} & \\multicolumn{1}{c|}{17.8285} \\\\ \\hline\n\\multicolumn{1}{|l|}{Case 2} & \\multicolumn{1}{c|}{16.8882} & \\multicolumn{1}{c|}{17.1532} & \\multicolumn{1}{c|}{17.7931} & \\multicolumn{1}{c|}{18.0629} \\\\ \\hline \n\\end{tabular}\n\\end{table}\n\n\\begin{figure*}[t!]\n\\begin{tabular}{ccc}\n\\includegraphics[height=4.0cm,width=4.8cm]{eigen_plot_case25.png}&\n\\includegraphics[height=4.0cm,width=4.8cm]{circle_plot_case24.png}&\n\\includegraphics[height=4.0cm,width=6cm]{err_plot_case24.png}\\\\\n(a)&(b)&(c)\\\\\n\\includegraphics[height=4.0cm,width=4.8cm]{eigen_plot_case45.png}&\n\\includegraphics[height=4.0cm,width=4.8cm]{circle_plot_case44.png}&\n\\includegraphics[height=4.0cm,width=6cm]{err_plot_case44.png}\\\\\n(a)&(b)&(c)\n\\end{tabular}\n\\caption{Case 1 (first row from the top) - Dynamics of the proposed model for interarea oscillations, when no power is transferred. (a) Compass plot. (b) Circle plot. (c) Time series plot. Case 2 (second row from the top) - Dynamics of the proposed model for interarea oscillations, when power is transferred from area 1 to area 2. (a) Compass plot. (b) Circle plot. 
(c) Time series plot.}\n\\label{fig:case2}\n\\end{figure*}\n\n\\par The model proposed in \\eqref{interarea} was solved using MATLAB and following observations were made.\n\\par \\textit{Case 1}: The generators oscillate anti-phase ($-3.12 \\approx -\\pi$radians) in interarea and in-phase ($0.06 \\approx 0$radians) intraarea. To validate the results, we provide compass plots of normalized eigen modes, circle plot as well as time-domain plots of generators as shown in Figure \\ref{fig:case2}. The compass plots were obtained by using the steady-state vectors: $\\vec{c}_i(t) = \\big (\\delta_j-\\delta_i \\big )_{rms} \\angle \\delta_i(t) \\sim \\big ( \\delta_j-\\delta_i \\big )_{rms} \\left( \\sum^{n}_{i=1} e^{\\lambda_i t} u_i v^{T}_i \\delta_i(0) \\right)$, where $u_i$ is the normalized left eigen-vector, $v_i$ is the normalized right eigen-vector and $\\lambda_i$ are the eigen-values of the linearized system. It can be seen that the compass plot of steady state vectors show behavior similar to normalized eigen modes obtained by small-signal analysis performed traditionally.\n\\par \\textit{Case 2}: In the case when power is transferred between areas, the phase difference between interarea generators were observed to be $-2.6$radians (i.e., $ \\neq-\\pi$) and $ 0.05 $radians in intraarea, as shown in Figure \\ref{fig:case2}.\nThe results are compared with \\cite{klein1991fundamental}, providing validity to the proposed model.\n\n\\section{Part - \\uppercase\\expandafter{\\romannumeral 2}: Partial Stability in Power Systems}\n\n\\par In this section, we study the bifurcation analysis of the proposed CC-Kuramoto model in order to understand the effect of the design parameters on the stability of a power network. In order to do so, we first analyse some of the characteristics nodes in power systems.\n\n\\subsection{Equal Area Criteria in Power Systems}\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[height=7.2cm,width=8.2cm]{PRA_power_systems.png}\n\\caption{Equivalence of equal area criterion to equilibrium spaces. Green circles denote `sink' nodes where system achieves equilibrium. Red squares denote `source' node where system attains acceleration and hence increase in accumulated power.}\n\\label{fig:equal_area_martens}\n\\end{figure}\n\n\\par In power systems, the equal area criterion is a ``graphical technique used to examine the transient stability of the machine systems (one or more than one) with an infinite bus\". The areas under the curve of a power angle diagram are equated across to calculate effective acceleration\/deceleration thereby comment on the stability of the system. For instance, consider \\eqref{interarea}, rewriting in terms of mechanical and electrical power interactions,\n\n\\begin{equation}\n \\ddot{\\delta}_i=P_{m,i}-\\hat{P}_{transmitted},\n \\label{equal_area}\n\\end{equation}\n\nwhere $\\hat{P}_{transmitted}=\\alpha_i\\dot{\\delta_i} -P_{max}sin(\\Delta\\delta_{ij})=P_{transmitted}-P_{dissipated}$. It can be seen that the collective acceleration of generators is dependent on the difference of mechanical and electrical power inputs. As shown in Figure \\ref{fig:equal_area_martens}, difference in electrical-mechanical inputs either accelerate or decelerate the generators to achieve equilibrium. The generators accelerate when mechanical power is higher than the transmitted electrical power (i.e., $P_{m,i}>P_{transmitted}$) and decelerate when electrical power is higher (i.e., $P_{m,i}r_1>0.5$ CC-Kuramoto model achieves chaotic behavior. 
This can be visualised as complete loss of synchronism and thereby chaos. \n\n\\begin{equation}\n\\begin{split}\n0=\\hat{\\omega}_i-\\alpha_i y_i \\ &+\\sum_{j\\neq i,j=1}^{p}k_{ij}sin(x_j^{a_1} - x^{a_1}_i) \\\\ &-\\sum_{j=p+1}^{n}k_{ij} \\ sin(x^{a_2}_j - x^{a_1}_i),\\\\\n0=\\hat{\\omega}_i-\\alpha_i y_i \\ &+\\sum_{j\\neq i,j=p+1}^{n}k_{ij}sin(x_j^{a_2} - x^{a_2}_i)\n\\\\ &-\\sum_{j=1}^{p}k_{ij} \\ sin(x^{a_1}_j - x^{a_2}_i),\n\\end{split}\n\\label{bifur_eq2}\n\\end{equation}\n\n\\par For the next case, $\\omega^{a_2}_4$ is varied in a range $\\omega^{a_2}_4=r_2 \\in [5,12]$rad\/s keeping coupling parameter constant, showing synchronization at $r_2=7$rad\/s and leaves synchronicity at $r_2=10$rad\/s showing chimera behavior. We solve for \\eqref{bifur_eq2}, where $\\hat{\\omega}_i=[\\omega^{a_1}_1,\\omega^{a_1}_2,\\omega^{a_2}_3,r_2]^T$. This scenario can be interpreted as a gradual overload of one of the generators from two areas leading to de-synchronization in one area, whereas other area remains synchronized (refer Figure \\ref{fig:bifurcation_analysis} (b)). Further, using circle and time-series plots for angular separations (as shown in Figure \\ref{fig:chimera} (a)), the same partial de-synchronization is observed. These can be inferred as islanding of power network through circuit breakers to avoid the impact of excessive overloading of generators in a neighbouring area (and hence blackouts or cascaded failures \\cite{dey2016impact}). To summarize, the heterogeneity was introduced by increasing frequencies $\\omega_i$ incrementally in one area, while keeping parameters of other area constant \\big (i.e., $\\omega^{a1}_1,\\omega^{a1}_2,\\omega^{a2}_3 \\in g_1(\\omega);\\ \\omega^{a2}_4 \\in r_2 g_1(\\omega)=g_2(\\omega), r_2 \\in \\mathbb{R}$; $g_1,g_2$ being frequency distributions\\big). As seen from Figure \\ref{fig:chimera} area experiencing incremental perturbations loose synchronicity whereas other area remains unaffected, emulating blackout conditions with islanding. In order to measure loss of synchronicity, we use order parameter $R=\\frac{1}{N}\\sum^{N}_{j=1}e^{i\\theta_j}$ as shown in Figure \\ref{fig:chimera} (b), (c).\n\n\\subsubsection{Using Eigen value analysis}\n\n\\par In this subsection we provide an eigen value based justification for bifurcation phenomena observed in previous subsection. For instance, consider \\eqref{axonal3} and let $\\lambda_{ap}, \\lambda_{ip}$, $\\lambda_{inc}$ be eigenvalues of anti-phase, in-phase and incoherent dynamics respectively. Where\n\n\\begin{equation}\n \\begin{aligned}\n \\lambda = \\begin{cases}\n\\lambda_{ap}<0 & \\text{if} \\ \\Phi_{ij}=m(\\pi)\\\\\n\\lambda_{ip}>0 & \\text{if} \\ \\Phi_{ij}=0, m(2\\pi)\\\\\n\\lambda_{inc}=0 & \\text{if} \\ \\Phi_{ij}=m(\\pi\/2), \n\\end{cases}\n \\end{aligned}\n \\label{eigenanalysis}\n\\end{equation}\n\n\\par $m \\in \\mathbb{Z}$. Now, since $\\lambda_{ap}=-\\lambda_{ip}$, these two nodes exchange stability through $\\lambda_{inc}$ and hence following can be concluded. (i) anti-phase and in-phase modes have converse stabilities and are never stable simultaneously, (ii) these critical modes swap stabilities at incoherence state and (iii) if either of anti-phase or in-phase states are stable incoherence state must be unstable and vice-versa. 
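Whatever the stability assignment of a given state, the degree of synchrony actually realized can be read off from the order parameter $R$ introduced above. A minimal helper for evaluating it from simulated phase trajectories is sketched below (Python); the per-area variant is our own convenience for diagnosing chimera-like behavior and is not part of the standard definition.
\\begin{verbatim}
import numpy as np

def order_parameter(delta):
    # R(t) = |(1/N) sum_j exp(i delta_j(t))|, phases delta of shape (N, n_times)
    return np.abs(np.exp(1j*delta).mean(axis=0))

def per_area_order_parameter(delta, area):
    # Separate R(t) for each area: a chimera-like state keeps R close to 1 in one
    # area while R fluctuates at smaller values in the other.
    return {a: order_parameter(delta[area == a]) for a in np.unique(area)}

# e.g. per_area_order_parameter(sol.y[:4], area) with sol and area from the earlier sketch
\\end{verbatim}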
Particularly, in power systems these states rest in anti-phase (unstable), in-phase (stable) and chimera (partially stable) states.\n\n\\section{Conclusions}\n\\par In this study, a mathematical model for interarea oscillations is proposed using Kuramoto-type framework with its analogy in power grids. It is shown how these oscillations can be visualized in a `conformist-contrarian' form to better understand interarea oscillations. Validity of the choices has been justified using analogy of spring coupled pendulums. In order to verify the model, a standard four generator power system was considered from the literature. Simulations were performed in MATLAB and results were verified and validated. The proposed model is used to investigate various phenomena like spatial\/temporal chimera \\cite{abrams2004chimera} and spontaneous failures in power systems \\cite{nerc9209}. \n\n\\section*{Acknowledgements}\nThe authors would like to thank Prof. S. D. Varwandkar for his valued discussions and feedbacks towards the outcome of this work.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}%\n\n\\subsection{Cosmological superstrings}\n\nCosmic superstrings \\cite{HenryTye:2006uv} are the strings of string\ntheory stretched to macroscopic length scales by the universe's early\nphase of exponential, inflationary growth\n\\cite{Guth:1980zm,Linde:1981mu,Albrecht:1982wi}. During subsequent\nepochs when the scale factor grows as a more leisurely power law of\ntime a complicated network of various string elements forms\n\\cite{Albrecht:1984xv,Bennett:1987vf,Allen:1990tv}. Long,\nhorizon-crossing strings stretch, short curved pieces accelerate and\nattempt to straighten, and, occasionally, individual segments\nintercommute (collide, break and reattach) chopping out loops and\nforming new, connected string pathways. Analytic and numerical calculations\ndemonstrate that these processes rapidly drive the network to a\nself-similar evolution with statistical properties largely determined\nby the string tension\n\\cite{Vanchurin:2005yb,Ringeval:2005kr,Martins:2005es,BlancoPillado:2011dq,Blanco-Pillado:2013qja}. The\nenergy densities in long strings, in loops, and in gravitational\nradiation divided by the critical energy density are all independent of time.\nThe distribution of loops of a given size relative to the\nhorizon scale is also fixed.\n\nAn understanding of this evolution is informed by previous\nstudies of one dimensional defects\nin the context of symmetry breaking in\ngrand unified theories (GUTs; \\cite{Kibble:1976sj};\nfor a general review see \\cite{Vilenkin:2000jqa}).\nOne important difference for superstrings is the expected\nvalue of the string tension. In GUT theories the string tension\n$G \\mu\/c^2 \\sim \\Lambda_{GUT}^2\/M_p^2 \\sim 10^{-6}$ is fixed by the\nGUT energy scale $\\Lambda_{GUT}$. Observations of the microwave sky have ruled out GUT strings as\nthe source of the cosmological perturbations \\cite{Smoot:1992td,Bennett:1996ce,Spergel:2006hy} and led to upper bounds\non the tension. 
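As a rough numerical check of the GUT estimate above, take the representative values $\\Lambda_{GUT} \\simeq 2\\times 10^{16}$ GeV and $M_p \\simeq 1.2\\times 10^{19}$ GeV (numbers we insert for illustration only); then
\\begin{equation}
G \\mu\/c^2 \\sim \\left( \\frac{2\\times 10^{16}}{1.2\\times 10^{19}} \\right)^2 \\simeq 2.8 \\times 10^{-6},
\\end{equation}
consistent with the $10^{-6}$ scale quoted above.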
Currently, broadly model-independent limits from\nlensing \\cite{Vilenkin:1981zs,Hogan:1984unknown,Vilenkin:1984ea,deLaix:1997dj,Bernardeau:2000xu,Sazhin:2003cp,Sazhin:2006fe,Christiansen:2008vi},\nCMB studies \\cite{Smoot:1992td,Bennett:1996ce,Pogosian:2003mz,Pogosian:2004ny,Tye:2005fn,Wyman:2005tu,Pogosian:2006hg,Seljak:2006bg,Spergel:2006hy,Bevis:2007qz,Fraisse:2006xc,Pogosian:2008am,Ade:2013xla}\nand gravitational wave background and bursts \\cite{Vachaspati:1984gt,Economou:1991bc,Battye:1997ji,Damour:2000wa,Damour:2001bk,Damour:2004kw,Siemens:2006vk,Hogan:2006we,Siemens:2006yp,Abbott:2006vg,Abbott:2009rr,Abbott:2009ws,Aasi:2013vna,TheLIGOScientific:2016dpb}\ngive $G \\mu\/c^2 \\lta 10^{-7}$. More stringent but somewhat more model-dependent limits from pulsar timing\n\\cite{Bouchet:1989ck,Caldwell:1991jj,Kaspi:1994hp,Jenet:2006sv,DePies:2007bm}\nhave regularly appeared. Currently, the\nstrongest inferred limit\nis $G \\mu\/c^2 \\lta 10^{-11}$ \\cite{Blanco-Pillado:2017oxo,Blanco-Pillado:2017rnf}.\n\nLow tension strings are natural in string theory and have little difficulty in this regard. In the most\nwell-studied compactifications the standard model physics is located\nat the bottom of a warped throat where all energy scales are\nexponentially diminished compared to the string scale. Superstrings\nhave tensions that are reduced by exactly this effect and can\ncorrespond to energies as small as TeV (see \\cite{HenryTye:2006uv,Chernoff:2014cba} for reviews).\n\nThe magnitude of $G\\mu\/c^2$\ninfluences many properties of the strings and loops that make up the\nnetwork. A loop with characteristic size $\\ell$ and energy\n$\\propto \\mu \\ell$ will completely dissipate by gravitational wave\nemission in times $t \\sim \\ell\/(\\Gamma G \\mu\/c)$ where $\\Gamma \\sim\n50$ is a loop-dependent pure number\n\\cite{Vachaspati:1984gt,Burden:1985md,Garfinkle:1987yw,Durrer:1989zi,Allen:1994bs,Allen:1994iq,Allen:1994ev,Casper:1995ub}.\nIf $G\\mu\/c^2 \\ll 10^{-6}$ then\nsuperstring loops evaporate by gravitational wave emission much less\nrapidly than GUT string loops. The characteristic size of loops that\nevaporate gravitationally in $t_H$, the age of the universe, is $\\ell_g =\nt_H \\Gamma G \\mu\/c $. These turn out to dominate the distribution\nof loop sizes found in the universe today.\n\nCurrent simulations report that about 10-20\\% of\nthe string network that is chopped out ends up in the form of large\nloops, with sizes within a few orders of magnitude of the horizon\nscale at birth \\cite{BlancoPillado:2011dq,Blanco-Pillado:2013qja}.\nThe rest forms very small loops with size scale\nrelative to the horizon set by a power of $G \\mu\/c^2$\n\\cite{Polchinski:2004hb,Polchinski:2006ee,Polchinski:2007qc,Polchinski:2007rg,Dubath:2007mf,Polchinski:2008hu}. These\nrapidly evaporate.\nToday, the string network's energy density is dominated by the large loops formed\nat an early epoch. If\n$G \\mu\/c^2 < 7 \\times 10^{-9}$\nit is before matter-radiation\nequality. Today's size distribution increases\nas $\\ell \\to \\ell_g$ from above (the universe was denser at earlier\ntimes and formed more smaller loops); the distribution is cutoff by the\nevaporation process at $\\ell < \\ell_g$.\n\nLong gravitational lifetimes have another important effect:\nthe center of mass\nvelocity of the old loops is small and they cluster like cold dark\nmatter \\cite{Chernoff:2009tp}. 
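To give a sense of the scales involved, take $G \\mu\/c^2 = 10^{-11}$ and $\\Gamma \\sim 50$ as quoted above, together with $c\\, t_H \\simeq 1.3 \\times 10^{26}$ m (a representative value we insert for illustration); then
\\begin{equation}
\\ell_g = \\Gamma \\, \\frac{G\\mu}{c^2} \\, c\\, t_H \\simeq 50 \\times 10^{-11} \\times 1.3\\times 10^{26}~{\\rm m} \\approx 6.5 \\times 10^{16}~{\\rm m} \\approx 2~{\\rm pc},
\\end{equation}
so the loops that dominate the present-day distribution are parsec-scale objects which, as noted above, are slowly moving and cluster with the dark matter.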
This opens the way to experimental\ntests of string\ntheory that are based upon direct detection of gravitational wave\nemission and observation of string microlensing of background stellar sources\n\\cite{Chernoff:2017xxx}\n\n\\subsection{Gravitational backreaction in the string network}\n\nThe most numerous loops are close\nto the characteristic size $\\ell_g$, set by\ngravitational backreaction.\nAn understanding of string gravitational backreaction is crucial for\nmaking forecasts of experimental studies and\nplanning future observational campaigns. The emission of gravitational\nradiation and the associated dissipative forces shrink the size of the\nloop (energy loss) and impart a recoil (momentum and angular momentum\nloss). These may change the character of the loop oscillation over\nlong timescales. The radiative emission processes have been\nwell-studied assuming that the loop is a long-lived periodic\noscillator \\cite{HoganRees:1984unknown,Vachaspati:1984gt,\n Burden:1985md,Hogan:1987unknown,Garfinkle:1987yw,Durrer:1989zi,Allen:1994bs,Allen:1994iq,Allen:1994ev,Casper:1995ub}.\nThe secular effects of gravitational backreaction on the loop\noscillation are relatively unexplored. Two important aspects are the\npropensity of loops to self-intersection and the evolution of\ndiscontinuous features on the loops.\n\nSelf-intersections are important because they can lead to the rapid\ndemise of the long-lived loops which are of greatest observational\ninterest. The reason is simple: isolated, dissipationless loops are\nexactly periodic. If a loop can self-intersect it will do so over and\nover again eventually leading to intercommutation and breakage. This\nprocess shatters the loop into many small looplets \\cite{Casper:1995ub}\nmoving apart at\nrelativistic speeds, each of which will evaporate in only a\nfraction of the time required by the original loop.\nSelf-intersections have the potential to radically depress the number\nof old loops of size $\\ell_g$ that would otherwise exist throughout the\nuniverse. The loop distribution will be cutoff at scale $> \\ell_g$; the number density at that cutoff will be substantially smaller.\nFurthermore, the intercommutation\nprocess evicts the shattered progeny from\nbeing bound to the galaxy. Backreaction\ncan significantly alter experimental\nforecasts.\n\nAnother important aspect of gravitational backreaction is the presence\nof kinks and cusps on loops. Typically when a new loop is formed from\na smooth segment of string the orbit of the new loop will contain an\ninfinitesimal element of string that moves at the speed of light for\nan infinitesimal time, repeating once per period. This is a cusp, a\nwell-characterized, periodic strong source of gravitational wave\nemission. Cusp emission is the principle target of gravitational wave\nsearches from string loops because it is strong, beamed and has a\nwell-understood signal form\n\\cite{Vachaspati:1984gt,Economou:1991bc,Battye:1997ji,Damour:2000wa,Damour:2001bk,Damour:2004kw,Siemens:2006vk,Hogan:2006we,Siemens:2006yp,Abbott:2006vg,Abbott:2009rr,Abbott:2009ws,Aasi:2013vna,TheLIGOScientific:2016dpb}.\nRef.~\\cite{Polchinski:2008hu} has argued that a scaling\nnetwork may be inefficient at forming loops with cusps for the\nfollowing reason. Scaling requires chopping out a significant\nfraction of the long strings' length each time the universe doubles in\nsize. The chopping removes loops and inevitably adds kinks (derivative\ndiscontinuities) to the remaining long string segments. 
Smooth long\nstrings accumulate kinks and grow dense with small scale structure as\nthe universe ages. New loops inherit the small scale structure. The\nfirst time that the loop begins to form a\ncusp-like structure the kinky string reconnects, effectively excising\nthe part of the loop responsible for the cusp. Such a loop is left with\nnothing but kinks. Kinks may also be detected by gravitational wave\nsearches but are not as strong or as unidirectional. Recent\ncosmological network simulations support this\ntheoretical prediction\n\\cite{BlancoPillado:2011dq,Blanco-Pillado:2013qja}. In particular,\nthey show that loops with kinks are\nformed preferentially and there are few cusps\\footnote{It must be noted\nthat it isn't clear whether the string substructure in even the biggest simulations has entered a\nscaling regime or is still in the process of evolving.}.\n\nThis general evolutionary outline\nprompts a number of questions related to how gravitational\nbackreaction influences the evolution of derivative discontinuities on\nloops and long strings. Qualitatively, we understand that\ngravitational backreaction will smooth kink discontinuities (lessening\nthe size of the jump in the tangent vector from one side to the other) and\ntheoretically allow new cusps to form. There is a competition between\nthe rate at which the discontinuity diminishes and the rate at which the loop\nshrinks.\nOne question is whether the loop fully evaporates before the\ncusp reforms. Another question\nis whether\na reformed cusp has the same scale\nas the loop itself or an intrinsically\nsmaller scale.\nThese can be answered by calculating the dynamical evolution of\na string loop with backreaction for many orbits.\n\nAnother aspect that requires a full treatment of backreaction is how a\nloop with many kinks evolves (since the scaling solution suggests the\nubiquity of kinks). If the total rate of gravitational\nwave emission scales linearly with the number of kinks \\cite{Bohe:2011rk}\nthen the loop's lifetime is shortened. However, the backreaction of many\nclosely spaced radiating kinks may qualitatively effect the evolution\npredicted on the basis of a single kink. It is therefore of interest to\nunderstand how backreaction operates when there is a high density of\nkinks on long strings and loops.\n\n\\subsection{Theory and simulation}\n\nIn this paper we develop a complete formalism for computing the gravitational backreaction on\ncosmic string loops, and demonstrate the method by computing the gravitational self-force for\nseveral specific cosmic string configurations. Some similar studies were previously done in\nRefs.~\\cite{Quashnock:1990wv,Scherrer:1990pj}, but these were limited in scope and did not include\nmany of the details considered here.\n\nQuashnock and Spergel (QS) \\cite{Quashnock:1990wv} derived linearized equations of motion for a\nstring interacting with its own gravitational field (in this context, linearized means first order\nin $G \\mu\/c^2$ expanded about flat spacetime). They worked with particular coordinates and gauge\nchoices that were chosen to simplify many aspects of the calculation. The weak field approximation\nbreaks down at kinks, cusps and self-intersections, but these freely moving line singularities were\ntreated in a perturbative sense.\n\nQS computed the self-force at a field point as sourced by elements\nof the retarded, distant string image. 
They concluded that only finite divergence-free backreaction\nforces existed for field points with smooth sources, and that the contribution to the backreaction\nforces tended to zero as the source point approached the field point. This situation stands in\ncontrast to the analogous point particle case studied by Dirac \\cite{Dirac:1938nz}, in which\nself-interaction leads to a renormalized mass. Carter and Battye \\cite{Carter:1998ix} and Buonanno\nand Damour \\cite{Buonanno:1998is} showed that while a general string has a local divergent part to\nits perturbed metric, the Nambu-Goto string is special and the total force density due to all the\nlocal divergent pieces exactly vanishes. The remaining force is given by long-range interactions.\n\nKinks and cusps are examples where smoothness in the vicinity of\nthe field point fails to hold. QS did not explicitly\ndiscuss the limiting behavior near a kink but did argue on general\ngrounds that the backreaction force per source coordinate interval at\na cusp would be infinite, but integrable. They also solved numerically for\nthe evolution of the loop represented both as a continuous function\nand as a set of kinks (straight line segments with small tangent\nvector discontinuities) by integrating the backreaction over a full\nperiod. The simulations showed that cusps survive backreaction but are\ndeformed and delayed. Longer integrations suggested that the amplitude\nof the cusp and the associated asymmetric rocket effects were\nsuppressed by backreaction. Finally, QS also showed that small (compared to\nthe size of the loop) kinks decay more rapidly than the string as a\nwhole. The magnitude of the discontinuity at a kink (change in tangent\nvectors) lessens but the discontinuity itself is not\nsmoothed out by dissipation.\n\nIt is some measure of the complexity of the problem that most work since the QS investigation has\ndealt with specific issues and not attempted such an ambitious numerical treatment. Anderson\n\\cite{Anderson:2005qu} analytically calculated the gravitational backreaction forces for the\nAllen-Casper-Ottewill (ACO) loop \\cite{Allen:1994bs}, a rotating loop configuration with a pair of\nkinks (one tangent vector is continuous and the other is discontinuous). The coordinates and gauge\nconditions used were equivalent to those of \\cite{Quashnock:1990wv}. Anderson demonstrated\nexplicitly that all the components of 4-vector acceleration diverged near the kink. The calculated\nforces were, however, integrable so that the equations of motion in the weak field limit were\nintegrable too\\footnote{\\cite{Anderson:2005qu} did not evaluate forces at the kink itself where the\nmetric is ill-determined.}.\n\nIn this paper we do not evolve the string configuration (that\nwill be for a followup) but study in detail the method of\ncalculation of the first order self-force. Certain intermediate quantities in our calculations exhibit divergences.\nThe occurence of these calculational divergences is tied to three interrelated factors:\nthe choice of worldsheet gauge (eg conformal or other), the specification of residual gauge freedom\nin the choice of worldsheet coordinates (eg null or non-null coordinates), and the existence of discontinuous\nsources anywhere on the loop's retarded image (the intersection of the\nworldsheet with the past lightcone of the field point). 
However, the\ntotal integrated self-force at any point\non a smooth region of the worldsheet is always finite due to cancelations of divergences, and is independent of these choices.\nThis finiteness is consistent with the lack of renormalization of the string tension discussed in \\cite{Buonanno:Damour:1998}\nand with the the general conclusions of smoothness of \\cite{Quashnock:1990wv}.\n\nWhile the self-force is finite in smooth regions of the worldsheet,\nit diverges in the limit when the field point approaches cusps or kinks on the\nworldsheet. However, when one solves\nthe linearized equation of motion for the perturbation to the\nworldsheet, the linearized displacement of the worldsheet is finite.\nGoing beyond this treatment will involve critically examining\nthe linearized approximation and the\ndistributional representation of features such as\nkinks and cusps. The question of whether\nphysical divergences occur in a fully self-consistent evolution\nis beyond the scope of this paper. Nevertheless, the methodology we\ndevelop in this paper should allow adressing certain aspects of\nthe question in the\nfuture. Our methodology will allow us to refine the gauge during the\ncourse of a self-consistent evolution (continuing to use\nlinearization with distributional models)\nto separate invariant physical divergences from\ncalculational divergences. In the case of the cusp, for example, we\nwould need to step carefully through a single period of oscillation\nto handle the occurrence of the divergence at a single spacetime\npoint.\n\n\nAs a result of the work in this paper\nthere is evidence that any such singularity is weak in the\n``physical'' sense. In particular, period-averaged changes are given\nby simple quadratures over the worldsheet. Orbit-averaging does not require\ninstant-by-instant evolution but presumes the metric and string\nare only mildly perturbed in some average sense.\nWe find that over an oscillation both\nthe kink and cusp lead to finite displacements of the\nworldsheet and finite small changes in energy, momentum and angular\nmomentum. All period-averaged physical divergences are small and bounded in the\nsense of being proportional to $G \\mu$. This is quite mild compared\nto the character of the singular behavior of point masses in general relativity,\nfor example.\n\nRecently, Wachter and Olum \\cite{Wachter:2016hgi,Wachter:2016rwc}\nhave studied the evolution of\nloops composed of linear pieces (both right and left moving modes are\ngiven by a set of fixed tangent vectors which generate\nkinks). Using the methodology of\n\\cite{Quashnock:1990wv} they found\nthe metric perturbations and the loop's acceleration and analytically\nevaluated the backreaction\nfor a planar rectangular loop \\cite{Garfinkle:1987yw}. They deduced the energy loss, changes to the left and\nright moving modes and the kink smoothing (diminishing the tangent\nvector jumps). Small angle kinks (acute angles) disappeared more\nquickly than large angle kinks (of order $\\pi\/4$). This observation\nis complementary to that of\n\\cite{Quashnock:1990wv} which\nreported small\nsized kinks (length small compared to the loop size) disappeared more\nquickly than large sized kinks (length a fraction of the full loop\nsize). Refs.~\\cite{Wachter:2016hgi,Wachter:2016rwc} compared the loop evaporation time to the kink\nsmoothing time and found that the loop angle was a key\nparameter. For small angles, kinks disappeared rapidly. For large\nangles, the loop evaporated first. 
Finally, the analysis of the piecewise loops showed that the straight line segments begin to curve
after a short period for all except loops with special symmetry.

\subsection{Lagrangian Methodology}

Carter pioneered the treatment of perturbations in an arbitrarily curved spacetime background with
relativistic string, membrane or other brane models where $p$, the spatial dimension of the brane,
is less than $n$, the spatial dimension of spacetime
(\cite{Battye:1995hv,Carter:1993wy,Battye:1998zk}; see \cite{Carter:1997pb} for a review). The
action in such models is
\begin{eqnarray}
 {\cal I} & = & \int {\cal L} d{\bar \Sigma} \\
 d{\bar \Sigma} & = & | \gamma |^{1/2} d^{p+1} \zeta
\end{eqnarray}
where $d{\bar \Sigma}$ is the surface measure element induced on the timelike worldsheet by the
background metric, $\gamma$ is the determinant of the induced metric and $\zeta$ stands for the
$(p+1)$ internal coordinates. We may assume a constant scalar Lagrangian ${\cal L} = -m^{p+1}$,
where $m$ is a characteristic mass scale and $\hbar=c=1$. For the string case ($p=1$) in a flat
background the zeroth order worldsheet coordinates may be chosen so that the induced metric is
conformally flat,
\begin{equation}
\label{eq:conformal}
 \gamma_{ab} = \phi \, \eta_{ab},
\end{equation}
where $\phi > 0$ is a conformal factor. A consequence of this choice is that the worldsheet
derivatives, $\partial_{\zeta^1} z^\mu$ and $\partial_{\zeta^2} z^\mu$, must satisfy certain
orthogonality conditions (the details of which depend on the particular choice of worldsheet
coordinates) and that the equation of motion is given by
\begin{equation}
\label{eq:conformal-eom}
 \phi^{-1} \eta^{ab}\partial_a \partial_b z_{(0)}^\gamma = 0.
\end{equation}
This is just the $1$+$1$D flat space scalar wave equation for each spacetime component of the
string worldsheet vector $z_{(0)}^\alpha$. The solutions to this equation are periodic in both
$\zeta^1$ and $\zeta^2$ in the sense that for a loop of length $L$ we have
$z^\alpha(\zeta^1, \zeta^2) = z^\alpha(\zeta^1+L/2, \zeta^2+L/2)$.

Weak solutions of equations \eqref{eq:general-eom} and \eqref{eq:conformal-eom} allow derivative
discontinuities, so generic solutions are not smooth. The tangent sphere representation provides a
description of the derivatives of the two components of a solution \cite{Vilenkin:2000jqa}.
Perfectly smooth string loop solutions have two continuous paths on the tangent sphere. However,
there may be long-lived kinks (corresponding to gaps in the tangent sphere) that propagate around
the string along null worldsheet directions, and cusps (corresponding to intersections in the
tangent sphere) that only exist instantaneously. There may also be self-intersections, where the
string crosses over on itself.

\subsection{First order equation of motion for the string worldsheet}%
We now return to the general case (no specialization of gauge or metric) to write down the
perturbed equation of motion.
Demanding that the perturbed trace of the extrinsic curvature vanish as in
Eq.~\eqref{eq:zeroorder}, and assuming that the zeroth order equation of motion is satisfied, gives
\cite{Battye:Carter:1995,Battye:1998zk}
\begin{align}
&&\perp^\rho_{\ \,\chi} {\bar \nabla}_\mu {\bar \nabla}^\mu z_{(1)}^\chi - 2
{\bar \nabla}_\mu z_{(1)}^\alpha K^{\mu\ \, \rho}_{\ \,\alpha} +
\perp^{\beta\rho} P^{\mu\nu} R_{\mu\varepsilon\nu\beta} z_{(1)}^\varepsilon \nonumber \\
&&= K^{\alpha\beta\rho} h_{\alpha\beta} - \perp^\rho_{\ \beta} P^{\lambda\tau}
\left( \nabla_\lambda h^\beta_{\ \,\tau} - {1 \over 2} \nabla^\beta h_{\lambda\tau} \right).
\label{eq:final}
\end{align}
The homogeneous version of this equation is a higher dimensional analog of the geodesic deviation
equation.

Identifying the right hand side as a self-force, it is convenient to split this force into separate
contributions, one involving the metric perturbation and the other involving its derivative,
\begin{subequations}
 \label{eq:self-force}
\begin{eqnarray}
 F^\rho &=&\, F_1^\rho + F_2^\rho, \\
 F_1^\rho &\equiv&\, - \perp^\rho{}_\lambda P^{\mu\nu} \Big(\nabla_\mu h_\nu{}^\lambda - \frac12 \nabla^\lambda h_{\mu\nu}\Big), \\
 F_2^\rho &\equiv&\, K^{\mu \nu \rho} h_{\mu \nu}.
\end{eqnarray}
\end{subequations}
Using the definition \eqref{eq:Kdef} of $K^{\mu\nu\rho}$ we can write $F_2^\rho$ in terms of
$H_{ab} \equiv h_{\mu\nu} \partial_a z^\mu \partial_b z^\nu$ (the projection of $h_{\mu\nu}$ along
the worldsheet),
\begin{align}
 \label{eq:F2-hproj}
 F_2^\rho &= \left(\gamma^{ac} \gamma^{bd} \partial_c z^{\sigma} \partial_d z^\lambda \nabla_\lambda P_\sigma{}^\rho \right) H_{ab}\nonumber \\
 &= \perp^\rho{}_\sigma \gamma^{ac} \gamma^{bd} \left(\partial_c \partial_d z^\sigma+ \Gamma^\sigma_{\lambda\tau} \partial_c z^\lambda \partial_d z^\tau\right) H_{ab}.
\end{align}
We can also write the first term as
\begin{equation}
F_1^\rho = - \frac{1}{\sqrt{-\gamma}} \perp^\rho{}_\lambda {\cal F}^\lambda_{\rm conf},
\end{equation}
where\footnote{We use a calligraphic font for ${\cal F}^\mu_{\rm conf}$ since it is not a
gauge-specialized version of the general self-force $F^\mu$, because the left hand side of
Eq.~\eqref{eq:simple} is not obtained from the left hand side of \eqref{eq:final} by a gauge
specialization.}
\begin{align}
 \label{eq:trad-self-force}
 {\cal F}_{\rm conf}^\rho \equiv&\, \sqrt{-\gamma} P^{\mu\nu} \Big(\nabla_\mu h_\nu{}^\rho - \frac12 \nabla^\rho h_{\mu\nu}\Big)
\end{align}
is the quantity that appears on the right hand side of the conformal gauge equation of motion
\eqref{eq:simple} below.
Battye and Carter \cite{Battye:Carter:1995} showed that for a general choice of gauge it is crucial
both to project orthogonally to the worldsheet and to include the additional term involving
$K^{\mu\nu\rho}$ in order to get the correct gravitational self-force\footnote{In fact, a sequence
of papers provided derivations of the fundamental equations of motion with increasing degrees of
rigor.
Following on from\nRef.~\\cite{Battye:Carter:1995}, Battye and Carter \\cite{Battye:1998zk} performed a more\ncareful analysis using a second order Lagrangian variational treatment to derive the first order\nequations of motion for the displacement vector of the world sheet and for the metric\nperturbations. When restricted to the linearized backreaction regime, their final results (given in\nEqs.~(30), (31) and (33) of \\cite{Battye:1998zk} with terms involving $K_\\rho$ identically zero for linearized backreaction) are consistent with their earlier results and with the\nexpressions above.}. In a subsequent work \\cite{Carter:Battye:1998} they showed that, despite the\npresence of divergences in the metric perturbation, the\ngravitational self-force \\eqref{eq:self-force} is finite for strings in four spacetime\ndimensions with smooth worldsheets.\n\nThe very general form for the equations of motion given by Eq.~\\eqref{eq:final} allows arbitrary\nchoice of gauge for the background, both for the spacetime coordinates and for the worldsheet\ncoordinates. It also allows separate arbitrary gauge transformations for the perturbations, and is\ninvariant under two different types of linearized gauge transformations:\n\\begin{itemize}\n\\item Linearized coordinate transformations in spacetime, which induce changes in the worldsheet and\n metric perturbations, $z_{(1)}^\\alpha \\to z_{(1)}^\\alpha + \\xi^\\alpha$,\n $h_{\\alpha\\beta} \\to h_{\\alpha\\beta} - 2 \\nabla_{(\\alpha} \\xi_{\\beta)}.$\n\\item Linearized coordinate transformations on the worldsheet, which induce the changes\n \\begin{equation}\n z_{(1)}^\\alpha \\to z_{(1)}^\\alpha + \\partial_a z^\\alpha \\xi^a.\n \\label{eq:gaugec}\n \\end{equation}\n This gauge freedom shows that only the component of $z_{(1)}^\\alpha$ that is perpendicular to\n the worldsheet contains physical information.\n\\end{itemize}\n\n\\subsection{Choices of Gauge}%\n\n\\subsubsection{Gauge choice to zeroth order}%\n\nWe now once again specialize to Minkowski spacetime in Lorentzian coordinates at zeroth order. Then,\nthe third term on the left hand side of Eq.~\\eqref{eq:final} vanishes identically. The first term\nsimplifies to\n\\begin{equation}\n\\perp^\\rho{}_{\\chi} {1 \\over \\sqrt{-\\gamma}} \\partial_a \\left( \\sqrt{-\\gamma} \\gamma^{ab} \\partial_b z_{(1)}^\\chi \\right),\n\\end{equation}\nand the second term is\n\\begin{equation}\n-2 \\partial_a z^{(1)}_{\\alpha} \\gamma^{ab} z^\\sigma_{(0),bd} z^\\alpha_{(0),c}\n\\gamma^{cd} \\perp_\\sigma{}^{\\rho}.\n\\end{equation}\nThe first two terms simplify further if we use the conformal gauge [Eq.~\\eqref{eq:conformal}] to\nzeroth order, in which case the left hand side becomes\n\\begin{equation}\n\\perp^\\rho{}_{\\chi} {1 \\over \\sqrt{-\\gamma}} \\eta^{ab} \\partial_a \\partial_b z_{(1)}^\\chi.\n\\end{equation}\n\n\\subsubsection{Gauge choice to first order}%\n\nAt first order we adopt Lorenz gauge\\footnote{This gauge condition is often referred to as Lorentz gauge but is actually due to Lorenz \\cite{5672647}.} for the spacetime coordinates. For the worldsheet coordinates\nthere are several natural choices. 
We focus here on the conformal gauge as it is computationally the most convenient, and direct the
reader to Appendix \ref{app:gauge} for a discussion of other possible choices.

The choice of conformal gauge at first order amounts to choosing the worldsheet coordinates so that
the conformal flatness condition \eqref{eq:conformal} holds to first order as well as zeroth order.
Anderson \cite{Anderson:2005qu} showed that in this gauge the equation of motion,
Eq.~\eqref{eq:final}, takes the simple form
\begin{equation}
 \eta^{ab} \partial_a \partial_b z_{(1)}^\chi =
 - {\cal F}^\chi_{\rm conf}.
\label{eq:simple}
\end{equation}
When our sign convention for the metric is taken into account, this form is consistent with that
used by Buonanno and Damour \cite{Buonanno:Damour:1998}.

Comparing with the covariant equation, Eq.~\eqref{eq:final}, we see a number of differences due to
the gauge specialization:
\begin{itemize}
\item The right hand side of Eq.~\eqref{eq:simple} corresponds to the second term on the right
 hand side of Eq.~\eqref{eq:final}, but with the projection tensor dropped.
\item The left hand side of Eq.~\eqref{eq:simple} corresponds to the first term on the left hand
 side of Eq.~\eqref{eq:final}, but again with the projection tensor dropped.
\item The remaining two terms in Eq.~\eqref{eq:final} involving couplings to the extrinsic
 curvature tensor have been dropped -- they cancel against the effect of dropping the
 projection tensors in this gauge. (We have already dropped the term involving the Riemann
 tensor since we are working in flat spacetime.)
\end{itemize}

A simple proof of this can be obtained by starting with the general coordinate expression
\eqref{eq:coordK} for $K^\rho$ before considering perturbations, and applying the conformal gauge
condition \eqref{eq:conformal}. We have
\begin{equation}
K^\rho = {1 \over \sqrt{-\gamma}} \eta^{ab} \partial_a \partial_b z^\rho
+ \gamma^{ab} z^\lambda_{,a} z^\mu_{,b} \Gamma^\rho_{\lambda\mu},
\label{eq:expr}
\end{equation}
without approximation ($z^\rho$, $\gamma_{ab}$, $g_{\alpha\beta}$, $\Gamma^\rho_{\lambda\mu}$
exact). Now consider evaluating this expression with the metric
$g_{\alpha\beta} \to \eta_{\alpha\beta} + h_{\alpha\beta}$ and worldsheet
$z^\alpha \to z_{(0)}^\alpha + z_{(1)}^\alpha$. The zeroth order term vanishes by assumption. The
variation of the first term in Eq.~\eqref{eq:expr} comes from replacing $z^\rho$ with
$z_{(0)}^\rho + z_{(1)}^\rho$, since the zeroth order quantity
$\eta^{ab} \partial_a \partial_b z_{(0)}^\rho$ vanishes. Therefore this term yields the left hand
side of Eq.~\eqref{eq:simple}. Similarly, the variation of the second term in Eq.~\eqref{eq:expr}
comes from the variation in $\Gamma^\rho_{\lambda\mu}$, since this quantity vanishes in the
background by assumption (we are working in Lorentzian coordinates in Minkowski spacetime).
Using expression\n\\eqref{eq:projection} for the projection tensor we see that the variation of this term yields the\nright hand side of Eq.~\\eqref{eq:simple}.\n\nFor the specific choice of gauge in this section ${\\cal F}^\\rho_{\\rm conf}$\nnaturally appears in the balance laws for energy and momentum relating\nthe flux of radiation at infinity to the local dissipation forces (see\nAppendix \\ref{sec:energymomentumlossdiscussion}).\n\n\\subsection{First order metric perturbation}\nThe stress tensor for a Nambu-Goto cosmic string is given by \\cite{Vilenkin:2000jqa}\n\\begin{equation}\n T^{\\alpha \\beta} (x)=\n - G \\mu \\iint P^{\\alpha \\beta} \\delta_4 (x,z) \\sqrt{-\\gamma} \\, d \\zeta^{1'} d\\zeta^{2'}\n \\label{exactstressenergy}\n\\end{equation}\nwhere $\\delta_4 (x,z) = \\frac{\\delta_4 (x-z)}{\\sqrt{-g}}$ is the four-dimensional invariant Dirac\ndelta distribution and $z$, $P^{\\alpha \\beta}$ and $\\gamma$ are all functions of $\\zeta^{a'}$. A\ncoupling of the string to gravity leads to deviations of the spacetime from the background. For\nsufficiently small string tensions, $G \\mu\/c^2 \\ll 1$, this deviation may be treated perturbatively\nby expanding the metric about the background spacetime,\n\\begin{equation}\n g_{\\alpha \\beta} = \\mathring{g}_{\\alpha \\beta} + h_{\\alpha \\beta}.\n\\end{equation}\nThe perturbation satisfies the linearized Einstein equation, which in Lorenz gauge is just\nthe wave equation,\n\\begin{equation}\n \\Box \\bar{h}_{\\alpha \\beta} + 2 R_\\alpha{}^\\gamma{}_\\beta{}^\\delta h_{\\gamma \\delta} = -16 \\pi T_{\\alpha \\beta}\n\\end{equation}\nwhere $\\bar{h}_{\\alpha \\beta} \\equiv h_{\\alpha \\beta} - \\tfrac12 \\mathring{g}_{\\alpha \\beta} \\mathring{g}^{\\gamma \\delta} h_{\\gamma \\delta}$\nis the trace-reversed metric perturbation.\nWe can invert this equation using the retarded Green function, which satisfies the wave equation,\n\\begin{equation}\n \\Box G_{\\alpha \\beta}{}^{\\alpha' \\beta'}\n + 2 R_\\alpha{}^\\gamma{}_\\beta{}^\\delta G_{\\gamma \\delta}{}^{\\alpha' \\beta'} =\n - g_{\\alpha}{}^{\\alpha'} g_{\\beta}{}^{\\beta'} \\delta^4 (x,x').\n\\end{equation}\nIn a four-dimensional Minkowski background ($\\mathring{g}_{\\alpha \\beta} = \\eta_{\\alpha \\beta}$) the solution is\n\\begin{equation}\nG^{\\rm ret}_{\\alpha \\beta}{}^{\\alpha' \\beta'} (x,x') = \\tfrac{1}{4\\pi} \\Theta_{-}(x,x') \\delta_{(\\alpha}^{\\alpha'} \\delta_{\\beta)}^{\\beta'} \\delta[\\sigma(x,x')].\n\\end{equation}\nHere, $\\sigma(x,x')$ is the Synge world-function, defined to be one-half of the square of the\ngeodesic distance between $x$ and $x'$, so that the Dirac delta function is non-zero only when\n$x$ and $x'$ are null-separated. 
In Minkowski spacetime, we have the closed form
\begin{equation}
 \label{eq:Synge}
 \sigma (x, x') = \frac12 \eta_{\alpha \beta} (x^\alpha - x^{\alpha'}) (x^\beta - x^{\beta'}).
\end{equation}
The metric perturbation is then given by convolving the Green function with the source,
\begin{align}
\label{eq:hbar-convolution}
 \bar{h}_{\alpha \beta}(x) =&\, 16 \pi \int G^{\rm ret}_{\alpha \beta}{}^{\alpha' \beta'} (x,x') T_{\alpha' \beta'} (x') \sqrt{-g(x')} d^4 x'
\nonumber \\
 =& \, - 4 \,G \mu \iint P_{\alpha \beta} \delta [\sigma(x, z)] \sqrt{-\gamma} d \zeta^{1'} d\zeta^{2'},
\end{align}
where $P_{\alpha \beta}$, $z^\alpha$ and $\gamma$ are all functions of $\zeta^{1'}$ and
$\zeta^{2'}$.

In practical calculations it is convenient to perform one of the integrals immediately using the
identity
\begin{equation}
 \delta\Big[\sigma\big(x,z(\zeta^1, \zeta^2)\big)\Big]
 = \frac{\delta\big[\zeta^1 - \zeta^1_{\rm ret}(x, \zeta^2)\big]}{|r_1|},
\end{equation}
where $r_1 \equiv \partial_{\zeta^{1'}} \sigma = (\partial_{\zeta^1} z^{\alpha'}) (\partial_{\alpha'} \sigma)$
and $\zeta^1_{\rm ret}(x, \zeta^2)$ parameterizes the retarded image, defined by
\begin{equation}
\sigma[ x, z(\zeta^1_{\rm ret}, \zeta^2) ] =0.
\end{equation}
This gives
\begin{equation}
\label{eq:hbar-zeta2-convolution}
 \bar{h}_{\alpha \beta}(x) = \, - 4 G \mu \oint \Bigg[\frac{\sqrt{-\gamma} P_{\alpha \beta}}{|r_1|}\Bigg]_{\zeta^{1'}_{\rm ret}} d\zeta^{2'},
\end{equation}
where the quantity in square brackets is evaluated at $\zeta^{1'} = \zeta^{1'}_{\rm ret}(\zeta^{2'})$.
The one-dimensional integration traces exactly one period of the loop's retarded image and there
is no boundary; it is a closed loop. Equivalently, the non-trace-reversed metric perturbation is
given by
\begin{equation}
\label{eq:h-zeta2-convolution}
 h_{\alpha \beta}(x) = \, - 4 G \mu \oint \Bigg[\frac{\sqrt{-\gamma}}{|r_1|} \Sigma_{\alpha \beta} \Bigg]_{\zeta^{1'}_{\rm ret}} d\zeta^{2'},
\end{equation}
where $\Sigma_{\alpha \beta} \equiv P_{\alpha \beta}-\tfrac12 \eta_{\alpha \beta} P$ with $P \equiv
P^{\gamma}{}_{\gamma}$. Note that the integral does not converge when $x$ is
a point on the worldsheet; this is because the integrand diverges whenever $r_1 = 0$,
which occurs when source and field points coincide, i.e. $x = z$.

Derivatives of the first order metric perturbation may be computed in a similar manner to
$h_{\alpha \beta}$ itself, with the caveat that care must be taken in non-smooth regions of the
string.
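Away from the worldsheet the closed-loop integral \eqref{eq:hbar-zeta2-convolution} is
straightforward to evaluate numerically: for each value of $\zeta^{2'}$ one root-finds the retarded
coordinate $\zeta^{1'}_{\rm ret}$ and sums the integrand over one period of the image. The
following minimal Python sketch illustrates this for an off-worldsheet field point. It is an
illustration only, not the code used for the results in this paper: the simple one-harmonic test
loop, the parameter values and the function names are our own choices, and we take
$P_{\alpha\beta} = \gamma^{ab}\partial_a z_\alpha \partial_b z_\beta$ in conformal
$\tau$--$\zeta$ coordinates with signature $(-,+,+,+)$, so that
$\sqrt{-\gamma}\,P_{\alpha\beta} = \partial_\zeta z_\alpha \partial_\zeta z_\beta -
\partial_\tau z_\alpha \partial_\tau z_\beta$.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
L, alpha, Gmu = 1.0, 0.6, 1.0e-8       # loop length, relative angle, tension (illustrative)
k = 2.0 * np.pi / L

# One-harmonic test loop in conformal tau-zeta gauge: z = (tau, (a(zeta-tau)+b(zeta+tau))/2)
a  = lambda u: np.array([np.sin(k*u), -np.cos(k*u), 0.0]) / k
b  = lambda v: np.array([np.sin(k*v)*np.cos(alpha), -np.cos(k*v), np.sin(k*v)*np.sin(alpha)]) / k
ap = lambda u: np.array([np.cos(k*u), np.sin(k*u), 0.0])                      # |a'| = 1
bp = lambda v: np.array([np.cos(k*v)*np.cos(alpha), np.sin(k*v), np.cos(k*v)*np.sin(alpha)])

def z(tau, zeta):       # worldsheet embedding z^alpha(tau, zeta)
    return np.concatenate(([tau], 0.5*(a(zeta - tau) + b(zeta + tau))))
def dz_dtau(tau, zeta):
    return np.concatenate(([1.0], 0.5*(bp(zeta + tau) - ap(zeta - tau))))
def dz_dzeta(tau, zeta):
    return np.concatenate(([0.0], 0.5*(ap(zeta - tau) + bp(zeta + tau))))

def tau_ret(x, zeta):
    """Retarded root of sigma = 0: x^0 - tau = |x_vec - z_vec(tau, zeta)| (unique)."""
    f = lambda tau: (x[0] - tau) - np.linalg.norm(x[1:] - z(tau, zeta)[1:])
    lo = x[0] - (np.linalg.norm(x[1:]) + 2.0/k + 1.0)   # f(lo) > 0, f(x[0]) < 0
    return brentq(f, lo, x[0])

def hbar(x, N=400):
    """Trace-reversed perturbation at a point off the string, via the 1D closed-loop integral."""
    acc = np.zeros((4, 4))
    for zeta in np.linspace(0.0, L, N, endpoint=False):
        tr = tau_ret(x, zeta)
        zt, zz = dz_dtau(tr, zeta), dz_dzeta(tr, zeta)
        r1 = -(eta @ (x - z(tr, zeta))) @ zt            # r_1 = d(sigma)/d(tau')
        zt_l, zz_l = eta @ zt, eta @ zz                 # lowered indices
        sqrtg_P = np.outer(zz_l, zz_l) - np.outer(zt_l, zt_l)   # sqrt(-gamma) P_{alpha beta}
        acc += sqrtg_P / abs(r1)
    return -4.0 * Gmu * acc * (L / N)                   # periodic Riemann sum over zeta^2'

print(hbar(np.array([0.37, 0.05, 0.02, 0.40])))         # field point well off the worldsheet
\end{verbatim}

Because the integrand here is smooth and periodic, a few hundred sample points already give a
well-converged quadrature; it is precisely in the non-smooth regions of the string that this direct
evaluation breaks down.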
These non-smooth regions occur at kinks and cusps, and also in the vicinity of the field\npoint, $x$, if it is on the worldsheet.\n\nIgnoring the issue of smoothness for now, and differentiating Eq.~\\eqref{eq:hbar-convolution} with\nrespect to the field point, $x$, we get\n\\begin{align}\n\\label{eq:dh-convolution}\n \\partial_\\gamma & \\bar{h}_{\\alpha \\beta}(x)\n = - 4 \\,G \\mu \\iint P_{\\alpha \\beta} \\, \\partial_\\gamma \\big( \\delta [\\sigma(x, z)] \\big) \\sqrt{-\\gamma} \\, d \\zeta^{1'} d\\zeta^{2'}\n\\nonumber \\\\ &\n = - 4 \\,G \\mu \\iint P_{\\alpha \\beta} \\,\\partial_\\gamma \\sigma \\, \\delta'[\\sigma(x, z)] \\sqrt{-\\gamma}\\, d \\zeta^{1'} d\\zeta^{2'}\n\\nonumber \\\\ &\n = - 4 \\,G \\mu \\iint P_{\\alpha \\beta} \\frac{\\partial_\\gamma \\sigma}{\\partial_{\\zeta^{1'}} \\sigma} \\partial_{\\zeta^{1'}} \\big( \\delta [\\sigma(x, z)] \\big) \\sqrt{-\\gamma}\\, d \\zeta^{1'} d\\zeta^{2'}\n\\nonumber \\\\ &\n = - 4 \\,G \\mu \\iint \\frac{P_{\\alpha \\beta} \\Omega_\\gamma}{r_1} \\partial_{\\zeta^{1'}} \\big( \\delta [\\sigma(x, z)] \\big) \\sqrt{-\\gamma}\\, d \\zeta^{1'} d\\zeta^{2'},\n\\end{align}\nwhere $\\Omega^\\alpha \\equiv x^\\alpha - x^{\\alpha'}$ is the coordinate distance\nbetween $x$ and $x'$. On a smooth worldsheet, this may be integrated by parts to give\n\\begin{align}\n\\label{eq:dhbar-zeta2-convolution}\n \\partial_\\gamma & \\bar{h}_{\\alpha \\beta}(x)\n\\nonumber \\\\\n & = 4 \\,G \\mu \\iint \\partial_{\\zeta^{1'}} \\bigg[ \\frac{\\sqrt{-\\gamma} P_{\\alpha \\beta} \\Omega_\\gamma}{r_1} \\bigg] \\delta [\\sigma(x, z)] d \\zeta^{1'} d\\zeta^{2'}\n\\nonumber \\\\\n & = 4 \\,G \\mu \\oint \\Bigg[\\frac{1}{|r_1|} \\partial_{\\zeta^{1'}} \\bigg( \\frac{\\sqrt{-\\gamma} P_{\\alpha \\beta} \\Omega_\\gamma}{r_1} \\bigg) \\Bigg]_{\\zeta^{1'}_{\\rm ret}} d\\zeta^{2'}.\n\\end{align}\nNote that there are no boundary terms introduced in the integration by parts as the integration is\nover a closed loop. Additionally, note that we can also arrive at the same equation by\ndifferentiating Eq.~\\eqref{eq:hbar-zeta2-convolution} and accounting for the fact that the\ndependence on $x$ appears both through $r_1$ and through $\\zeta^{1'}_{\\rm ret}$, along with the\nrelation $\\partial_\\gamma \\zeta^{1'}_{\\rm ret} = -\\Omega_\\gamma \/ r_1$ (see Sec.~10 of\n\\cite{Poisson-review}). Again, we may write this in the non-trace-reversed form,\n\\begin{align}\n\\label{eq:dh-zeta2-convolution}\n \\partial_\\gamma & h_{\\alpha \\beta}(x) =\n\\nonumber \\\\\n & 4 \\,G \\mu \\oint \\Bigg[\\frac{1}{|r_1|} \\partial_{\\zeta^{1'}} \\bigg( \\frac{\\sqrt{-\\gamma} \\Sigma_{\\alpha \\beta} \\Omega_\\gamma}{r_1} \\bigg) \\Bigg]_{\\zeta^{1'}_{\\rm ret}} d\\zeta^{2'}.\n\\end{align}\n\n\\subsection{First order self-force}\nWith the results from the previous section at hand, it is straightforward to obtain an integral\nexpression for the first-order gravitational self-force. 
Substituting Eqs.~\eqref{eq:h-zeta2-convolution} and \eqref{eq:dh-zeta2-convolution} into
\eqref{eq:self-force} we obtain
\begin{align}
\label{eq:F1-convolution}
 F_1^{\mu}(z) &= - 4 \,G \mu \perp^{\mu \gamma} P^{\alpha \beta} \times \nonumber \\
 & \hspace*{-0.5cm} \oint \Bigg[
 \frac{1}{|r_1|} \partial_{\zeta^{1'}} \Bigg( \frac{\sqrt{-\gamma}\big(\Sigma_{\beta \gamma} \Omega_\alpha
 - \tfrac12 \Sigma_{\alpha \beta} \Omega_\gamma\big)}{r_1} \Bigg) \Bigg]_{\zeta^{1'}_{\rm ret}} d\zeta^{2'},
\\
\label{eq:F2-convolution}
 F_2^{\mu}(z) &= \, - 4 G \mu K^{\beta \alpha \mu} \oint \Bigg[\frac{\sqrt{-\gamma}\Sigma_{\alpha \beta}}{|r_1|} \Bigg]_{\zeta^{1'}_{\rm ret}} d\zeta^{2'}.
\end{align}
Here, it is understood that the $\perp^{\mu \gamma}$, $P^{\alpha \beta}$ and $K^{\beta \alpha \mu}$
appearing outside the integral are to be evaluated at $z$, whereas the $P^{\alpha \beta}$ and
$\gamma$ appearing inside the integral are to be evaluated at the retarded point $z'$.

One may expect a difficulty to arise from the fact that $\bar{h}_{\alpha\beta}$ diverges
logarithmically (and $\partial_\gamma \bar{h}_{\alpha\beta}$ is even more divergent) when the
source and field points coincide. This would appear to be a major obstacle for computing the
self-force since the integral expressions for $\bar{h}_{\alpha\beta}$ and
$\partial_\gamma \bar{h}_{\alpha\beta}$ will not converge when the field point, $x$, is on the
worldsheet. Fortunately, it turns out that for field points on smooth parts of the worldsheet
cancellations occur in the particular combination appearing in the equation of motion [and hence
in the self-force, Eq.~\eqref{eq:self-force}], and many of the divergent terms drop out. The result
is that one obtains a convergent integral and a finite self-force. This was shown to hold in
\cite{Buonanno:Damour:1998} for the conformal gauge and in \cite{Carter:Battye:1998} for an
arbitrary gauge. However, both cases implicitly assumed a smooth string worldsheet. It turns out
that the conclusions continue to hold for a non-smooth worldsheet provided the field point is on a
smooth part of the worldsheet. As a field point approaches a non-smooth point on the worldsheet the
total self-force diverges.

Despite this latter divergence, there is one further important consideration, namely the physical
significance of the self-force itself. It is possible that a divergence in the self-force is a
spurious artifact arising from, for example, an unfortunate choice of gauge or from a
distributional treatment of non-smooth worldsheet features. Indeed, Anderson
\cite{Anderson:2005qu} computed explicit closed form expressions for the self-force in the case of
the ACO string. His expressions diverge logarithmically, and as negative powers of the distance, in
the vicinity of the kink. However, this divergence is integrable and he was able to solve the
equations of motion to compute finite deviations in both the position and velocity of the
string\footnote{More precisely, the derivative along the direction orthogonal to the kink's
propagation direction was divergent at the kink; however, Anderson was able to obtain a gauge
transformation which eliminated this divergence, and so it can be attributed to nonphysical
coordinate effects.}.
Similar conclusions have also been drawn in other work \\cite{Quashnock:1990wv,Wachter:2016rwc,Wachter:2016hgi}.\n\nIn this work, we empirically find results that are consistent with these previous conclusions;\nalthough the equation of motion has a divergent self-force term it turns out to give a finite\nchange to the worldsheet. Any physical measurement must be consistent\nwith the inferred finite displacement. With a distributional description of\nkinks and cusps as adopted here finite displacements can lead to singular\nchanges in derivative quantities such as tangent\nvectors on the worldsheet.\\footnote{Divergent behavior of this sort\n (changes of order $G \\mu\/c^2$ in the tangent vector direction over a single\n period of oscillation) has recently been reported by\n Blanco-Pillado, Olum and Wachter [see acknowledgements]. \n In our treatment here we emphasize that\n we ignore the possibility of additional contributions coming from the kink\nitself. It is difficult to validate this assumption within a distributional approach. It is\nlikely that a matched asymptotic approach along the lines of Ref.~\\cite{pound:2010} for point\nparticles would be required to provide a definitive answer to the question\nof whether the distributional treatment omits any important physical\neffects. We anticipate that such a treatment would also regularize singular\ntangent vector derivatives so that all physical measurements\nare finite, not merely consistent with the inferred finite worldsheet\ndisplacement.}\nOptimistically, we can expect the finiteness of worldsheet displacements\nto carry through to more general scenarios, and hope that the divergences in the force are\nalways integrable. A proof of this fact can likely be obtained from a local expansion of the type\ngiven in Sec.~\\ref{sec:local-expansion} below, adapted to allow for a kink or cusp within the\n``local'' region. Since there are considerable subtle details in this calculation, we will leave\nits exploration for future work.\n\n\n\\section{Evaluating the Gradient of the Retarded Metric Perturbation}\n\\label{sec:dh}\n\nIn the previous section, we obtained integral expressions for the metric perturbation, its\nderivative, and the gravitational self-force. The latter two are valid provided the retarded image\nof the worldsheet is smooth. In reality, we do not have the luxury of a smooth worldsheet for at\nleast two reasons:\n\\begin{enumerate}\n \\item We are interested in studying strings with kinks and cusps, and the worldsheet is non-smooth at the\n location of any kink or cusp;\n \\item We are interested in computing the self-force, which requires us to evaluate the metric\n perturbation and its derivative in the limit $x \\to z$. In that case, if one considers the\n retarded image of a point directly on the string, $x = z$, one finds that it is not in\n general smooth at the field point, $\\zeta^{2'} = \\zeta^{2'}(x)$.\n\\end{enumerate}\nThese can lead to important distributional-type contributions\nto the integrand in the expression for the self force\nwhich are easily missed.\nIn the following subsections, we extend Eqs.~\\eqref{eq:dh-zeta2-convolution} and\n\\eqref{eq:F1-convolution} above to allow for these non-smooth\nfeatures. 
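The second issue is easy to see numerically: for a field point on the worldsheet, the branch of the
retarded image passing through it leaves along the two null directions of the worldsheet, so in
conformal $\tau$--$\zeta$ coordinates $\zeta^1_{\rm ret}(\zeta^2)$ has one-sided slopes $\mp 1$
there. A minimal Python check (using the same illustrative one-harmonic test loop as in the sketch
above, not a configuration analyzed in this paper) is:

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

L, alpha = 1.0, 0.6
k = 2.0 * np.pi / L
a = lambda u: np.array([np.sin(k*u), -np.cos(k*u), 0.0]) / k
b = lambda v: np.array([np.sin(k*v)*np.cos(alpha), -np.cos(k*v), np.sin(k*v)*np.sin(alpha)]) / k
zvec = lambda tau, zeta: 0.5 * (a(zeta - tau) + b(zeta + tau))   # spatial worldsheet position

tau0, zeta0 = 0.13, 0.31              # generic on-worldsheet field point x = z(tau0, zeta0)
x3 = zvec(tau0, zeta0)

def tau_ret_near(dzeta):
    """Branch of the retarded image through the field point, at zeta = zeta0 + dzeta."""
    f = lambda tau: (tau0 - tau) - np.linalg.norm(x3 - zvec(tau, zeta0 + dzeta))
    return brentq(f, tau0 - 3.0*abs(dzeta), tau0 - 1e-9*abs(dzeta))

for dz in [1e-2, 1e-3, 1e-4]:
    slope_plus  = (tau_ret_near(+dz) - tau0) / (+dz)
    slope_minus = (tau_ret_near(-dz) - tau0) / (-dz)
    print(dz, slope_plus, slope_minus)   # tends to -1 and +1: a corner in the image
\end{verbatim}

The one-sided slopes approach $-1$ and $+1$, confirming that the image through the field point runs
along the worldsheet null directions $\zeta \mp \tau = {\rm const}$, so the integrand of
Eq.~\eqref{eq:dh-zeta2-convolution} must be handled with care there.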
We begin with a general covariant derivation of the integral\nto explain how coordinate-dependent divergences arise,\n and follow up with an explicit\ntreatment of both issues mentioned above.\n\n\\subsection{Covariant evaluation of the worldsheet integral}\n\\label{sec:coord-depend-integral}\n\nThe expression for the gradient of the metric perturbation at a point $x^\\alpha$ is of the form (dropping spacetime tensor indices)\n\\begin{equation}\nI = \\int_{\\cal W} \\omega_{ab} \\delta'(\\sigma).\n\\label{dds}\n\\end{equation}\nHere ${\\cal W}$ is the worldsheet defined by $x^\\alpha = z^\\alpha(\\zeta^a)$, $\\omega_{ab}$ is some\ngiven smooth two-form on the worldsheet, and the function $\\sigma$ is as defined in\nEq.~\\eqref{eq:Synge}. In this subsection we will derive some identities for integrals of the form\n\\eqref{dds} for arbitrary $\\omega_{ab}$ and arbitrary smooth $\\sigma$, and in the next subsection\nwe will specialize to the specific form \\eqref{eq:Synge} of $\\sigma$ for our application here.\n\nAs a warm up, let us first consider a simpler version of the integral \\eqref{dds}, namely\n\\begin{equation}\nJ = \\int_{\\cal W} \\omega_{ab} \\delta(\\sigma).\n\\label{Jdef}\n\\end{equation}\nLet ${\\cal C}$ be the curve given by $\\sigma = 0$. We would like to derive an expression for $J$ of the form\n\\begin{equation}\nJ = \\int_{\\cal C} \\theta_a\n\\label{ans22}\n\\end{equation}\nwhere $\\theta_a$ is a one-form on the worldsheet. The result for $\\theta_a$ is\n\\begin{equation}\n\\theta_a = \\frac{D_a h}{ \\omega^{bc} D_b \\sigma D_c h}\n\\label{oneforma}\n\\end{equation}\nHere $D_a$ can be taken to be either a covariant or a partial derivative on the worldsheet, and\n$\\omega^{ab}$ is the inverse of $\\omega_{ab}$. Finally $h$ can be taken to be any smooth function\non the worldsheet which has the property that $dh \\wedge d\\sigma \\ne 0$.\n\nNote that the expression \\eqref{oneforma} for the one-form, when pulled back onto the curve ${\\cal\nC}$, is independent of the choice of $h$. To see this, suppose we replace $h$ with a function $H$\nof $h$ and $\\sigma$,\n\\begin{equation}\nh \\to H(h,\\sigma).\n\\label{hchange}\n\\end{equation}\nUnder this transformation\n\\begin{equation}\nD_a h \\to H_{,h} D_a h + H_{,\\sigma} D_a \\sigma.\n\\end{equation}\nWhen this expression is inserted into the one-form \\eqref{oneforma}, the contribution from the\nsecond term to the denominator vanishes because of antisymmetrization, and the contribution to the\nnumerator vanishes when the one form is pulled back to ${\\cal C}$, since $\\sigma=0$ on ${\\cal C}$.\nThe factors of $H_{,h}$ cancel between the numerator and the denominator, and so we see that the\npullback of $\\theta_a$ to ${\\cal C}$ is invariant under the transformation. Thus it is independent\nof the choice of $h$.\n\nWe now turn to the derivation of the formula \\eqref{oneforma}. We specialize to coordinates\n$\\zeta^{\\bar 0} = \\sigma$, $\\zeta^{\\bar 1} = h$. The integral \\eqref{Jdef} becomes\n\\begin{equation}\nJ = \\int d\\sigma \\int dh \\, \\omega_{\\sigma h}(\\sigma,h) \\delta(\\sigma)\n\\end{equation}\nwhere $\\omega_{\\sigma h} = \\omega_{{\\bar 0}{\\bar 1}}$. Evaluating the integral using the delta\nfunction gives\n\\begin{equation}\nJ = \\int dh \\, \\omega_{\\sigma h}(0,h).\n\\label{b9}\n\\end{equation}\nWe now rewrite this in a form which is valid in arbitrary coordinate systems. The factor $\\int dh$ can be written as the integral over ${\\cal C}$ of the one-form $D_a h$. 
The factor $\omega_{\sigma h}$ can be written as
\begin{equation}
\omega_{\sigma h} = \frac{1}{\omega^{\sigma h}}.
\label{b10}
\end{equation}
Using the tensor transformation law we have
\begin{equation}
\omega^{\sigma h} = \omega^{{\bar 0}{\bar 1}} = \omega^{ab} \frac{\partial \zeta^{\bar 0}}{\partial \zeta^a}
\frac{\partial \zeta^{\bar 1}}{\partial \zeta^b}
= \omega^{ab} \frac{\partial \sigma}{\partial \zeta^a}
\frac{\partial h}{\partial \zeta^b}.
\label{b11}
\end{equation}
Combining Eqs.~\eqref{b9}, \eqref{b10}, and \eqref{b11} now yields the result given by
Eqs.~\eqref{ans22} and \eqref{oneforma}.


We turn next to the corresponding analysis for the integral \eqref{dds}. Suppose that instead of
integrating over the entire worldsheet, we integrate over a region $\Delta {\cal W}$ of it. The
intersection of the boundary $\partial \Delta {\cal W}$ of this region with the curve ${\cal C}$
will consist of a set of discrete points ${\cal P}_i$. The formula for the integral is
\begin{equation}
I = \int_{\Delta {\cal W}} \omega_{ab} \delta'(\sigma) = I_{\rm boundary} + I_{\rm bulk}
\label{int5}
\end{equation}
where the contribution from the boundary is
\begin{equation}
I_{\rm boundary} = \sum_i \pm \frac{1}{\varphi} \frac{k^a D_a h}{ k^b D_b \sigma}.
\label{Ib}
\end{equation}
Here $k^a$ is the tangent to the boundary $\partial \Delta {\cal W}$ and
\begin{equation}
\varphi = \omega^{ab} D_a \sigma D_b h.
\label{varphidef}
\end{equation}
The contribution from the bulk is
\begin{equation}
I_{\rm bulk} = \int_{\cal C} \theta_a
\end{equation}
where the one-form $\theta_a$ is
\begin{equation}
\theta_a = \frac{1}{\varphi^3} \left( \omega^{bc} D_b \varphi D_c h \right) D_a h.
\label{xyy}
\end{equation}


Under a change of the function $h$ of the form \eqref{hchange}, the one-form $\theta_a$ is no
longer invariant. Instead, it transforms by an exact form\footnote{This formula is valid when
pulled back to the curve ${\cal C}$.}
\begin{equation}
\theta_a \to \theta_a + D_a \lambda,
\label{ccc}
\end{equation}
where $\lambda = H_{,\sigma} / (\varphi H_{,h})$. The change in the boundary integral is
\begin{equation}
\sum_i \pm \frac{H_{,\sigma}}{\varphi H_{,h}},
\end{equation}
which cancels against the change \eqref{ccc} in the one-form. Thus we make the important
observation that the integral \eqref{int5} is independent of the choice of $h$, but the split into
boundary and bulk terms is not.


We now turn to the derivation of the formula \eqref{int5}. As before we initially specialize to
coordinates $(\zeta^{\bar 0}, \zeta^{\bar 1}) = (\sigma,h)$.
Inserting the identity
\begin{equation}
\omega_{\sigma h} \delta'(\sigma ) d\sigma \wedge dh = d \left[ \omega_{\sigma h} \delta(\sigma ) dh \right] - \delta(\sigma ) d\omega_{\sigma h} \wedge dh
\end{equation}
into the integral \eqref{int5} and using Stokes's theorem gives a result of the form of the right
hand side of \eqref{int5}, with
\begin{equation}
I_{\rm boundary} = \int_{\partial \Delta {\cal W}} \omega_{\sigma h} \delta(\sigma ) dh
\end{equation}
and
\begin{equation}
I_{\rm bulk} = -\int_{\Delta {\cal W}} \delta(\sigma ) \, \omega_{\sigma h,\sigma } \, d\sigma dh.
\label{bulkf}
\end{equation}
We evaluate the first term by taking the parameter along the boundary $\partial \Delta {\cal W}$ to
be $\sigma$ and using $dh = d\sigma (dh/d\sigma)$. This gives
\begin{equation}
I_{\rm boundary} = \sum_i \, \omega_{\sigma h} \, \frac{dh}{d\sigma}.
\end{equation}
Using Eqs.~\eqref{b11} and \eqref{varphidef}, this reduces to the formula \eqref{Ib}.


For the bulk contribution, from the formula (\ref{bulkf}) and using arguments similar to those given for the integral $J$, we find
\begin{equation}
\theta_a = - \partial_\sigma (1/\varphi) D_a h = \frac{\varphi_{,\sigma }}{\varphi^2} D_a h,
\label{xy0}
\end{equation}
where we have used $\varphi = 1/\omega_{\sigma h}$. We evaluate the $\sigma$ derivative using
\begin{equation}
\varphi_{,\sigma } = \frac{\partial \varphi}{\partial \zeta^{\bar 0}} = \frac{\partial \varphi}{\partial \zeta^a} \, \frac{\partial \zeta^a}{\partial \zeta^{\bar 0}}.
\label{xy}
\end{equation}
We express the Jacobian matrix in terms of its inverse using
\begin{equation}
\frac{\partial \zeta^a}{\partial \zeta^{\bar a}}
= \frac{2}
{\left[ \omega^{cd} \omega_{{\bar c}{\bar d}}
\frac{\partial \zeta^{\bar c}}{\partial \zeta^c}
\frac{\partial \zeta^{\bar d}}{\partial \zeta^d}\right]}\, \omega^{ab} \omega_{{\bar a}{\bar b}} \frac{\partial \zeta^{\bar b}}{\partial \zeta^b}.
\end{equation}
This formula is specific to two dimensions, and is valid for any choice of two-form. Specializing to ${\bar a} = {\bar 0}$ gives
\begin{equation}
\frac{\partial \zeta^a}{\partial \zeta^{\bar 0}} = \frac{ \omega^{ab} D_b h}{\omega^{cd} D_c \sigma \, D_d h}.
\end{equation}
Inserting this into \eqref{xy} and then into \eqref{xy0} finally gives the result \eqref{xyy}.


Finally, although the results derived in this subsection are covariant, they do depend on a choice
of arbitrary function $h$ on the worldsheet. While the complete final result (\ref{int5}) does not
depend on $h$, the integrand (\ref{xyy}) of the bulk integral, as well as the splitting into bulk
and boundary terms, do depend on $h$. Elsewhere in this paper, we choose to identify $h$ with one
of the worldsheet coordinates, which explains the coordinate dependence of the integrand and of the
splitting.


\subsection{Worldsheets with kinks}
\label{sec:kinks}
We may now consider how our 1-D integral expressions \eqref{eq:h-zeta2-convolution} and \eqref{eq:dh-zeta2-convolution}
for the retarded metric perturbation and its gradient must be modified to allow for the presence of
a kink. A cosmic string with a kink may be treated as piecewise smooth, with discontinuities in
certain tangent vectors whenever a kink is crossed.
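As a concrete picture of such a worldsheet, the following small Python sketch (our own illustrative
construction, in the spirit of the piecewise-linear loops of \cite{Garfinkle:1987yw}, not a
configuration analyzed in this paper) builds a loop from piecewise-constant unit tangent vectors
$\mathbf{a}'$ and $\mathbf{b}'$ and evaluates the jump in the string tangent across one of the
resulting kinks, which propagates along the null direction $\zeta - \tau = {\rm const}$:

\begin{verbatim}
import numpy as np

L = 1.0
theta = 0.8   # angle between the two families of straight segments (illustrative)

def ap(u):    # piecewise-constant unit tangent a'(u): two straight segments per period
    return np.array([1.0, 0.0, 0.0]) if (u % L) < L/2 else np.array([-1.0, 0.0, 0.0])
def bp(v):
    B = np.array([np.cos(theta), np.sin(theta), 0.0])
    return B if (v % L) < L/2 else -B

def dz_dzeta(tau, zeta):     # spatial tangent along the string, (a'(zeta-tau)+b'(zeta+tau))/2
    return 0.5 * (ap(zeta - tau) + bp(zeta + tau))

tau = 0.2
zeta_kink = tau + L/2        # left-moving kink: zeta - tau = L/2
eps = 1e-6
left  = dz_dzeta(tau, zeta_kink - eps)
right = dz_dzeta(tau, zeta_kink + eps)
print("tangent jump across the kink:", right - left)   # finite jump, here (-1, 0, 0)
\end{verbatim}

It is precisely this type of finite jump in the source that generates the boundary terms derived
below.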
To obtain an expression allowing for these\ndiscontinuities we assume that the retarded image on the worldsheet is non-smooth at $\\zeta^2 = k$, where $k$ may\ndepend on the field point $x$.\\footnote{Although there are worldsheet coordinates where one of the\ncoordinates is a constant along a kink, we will not restrict ourselves to only that case here.}\nThen, one way to achieve the desired result is to break up the integration in\nEq.~\\eqref{eq:hbar-zeta2-convolution} at the discontinuity,\n\\begin{equation}\n\\label{eq:h-zeta2-discontinuous-convolution}\n \\bar{h}_{\\alpha \\beta}(x) = \\, - 4 G \\mu \\int_{k^+}^{k^- + L} \\Bigg[\\frac{\\sqrt{-\\gamma} P_{\\alpha \\beta}}{|r_1|} \\Bigg]_{\\zeta^{1'}_{\\rm ret}} d\\zeta^{2'},\n\\end{equation}\nwhere $k^+$ ($k^-$) is a point just to the right (left) of the kink. Now, when we differentiate\nthis expression we have to take account of the possible dependence of the end points on $x$. If the\ndiscontinuity in the string is at a fixed value of $\\zeta^2$ (i.e. in the case of a kink\npropagating along the $\\zeta^1$ direction), then $k$ does not depend on $x$ and the boundary terms\nvanish. If, instead, the discontinuity is at a fixed value of $\\zeta^1$ (i.e. in the case of a kink\npropagating along the $\\zeta^2$ direction), then $k$ does depend on $x$. Then, using\n$\\partial_\\gamma \\zeta^{2}(\\zeta^{1}_{\\rm ret}) = -\\Omega_\\gamma \/ r_2$, where $r_2 \\equiv\n\\partial_{\\zeta^{2'}} \\sigma = (\\partial_{\\zeta^2} z^{\\alpha'}) (\\partial_{\\alpha'} \\sigma)$, we get\n\\begin{align}\n\\label{eq:dh-zeta2-discontinuous-convolution}\n \\partial_\\gamma & \\bar{h}_{\\alpha \\beta}(x)\n\\nonumber \\\\\n & = 4 \\,G \\mu \\Bigg\\{\\int_{k^+}^{k^- + L} \\Bigg[\\frac{1}{|r_1|} \\partial_{\\zeta^{1'}} \\bigg( \\frac{\\sqrt{-\\gamma} P_{\\alpha \\beta} \\Omega_\\gamma}{r_1} \\bigg) \\Bigg]_{\\zeta^{1'}_{\\rm ret}} d\\zeta^{2'}\n\\nonumber \\\\\n & \\qquad + \\bigg[\\frac{\\sqrt{-\\gamma} P_{\\alpha \\beta} \\Omega_\\gamma}{|r_1| r_2}\\bigg]_{k^-}\n - \\bigg[\\frac{\\sqrt{-\\gamma} P_{\\alpha \\beta} \\Omega_\\gamma}{|r_1| r_2}\\bigg]_{k^+}\n \\Bigg\\}.\n\\end{align}\nIt is easy to check that one can arrive at the same expression\nby appropriately including the boundary\nterms from the integration by parts described in Sec.~\\ref{sec:coord-depend-integral} above.\nThe presence (or lack thereof) of boundary terms is then manifestly dependent on the particular\nchoice of worldsheet coordinates. Importantly, this apparent worldsheet coordinate dependence only\nappears in the split between boundary and bulk terms; the sum of the two does\n\\emph{not} have any worldsheet coordinate dependence.\n\nIn the case of a smooth string, the two boundary terms are identical and cancel, so we recover the\nsame formula as we had before. In the presence of a kink, however, the boundary terms in the two\nlimits $k^+$ and $k^-$ yield different values and so we pick up an overall\ncontribution from the kink in addition to the integral over the smooth portion of the string.\n\n\\subsection{Worldsheets with cusps}\n\nWe have already seen that care must be taken in computing the self-force for cosmic strings with\nkinks. Since cusps also introduce non-smoothness in the worldsheet, one may expect similar care to\nbe required for cuspy strings. However, there is one crucial difference between a string with\nkinks and one with cusps: cusps typically occur at a single \\emph{point} on a worldsheet while kinks\noccur along a one-dimensional curve. 
The result is that, in the case of kinks, all points on the string ``see'' a kink at some point in
their retarded image, and hence the integral in Eq.~\eqref{eq:dh-zeta2-convolution} will always be
supplemented by a boundary term somewhere. Conversely, there is only a one-dimensional set of
points on the string which ``see'' a cusp in their retarded image; everywhere else the integrand
does not encounter a discontinuity.

This suggests that strings with cusps may not need the same careful treatment as those with kinks.
This is borne out by our test case in Sec.~\ref{sec:results} below, where we probe the region
around the one-dimensional cusp-seeing curve and find no evidence of unusual behavior. This is, of
course, merely empirical evidence, and should be followed up with a more formal treatment; it is
likely that the local expansions developed in Sec.~\ref{sec:local-expansion} will prove useful in
such an analysis.


\subsection{Contribution from the field point}
\label{sec:local-expansion}

The final place where we must take care is in the case where the field point itself is on the
string. Then, just as in the case of a kink, the retarded image may have a discontinuity at the
field point. While it may be possible in such cases to use a similar treatment to what we have done
for kinks, there is a subtlety in taking the limit of the field point to the worldsheet which makes
such a treatment difficult. Instead we choose a more robust approach, by using a local expansion of
the integrand for field points nearby\footnote{Here, we use the term ``nearby'' loosely, as such a
notion is obviously dependent on the choice of worldsheet coordinates, and in particular on the
choice of coordinate which is used as the variable of integration. Not surprisingly, we will find
that the conclusions we draw will depend on the choice of worldsheet coordinates. Nevertheless,
just as in Sec.~\ref{sec:coord-depend-integral}, this apparent coordinate dependence is merely an
artifact of how we choose to split up the self-force into contributions from various integrals and
boundary terms. In reality, the total self-force obtained by combining all of these contributions
is independent of the choice of worldsheet coordinates.} the string and then analytically taking
the limit of the field point to the worldsheet.

The purpose of the following subsections is to develop the pieces required for such an expansion. In doing so we make some assumptions:
\begin{enumerate}
 \item We will study the contribution to the self-force integral from the region near the point
 where the force is to be computed, and will ultimately shrink the size of this region down
 to zero;
 \item We will assume that the worldsheet is smooth in this region. This is true everywhere except
 when the field point exactly lies on a kink or cusp; points arbitrarily close to a kink or
 cusp will, however, be perfectly acceptable.
 \item We will assume that the induced metric does not diverge (or vanish) on the string. This will be true
 everywhere except where a field point lies exactly on a cusp.
 \item We will assume conformal gauge for the background worldsheet, in particular Eq.~\eqref{eq:conformal} and the orthogonality relations for $\partial_{\zeta^1} z^\mu$ and
$\partial_{\zeta^2} z^\mu$ which follow from it.
This step is not a strict requirement of the approach, but does significantly simplify the tensor algebra in the calculation.
\end{enumerate}

Before we proceed with the derivation of the local expansion, we point out one interesting feature,
namely that the divergence in the self-force that arises on kinks and cusps comes purely from the
short-distance portion of the self-field, i.e. the contribution to the integral from nearby points.
It is therefore likely that a more careful treatment of what happens to the self-force exactly on a
kink or cusp may be obtained from a local expansion of the kind given here. We leave the
exploration of this issue for future work.


\subsubsection{Setup of the local expansion}
We wish to compute the contribution to the self-force for points near the field point. To do so,
we will construct a local expansion of the self-force integrand about a point on the worldsheet
which is assumed to be near the field point, $x^\alpha$, and to lie on its retarded image,
$z^\alpha[\zeta^1_{\rm ret}(x, \zeta^{2}), \zeta^{2}]$. We denote this expansion point by
$\bar{z}^\alpha \equiv z^{\alpha}[\bar{\zeta}^{1}, \bar{\zeta}^{2}]$ with
$\bar{\zeta}^1 \equiv \zeta^1_{\rm ret}(x, \bar{\zeta}^{2})$ for a particular choice of
$\bar{\zeta}^2$. The conformal factor at this point is
$\bar{\phi} \equiv \phi(\bar{\zeta}^1,\bar{\zeta}^2)$ and we assume the expansion has a radius of
convergence that includes part of the image. We can then simplify the evaluation of the local
integration over that part of the image utilizing the approximate expansion.

We will now seek an expansion of the self-force integrand \eqref{eq:F1-convolution} (note that
there is no contribution to $F_2^\mu$ from the field point since it does not involve derivatives of
$h_{\alpha \beta}$) in $\Delta \zeta^2 \equiv \zeta^{2} - \bar{\zeta}^{2}$.\footnote{Notationally,
the integration in \eqref{eq:dh-zeta2-convolution} is over the dummy variable $\zeta^{2'}$ but we
suppress these primes for clarity.} The first stage in our calculation is to find an expansion of
the retarded coordinate $\zeta^1_{\rm ret}(x, \zeta^{2})$ about
$\bar{\zeta}^1 = {\zeta}^1_{\rm ret}(x, \bar{\zeta}^{2})$. We denote the difference between these
two quantities $\Delta \zeta^1$ and will seek an expansion of it in powers of $\Delta \zeta^2$. In
doing so, we will need to be careful about our particular choice of worldsheet coordinates. We will
also need to separately consider the cases where $\Delta \zeta^2$ is positive or negative, as in
some instances the expansion has a different form in the two cases.


\subsubsection{Expansion of the light-cone condition: space-time coordinates}

In this section we focus on a pair of timelike and spacelike coordinates, which we will denote by
$\tau$ (for time) and $\zeta$ (for space), i.e. $(\zeta^1,\zeta^2) = (\tau,\zeta)$. The important
defining feature of these coordinates is the set of conformal gauge orthogonality relations
\begin{align}
 g_{\alpha \beta} \partial_\tau z^\alpha \partial_\tau z^\beta &= - \phi, \\
 g_{\alpha \beta} \partial_\tau z^\alpha \partial_\zeta z^\beta &= 0, \\
 g_{\alpha \beta} \partial_\zeta z^\alpha \partial_\zeta z^\beta &= \phi.
\end{align}
We can also obtain similar relations involving higher derivatives (with respect to $\tau$ and/or
$\zeta$) of $z^\alpha$ by differentiating these fundamental relations.
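As a sanity check of these relations, the following short Python snippet evaluates them for the
explicit test loop used in the earlier sketches (our illustrative choice, with
$|\mathbf{a}'| = |\mathbf{b}'| = 1$, for which the conformal factor of this particular family works
out to $\phi = \tfrac12(1 + \mathbf{a}'\cdot\mathbf{b}')$); all three residuals vanish to machine
precision at randomly chosen worldsheet points, and for this family the wave equation
$\partial_{\tau\tau} z^\alpha = \partial_{\zeta\zeta} z^\alpha$ holds by construction.

\begin{verbatim}
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
L, alpha = 1.0, 0.6
k = 2.0 * np.pi / L

# Same illustrative test loop as before: z^alpha = (tau, (a(zeta-tau)+b(zeta+tau))/2).
ap = lambda u: np.array([np.cos(k*u), np.sin(k*u), 0.0])
bp = lambda v: np.array([np.cos(k*v)*np.cos(alpha), np.sin(k*v), np.cos(k*v)*np.sin(alpha)])

def dz_dtau(tau, zeta):
    return np.concatenate(([1.0], 0.5*(bp(zeta + tau) - ap(zeta - tau))))
def dz_dzeta(tau, zeta):
    return np.concatenate(([0.0], 0.5*(ap(zeta - tau) + bp(zeta + tau))))

rng = np.random.default_rng(0)
for tau, zeta in rng.uniform(0.0, L, size=(5, 2)):
    zt, zz = dz_dtau(tau, zeta), dz_dzeta(tau, zeta)
    phi = 0.5 * (1.0 + ap(zeta - tau) @ bp(zeta + tau))
    print(zt @ eta @ zt + phi,    # g z_tau z_tau + phi    -> 0
          zt @ eta @ zz,          # g z_tau z_zeta         -> 0
          zz @ eta @ zz - phi)    # g z_zeta z_zeta - phi  -> 0
\end{verbatim}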
We additionally have the conformal gauge equation of motion, Eq.~\\eqref{eq:conformal-eom}, which in $\\tau-\\zeta$ coordinates gives us a relation between second $\\tau$ and second $\\zeta$ derivatives of $z^\\alpha$:\n\\begin{equation}\n \\partial_{\\tau\\tau} z^\\alpha = \\partial_{\\zeta\\zeta} z^\\alpha.\n\\end{equation}\nWe will use these identities throughout the following calculation to simplify the results we obtain.\n\nWe will start from the fact that the (retarded) source point $z^{\\alpha'}$ and the field point\n$z^{\\alpha}$ are null-separated, $\\sigma(z^\\alpha, z^{\\alpha'}) = 0$. Expanding this about $\\bar{\\sigma}\n\\equiv \\sigma(z^\\alpha, \\bar{z}^{\\alpha})$ we obtain a power series in $\\Delta \\tau$ and $\\Delta\n\\zeta$,\n\\begin{align}\n \\label{eq:sigma-expansion-tau-zeta}\n \\sigma &= \\bar{\\sigma} + \\bar{\\sigma}_{,\\tau} \\Delta \\tau + \\bar{\\sigma}_{,\\zeta} \\Delta \\zeta \\nonumber \\\\\n &\\quad+ \\tfrac12 (\\bar{\\sigma}_{,\\tau\\tau} \\Delta \\tau^2 + 2\\bar{\\sigma}_{,\\tau\\zeta} \\Delta \\tau \\Delta \\zeta + \\bar{\\sigma}_{,\\zeta\\zeta} \\Delta \\zeta^2) \\nonumber \\\\\n & \\quad+\\tfrac16 (\\bar{\\sigma}_{,\\tau\\tau\\tau} \\Delta \\tau^3 + 3 \\bar{\\sigma}_{,\\tau\\tau\\zeta} \\Delta \\tau^2 \\Delta \\zeta + 3 \\bar{\\sigma}_{,\\tau\\zeta\\zeta} \\Delta \\tau \\Delta \\zeta^2 \\nonumber \\\\\n & \\qquad + \\bar{\\sigma}_{,\\zeta\\zeta\\zeta} \\Delta \\zeta^3) + \\cdots\n\\end{align}\nUsing $\\partial_a = (\\partial_a z^\\alpha) \\nabla_\\alpha$ (acting upon the second\nargument of $\\bar{\\sigma}$) along with the identities above and the\nfact that $\\nabla_\\alpha \\nabla_\\beta \\sigma = g_{\\alpha \\beta}$ for Minkowski spacetime, it is\nstraightforward to rewrite the coefficients in terms of worldsheet derivatives of $\\bar{z}^\\alpha$\nand $\\bar{\\phi}$:\n\\begin{align}\n \\bar{\\sigma}_{,\\tau\\tau} &= \\bar{z}^\\alpha_{,\\zeta\\zeta} \\bar{\\sigma}_\\alpha - \\phi, \\\\\n \\bar{\\sigma}_{,\\tau\\zeta} &= \\bar{z}^\\alpha_{,\\tau\\zeta} \\bar{\\sigma}_\\alpha, \\\\\n \\bar{\\sigma}_{,\\zeta\\zeta} &= \\bar{z}^\\alpha_{,\\zeta\\zeta} \\bar{\\sigma}_\\alpha + \\phi, \\\\\n \\bar{\\sigma}_{,\\tau\\tau\\tau} &= \\bar{z}^\\alpha_{,\\tau\\zeta\\zeta} \\bar{\\sigma}_\\alpha - \\tfrac32 \\phi_{,\\tau}, \\\\\n \\bar{\\sigma}_{,\\tau\\tau\\zeta} &= \\bar{z}^\\alpha_{,\\zeta\\zeta\\zeta} \\bar{\\sigma}_\\alpha - \\tfrac12 \\phi_{,\\zeta}, \\\\\n \\bar{\\sigma}_{,\\tau\\zeta\\zeta} &= \\bar{z}^\\alpha_{,\\tau\\zeta\\zeta} \\bar{\\sigma}_\\alpha + \\tfrac12 \\phi_{,\\tau}, \\\\\n \\bar{\\sigma}_{,\\zeta\\zeta\\zeta} &= \\bar{z}^\\alpha_{,\\zeta\\zeta\\zeta} \\bar{\\sigma}_\\alpha + \\tfrac32 \\phi_{,\\zeta},\n\\end{align}\nand likewise for higher order terms (for the current calculation of the contribution to the self-force from the field point it is only necessary to\ngo to the cubic order given here).\n\n\\subsubsection{Expansion of the retarded time}\nIn order to obtain the desired expansion of $\\Delta \\tau(\\Delta \\zeta)$, we now make the ansatz\nthat $\\Delta \\tau$ has an expansion in integer powers of an order counting parameter $\\epsilon \\sim\n\\Delta \\zeta$ and that $\\bar{\\sigma} = \\mathcal{O}(\\epsilon^2)$. 
Substituting our ansatz into\nEq.~\\eqref{eq:sigma-expansion-tau-zeta} and solving order by order in $\\epsilon$ then yields the\ndesired expansion of $\\Delta \\tau$ in terms of $\\epsilon$,\n\\begin{equation}\n \\label{eq:tret-spatial-expansion}\n \\Delta \\tau = \\frac{1}{\\phi}\\bigg[\\bar{\\sigma}_{,\\tau} - \\sqrt{\\bar{\\sigma}_{,\\tau}^2 + \\phi(2\\Delta \\zeta \\bar{\\sigma}_{,\\zeta} + 2 \\bar{\\sigma} + \\phi \\Delta \\zeta^2)}\\bigg] \\epsilon + \\mathcal{O}(\\epsilon^2).\n\\end{equation}\nThe expressions for the higher order coefficients are somewhat cumbersome, but are fortunately not\nrequired for the current calculation.\n\n\n\\subsubsection{Expansion of quantities appearing in the integrand for the self-force}\n\nWe now expand each of the quantities appearing in the integrand of $F^\\mu_1$ [Eq.~\\eqref{eq:F1-convolution}]:\n\\begin{equation}\n \\Sigma_{\\mu \\nu} = \\Sigma_{\\mu\\nu}^{(0,0)} + \\Sigma_{\\mu\\nu}^{(1,0)} \\Delta \\tau + \\Sigma_{\\mu\\nu}^{(0,1)} \\Delta \\zeta + \\cdots,\n\\end{equation}\nwhere\n\\begin{align}\n \\Sigma_{\\mu\\nu}^{(0,0)} &= \\partial_\\zeta z_\\mu \\partial_\\zeta z_\\nu - \\partial_\\tau z_\\mu \\partial_\\tau z_\\nu-\\phi g_{\\mu\\nu}, \\\\\n \\Sigma_{\\mu\\nu}^{(1,0)} &= \\partial_\\tau \\partial_\\zeta z_\\mu \\partial_\\zeta z_\\nu + \\partial_\\zeta z_\\mu \\partial_\\tau \\partial_\\zeta z_\\nu - \\partial_\\tau z_\\mu \\partial_\\zeta \\partial_\\zeta z_\\nu \\nonumber \\\\\n &\\quad- \\partial_\\zeta \\partial_\\zeta z_\\mu \\partial_\\tau z_\\nu - \\partial_\\tau\\phi g_{\\mu\\nu}, \\\\\n \\Sigma_{\\mu\\nu}^{(0,1)} &= \\partial_\\zeta \\partial_\\zeta z_\\mu \\partial_\\zeta z_\\nu + \\partial_\\zeta z_\\mu \\partial_\\zeta \\partial_\\zeta z_\\nu - \\partial_\\tau z_\\mu \\partial_\\tau \\partial_\\zeta z_\\nu \\nonumber \\\\\n &\\quad- \\partial_\\tau \\partial_\\zeta z_\\mu \\partial_\\tau z_\\nu - \\partial_\\zeta \\phi g_{\\mu\\nu}.\n\\end{align}\nWe also have\n\\begin{align}\n r &= \\bar{\\sigma}_{,\\tau} + \\bar{\\sigma}_{,\\tau\\tau} \\Delta \\tau + \\bar{\\sigma}_{,\\tau\\zeta} \\Delta \\zeta \\nonumber \\\\\n &\\quad + \\tfrac12 (\\bar{\\sigma}_{,\\tau\\tau\\tau} \\Delta \\tau^2 + 2\\bar{\\sigma}_{,\\tau\\tau\\zeta} \\Delta \\tau \\Delta \\zeta + \\bar{\\sigma}_{,\\tau\\zeta\\zeta} \\Delta \\zeta^2) + \\cdots,\n\\end{align}\nand\n\\begin{align}\n\\Omega_\\mu &= \\bar{\\Omega}_\\mu - z_{\\mu, \\tau} \\Delta \\tau - z_{\\mu,\\zeta} \\Delta \\zeta - \\tfrac12 z_{\\mu, \\tau\\tau} \\Delta \\tau^2 \\nonumber \\\\\n& \\quad - z_{\\mu,\\tau\\zeta} \\Delta \\tau \\Delta \\zeta - \\tfrac12 z_{\\mu,\\zeta\\zeta} \\Delta \\zeta^2 + \\cdots.\n\\end{align}\nNote that there are three potentially small parameters in these expansions: $\\Delta \\tau$,\n$\\Delta \\zeta$ and the distance of the field point from the string, which we will denote $\\Delta x$.\nIn the above, the dependence on $\\Delta \\tau$ and $\\Delta \\zeta$ appears explicitly; the dependence\non $\\Delta x$ appears through $\\bar{\\Omega}_\\mu \\sim \\Delta x$ and $\\bar{\\sigma}_\\alpha \\sim \\Delta x$.\n\nTo make further progress, we will assume that all three are of the same order, $\\Delta \\tau \\sim\n\\epsilon$, $\\Delta \\zeta \\sim \\epsilon$ and $\\Delta x \\sim \\epsilon$. 
Now, substituting the expansions into the integral equation for the derivative of the metric
perturbation, Eq.~\eqref{eq:dh-zeta2-convolution}, and expanding out in powers of $\epsilon$, we
find that the integrand has a contribution at order $\epsilon^{-2}$ and at order $\epsilon^{-1}$,
plus higher order terms. More explicitly, the $\mathcal{O}\left(\epsilon^{-2}\right)$ piece is given by
\begin{widetext}
\begin{align}
\partial_\gamma h_{\alpha \beta} \approx -4 \int
 \frac{
 [\bar{\sigma}_{,\tau\tau}] \Sigma_{\alpha\beta}^{(0,0)} \bar{\Omega}_\gamma
 - [\bar{\sigma}_{,\tau\tau}] \Sigma_{\alpha\beta}^{(0,0)} z_{\gamma,\zeta}\Delta \zeta
 + \Sigma_{\alpha\beta}^{(0,0)} z_{\gamma,\tau} \bar{\sigma}_{,\tau}
 }{\left(\bar{\sigma}_{,\tau} + [\bar{\sigma}_{,\tau\tau}] \Delta \tau\right)^3} d\zeta + \mathcal{O}(\epsilon^{-1}),
\end{align}
where square brackets denote a coincidence limit, $[\bar{\sigma}_{,\tau\tau}] \equiv \lim_{\Delta x \to 0} \bar{\sigma}_{,\tau\tau}$. Now, it is
immediately apparent that if we instead substitute our expansions into the integral expression for
$F_1^\mu$ this leading order piece identically vanishes since $P^{\mu \nu} \Sigma_{\mu \nu} = 0$
(indeed $P^{\mu\nu} P_{\mu\nu} = P^{\mu}{}_{\mu} = 2$, so
$P^{\mu\nu}\Sigma_{\mu\nu} = P^{\mu\nu}P_{\mu\nu} - \tfrac12 P^2 = 0$)\footnote{Strictly speaking,
this depends on how we extend the definition of $P^{\mu \nu}$ off the worldsheet. However, since we
are in the end only interested in taking the limit to the worldsheet, the particular choice of
extension is irrelevant and does not change the result.}. Likewise, since
$\Sigma_{\mu\nu}^{(0,1)} z^{\mu}_{,\zeta} = \Sigma_{\mu\nu}^{(1,0)} z^{\mu}_{,\tau}$, many other
terms either identically vanish or simplify significantly. Then, the only remaining piece of the
$\mathcal{O}(\epsilon^{-1})$ contribution to the derivative of the metric perturbation \emph{which does not
vanish upon substitution into the self-force} is given by
\begin{align}
 & -\int \tfrac{4}{\left(\bar{\sigma}_{,\tau} + [\bar{\sigma}_{,\tau\tau}] \Delta \tau\right)^3} \bigg(
 [\bar{\sigma}_{,\tau\tau}] \Sigma_{\alpha \beta}^{(0,1)} \bar{\Omega}_{\gamma} \Delta \zeta -
 \Sigma_{\alpha \beta}^{(1,0)} \bar{\Omega}_{\gamma} \bar{\sigma}_{,\tau} +
 \Sigma_{\alpha \beta}^{(1,0)} z_{\gamma,\zeta} \bar{\sigma}_{,\tau} \Delta \zeta +
 \Sigma_{\alpha \beta}^{(0,1)} z_{\gamma,\tau} \bar{\sigma}_{,\tau} \Delta \zeta \nonumber \\ & \qquad \qquad \qquad \qquad \qquad -
 [\bar{\sigma}_{,\tau\tau}] \Sigma_{\alpha \beta}^{(1,0)} z_{\gamma,\tau} \Delta \zeta^2 +
 2 \Sigma_{\alpha \beta}^{(1,0)} z_{\gamma,\tau} \bar{\sigma}_{,\tau} \Delta \tau +
 [\bar{\sigma}_{,\tau\tau}] \Sigma_{\alpha \beta}^{(1,0)} z_{\gamma,\tau} \Delta \tau^2 \bigg)
 d\zeta.
\end{align}
\end{widetext}

Our final step is to substitute in the expansion of the retarded time, rescale our integration
range by $\epsilon$ and integrate from $\Delta \zeta / \epsilon = -\infty$ to $+\infty$. The factor
of $\epsilon$ in the integral weight cancels with the $1/\epsilon$ in the integrand and so the
result is ultimately independent of $\epsilon$.


\subsubsection{Expansion of the self-force}

Performing the integral explicitly in the limit where the field point tends to the worldsheet, we
finally arrive at a surprisingly simple expression for the field point contribution to the
self-force.
In $\\tau-\\zeta$ coordinates, this is given by\n\\begin{equation}\n \\label{eq:F-local-ST}\n F^\\mu_{\\rm{field,ST}} = 4 \\, \\phi^{-2} \\perp^\\mu{}_\\alpha \\left(z^\\alpha_{,\\zeta} \\phi_{,\\zeta} + z^\\alpha_{,\\tau} \\phi_{,\\tau} - 2 z^\\alpha_{,\\zeta\\zeta} \\phi\\right).\n\\end{equation}\n\nOne can go through a similar procedure in the null case (see Appendix\n\\ref{sec:local-expansion-null} for details of the retarded time expansion in null coordinates).\nThen, if we use $\\zeta^-$ as our integration variable, the\nequivalent expression for the field point contribution to the self-force is\n\\begin{equation}\n \\label{eq:F-local-N}\n F^\\mu_{\\rm{field,N}} = 4 \\, \\phi^{-2} \\perp^\\mu{}_\\alpha \\left(z^\\alpha_{,\\zeta^+} \\phi_{,\\zeta^+} - z^\\alpha_{,\\zeta^+\\zeta^+} \\phi\\right).\n\\end{equation}\nLikewise, one can change $+ \\to -$ when $\\zeta^+$ is used as the integration variable.\nThe expressions \\eqref{eq:F-local-ST} or \\eqref{eq:F-local-N} must be\nadded to the previous results given by Eq.~\\eqref{eq:F1-convolution} to obtain the total\ncontribution to $F_1^\\alpha$.\n\n\n\\section{Numerical Methods and Regularization}\nFor this work we have developed several different techniques to\nevaluate the self-force on the string by completely finite, numerical\ncalculations. In the next section, we will compare these calculations to validate\nthe exact methods we have discussed. Before doing so, here we will\nschematically outline the different approaches. The\nabbreviations for the methods are given in square brackets.\n\n\\subsection*{2D, smoothed kink or cusp [2D]}\nThe most general approach is to do the 2D integration over the\nworldsheet in Eq.~\\eqref{eq:hbar-convolution}. This circumvents having\nto eliminate one worldsheet coordinate in terms of another\n(e.g. solving for the retarded time in $\\tau-\\zeta$ coordinates) and possibly\nhaving to patch different coordinate systems (e.g. two different null\ncoordinate systems on either side of the field point). The worldsheet integration\nproduces manifestly coordinate invariant results.\n\nSchematically, we replace the singular retarded Green function with a finite\napproximation. For a source point at $x_s$ and a field point at $x_f$,\n\\begin{eqnarray}\n {\\cal G}(x_f,x_s) & = & \\Theta(x_s,x_f) \\delta(\\sigma),\n\\end{eqnarray}\nwhere $\\Theta=1$ when the time of the source $t_s$ precedes the\ntime of the field point $t_f$ and $0$ otherwise. We transform\n\\begin{eqnarray}\n \\delta(\\sigma) & \\to & \\frac{e^{-\\sigma^2\/(2 w_1^2)}}{\\sqrt{2 \\pi} w_1} \\\\\n \\Theta & \\to &\\frac{1-\\tanh((t_s-t_f)\/w_2)}{2}\n\\end{eqnarray}\nto generate a smooth, finite integrand. The parameters $w_1$ and $w_2$\ndescribe the width of the smoothed delta function and the width of the smoothed causal\nstep function, respectively. (We use $w_i$ schematically in this discussion. In\nAppendix \\ref{sec:2D} we introduce unique symbols.)\n\nSource points are over-retarded and appear slightly\ninside the field point's backwards light cone. Over-retardation\n\\cite{1975NCimB..26..157D} is a covariant method for classical renormalization.\nWe modify the Synge function\n\\begin{eqnarray}\n \\sigma(x,z) & = & \\frac{1}{2} (x-z)^\\alpha g_{\\alpha\\beta}(x-z)^\\beta + w_3,\n\\end{eqnarray}\nwhere $w_3 \\ge 0$ is the over-retardation parameter. Over-retardation prevents the\nsource and field points from coinciding.\n\nFinally, we round off discontinuous features on the string. For kinks\nthe transition from one derivative value to another is smoothed. For\ncusps a small patch of the worldsheet near the cusp is excised.\nWe introduce a parameter $w_4$ that yields the discontinuous\nsolution when $w_4 \\to 0$. Smoothing must be implemented separately\nfor each loop of interest. In the 2D approach any discontinuity, even if it\ndoes not lie on the field point's exact light cone, must be smoothed\nbecause all worldsheet points are sampled by the smoothed delta function.\n\nThe 2D calculation does not require any special treatment for\nboundaries, any special choice of coordinates or any special handling\nof the field point. The discontinuities in the source must be\nsmoothed. We let $\\{w_1,w_2,w_3,w_4\\} \\to 0$ in lockstep.\nWe have found that the limit is not impacted if we\nset $w_2=0$ (the smoothing of the causal step function)\nand $w_3=0$ (the over-retardation) from the beginning. Using the Gaussian\napproximation to the delta function and smoothing the discontinuities\non the string are sufficient to regulate the calculation.\n\n
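To make the role of these regulators concrete, the following minimal sketch assembles the smoothed kernel that replaces $\\Theta\\,\\delta(\\sigma)$ in the 2D worldsheet integration for a flat background. It is only illustrative: the function and variable names are ours rather than those of the production code, and $w_1$, $w_2$, $w_3$ are the schematic widths used above (Appendix \\ref{sec:2D} introduces the actual symbols).\n\\begin{verbatim}\nimport numpy as np\n\ndef synge_sigma(x_f, x_s, w3=0.0):\n    # Flat-space Synge world function, signature (+,-,-,-),\n    # shifted by the over-retardation parameter w3 >= 0.\n    dx = np.asarray(x_f) - np.asarray(x_s)   # (dt, dx, dy, dz)\n    return 0.5*(dx[0]**2 - dx[1]**2 - dx[2]**2 - dx[3]**2) + w3\n\ndef smoothed_green(x_f, x_s, w1, w2, w3=0.0):\n    # Gaussian of width w1 in place of delta(sigma) and a tanh step of\n    # width w2 in place of the causal Theta(x_s, x_f).\n    s = synge_sigma(x_f, x_s, w3)\n    delta_s = np.exp(-s**2/(2.0*w1**2))/(np.sqrt(2.0*np.pi)*w1)\n    theta_s = 0.5*(1.0 - np.tanh((x_s[0] - x_f[0])/w2))\n    return theta_s*delta_s\n\\end{verbatim}\nTaking $w_1, w_2 \\to 0$ (together with $w_3, w_4 \\to 0$) recovers the distributional limit discussed above.\n\n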
\\subsection*{1D, over-retarded, smoothed kink or cusp [1DOS]}\n\\label{sec:1DOS}\n\nThe 1D calculations in which the Green function has been integrated\nout must handle the point where the field and source points coincide, the string\ndiscontinuities and coordinate changes along the retarded loop\nimage.\n\nIn the [1DOS] method we use over-retardation and smooth the discontinuities on the\nstring if they are visible on the field point's exact light cone.\nWe integrate Eq.~\\eqref{eq:F1-convolution} over the\nimage if coordinate $\\zeta^2$ covers the entire image;\nwe additionally include boundary terms of the type given in\nEq.~\\eqref{eq:dh-zeta2-discontinuous-convolution} if multiple\ncoordinate systems are utilized. Here a boundary term\narises not because of a string discontinuity\nbut because of the coordinate change. We let $\\{w_3,w_4\\} \\to 0$.\n\n\\subsection*{1D, over-retarded, discontinuous kink or cusp [1DO]}\n\\label{sec:1DO}\n\nAs above we use over-retardation for the [1DO] method, but we do not smooth\nthe kink. We numerically locate the kink and\nuse boundary terms of the type given in Eq.~\\eqref{eq:dh-zeta2-discontinuous-convolution}\nto handle both jumps in the string source and coordinate changes.\nWe can evaluate the force in the presence of a cusp as long as the cusp is not on the\nfield point's light cone (true for almost all worldsheet points). We let $\\{w_3\\} \\to 0$.\n\n\\subsection*{1D, discontinuous kink or cusp [1D]}\n\\label{sec:1D}\n\nFor the [1D] method we use the analytic results \\eqref{eq:F-local-ST}\nfor the contribution at the point where source and field point coincide,\nand boundary terms of the form given in Eq.~\\eqref{eq:dh-zeta2-discontinuous-convolution} for\njumps in the string source and coordinate changes. As above,\nwe can evaluate the force for strings with cusps as long as the cusp is not\non the light cone. This is the computationally most efficient\nmethod and the one we are primarily interested in validating\nfor future calculations of loops evolving under the effect of\ngravitational backreaction. It does not require any regularization\nparameters $w_i$.\n\nThere are many related questions that we address using\nthese techniques. For example, we compare the self-force\ncalculated using different coordinate systems (this\nis possible for all the methods, but we\nconcentrate on the [1D] case).\n
We also consider a limiting\nprocess in which the [1DO] method is used for a field point off the worldsheet, and verify\nthe correct behavior is recovered as the field point approaches the worldsheet.\n\n\\section{Numerical results}\n\\label{sec:results}\n\nWe now apply the derivations of the previous sections to some specific examples, numerically\ncomputing the self-force for a range of nontrivial string configurations that feature kinks, cusps\nand self-intersections. We perform several consistency checks in the process:\n\\begin{enumerate}\n \\item For strings with a particularly simple structure we compare against existing calculations in the literature;\n \\item For more non-trivial strings we compare different versions of the [1D] integration done with different choices of worldsheet coordinates;\n \\item We compare against the smoothed approaches [1DOS] and [1DO] for handling kinks and field point contributions. The field point contribution is recovered by evaluating the integral for the force with a small over-retardation of the retarded time and numerically taking the limit as this over-retardation vanishes. The kink contribution is similarly recovered by introducing a small smoothing to the kink and taking the limit of the smoothing parameter going to zero.\n \\item We further compare against our other entirely independent [2D] approach, whereby the force is directly determined from a full 2D integration over the worldsheet, approximating the Dirac $\\delta$ distribution in the Green function by a narrow Gaussian.\n \\item We verify that the flux of radiation to infinity (as computed using standard frequency domain methods\n\\cite{2001PhRvD..63f3507A}) appropriately balances the local self-force.\n\\end{enumerate}\n\nThere are infinitely many possible cosmic string loops which satisfy \\eqref{eq:conformal-eom}. The\nexamples which have typically been studied in the literature are those with a low number of\nharmonics. As a demonstration of our prescription for computing the self-force, we will compute the\nself-force for several of these strings. Our goal is not to be exhaustive, but rather to select a\nset of test cases that cover all scenarios (kinks, cusps, self-intersections, and strings without\ntoo much symmetry). In all cases below, we define the worldsheet in terms of two functions\n$a^{\\alpha} (\\zeta^+)$ and $b^{\\alpha} (\\zeta^-)$, where $\\zeta^+ \\equiv \\tau + \\zeta$ and $\\zeta^-\n\\equiv \\tau - \\zeta$ are null worldsheet coordinates. Then, the spacetime position of the string is\n$z^\\mu = (1\/2)[a^\\mu(\\zeta^+) + b^\\mu(\\zeta^-)]$. Throughout the discussion, we will also refer to\nthe three-vectors $\\mathbf{a}$ and $\\mathbf{b}$, which are defined to be the spatial projections of\n$a^\\alpha(\\zeta^+)$ and $b^\\alpha(\\zeta^-)$. Finally, we will specialize to the specific case\n$t=\\tau$ within the class of conformal gauges.\n\n\n\\subsection{Allen, Casper and Ottewill self-similar string}\n\\label{sec:ACO}\nAllen, Casper and Ottewill (ACO) \\cite{Allen:1994bs} identified a particularly simple class of\nstrings for which the average power radiated is easily calculated in closed form. All strings in\nthe class have a pair of kinks, each propagating along lines of constant $\\zeta^+_{k_1} = 0$ and\n$\\zeta^+_{k_2} = L\/2$, respectively. ACO's motivation was to find the string which radiates most\nslowly and is thus most long-lived. 
Our motivation for studying the ACO string\\footnote{We will\nstudy just one case in the class of ACO strings, the one which is simplest and which radiates power\nmost slowly. ACO call this particular string ``case (1) with $M=1$''. We will simply refer to it as\n\\emph{the} ACO string.} stems from a different consequence of the simplicity of the ACO solution.\nAnderson \\cite{Anderson:2005qu} showed that the description of the ACO string worldsheet is\nsufficiently simple that it is possible to determine the self-force analytically.\\footnote{In fact,\nin \\cite{Anderson:2008wa} Anderson was able to go one step further and analytically\nself-consistently evolve the string under the influence of gravitational backreaction.} This\nprovides a valuable reference point against which we can check our numerical approach.\n\nThe ACO string worldsheet is given in Cartesian coordinates by\n\\begin{align}\n a^{\\alpha} (\\zeta^+) =& A [\\zeta^+\/A, 0, 0, |\\zeta^+|],\n\\nonumber \\\\\n b^{\\alpha} (\\zeta^-) =& A [\\zeta^- \/A, \\cos (\\zeta^- \/ A), \\sin (\\zeta^- \/ A), 0],\n\\end{align}\nwhere $A \\equiv \\tfrac{L}{2\\pi}$ and $L$ is the length of the string. For $\\zeta^+<-\\tfrac{L}{2}$\nor $\\zeta^+>\\tfrac{L}{2}$ the periodic extension of $a^z$ is used, i.e. $a^z$ is the triangle\nfunction centered about the origin and with period $L$. The ACO string can be visualised as shown\nin Fig.~\\ref{fig:ACOloopspacetime}; its evolution is a rigid rotation of this shape about the\n$z$-axis.\\footnote{In \\cite{Anderson:2008wa} Anderson showed that this shape is preserved when\nbackreaction is taken into account, in which case the string evolves (shrinks) self-similarly.} We\ncharacterize the ACO string in terms of its tangent-sphere representation, as shown in\nFig.~\\ref{fig:ACO-tangent-sphere}.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.6\\linewidth]{figures\/ACOString}\n\\caption{\n\\label{fig:ACOloopspacetime}\nSnapshot of the ACO string loop configuration in spacetime at time $\\tau=0$. At later times the\nconfiguration can be obtained by a rigid rotation about the $z$-axis.}\n~\\\\\n\\includegraphics[width=0.6\\linewidth]{figures\/ACO-tangent}\n\\caption{\n\\label{fig:ACO-tangent-sphere}\nTangent sphere representation of the ACO string loop configuration with $\\mathbf{a}'(\\zeta^+)$ denoted by the two blue dots and $\\mathbf{b}'(\\zeta^-)$ by the orange circle.}\n\\end{figure}\n\n\nAdopting conformal gauge to first order, Anderson \\cite{Anderson:2005qu} was able to compute the self-force (which, in the conformal gauge case is defined to be the right-hand side of Eq.~\\eqref{eq:simple}) by analytically determining the first-order metric perturbation generated by an ACO string. Factoring out the rigid rotation using the matrix\n\\begin{equation}\nM^\\alpha{}_\\beta = \\left(\n\\begin{array}{cccc}\n 1 & 0 & 0 & 0 \\\\\n 0 & \\cos (2 \\pi \\zeta^-) & \\sin (2 \\pi \\zeta^-) & 0 \\\\\n 0 & -\\sin (2 \\pi \\zeta^-) & \\cos (2 \\pi \\zeta^-) & 0 \\\\\n 0 & 0 & 0 & 1 \\\\\n\\end{array}\n\\right),\n\\end{equation}\nthe conformal gauge self-force in a co-rotating frame is given by\n$f^\\mu = M^\\mu{}_\\alpha {\\cal F}_{\\rm conf}^\\alpha$, where $f^\\mu = [f^t(\\zeta^+),\nf^L(\\zeta^+), f^N(\\zeta^+), \\sgn(\\zeta^+) f^t(\\zeta^+)]$. We can interpret $f^N$ and $f^L$ as\nthe normal and longitudinal components of the force in the $x$--$y$ plane, respectively. 
Note that\nthis factorized\nform is quite convenient as the dependence on $\\zeta^+$ is entirely contained within $f^\\mu$,\nwhile the dependence on $\\zeta^-$ is entirely in $M^\\chi{}_\\mu$.\n\n\\begin{figure}[H]\n \\center\n \\includegraphics[width=8cm]{figures\/ACOForce.pdf}\n \\caption{Co-rotating self-force for the ACO string.}\n\\label{fig:ACOForce}\n\\end{figure}\nAs an important consistency check on our work, we have verified that our numerical approach exactly\nreproduces the analytic result derived by Anderson. Figure \\ref{fig:ACOForce} shows the factored\ncomponents of the force as a function of $\\zeta^+$, with Anderson's expressions plotted as solid\nlines and our numerical values (computed using Eq.~\\eqref{eq:F1-convolution} plus boundary terms of\nthe type given in Eq.~\\eqref{eq:dh-zeta2-discontinuous-convolution} at the kinks and\nEq.~\\eqref{eq:F-local-ST} for the field point contribution) shown as dots.\n\nOne interesting feature is the divergence of the force components as a kink is approached. Although one may be concerned about the physical implications of this divergence, for the ACO string it turns out that it is a spurious gauge artifact, and that the string worldsheet itself only ever picks up a small perturbation from the self-force. The simplicity of the ACO solution makes it straightforward to see this explicitly: as shown by Anderson \\cite{Anderson:2005qu}, the explicit form of the divergence near the kink can be written as\n\\begin{align}\nf^t \\approx&\\,\\{-32 (\\tfrac{1}{6}\\pi^2)^{1\/3} \\mu \/|\\zeta^+|^{1\/3}, -128 \\pi^2 \\mu (\\zeta^+)^2\\}, \\nonumber \\\\\nf^L \\approx&\\,32 \\pi \\mu \\ln |\\zeta^+| \\{\\tfrac{1}{3}, 1\\}, \\nonumber \\\\\nf^N \\approx&\\,\\{-32 (\\tfrac{1}{6}\\pi^2)^{1\/3} \\mu \/|\\zeta^+|^{1\/3}, 128 \\pi^2 \\mu \\zeta^+ \\ln |\\zeta^+|\\},\n\\end{align}\ndepending on whether the limit $\\zeta^+ \\to 0$ is taken from the left or the right. Anderson goes on to show that integrating up the equation of motion, the physical (non-gauge) displacement of the string due to this divergent force is finite.\n\n\n\\subsection{Kibble and Turok strings with cusps and self-intersections}\nA simple family of string loop solutions of the zeroth order equations\nof motion was written down by Kibble\nand Turok~\\cite{Kibble:1982cb,Turok:1984cn}. The gravitational\nradiation of representative examples was\ncalculated by Vachaspati and Vilenkin~\\cite{Vachaspati:1984gt}.\nWe will refer to the family as KT strings. The family is described by the general form\n\\begin{align}\n a^{\\alpha} (\\zeta^+) =& A \\Big[\\zeta^+ \/ A,\n (1-\\alpha) \\sin (\\zeta^+ \/ A) + \\tfrac{\\alpha}{3} \\sin (3 \\zeta^+\/A),\n\\nonumber \\\\ & \\quad\n (\\alpha-1) \\cos (\\zeta^+ \/ A) - \\tfrac{\\alpha}{3} \\cos (3 \\zeta^+\/A),\n\\nonumber \\\\ & \\quad\n - 2 \\sqrt{\\alpha(1-\\alpha)} \\cos (\\zeta^+ \/ A)\\Big],\n\\nonumber \\\\\n b^{\\alpha} (\\zeta^-) =& A \\Big[\\zeta^- \/ A,\n \\sin (\\zeta^- \/ A),\n\\nonumber \\\\ & \\quad\n -\\cos \\phi \\cos (\\zeta^- \/ A),\n -\\sin \\phi \\cos (\\zeta^- \/ A)\\Big],\n\\end{align}\nwhere $0 \\le \\alpha \\le 1$ and $-\\pi \\le \\phi \\le \\pi$ are two\nparameters.\n\nWe first focus on the case $\\alpha=0$ and $\\phi=\\pi\/6$\n($N=M=1$ Burden loops \\cite{Burden:1985md}).\nNine snapshots of the spacetime configuration of the loop\nare shown in Fig.~\\ref{fig:VVloopspacetime}.\nThe loop generally possesses an elliptical shape. It tumbles in space\nwhile stretching and contracting. 
Twice per period it forms a\ndegenerate, line-like shape with a pair of cusps on opposite sides.\nThe tangent sphere representation is particularly simple:\nthere are two continuous great circles that cross\nat $\\tau + n \\pi = \\zeta + m \\pi = 0$ for any integers $n$ and $m$.\nEach crossing gives rise to a cusp and to a spacelike line\nof string overlap in the center of momentum frame.\nThese two effects make the calculation of the self-force\nparticularly challenging.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{VVloop.pdf}\n\\caption{\\label{fig:VVloopspacetime} Snapshots of the KT string loop\n ($\\alpha=0$ and $\\phi=\\pi\/6$)\n configuration in spacetime, each labeled by time in units of $L$. All\n the boxes have the same size axes, $-1$ to $1$ for $L=2 \\pi$, and\n fixed orientation. }\n\\end{figure}\n\nWe first compute the self-force at two points on the string (which we denote Case I and Case II):\n\\begin{align}\n (\\tau, \\zeta) &= (32 \\pi\/50,13 \\pi\/50)\n {\\rm \\ \\ \\ Case \\ \\ I} \\\\\n (\\tau, \\zeta) &\\simeq (0.42,\\pi\/5)\n {\\rm \\ \\ \\ Case \\ \\ II}\n\\end{align}\nFor Case I, the field point is such that no cusp is present on the retarded image of the string.\nSince the left and right moving modes are continuous the loop stress-energy source is completely smooth except at\nthe field point itself.\\footnote{In a patch of the world sheet that extends $\\pm \\pi$ about the\nfield point the cusps at $(\\tau,\\zeta)=(0,0)$ and $(0,\\pi)$ are potentially visible for a\ncausal off-shell Green function.} The Case I results calculated by the [1D] method described in Sec.~\\ref{sec:dh} are given in the first part of Table \\ref{tab:VVanswers}. In this case, there are two important contributions to $F_1^\\rho$:\nthe row labeled $\\int$ is the integral\ncontribution arising from the $1D$ integral over the smooth worldsheet using Eq.~\\eqref{eq:F1-convolution}; and\n$\\delta$ is the contribution from the field point obtained using Eq.~\\eqref{eq:F-local-ST}. The total is $F_1 = \\int + \\delta$.\n\\begin{table}[H]\n \\begin{center}\n\\begin{tabular}{|cc|cccc|}\n \\hline\n Case \\ \\ & Force & \\multicolumn{4}{c|}{contravariant spacetime components}\\\\\n & & $t$ & $x$ & $y$ & $z$ \\\\\n \\hline\n I & $\\int$ & 9.28612 & -4.96366 & 14.7739 & -1.68376 \\\\\n & $\\delta$ & -0.680917 & 1.09474 & 1.16578 & -4.35077 \\\\\n & $F_1$ & 8.6052 & -3.86891 & 15.9397 & -6.03453 \\\\\n & $F_2$ &-12.181 & 4.56246 & -25.3768 & 14.1391 \\\\\n \\hline\n II & $\\int$ & 44.5678 & 49.5374 & 22.8974 & -1.99924 \\\\\n & $\\delta$ & 1.7937 & 1.5892 & 1.1546 & -4.30897 \\\\\n & $F_1$ & 46.3615 & 51.1266 & 24.052 & -6.30821 \\\\\n & $F_2$ & -75.6739 & -82.35 & -39.8936 & 21.8117 \\\\\n \\hline\n\\end{tabular}\n\\end{center}\n \\caption{Self-force at two points on the KT string\n($\\alpha=0$ and $\\phi=\\pi\/6$)\ncalculated by the $1$D method.}\n\\label{tab:VVanswers}\n\\end{table}\n\nFor Case II, we have carefully chosen a field point such that the cusp at $(\\tau,\\zeta)=(0,0)$\nlies on the retarded string image. Numerical results for this case (which were again obtained using\nthe [1D] method) are given in the second part of Table \\ref{tab:VVanswers}.\n\nOne notable feature of these numerical results is that the field point\ncontribution is comparable in magnitude to the contribution from the\nintegral. As such, this case provides a valuable and stringent test of\nour derivation of the expression for the field point contribution. 
By comparing to a different\napproach which doesn't rely on these terms we may distinguish between\n$\\int$ and $F_1$. The [2D] integration method (described in detail in\nAppendix \\ref{sec:2D}) provides just such a comparison. In Table\n\\ref{tab:summarytab} we tabulate the results of the [2D] integration\nmethod and compare against the $1$D results for Case I in Table\n\\ref{tab:VVanswers}. This comparison unambiguously confirms that the\nfield point contribution is essential. The agreement provides a strong validation of our\nformalism. Appendix \\ref{sec:2D} includes analogous [2D] results for Case II. These are in equally good agreement so we omit additional discussion of\nthe comparison.\n\\begin{table}[H]\n \\begin{center}\n\\begin{tabular}{|cccc|}\n \\hline\n Force & Extrapolated Force & Extr. error & 2D-1D \\\\\n \\hline\n $F_1^t$ & $8.60882$ & $0.0020$ & $-0.0036$ \\\\\n $F_1^x$ & $-3.87143$ & $-0.0016$ & $0.0025$ \\\\\n $F_1^y$ & $15.9437$ & $0.0013$ & $-0.0040$ \\\\\n $F_1^z$ & $-6.0318$ & $0.0030$ & $-0.0027$ \\\\\n \\hline\n $F_2^t$ & $-12.181$ & $3.6 \\times 10^{-5}$ & $1.0 \\times 10^{-5}$ \\\\\n $F_2^x$ & $4.56246$ & $-1.5 \\times 10^{-6}$ & $-9.6 \\times 10^{-7}$ \\\\\n $F_2^y$ & $-25.3768$ & $-1.6 \\times 10^{-5}$ & $2.7 \\times 10^{-5}$ \\\\\n $F_2^z$ & $14.1391$ & $-7.5 \\times 10^{-6}$ & $1.5 \\times 10^{-5}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Extrapolated self-force calculated by the [2D] method in Case I of the KT string ($\\alpha=0$ and $\\phi=\\pi\/6$).}\n\\label{tab:summarytab}\n\\end{table}\n\nWe now proceed to compute the self-force at \\emph{all} points on the worldsheet. The results are shown in Figs.~\\ref{fig:Force-components-VV} and \\ref{fig:Force-VV}. Unlike the ACO case, the extra complexity in the KT solution means that there is no simple factorization of the force into a piece which only depends on $\\zeta^+$ and another piece which depends on $\\zeta^-$. As such, the self-force for the KT string is presented as a $2D$ surface plot, showing the force contributions to $F^\\mu$ ($\\log_{10}$ of the absolute value of a contribution, color coded by sign) at all\\footnote{In all of our plots we show the segment of the worldsheet defined by $\\tau \\in [0, L\/2]$, $\\zeta \\in [-L\/2,L\/2]$. This covers the entire set of unique points on the worldsheet; other values can be obtained by periodically extending in the $\\tau$ and\/or $\\zeta$ direction.} points on the two-dimensional worldsheet. The green and red curves trace the advanced images of the cusps on the loop; each point on these curves has the cusp at $(\\tau, \\zeta) = (0,0)$ (red) or at\n$(\\tau, \\zeta) = (0,L\/2)$ (green) on its past light cone. The gross\nvariation of the self-force depends on the product of two factors\nwhich have simple physical origins.\nFirst, the loop's line-like structure, periodically\nformed at $\\tau=0$ and $L\/2$, creates a ridge spanning all $\\zeta$\nat these particular times. Second, at any given time\nthe points along the\nstring loop which are least contracted and have the largest $\\sqrt{-\\gamma}$\noccur at $\\zeta = \\pm L\/4$. These produce a trough or minimum\nin the force at $\\zeta = \\pm L\/4$. 
The product of these two\nfactors yields the\negg-crate-like symmetry in the force with the cusps at the\ncorners.\n\n\\begin{widetext}\n~\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/VV}\n\\caption{\\label{fig:Force-components-VV}\nContributions to $F_1^\\mu$ for the KT string ($\\alpha=0$ and $\\phi=\\pi\/6$) when computed using the 1D integration method with integration with respect to $\\zeta$. Each sub-figure shows the relevant contribution to the force at all points on the string in the region $\\tau \\in (0,L\/2)$, $\\zeta \\in (-L\/2,L\/2)$; all other points can be obtained from the standard periodic extension of the string. Each column corresponds to a different component of the force: $F_1^t$, $F_1^x$, $F_1^y$, and $F_1^z$. The rows correspond to the contributions from: (i) the field point; and (ii) the integral over $\\zeta$ (ignoring distributional contributions at the field point). For the purposes of the plots, we have set the string tension, $\\mu$, and Newton's constant, $G$ equal to one; other values simply introduce an overall scaling. Note that we have used a logarithmic scale and denoted positive (negative) values by coloring the plot orange (blue).\n}\n\\end{figure}\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/VV-total}\n\\caption{\\label{fig:Force-VV}\n The two pieces of the self-force, $F_1^\\mu$ (row 1) and $F_2^\\mu$ (row 2),\n and the total self-force $F^{\\mu}$ (row 3) for the KT string ($\\alpha=0$ and $\\phi=\\pi\/6$) as a function of position on the string. The $F_1^\\mu$ part can be obtained by summing the two rows in Fig.~\\ref{fig:Force-components-VV}. For the purposes of the plots, we have set the string tension, $\\mu$, and Newton's constant, $G$ equal to one; other values simply introduce an overall scaling. Note that we have used a logarithmic scale and denoted positive (negative) values by coloring the plot orange (blue).\n}\n\\end{figure}\n\\end{widetext}\n\nThese plots show several interesting features:\n\\begin{enumerate}\n\\item The self-force is finite at almost all points on the worldsheet, the notable exceptions being the location of the two cusps, where it appears to diverge.\n\\item The two contributions to $F_1^\\mu$ (coming from the integral over the smooth worldsheet and from the field point) are comparable in magnitude. It is therefore crucial that both contributions be included.\n\\item The contributions from $F_1^\\mu$ and $F_2^\\mu$ are both comparable in magnitude and both exhibit the same qualitative behavior in terms of divergence at the cusp and finiteness elsewhere.\n\\end{enumerate}\n\n\nAlthough this case provides a good check of the general\nmethodology it involves special features\nthat can be traced to the self-intersections. In the next\nsection we modify the parameter choice to avoid self-intersections.\n\n\\subsection{KT strings with cusps without self-intersections}\n\\label{sec:VV-nonSI}\n\nNext we consider a KT string with parameter values $\\alpha=1\/2$ and $\\phi=0$. Snapshots of this\nloop are shown in Fig.~\\ref{fig:VVloopNonSIspacetime}. The loop rotates about the z-axis and forms\ncusps transiently at $(\\tau,\\zeta)=(0,0)$ and $(0,L\/2)$. 
There are no self-intersections except\ninfinitesimally close to the cusp itself.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{VVloop-NonSIA.pdf}\n\\caption{\\label{fig:VVloopNonSIspacetime}\n Snapshots of the KT string loop ($\\alpha=1\/2$ and $\\phi=0$) configuration in spacetime, each\n labeled by time in units of $L$. All the boxes have the same size axes, $-1$ to $1$ for $L=2 \\pi$,\n and fixed orientation.}\n\\end{figure}\n\n\\begin{widetext}\n~\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/VVNonSIA}\n\\caption{\\label{fig:Force-components-VVNonSIA}\nContributions to $F_1^\\mu$ for the KT string ($\\alpha=1\/2$ and $\\phi=0$) as otherwise described in Fig.~\\ref{fig:Force-components-VV}.\n}\n\\end{figure}\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/VVNonSIA-total}\n\\caption{\\label{fig:Force-VVNonSIA}\n The two pieces of the self-force, $F_1^\\mu$ (row 1) and $F_2^\\mu$ (row 2), and the total\n self-force $F^{\\mu}$ (row 3) for the KT string ($\\alpha=1\/2$ and $\\phi=0$)\n as otherwise described in Fig.~\\ref{fig:Force-VV}.\n}\n\\end{figure}\n\\end{widetext}\nFigs.~\\ref{fig:Force-components-VVNonSIA} and \\ref{fig:Force-VVNonSIA} show the self-force at all\npoints on the worldsheet of this KT string. These are analogous to the plots for the\nself-intersecting KT string shown Figs.~\\ref{fig:Force-components-VV} and \\ref{fig:Force-VV}. The\npeaks clearly show the cusp locations and the diagonal striping is related to the overall sense of\nrotation of the loop. The spacelike line of overlap and the egg-crate symmetry seen in the previous\nKT case are now absent.\n\nThis non-intersecting case allows for a detailed analysis of the behavior of the total backreaction\nforce in the vicinity of the cusp at $(\\tau,\\zeta)=(0,0)$. At times close to cusp formation the\ntip's position (the string coordinate at fixed $\\zeta=0$) is\n\\begin{eqnarray}\n {\\mathbf{z}}^i & \\sim & \\{0,-0.83,-0.5\\} + \\{1,0,0\\} \\tau + \\nonumber\\\\\n & & \\{0,1.5,0.5\\} \\frac{\\tau^2}{2} + \\{-3,0,0\\} \\frac{\\tau^3}{6} + \\nonumber\\\\\n & & \\{0,-7.5,-0.5\\} \\frac{\\tau^4}{24} + \\dots\n\\end{eqnarray}\nThe velocity lies in the x-direction and the acceleration in the y- and z-directions. Conversely,\nthe velocities in the y- and z-directions and the acceleration in the x-direction vanish. On\nphysical grounds we expect the y- and z-accelerations to source transverse gravitational waves and\nthe relativistic motion in the x-direction to lead to strong beaming.\n\nThe driving force $F^\\alpha$ which enters the string loop's equation of motion, Eq.~\\eqref{eq:final},\nencodes the fully non-local, self-interacting gravitational dynamics. If we were to adopt the\nconformal gauge at first order then ${\\cal F}_{\\rm conf}^\\alpha$ would naturally appear as the driving\nforce in the equation of motion. We will not restrict ourselves to that choice for much of the\ndiscussion in this section. We will show, however, that many of the features of the full worldsheet\nvariation of $F_1^\\alpha$ can be understood based on the observed properties of the formally\ndefined quantity ${\\cal F}_{\\rm conf}^\\alpha$ (which may be defined in any gauge; only its interpretation\nas the driving force is restricted to conformal gauge). 
We will be explicit whenever our statements\ndemand the specification of the conformal gauge.\n\n\\begin{widetext}\n ~\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\linewidth]{figures\/VVNonSIcuspFtrad1.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/VVNonSIcuspFtrad2.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/VVNonSIcuspFtrad3.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/VVNonSIcuspFtrad4.pdf}\n\\caption{\\label{fig:trad-VVNonSI} The four components ($t$, $x$, $y$ and $z$) of\n $\\ln |{\\cal F}_{\\rm conf}^\\alpha|$ on a small patch of the worldsheet about the cusp.\n Four quadrants in $\\{\\log \\tau, \\log \\zeta\\}$ are displayed\n with the smallest $|\\tau|$ and $|\\zeta|$ at the center of\n the picture, oriented in the same way as the usual linear\n system about $\\{0,0\\}$. Orange (blue) represent positive (negative) values.\n}\n\\end{figure}\n\\end{widetext}\nThe large dynamic range evident in Figs.~\\ref{fig:Force-components-VVNonSIA} and\n\\ref{fig:Force-VVNonSIA} necessitates looking at small patches to examine special features like the\ncusp. We begin by displaying ${\\cal F}^\\mu_{\\rm conf}$ in Fig.~\\ref{fig:trad-VVNonSI}. The special\ncoordinate system shows a small patch near the cusp which is located at $\\zeta=0=\\tau$. Results for\n$\\ln |{\\cal F}^\\mu_{\\rm conf}|$ are displayed in these figures, color coded according to the sign of the\nquantity: orange (blue) dots represent positive (negative) values. Each figure combines four plots\nwith axes $\\{ {\\rm sgn} (\\zeta) \\ln |\\zeta|,\\,{\\rm sgn} (\\tau) \\ln |\\tau|\\}$, arranged and oriented\nin the same way as a normal linear plot (plus a constant shift selected to bring small values close\nto the center). The lower left hand quadrant has $\\zeta < 0$ and $\\tau < 0$. Smaller values of\n$|\\tau|$ and $|\\zeta|$ lie near the center for all four quadrants. The gap encompasses all values\nnear the sign change of the independent coordinates.\n\nWe find that ${\\cal F}^t_{\\rm conf} < 0$ for the entire area of the patch. The magnitude of ${\\cal F}^t_{\\rm conf}$ is much less than $F^t$ and is less strongly divergent --- the two are related by a\nprojection factor and an overall factor of $1\/\\sqrt{-\\gamma}$ (see Eq.~\\eqref{eq:self-force}), both\nof which diverge as the cusp is approached. Likewise, ${\\cal F}^x_{\\rm\n conf} < 0$, ${\\cal F}^y_{\\rm conf} > 0$\nand ${\\cal F}^z_{\\rm conf} > 0$ have single, well-defined signs throughout most of the area of\ncorresponding patch.\n\nIn the conformal gauge the negative value for ${\\cal F}_{\\rm conf}^t$ implies (see\nEq.~\\eqref{eqn:tradiationalforcesecondorderloss} in Appendix\n\\ref{sec:energymomentumlossdiscussion}) that the string is losing energy and decelerating in the\nx-direction both before and after the cusp forms . This makes physical sense; the self-force saps\nthe mechanical energy during the period of large acceleration and the relativistic beaming ensures\nthat gravitational waves are emitted primarily in the x-direction, thus creating the largest\ndecelerating force in that direction. A small spatial segment of the string near where the cusp\nforms behaves in a coherent fashion before and after the moment of cusp formation in terms of the\nsigns of ${\\cal F}_{\\rm conf}^\\alpha$ for all components. 
${\\cal F}_{\\rm conf}^\\alpha$ shows a net positive\nacceleration in y- and z-directions throughout most of the area of these figures.\n\nAs the figures of ${\\cal F}^\\mu_{\\rm conf}$ make clear, the asymptotic behavior near the cusp varies\ndepending upon the direction of approach. A common diagonal feature is the locus in the worldsheet\nwhere $\\sqrt{-\\gamma} \\ge 0$ is small. Only at the cusp is $\\gamma$ exactly equal to zero, but\nalong the visible fold its values are small.\n\\begin{widetext}\n~\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\linewidth]{figures\/cuspapproacht.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/cuspapproachx.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/cuspapproachy.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/cuspapproachz.pdf}\n\\caption{\\label{fig:VVloopNonSIcuspray} The components of the\n smooth integral contribution to ${\\cal F}^\\mu_{\\rm conf}$ along two rays\n at angles $\\theta=0$ and $\\pi\/2$ are shown. Red dots are numerical\n results and blue lines are fits of the form $\\log |{\\cal F}^\\mu| = a \\log |\\log r| + b \\log r + c$.\n For $\\mu=t$, $(a,b,c)=(0.098,-0.99,2.45)$ for $\\theta=0$ and $(0.046,-1,3.24)$ for $\\theta=\\pi\/2$;\n likewise, for $\\mu=x$, $(a,b,c)=(0.044,-1,2.52)$ and $(0.018,-1,3.28)$.\n These fits show that the dominant behavior in the direction of motion\n of the cusp as $r \\to 0$ is $1\/r$. In the other directions, the behavior\n is consistent with a logarithmic divergence at leading order:\n $\\mu=y$, $(a,b,c)=(0.79,-0.011,3.35)$ and $(1.23,0.013,2.39)$;\n $\\mu=z$, $(a,b,c)=(0.43,-0.025,2.13)$ and $(0.41,-0.026,2.17)$.\n}\n\\end{figure}\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\linewidth]{figures\/Ftradt-VVcusp.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/Ftradx-VVcusp.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/Ftrady-VVcusp.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/Ftradz-VVcusp.pdf}\n\\caption{\\label{fig:VVloopNonSIcusp} ${\\cal F}^\\mu_{\\rm conf}$ for the\n non-intersecting KT string ($\\alpha=1\/2$ and $\\phi=0$) in the\n neighborhood of the cusp at $\\tau = 0 = \\zeta$. The $t$ and $x$\n components have been scaled by the radial (Euclidean) distance from\n the cusp, and the $y$ and $z$ components have not been scaled at\n all. Each panel displays two separate contributions to the total force. For\n $t$ and $x$ components the upper (green solid and red dotted) lines give the field point\n contribution. It is antisymmetric in angle and integrates over angle to give zero\n (in fact, when taken over the whole worldsheet the integral also vanishes exactly).\n The lower (blue solid and orange dashed) lines give the smooth\n integral contribution. It is single signed and large where\n $\\sqrt{-\\gamma}$ is small. We plot scaled results for $r=0.02$\n (solid lines) and $r=0.002$ (dashed\/dotted lines) for each\n contribution to the total. These overlap and show in a qualitative\n fashion the dominant $1\/r$ scaling near the cusp for both contributions\n to the $t$ and $x$ components. In the lower panels we display the $y$ and $z$\n components. As before, the contribution from the field point is given by the green solid and\n red dotted curves. In this case it is independent of $r$ to lowest order (and\n integrates over angle to give $4\\pi$ at this order). 
The solid blue and dashed\n orange lines show that the integral contributions increase slowly as $r$ decreases, consistent with the $\\log r$ type behavior.}\n\\end{figure}\n\\end{widetext}\nRegardless of direction, however, the scaling with radial distance from the cusp is clear and\nunambiguous in each of the components.\nThe smooth integral contribution to ${\\cal F}^\\mu_{\\rm conf}$\nis shown in Fig.~\\ref{fig:VVloopNonSIcuspray}\nfor rays approaching the cusp with angle $\\theta=0$ ($\\delta \\tau=0$) and $\\theta=\\pi\/2$ ($\\delta \\zeta=0$).\nThe red dots are numerical results and the blue lines are fits of the form $\\log |F^\\mu| = a \\log |\\log r| + b \\log r + c$.\\footnote{The occurrence of both $\\log|\\log r|$ and $\\log r$ is consistent with\n recently reported analytic results of Blanco-Pillado, Olum and Wachter [see acknowledgements].}\nThe angular variation and scaling of the delta-function term and the smooth integral contribution\nare illustrated in Fig.~\\ref{fig:VVloopNonSIcusp} and discussed in the caption.\n\nTwo-dimensional numerical fits for the integral part of ${\\cal F}^\\mu_{\\rm conf}$ are\nsummarized in Appendix \\ref{sec:VV-fits}. We find that\n${\\cal F}^t_{\\rm conf}$ and ${\\cal F}^x_{\\rm conf}$ scale as the inverse distance from the cusp, and that\n${\\cal F}^y_{\\rm conf}$ and ${\\cal F}^z_{\\rm conf}$ are at worst much less singular (consistent with a log divergence).\nFrom this we conclude\nthat when one adopts the conformal gauge at first order the self-force near the cusp has a weak,\nintegrable divergence on the worldsheet and that any integrated quantities (such as the radiated\nenergy) are finite.\n\n\\begin{widetext}\n ~\n \\begin{table}[H]\n \\begin{center}\n \\begin{tabular}{|c|cccccc|cc|}\n \\hline\n Quantity & $a$ & $b$ & $c$ & $d$ & $e$ & $f$ & $\\log_{10} \\epsilon$ & $\\log_{10} Q$ \\\\\n \\hline\n$F^t-F^x$&$14.68$&$276.43$&$117.42$&$-41.51$&$12.47$&$-25.45$&$-3.08$&$1.06$ \\\\\n$F^z(\\tau+\\zeta)+F^y(3\\tau+\\zeta)$&$0$&$521.81$&$175.92$&$-126.45$&$38.64$&$16.21$&$-2.67$&$0.91$ \\\\\n \\hline\n$H_{\\tau\\tau}$&$11.49$&$-6.53$&$-15.33$&$18.76$&$84.03$&$101.7$&$-7.22$&$3.9$ \\\\\n$H_{\\zeta\\zeta}$&$0$&$0$&$0.01$&$64.4$&$196$&$237.13$&$-7.03$&$1.32$ \\\\\n$H_{\\tau\\zeta}$&$0$&$-7.66$&$-17.94$&$22.71$&$69.29$&$56.75$&$-7.05$&$3.8$ \\\\\n \\hline\n\\end{tabular}\n \\end{center}\n \\caption{First order fits\n for force combinations near the cusp with form\n $a + b \\tau + c \\zeta + (d \\tau^2 + e \\tau \\zeta + f \\zeta^2)\/r$ with\n $r=\\sqrt{\\tau^2+\\zeta^2}$ and\n second order fits for the worldsheet projected metric perturbations with\n form $a + b \\tau + c \\zeta + d \\tau^2 + e \\tau \\zeta + f \\zeta^2$\n over the radial range $2 \\times 10^{-8} \\le r \\le 2 \\times 10^{-4}$. The table provides the two force\n combinations and three worldsheet projected metric perturbations that appear in the asymptotic\n forms for $F_1^\\mu$ and $F_2^\\mu$. The last two columns give the common log of $\\epsilon$ (the root mean square\n error between the data and the fit), and $Q$ (the ratio of the variation in the data to $\\epsilon$).
\n }\\label{tab:fits}\n \\end{table}\n\\end{widetext}\n\n \\begin{table}[H]\n \\begin{center}\n \\begin{tabular}{|c|ccccc|}\n \\hline\n Order & $-4$ & $-3$ & $-2$ & $-1$ & $0$ \\\\\n Component& & & & & \\\\ \n \\hline \n$t$ &$ < $&$ < $&$-2.01$ &$ 1.28 $&$1.25$ \\\\\n$x$ &$ < $&$ < $&$-2.01$ &$ 1.28 $&$1.25$ \\\\\n$y$ &$ < $&$ < $&$ < $ &$ <$&$2.75$ \\\\\n$z$ &$ < $&$ < $&$ < $ &$ < $&$2.03$ \\\\\n \\hline\n\\end{tabular}\n \\end{center}\n \\caption{The numerical results for the\n expansion of $\\sqrt{-\\gamma}F^\\mu$ in $r$ (averaged over angle)\n for the fit given in Table \\ref{tab:fits}.\n For $\\sqrt{-\\gamma}(F_1 + F_2) \\sim \\sum_{n} c_n r^n$ the columns\n are the leading powers of\n the expansion ($n=-4$ to $0$), the rows are the spacetime\n components and the table values are the common log of the\n expansion coefficients ($d_n=\\log_{10} | c_n | $).\n The symbol $<$ means a numerical result $|c_n| < 10^{-12}$.\n When the fitting range is narrowed about the cusp the $1\/r^2$ contribution\n decreases $\\propto r$ and the other pieces are fixed.\n From this we infer that the leading non-zero piece of $F^\\mu$\n varies as $1\/r$.\n }\\label{tab:cancellations}\n \\end{table}\nGiven the scalings for ${\\cal F}^\\mu_{\\rm conf}$ nearby the cusp, it is straightforward to deduce the\ncorresponding scaling for $F^\\mu_1 = -\\tfrac{1}{\\sqrt{-\\gamma}} \\perp^\\mu{}_\\nu {\\cal F}^\\nu_{\\rm conf}$.\nWorking with the exact expression for the determinant of the induced metric in this case,\n\\begin{equation}\n \\gamma = -\\frac{1}{16} \\left[2 - \\cos (2 \\zeta) - \\cos (4\\zeta+2\\tau) \\right]^2,\n\\end{equation}\nand expanding the relationship between $F^\\mu_1$ and ${\\cal F}^\\mu_{\\rm conf}$ to next from leading order, we find\n\\begin{align}\n \\label{eq:F1asym1}\n F^t_1 & \\approx F^x_1 \\approx - \\frac{1}{\\gamma}\\bigg\\{\\left({\\cal F}^t_{\\rm conf}-{\\cal F}^x_{\\rm conf}\\right) \\\\\n & - \\frac12\\left[\\left({\\cal F}^z_{\\rm conf}+3 {\\cal F}^y_{\\rm conf}\\right)\\tau + \\left({\\cal F}^z_{\\rm conf}+ {\\cal F}^y_{\\rm conf}\\right)\\zeta\\right] + \\cdots\\bigg\\}, \\nonumber \\\\\n \\label{eq:F1asym2}\n F^y_1 & \\approx -\\frac{1}{2\\gamma} \\left({\\cal F}^t_{\\rm conf}-{\\cal F}^x_{\\rm conf}\\right) \\left(\\zeta + 3 \\tau\\right) + \\cdots, \\\\\n \\label{eq:F1asym3}\n F^z_1 & \\approx -\\frac{1}{2\\gamma} \\left({\\cal F}^t_{\\rm conf}-{\\cal F}^x_{\\rm conf}\\right) \\left(\\zeta + \\tau\\right) + \\cdots.\n\\end{align}\nSince $\\gamma$ scales as the fourth power of the distance from the cusp we infer that $F^\\mu_1$ is\nnaively four orders more singular than ${\\cal F}^\\mu_{\\rm conf}$. However, as can be seen in Table\n\\ref{tab:fits}, it turns out that ${\\cal F}_{\\rm conf}^t \\approx {\\cal F}_{\\rm conf}^x$ near the cusp so the\nleading-order divergence cancels and at worst $F^\\mu_1$ diverges as the inverse fourth power of the\ndistance from the cusp. At next from leading order the asymptotic expression for $F^\\mu_1$ is\nantisymmetric about the cusp. (This behavior, combined with the mixing of components is what makes\nthe analysis of ${\\cal F}^\\mu_{\\rm conf}$ clearer than working directly with $F^\\mu_1$.)\nWith the worldsheet weighting we naively infer that $\\sqrt{-\\gamma} F^\\mu_1$\ndiverges as the inverse quadratic power of the distance from the cusp,\none power worse than ${\\cal F}^\\mu_{\\rm conf}$. 
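The quartic vanishing of $\\gamma$ used in this counting can be confirmed directly from the exact expression above. A minimal symbolic sketch (variable names are illustrative) is\n\\begin{verbatim}\nimport sympy as sp\n\ntau, zeta, r, theta = sp.symbols('tau zeta r theta', real=True)\n\n# Exact induced-metric determinant quoted in the text for this KT string.\ngamma = -sp.Rational(1, 16)*(2 - sp.cos(2*zeta) - sp.cos(4*zeta + 2*tau))**2\n\n# Approach the cusp at (tau, zeta) = (0, 0) along a ray of length r.\ngamma_ray = gamma.subs({tau: r*sp.cos(theta), zeta: r*sp.sin(theta)})\n\nprint(sp.simplify(sp.series(gamma_ray, r, 0, 5).removeO()))\n\\end{verbatim}\nThe result is proportional to $r^4$ for every approach angle, because the quadratic form $2\\tau^2 + 8\\tau\\zeta + 10\\zeta^2$ that appears at lowest order inside the square brackets is positive definite.\n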
Now we must consider the role of $F^\\mu_2$.\n\nTo understand the behavior of $F^\\mu_2$ near the cusp we begin with the perturbed metric projected\nalong the worldsheet vectors $\\partial_\\tau z^\\alpha$ and $\\partial_\\zeta z^\\alpha$ according to\n\\begin{eqnarray}\n H_{\\tau\\tau}& =& \\partial_\\tau z^\\alpha h_{\\alpha\\beta} \\partial_\\tau z^\\beta , \\\\\n H_{\\tau\\zeta}& =& \\partial_\\tau z^\\alpha h_{\\alpha\\beta} \\partial_\\zeta z^\\beta , \\\\\n H_{\\zeta\\zeta}& =& \\partial_\\zeta z^\\alpha h_{\\alpha\\beta} \\partial_\\zeta z^\\beta .\n\\end{eqnarray}\nEvaluating the simple expression in Eq.~\\eqref{eq:F2-hproj} for the relationship between $F^\\mu_2$ and\nthe worldsheet projections of the metric perturbation, we find\n\\begin{align}\n \\label{eq:F2asym1}\n F_2^t &\\approx F_2^x \\approx \\frac{1}{2(-\\gamma)^{3\/2}}\\bigg[2\\left(5H_{\\tau \\zeta} - H_{\\tau \\tau} - H_{\\zeta \\zeta}\\right) \\zeta \\nonumber \\\\\n & \\qquad \\qquad \\qquad + \\left(4H_{\\tau \\zeta} - H_{\\tau \\tau} - H_{\\zeta \\zeta}\\right) \\tau + \\cdots \\bigg], \\\\\n \\label{eq:F2asym2}\n F_2^y &\\approx \\frac{2\\left(6H_{\\tau \\zeta} - H_{\\tau \\tau} - H_{\\zeta \\zeta}\\right) (\\zeta^2+3 \\tau \\zeta + \\tau^2) + \\cdots}{2(-\\gamma)^{3\/2}}, \\\\\n \\label{eq:F2asym3}\n F_2^z &\\approx \\frac{2\\left(2H_{\\tau \\zeta} - H_{\\tau \\tau} - H_{\\zeta \\zeta}\\right) (\\zeta^2+3 \\tau \\zeta + \\tau^2) + \\cdots}{2(-\\gamma)^{3\/2}}.\n\\end{align}\nWith the finite behavior of the worldsheet projections of the metric perturbation, it is\nstraightforward to deduce the corresponding scaling of the divergence in $F^\\mu_2$. We find that at\nworst $F_2^t$ and $F_2^x$ diverge as the inverse fifth power of distance from the cusp. With the\nworldsheet weighting $\\sqrt{-\\gamma} F^\\mu_2$ diverges as the\ninverse cubic power of the distance from the cusp. However, as this leading-order divergence is\nantisymmetric about the cusp, its integral over a patch around the cusp vanishes, leaving only\nsubleading pieces.\n\nWe are now left with asymptotic forms for $\\sqrt{-\\gamma} F^\\mu_1$ and $\\sqrt{-\\gamma} F^\\mu_2$ each\nscaling as the square of the inverse distance from the cusp. These\nare individually non-integrable. However, once the asymptotic forms given in Table \\ref{tab:fits} for the quantities\nin Eqs.~\\eqref{eq:F1asym1}-\\eqref{eq:F1asym3} and \\eqref{eq:F2asym1}-\\eqref{eq:F2asym3} are taken\ninto account, we find that in the combination\n$\\sqrt{-\\gamma} (F^\\mu_1+ F^\\mu_2)$ the leading order divergent behavior cancels exactly, to the level of\naccuracy of the numerically fitted coefficients (see Table \\ref{tab:cancellations}), yielding a full force $\\sqrt{-\\gamma} F^\\mu$ which\nat worst diverges as the inverse distance from the cusp, and hence is integrable.\nThis divergence is no worse than that of ${\\cal F}^\\mu_{\\rm conf}$ itself.\n\nA detailed understanding of the behavior of these divergences near cusps\nallows us to solve either the general covariant equation of motion\nEq.~\\eqref{eq:final} or the corresponding Eq.~\\eqref{eq:simple}\nin which specific conformal gauge choices have been adopted.\n\n\\subsection{Garfinkle and Vachaspati string with kinks}\n\nThe third case we will explore is from a class of strings found by Garfinkle and Vachaspati (GV)\n\\cite{Garfinkle:1987yw}. These strings contain two kinks that travel in the same direction on an\noscillating and twisting string loop.\n
We choose a particular representation from the general class\nwith the following right and left-moving modes\n\\begin{align}\n a^\\mu(\\zeta^+) & = \\left[\\zeta^+, 0, a^2(\\zeta^+), a^3(\\zeta^+) \\right] \\\\\n b^\\mu(\\zeta^-) & = \\left[\\zeta^-,\n \\frac{L}{2 \\pi} \\cos \\frac{2 \\pi \\zeta^-}{L}, 0,\n \\frac{L}{2 \\pi} \\sin \\frac{2 \\pi \\zeta^-}{L} \\right]\n\\end{align}\nwhere\n\\begin{align}\n a^2(x) & = \\frac{L}{\\pi} \\sum_j \\delta_{j, \\lfloor \\frac{2x}{L} \\rfloor}\n \\left( -1 \\right)^{\\lfloor \\frac{j+1}{2} \\rfloor}\\times \\nonumber \\\\&\n \\cos \\left( \\frac{\\pi}{4} + \\left( - 1\\right)^j \\frac{\\pi x}{L} \\right) \\nonumber \\\\\n a^3(x) & = \\frac{L}{\\pi} \\sum_j \\delta_{j, \\lfloor \\frac{2x}{L} \\rfloor}\n \\left( -1 \\right)^{\\lfloor \\frac{j}{2} \\rfloor} \\times \\nonumber \\\\\n &\\left[\n \\sin \\left( \\frac{\\pi}{4} + j \\left( -1 \\right)^j \\frac{\\pi^2}{L} \\right)\n -\n \\sin \\left( \\frac{\\pi}{4} + \\left( -1 \\right)^j \\frac{\\pi x}{L} \\right)\n \\right],\n\\end{align}\nand where the sums are over all integers $j$, $L$ is the invariant length, $\\lfloor x \\rfloor$ is\nthe floor function and $\\delta_{j,k}$ is the Kronecker delta.\n\nFigure \\ref{fig:GVloopspacetime} illustrates\nthe configuration in spacetime at equally spaced moments in the oscillation cycle. The kink\ndiscontinuities are visible in all four snapshots. In the tangent sphere representation (shown in\nFig.~\\ref{fig:GVlooptangentplot}), ${\\mathbf b}'$ traverses a complete great circle through the\nNorth and South poles at a steady rate; ${\\mathbf a}'$ follows two disjoint segments of a great\ncircle (longitude offset by $\\pi\/2$ from the one traced by ${\\mathbf b}'$) between latitudes\n$\\theta = \\pm \\pi\/4$, also at a steady rate. The vector ${\\mathbf a}'$ traces one segment and then\nabruptly jumps from the point $(0,y,z)$ to $(0,-y,z)$ and\ntraces out the mirrored arc at a steady rate (and repeats).\nEach jump from one segment to the other yields a kink discontinuity in the spacetime representation.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{GVloop.pdf}\n\\caption{\\label{fig:GVloopspacetime} The GV string loop configuration in spacetime at four equally spaced moments in the basic loop oscillation cycle $\\tau=0$, $L\/8$, $L\/4$ and $3L\/4$. Each box is the same size with fixed axes $-1$ to $1$\n and fixed orientation. }\n~\\\\\n\\includegraphics[width=0.9\\linewidth]{tangentplot.pdf}\n\\caption{\\label{fig:GVlooptangentplot} The arcs on the tangent sphere\n traced by ${\\mathbf b}'(x)$ (red line, a complete great circle passing\n through poles) and ${\\mathbf a}'(x)$ (two green segments symmetrically\n cut from a great circle) for the GV string. The line is made by a series of points,\n equally spaced in argument $x$ for the left and right moving mode. A\n blow up of the line in the figure would show equal intervals between\n the points. When the kink is ``rounded off'' (non-zero $\\Delta$ as\n described in the text) a few of these points will sit between the\n pictured green arcs and the green line is formally continuous. 
The\n ${\\mathbf a}'(x)$ tangent vector moves very rapidly from one side to the\n other.}\n\\end{figure}\n\nWhile the ACO string provided a useful analytic test case with kinks against which we could compare our numerical results, it turns out that the simplicity of the ACO string (in particular, that many quantities are a constant along the string) means that many important terms that appear in the general expression for the self-force are identically zero for the ACO string. Fortunately, the GV string is sufficiently general that this is not the case. Unfortunately, however, there is no known analytic solution for the self-force for the GV string. Instead, in order to use the GV string as a test of our method, we chose a particular point on the string ($\\tau = 0.3L$, $\\zeta = 0.4L$) and computed the self-force at that point using an extensive set of different and independent methods:\n\\begin{enumerate}\n\\item We used our exact $1$D method including a field point contribution and contributions from the two kinks (this is method [1D] discussed in Sec.~\\ref{sec:1D}).\n\\item We repeated our $1$D calculation (again, method [1D]) using multiple choices of integration variable ($\\zeta^+$, $\\zeta^-$ and $\\zeta$). In each case, the various contributions (from the integral, field point, and two kinks) were different. Indeed, in some cases there was no contribution picked up from the kinks.\n\\item We again repeated our $1$D calculation using method [1D], but using a mixed coordinate choice; we used $\\zeta^+$ on one side of the field point and $\\zeta^-$ on the other side. We then included a contribution at the point where these two segments meet up again, to account for the change in integration variable at that point. This contribution is exactly the one discussed in Secs.~\\ref{sec:kinks}, and an explicit expression is the same as one obtains when breaking the integration at a kink, as discussed in Sec.~\\ref{sec:coord-depend-integral}.\n\\item We repeated the previously mentioned $1$D calculations again, but instead of including the exact field point term, we considered an over-retarded image of the string (method [1DO]). In that case, we find that the over-retarded integrand picks up a $\\delta$-function type feature nearby the field point (see Fig.~\\ref{fig:over-retarded}). For finite over-retardation this manifests itself as a narrow Gaussian, and the Gaussian gets narrower and sharper as the over-retardation parameter is shrunk towards zero. Reassuringly, in the limit of the Gaussian shrinking down to zero size we recover a result which agrees with the previous calculations and can identify the $\\delta$-function with the field point contribution.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/F-sigma}\n\\includegraphics[width=0.9\\linewidth]{figures\/Ft-sigma}\n\\caption{\\label{fig:over-retarded}\nIntegrand used to compute to the self-force for the Garfinkle-Vachaspati string.\nThese correspond to the values in the $\\zeta$ column of Table \\ref{tab:comparison}.\nDistributional contributions from the kinks and field point are denoted by dashed and\nsolid arrows, respectively.\n}\n\\end{figure}\n\\item Finally, we repeated the calculation in a completely independent way, directly evaluating the self-force from the $2$D integral (method [2D]) without the reduction to a $1$D integral. 
This was significantly less efficient, but provided an important check as there is no need to consider any split into field-point-plus-integral\u2013plus-kink contributions. Instead, we smeared out the $\\delta$ function in the Green function and also introduced a slight smoothing of the kinks, as discussed in detail in Appendix \\ref{sec:2D}. Yet again, reassuringly, in the limit of our smearing and smoothing parameters going to zero we recovered a result which was in perfect agreement with all of the other methods.\n\\end{enumerate}\n\n\\begin{table*}\n \\begin{center}\n\\begin{ruledtabular}\n\\begin{tabular}{c|c|cccccccc}\n & Contribution & $\\zeta^-$ & $\\zeta^+$ & $\\zeta^-$\/$\\zeta^+$ & $\\zeta$ & $2$D & $\\zeta^-_\\epsilon$ & $\\zeta^+_\\epsilon$ & $\\zeta_\\epsilon$ \\\\\n \\hline \\hline\n \\multirow{6}{*}{$F_1^t$} & $\\int$ & \\multirow{2}{*}{$-330.558$} & \\multirow{2}{*}{$-12.73(1)$} & $6.89229$ & $-25.6962$ & $-12.76(2)$ & $-330.60(4)$ & $-12.7488(1)$ & $-22.4707(2)$ \\\\\n & $\\delta$ & $ $ & $ $ & - & $3.22536$ & - & - & - & - \\\\\n & Kink $1$ & $358.819$ & - & - & $16.7989$ & - & $358.819$ & - & $16.7989$ \\\\\n & Kink $2$ & $-41.0096$ & - & - & $-7.07683$ & - & $-41.0096$ & - & $-7.07683$ \\\\\n & $\\zeta^-\\leftrightarrow\\zeta^+$ & - & - & $-19.6411$ & - & - & - & - & - \\\\\n \\cline{2-10}\n & Total & $-12.7488(1)$ & $-12.73(1)$ & $-12.7488$ & $-12.7488$ & $-12.76(2)$ & $-12.8(1)$ & $-12.7488(1)$ & $-12.7487(2)$ \\\\\n \\hline \\hline\n \\multirow{6}{*}{$F_1^x$} & $\\int$ & \\multirow{2}{*}{$-300.675(1)$} & \\multirow{2}{*}{$-2.64(7)$} & $8.62259$ & $-21.7023$ & $-2.58(3)$ & $-300.63(4)$ & $-2.56339(6)$ & $-14.8591(9)$ \\\\\n & $\\delta$ & $ $ & $ $ & - & $6.84317$ & - & - & - & - \\\\\n & Kink $1$ & $287.628$ & - & - & $14.7570$ & - & $287.628$ & - & $14.7570$ \\\\\n & Kink $2$ & $10.4832$ & - & - & $-2.46129$ & - & $10.4832$ & - & $-2.46129$ \\\\\n & $\\zeta^-\\leftrightarrow\\zeta^+$ & - & - & $-11.186$ & - & - & - & - & - \\\\\n \\cline{2-10}\n & Total & $-2.5637(8)$ & $-2.64(7)$ & $-2.56342$ & $-2.56342$ & $-2.58(3)$ & $-2.52(8)$ & $-2.56339(6)$ & $-2.56333(9)$ \\\\\n \\hline \\hline\n \\multirow{6}{*}{$F_1^y$} & $\\int$ & \\multirow{2}{*}{$-304.564$} & \\multirow{2}{*}{$-10.7066(4)$} & $6.6211$ & $-23.4832$ & $-10.72(2)$ & $-304.59(3)$ & $-10.7068(1)$ & $-20.0615(2)$ \\\\\n & $\\delta$ & $ $ & $ $ & - & $3.42159$ & - & - & - & - \\\\\n & Kink $1$ & $326.143$ & - & - & $15.4176$ & - & $326.143$ & - & $15.4176$ \\\\\n & Kink $2$ & $-32.2859$ & - & - & $-6.06281$ & - & $-32.2859$ & - & $-6.06281$ \\\\\n & $\\zeta^-\\leftrightarrow\\zeta^+$ & - & - & $-17.3279$ & - & - & - & - & - \\\\\n \\cline{2-10}\n & Total & $-10.7069$ & $-10.7066(4)$ & $-10.7068$ & $-10.7068$ & $-10.72(2)$ & $-10.7(1)$ & $-10.7068(1)$ & $-10.7067(2)$ \\\\\n \\hline \\hline\n \\multirow{6}{*}{$F_1^z$} & $\\int$ & \\multirow{2}{*}{$-190.140(1)$} & \\multirow{2}{*}{$-13.82(7)$} & $2.25465$ & $-15.9946$ & $-13.90 (1)$ & $-190.22(8)$ & $-13.8960(1)$ & $-16.9795(2)$ \\\\\n & $\\delta$ & $ $ & $ $ & - & $-0.985094$ & - & - & - & - \\\\\n & Kink $1$ & $234.551$ & - & - & $10.0429$ & - & $234.551$ & - & $10.0429$ \\\\\n & Kink $2$ & $-58.3072$ & - & - & $-6.95921$ & - & $-58.3072$ & - & $-6.95921$ \\\\\n & $\\zeta^-\\leftrightarrow\\zeta^+$ & - & - & $-16.1507$ & - & - & - & - & - \\\\\n \\cline{2-10}\n & Total & $-13.8958(7)$ & $-13.82(7)$ & $-13.896$ & $-13.8960$ & $-13.90 (1)$ & $-13.97(9)$ & $-13.8960(1)$ & 
$-13.8958(2)$\n\\end{tabular}\n\\end{ruledtabular}\n\\caption{\\label{tab:comparison}\nComparison of methods for computing the self-force at a generic point ($\\tau=0.3L$, $\\zeta=0.4L$) on the GV string.}\n\\end{center}\n\\end{table*}\nThe results of this extensive set of tests are given in Table~\\ref{tab:comparison}. We see that all methods produce results which are consistent within their respective error bars. The [2D] method is least accurate, due to the need for a $2$D rather than $1$D numerical integral. The [1DO] method also poses challenges for numerical accuracy due to the presence of sharp features (i.e., the Gaussian approximation to the delta function for the field point contribution), as does the [1DOS] method for portions of the integral nearby kinks.\n\nThe three exact [1D] methods all work reasonably well; however, even in this case not all methods\nare equally computationally efficient. In particular, calculations based on a single null\ncoordinate encounter a strong divergence in the integrand as the field point is approached from one\nside (the particular side depends on whether one uses $\\zeta^+$ or $\\zeta^-$ as integration\nvariable). This diverging integral largely cancels against the field point contribution\\footnote{In practice, we were only able to obtain finite results by evaluating the integral up to a short distance from the field point and evaluating the expression for the field point contribution at the point where the integral was cut off. We recovered a unique and consistent result as the cut-off point was pushed towards the actual field point.}, leaving a relatively small overall contribution from the field-point-plus-integral combination. We found that the remaining two approaches (integration with respect to $\\zeta$; and half-$\\zeta^+$ half-$\\zeta^-$ plus coordinate change term) were comparable in terms of computational efficiency.\n\nImportantly, accuracy concerns aside, all methods produced results which agree unambiguously with one another.\n\n\\begin{widetext}\n~\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/GV}\n\\caption{\\label{fig:Force-components-GV}\nContributions to $F_1^\\mu$ for the Garfinkle and Vachaspati string when computed using the [1D] integration method with integration with respect to $\\zeta$. Each sub-figure shows the relevant contribution to the force at all points on the string in the region $\\tau \\in (0,L\/2)$, $\\zeta \\in (-L\/2,L\/2)$; all other points can be obtained from the standard periodic extension of the string. Each column corresponds to a different component of the force: $F_1^t$, $F_1^x$, $F_1^y$, and $F_1^z$. The rows correspond to the contributions from: (i) the kink that passes through ($\\tau = 0$, $\\zeta = 0$); (ii) the kink that passes through ($\\tau = 0$, $\\zeta = \\pi$); (iii) the field point; and (iv) the integral over $\\zeta$ (ignoring distributional contributions at the kinks and field point). The two kinks are denoted by diagonal black lines. 
For the purposes of the plots, we have set the string tension, $\\mu$, and Newton's constant, $G$ equal to one; other values simply introduce an overall scaling.\n}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/GV-total}\n\\caption{\\label{fig:Force-GV}\n The two pieces of the self-force, $F_1^\\mu$ (row 1) and $F_2^\\mu$ (row 2), and the total force $F^\\mu = F^\\mu_1+F^\\mu_2$ (row 3), for the Garfinkle and Vachaspati string as a function of position on the string. The $F_1^\\mu$ part can be obtained by summing the four rows in Fig.~\\ref{fig:Force-components-GV}. For the purposes of the plots, we have set the string tension, $\\mu$, and Newton's constant, $G$ equal to one; other values simply introduce an overall scaling.\n}\n\\end{figure}\n\\end{widetext}\n\nFinally, we used the [1D] method (specifically, integrating with respect to $\\zeta$ and including field point and kink contributions) to evaluate the self-force at all points on the GV string. The results are shown in Figs.~\\ref{fig:Force-components-GV} and \\ref{fig:Force-GV}. Fig.~\\ref{fig:Force-components-GV} shows how each of the contributions to $F^\\mu_1$ contribute to the overall result, while Fig.~\\ref{fig:Force-GV} shows $F^\\mu_1$ and $F^\\mu_2$ themselves, as well as their sum. As in the other string test cases, we find that the self-force is finite almost everywhere on the string, with the exception of exactly on the kinks, where it diverges.\n\nWe have analyzed the form of the divergence near the kink\nby calculating the total backreaction force to high accuracy along a\nset of worldsheet points for a line that runs perpendicular to the kink\nwith coordinates\n$(\\tau,\\zeta)=(\\pi\/2 + \\zeta^+\/2, -\\pi\/2 + \\zeta^+\/2)$ for $-21 < \\log |\\zeta^+| < -11$ for positive and negative $\\zeta^+$.\nLocally, the kink can be described in terms of the changes to the unit tangent\nvector $e_t$, the velocity vector $e_v$ and\n$e_\\perp = e_v \\times e_t\/|e_v \\times e_t|$ which form the\nperpendicular coordinate system $\\{e_t, e_v, e_\\perp\\}$.\nWe find $e_v$ and $e_t$ lie in the y-z plane and\n$e_\\perp$ along the x-direction. Letting $\\Delta e = e_{+} - e_{-}$ stand for\nthe change in time of each unit vector,\n\\begin{eqnarray}\n \\Delta e_\\perp & = & \\{2,0,0\\} \\\\\n \\Delta e_t & \\simeq & \\{0,-1.85,0\\} \\\\\n \\delta e_v & \\simeq & \\{0,-0.77,0\\} .\n\\end{eqnarray}\nThe kink is a y-reflection of the\nvelocity and tangent vectors in the y-z plane.\n\nOn each side of the kink we fit each component of $F^\\alpha$\nwith forms that include combinations of constant, linear\nand ln terms in $|\\zeta^+|$. We select the linear\nor ln fit whichever is best; it turns out that this corresponds to\nthe term with coefficients that are of order unity.\nWe report the inferred scaling in Table \\ref{tab:GVkinkasymptotics}.\n\\begin{table}[H]\n \\begin{center}\n\\begin{tabular}{|c|cccc|}\n \\hline\n sign & $F^t$ & $F^x$ & $F^y$ & $F^z$ \\\\\n$\\zeta^+ < 0$ & $|\\zeta^+|^{-0.33}$ & $1$ & $|\\zeta^+|^{-0.33}$ & $|\\zeta^+|^{-0.33}$ \\\\\n $\\zeta^+ > 0$ & $1$ & $1$ & $1$ & $1$ \\\\\n \\hline\n\\end{tabular}\n \\end{center}\n \\caption{Asymptotic form for the force near the kink; $1$ means non-zero\n constant.}\n\\label{tab:GVkinkasymptotics}\n \\end{table}\nThe results are similar to but not identical to the ACO\ncase. 
First, note that there is one redundancy $F^t=-F^z$\nso we have 3 GV force components to compare to ACO.\nThe GV coordinate directions of the force\nare not the same as the normal and longitudinal\ndirections in the ACO case and this complicates a\ndirect one-for-one comparison. Nonetheless, we see analogous\nbehavior. Most prominently the GV divergence for $\\zeta^+<0$\nof $F^t$ and $F^y$ scales close to $\\propto (|\\zeta^+|)^{-1\/3}$\nlike ACO's $F^t$ and $F^N$. One difference is that the GV force for all\ncomponents with $\\zeta^+>0$ approach non-zero constant values.\nThe ACO loop has no curvature\non one side of the kink, which is probably responsible for\nthe fact that two of its components approach zero. Curiously,\nthe ACO divergence for $F^L \\propto \\ln | \\zeta^+ |$\non both sides of the kink is\nabsent for any components in the GV case. Likewise, the completely\nfinite GV result for $F^x$ on both sides of the kink\nis absent in the ACO case. Despite these differences\nthe most important observation is that the GV divergent self-force\n$\\propto (|\\zeta^+|)^{-1\/3}$ integrates to a finite value\nso we expect the\nphysical displacement of the string to be finite.\n\n\\subsection{Kibble self-intersecting strings}\nThe ACO and GV string possess a pair of traveling kinks that circulate\naround the loop throughout the period of oscillation while the KT\nstring forms two transient cusps each period. In the tangent sphere\nrepresentation the kink discontinuities are jumps in ${\\mathbf a}'$\nand\/or ${\\mathbf b}'$ while the cusps form whenever ${\\mathbf a}'$ and\n${\\mathbf b}'$ cross. The nature of the self-intersections of string\nloops is not immediately apparent from the tangent sphere\nrepresentation. In the case of the KT string with $\\alpha=0$ and\n$\\phi=\\pi\/6$ the string collapses to a line and the overlap is a\nspacelike length of string. Unless nature prefers special loop\nconfigurations the generic type of self-intersection will be weaker\nthan in the above KT case. Here we investigate the Kibble string loop\n\\cite{Kibble:1976sj} which is simpler than any of the previous cases\nin these respects: it has no discontinuities or crossings on the\ntangent sphere, i.e. the loop is smooth and continuous everywhere,\n{\\it and} it self-intersects at a spacetime point not along a\nspacelike line.\n\nWe integrate the tangent vectors \\cite{Garfinkle:1987yw} to give\nexplicit forms for the right and left modes:\n\\begin{align}\n a^\\mu(\\zeta^+) & = \\left[\\zeta^+,\n f_1(\\zeta^+), f_2(\\zeta^+), f_3(\\zeta^+) \\right] \\\\\n b^\\mu(-\\zeta^-) & = \\left[\\zeta^-,\n -f_1(\\zeta^-), -f_3(\\zeta^-), -f_2(\\zeta^-) \\right]\n\\end{align}\nwhere\n\\begin{align}\n f_1(x) & = \\frac{L}{2\\pi} \\left(\n \\frac{\n (1+p^2)^2 \\sin 2 y +\n (p^2\/4) \\sin 4 y}{2 + 5 p^2 + 2 p^4}\n \\right)\n \\\\\n f_2(x) & = \\frac{L}{2 \\pi} \\cos 2 y \\times \\nonumber \\\\\n & \\quad \\left(\n \\frac{ -2 + 4 p^2 + 2 p^4 + p^2 \\cos 2 y }\n {4 + 10 p^2 + 4 p^4}\n \\right) \\\\\n f_3(x) & = \\frac{L}{2\\pi} 2^{3\/2} p \\cos y \\times \\nonumber \\\\\n & \\quad \\left(\n \\frac{ 5 + 3 p^2 + 2 \\cos 2 y }\n {6 + 15 p^2 + 6 p^4}\n \\right)\n\\\\\n y & = \\frac{2 \\pi x}{L}\n\\end{align}\nwhere $p$ is a constant. We choose for the numerical example $p=1\/2$.\nThis is a more complicated loop in terms of harmonic\ncontent than either the GV or KT loops.\nFig. \\ref{fig:Kibbleloopspacetime} shows 6 equally spaced\nsnapshots of the loop during the fundamental oscillation period. 
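\nAs a concrete illustration of these mode functions, the short numerical sketch below (ours, not taken from the original analysis) evaluates $f_1$, $f_2$, $f_3$ for $p=1\/2$ and assembles loop snapshots; the reconstruction $\\mathbf{x}(\\tau,\\zeta)=\\tfrac{1}{2}[\\mathbf{a}(\\zeta^+)+\\mathbf{b}(-\\zeta^-)]$ with $\\zeta^\\pm=\\tau\\pm\\zeta$ is a common convention and is an assumption on our part.\n\\begin{verbatim}\n# Sketch (not from the paper): evaluate the Kibble-loop mode\n# functions for p = 1\/2 and sample one snapshot of the loop.\n# The reconstruction x = (a + b)\/2 with zeta_pm = tau +\/- zeta\n# is an assumed convention.\nimport numpy as np\n\nL = 2.0 * np.pi      # invariant length used for the snapshots\np = 0.5              # harmonic-content parameter from the text\n\ndef y(x):\n    return 2.0 * np.pi * x \/ L\n\ndef f1(x):\n    return (L \/ (2.0 * np.pi)) * ((1.0 + p**2)**2 * np.sin(2.0 * y(x))\n        + (p**2 \/ 4.0) * np.sin(4.0 * y(x))) \/ (2.0 + 5.0 * p**2 + 2.0 * p**4)\n\ndef f2(x):\n    return (L \/ (2.0 * np.pi)) * np.cos(2.0 * y(x)) * (\n        (-2.0 + 4.0 * p**2 + 2.0 * p**4 + p**2 * np.cos(2.0 * y(x)))\n        \/ (4.0 + 10.0 * p**2 + 4.0 * p**4))\n\ndef f3(x):\n    return (L \/ (2.0 * np.pi)) * 2.0**1.5 * p * np.cos(y(x)) * (\n        (5.0 + 3.0 * p**2 + 2.0 * np.cos(2.0 * y(x)))\n        \/ (6.0 + 15.0 * p**2 + 6.0 * p**4))\n\ndef a_vec(zp):       # spatial part of a^mu(zeta^+)\n    return np.array([f1(zp), f2(zp), f3(zp)])\n\ndef b_vec(zm):       # spatial part of b^mu(-zeta^-)\n    return np.array([-f1(zm), -f3(zm), -f2(zm)])\n\ndef loop_position(tau, zeta):\n    return 0.5 * (a_vec(tau + zeta) + b_vec(tau - zeta))\n\n# One snapshot of the loop at tau = 0.\nzetas = np.linspace(-L \/ 2.0, L \/ 2.0, 400)\nsnapshot = np.array([loop_position(0.0, z) for z in zetas])\n\\end{verbatim}\nUp to the choice of convention for $\\zeta^\\pm$, snapshots generated in this way correspond to the loop configurations plotted in Fig.~\\ref{fig:Kibbleloopspacetime}.\n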
The\ndashed and dotted lines show the times when a self-intersection\noccurs at the center (red dot). Figure \\ref{fig:Kibblelooptangentplot}\ngives the tangent sphere representation which resembles the seams\nof a baseball.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{Kibble-solid.pdf}\n\\caption{\\label{fig:Kibbleloopspacetime} The Kibble string loop\n configuration for $p=1\/2$ in spacetime at six equally\n spaced moments $\\tau = j \\pi\/6$ for $j=0$ to $5$ (invariant\n length $2 \\pi$) in the basic loop oscillation cycle.\n The blue dashed and dotted loops self-intersect at\n the central red dot. The solid blue lines are non-intersecting configurations.}\n~\\\\\n\\includegraphics[width=0.9\\linewidth]{Kibble-tangentsphere.pdf}\n\\caption{\\label{fig:Kibblelooptangentplot} The arcs on the tangent\n sphere traced by ${\\mathbf a}'(x)$ and $-{\\mathbf b}'(x)$ for the\n Kibble string resemble the seams on a baseball. The green and red\n lines are smooth and continuous and do not intersect each\n other. They satisfy an integral condition such that the loop has\n zero total momentum. }\n\\end{figure}\n\n\n\n\\begin{widetext}\n~\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/K}\n\\caption{\\label{fig:Force-components-K}\nContributions to $F_1^\\mu$ for the Kibble string when computed using the [1D] integration method with integration with respect to $\\zeta$. Each sub-figure shows the relevant contribution to the force at all points on the string in the region $\\tau \\in (0,L\/2)$, $\\zeta \\in (-L\/2,L\/2)$; all other points can be obtained from the standard periodic extension of the string. Each column corresponds to a different component of the force: $F_1^t$, $F_1^x$, $F_1^y$, and $F_1^z$. The rows correspond to the contributions from: (i) the field point; and (ii) the integral over $\\zeta$ (ignoring distributional contributions at the field point). For the purposes of the plots, we have set the string tension, $\\mu$, and Newton's constant, $G$ equal to one; other values simply introduce an overall scaling. Note that we have used a logarithmic scale and denoted positive (negative) values by coloring the plot orange (blue).\n}\n\\end{figure}\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/K-total}\n\\caption{\\label{fig:Force-K}\nThe two pieces of the self-force, $F_1^\\mu$ (row 1) and $F_2^\\mu$ (row 2), for the Kibble string as a function of position on the string. The $F_1^\\mu$ part can be obtained by summing the two rows in Fig.~\\ref{fig:Force-components-K}. For the purposes of the plots, we have set the string tension, $\\mu$, and Newton's constant, $G$ equal to one; other values simply introduce an overall scaling. Note that we have used a logarithmic scale and denoted positive (negative) values by coloring the plot orange (blue).\n}\n\\end{figure}\n\\end{widetext}\n\nThis loop has collisions at worldsheet coordinates\n$\\{\\tau,\\zeta\\}=\\{0,\\pm\\pi\/2\\}$ and $\\{\\tau,\\zeta\\}=\\{\\pi\/2,0\\}$\nand $\\{\\pi\/2,\\pi\\}$.\nWe describe the limiting behavior near $\\{\\tau,\\zeta\\}=\\{0,\\pm\\pi\/2\\}$.\nThe velocities of the two\nbits of string are equal and opposite: ${\\dot z}^i = \\pm \\{\n0, -0.26, -0.26\\}$. The tangent vectors are\n$dz^i\/d\\zeta = \\{-0.85, \\pm 0.26, \\mp 0.26 \\}$ (an angle of\n$\\sim 0.94$ rad). The acceleration vectors are\n${\\ddot z}^i = \\{0,-0.41, 0.41\\}$. 
The gravitational radiation\nemitted by each piece of string should be similar.\n\nThe net effect of the crossing is small. The bumps at the collision\npoints on the full scale worldsheet representations in Fig.\n\\ref{fig:Force-K}\nare difficult to distinguish\nat all. Here we look in more detail near those crossings.\n\nComponent $F^t$ is displayed in a small two-dimensional patch about\nthe crossing point in the top left plot of Fig. \\ref{fig:Cross1-Kibble}. As $\\tau \\to 0$\nat fixed $\\zeta=\\pi\/2$ (the vertical line of small dots in\nthe picture) $F^t$ diverges $\\propto \\tau^{-1}$ with change\nof sign as $\\tau$ passes through zero. The results at $\\pm \\tau$ are nearly\nequal and opposite. We find that the sum of the two components at $\\pm \\tau$ is\nnearly constant as $|\\tau| \\to 0$, numerically approximately $\\propto\n|\\tau|^{0.05}$. As $\\zeta$ varies near $\\pi\/2$ (fixed\n$\\tau=0$, the horizontal line of small dots)\nthe results on each side of the crossing point are finite and the zero\nvalue is not exactly at $\\delta \\zeta=0$. These results are quite\nsensitive to the size of $\\delta \\tau$ since the surface changes\nsign (from plus to minus infinity) near $\\delta \\tau=0$.\n\\begin{widetext}\n\t~\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\linewidth]{figures\/KibbleCrossingCorrected1.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/KibbleCrossingCorrected2.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/KibbleCrossingCorrected3.pdf}\n\\includegraphics[width=0.45\\linewidth]{figures\/KibbleCrossingCorrected4.pdf}\n\\caption{\\label{fig:Cross1-Kibble} The four components ($t$, $x$, $y$, and $z$) of\n $\\ln |{\\cal F}^t_{\\rm conf}|$ for a small patch of the worldsheet about the crossing\n point.\n Four quadrants in $\\{\\delta \\tau, \\delta \\zeta\\}$ are displayed in\n log absolute value coordinates (oriented to match the usual linear\n system about $\\{\\tau,\\zeta\\}=\\{0,\\pi\/2\\}$).\n Blue (orange) are negative (positive) values. The small dots have\n been added for $\\delta \\tau=0$ and $\\delta \\zeta=0$.\n}\n\\end{figure}\n\\end{widetext}\n\nWe have\nformally fit the power law variation for $\\delta \\tau$ near $\\tau=0$\nand for $\\delta \\zeta$ near $\\zeta = \\pi\/2$.\nTable \\ref{tab:Kibblescaling} summarizes the slopes extracted for\n$F^\\alpha$ along fixed $\\tau$ and fixed $\\zeta$ coordinates passing\nexactly through the crossing point. Some components vary such that an\nintegral over just one side would yield a divergent quantity, however,\nthe symmetric sum is always integrable.\n\\begin{table}[H]\n \\begin{center}\n\\begin{tabular}{|c|cc|cc|}\n \\hline\n Component & \\multicolumn{2}{c|}{$\\zeta$ varies; $\\tau=0$} & \\multicolumn{2}{c|}{$\\tau$ varies; $\\zeta=\\pi\/2$} \\\\\n & One side & Net & One side & Net \\\\\n \\hline\n $F^t$ & - & - & $-1.0$ & $0.04$ \\\\\n $F^x$ & $-1.1$ & $-0.04$ & $-0.04$ & $-0.03$ \\\\\n $F^y$ & $-1.1$ & $-0.06$ & $-1.0$ & $-0.05$ \\\\\n $F^z$ & $-1.1$ & $-0.05$ & $-0.96$ & $-0.04$ \\\\\n \\hline\n\\end{tabular}\n\\end{center}\n \\caption{Kibble loop divergent behavior at the crossing point\n $\\tau=0$ and $\\zeta=\\pi\/2$. The columns labeled $\\tau=0$ give $\\nu$\n for the scaling of the force component along the string near the\n crossing point $\\propto |\\zeta-\\pi\/2|^\\nu$. Likewise,\n the ones labeled $\\zeta=\\pi\/2$ describe the scaling\n $\\propto |\\tau|^\\nu$ for times before and after the appearance of\n the crossing. 
``One side'' means the scaling of the absolute value\n (approximately the same on each side);\n ``Net'' means the scaling of the symmetric sum of points on opposite sides\n of the crossing point. Small $\\nu$ results are numerically close to\n finite limits but in any case are integrable.\n The ``-'' indicates values that are not\n well-defined because of a zero-crossing at $\\tau=0$.}\n\\label{tab:Kibblescaling}\n\\end{table}\n\nThe plot of $F^t$ shows it to be\napproximately a product of individual functions of\n$\\tau$ and $\\zeta$. The other force components are more complicated.\nComponents $F^x$, $F^y$ and $F^z$ are shown in small two-dimensional\npatches in the other plots of Fig.~\\ref{fig:Cross1-Kibble}. The small dots show the\nvariation along the coordinate axes.\n\nIn summary, we find the effect of the string crossing leads to integrable forces\nfor all components in this example.\n\n\\subsection{Comparisons to radiated quantities evaluated in the far field}%\n\nAs an additional consistency check on our results, we compute the radiated energy and\ncompare it to the energy dissipated through the local self-force. The latter can be computed using the change in the 4-momentum of the string,\n\\begin{eqnarray}\n \\Delta P^\\mu & = & \\mu \\int {\\cal F}^{\\mu}_{\\rm conf} d\\zeta d\\tau,\n\\end{eqnarray}\nwhere the conformal-gauge force ${\\cal F}^{\\mu}_{\\rm conf}$ is given by Eq.~\\eqref{eq:trad-self-force}\nand where the region of integration is given by the fundamental period of the worldsheet: $-L\/2 \\le\n\\zeta < L\/2$ and $0 \\le \\tau < L\/2$. In practice, we evaluate the integrand at\n$N \\sim 10^4$ equally spaced\npoints on a two dimensional surface and approximate the integral\nas the sum of the function values at the points\ntimes the worldsheet area per point. This is a low accuracy method that is\nsuited to the occurrence of steep spikes at various points on the worldsheet; we estimate the accuracy of the results to be within 1-5\\%.\n\nThe work done on the string by the self-force lowers its energy, $\\Delta P^0 < 0$, and should be\nexactly balanced by the flux carried to infinity, which must be $-\\Delta P^0 > 0$. We separately\ncompute this flux to infinity using the formalism of Allen and Ottewill\n\\cite{2001PhRvD..63f3507A}, in\nwhich the stress energy tensor is a sum of individual Fourier components of the undamped string.\nFor each overtone $n$ we numerically integrate $dP^{(n)}\/d\\Omega$ over the sphere. We compute $N$\novertones and then fit and sum a power law extrapolation for $N \\to \\infty$. This yields a result\nwhich is approximately 1\\% accurate.\n\n\nTable \\ref{tab:energy} compares the results of the two calculations. We find good numerical\nagreement within the expected accuracy of the result.\n\\begin{table}[H]\n \\begin{center}\n\\begin{tabular}{|c|cc|cc|}\n \\hline\n Case & far field & far field & direct & direct \\\\\n & (numerical) & (analytic) & (numerical) & (analytic) \\\\\n \\hline\n ACO & 122.537 & 122.53 & 125.515 & 122.53 \\\\\n \\hline\n KT & 349.677 & & 355.643 & \\\\\n $\\alpha=0, \\phi=\\pi\/6$& & & & \\\\\n \\hline\n KT & 241.321 & & 238.259 & \\\\\n $\\alpha=1\/2, \\phi=0$& & & & \\\\\n \\hline\n GV & 131.304 & & 132.486 & \\\\\n \\hline\n Kibble & 137.6 & & 135.428 & \\\\\n \\hline\n\\end{tabular}\n\\end{center}\n\\caption{The total energy loss integrated over one fundamental period of the loop oscillation in\n the center of mass frame of the loop. 
The far field is calculated with the formalism of Allen and Ottewill \\cite{2001PhRvD..63f3507A}.\n The analytic result for the ACO loop in the far field is from\nRef.~\\cite{Allen:1994bs}. The numerical results for the\ndirect energy loss integrate ${\\cal F}^{\\mu}_{\\rm conf}$ over the world sheet according to the\ndescription in this section. The analytic results for the direct energy loss for the ACO loop is\nfrom Ref.~\\cite{Anderson:2005qu}}\n\\label{tab:energy}\n\\end{table}\n\n\\section{Discussion}\nWe have developed a general method for calculating the self-force due to gravitational\nperturbations of a lightly damped string loop. Our approach breaks up the calculation into smooth\nintegrals over the retarded image of the loop plus boundary terms. The latter are used to take\naccount of the special contributions when the source and field point coincide and when\ndiscontinuities are visible on the past image of the loop. These may be from kinks or cusps or crossings (spacetime points where\nintercommutation events might occur). Our methodology is quite general and can be used for arbitrary choices of\nspacetime and worldsheet gauges.\n\nThere are some existing calculations of the gravitational self-force for cosmic strings\n\\cite{Quashnock:1990wv,Anderson:2005qu,Wachter:2016rwc}, however these results have all relied on\nsimplifications or approximations that do not hold in general. For example, although Quashnock and\nSpergel \\cite{Quashnock:1990wv} used a numerical approach not too different from ours, they do not\ndiscuss any of the various distributional-type contributions (near kinks or the field point; they\ndo, however, discuss transitions between integration variables) that we have studied in detail\nhere. Our results\\footnote{For example see the third column in Table \\ref{tab:comparison} for the\nGV case, but we also performed the same check for the other configurations discussed in this\npaper.} suggest that their use of a pair of null coordinates sidesteps the issue of a contribution\nfrom the field point. The issue of contributions from kinks, however, remains unaddressed.\nAdditionally, given the limited computational resources available at the time, their numerical\ncalculations were restricted to a low-resolution study in a restricted set of cases. In the case of\nRefs.~\\cite{Anderson:2005qu,Wachter:2016rwc}, approximations based on simple string configurations\nwere made which, while reasonable in some cases, do not fully capture the behavior for generic\nstring configurations.\n\nOur numerical calculations have passed a number of validation checks including: comparisons with\nexisting analytical results; comparisons of the integrated power radiated over a fundamental period\nagainst the flux of gravitational energy measured at large distances; and cross-comparisons of\nseveral semi-independent methods for computing the self-force. From the perspective of\ncomputational efficiency it is clear that the [1D] methods based on either integration with respect\nto $\\zeta$ or a Quashnock-Spergel type mixed integration with respect to $\\zeta_+$ and $\\zeta_-$\nare the best choice. The other [1D] methods (using a single null coordinate or over-retardation)\ninevitably encounter large numerical cancellations nearby the field point, making them significantly\nmore computationally demanding. 
The [2D] method is even worse, and is orders of magnitude more\ndemanding than any of the [1D] methods.\n\nWhile the preferred [1D] methods work well in general, there are certain cases where they also run\ninto numerical challenges. Since the self-force diverges as one approaches kinks and cusps (in a\nway such that the displacement of the worldsheet is finite), it is unavoidable\nthat one would encounter numerically divergent quantities at one point or another. In this work, we\nhandled the issue of divergences in a brute-force manner by simply evaluating quantities to a\nsufficiently high accuracy that they can be canceled to leave a residual which is still accurately\ndetermined. While this approach works reasonably well, the calculation could be made significantly\nmore efficient by developing an alternative approach to the problem. One promising possibility is\nto borrow from results in the point particle case\n\\cite{Vega:Detweiler:2008,Barack:Golbourn:2007,Wardell:2015ada}, where it was found that the full\nmetric perturbation can be separated into a so-called ``puncture field'' that captures the\nsingular behavior plus a ``residual field'' that is more numerically well-behaved. In the point\nparticle case, by basing the puncture field on an approximation to the singular field proposed by\nDetweiler and Whiting \\cite{Detweiler-Whiting-2003}, one can work directly with the residual field\nas it is entirely responsible for driving the motion. In the case of a cosmic string we do not yet\nhave an analogous Detweiler-Whiting type singular field. One could attempt to derive one following\nthe matched expansion methods of Ref.~\\cite{Pound:2009}. Alternatively, even without such a\nderivation, a local analysis of the type done in Sec.~\\ref{sec:local-expansion} may yield an\napproximation to the singular behavior of the metric perturbation which leaves a numerically\nwell-behaved residual field, and which is sufficiently simple that its integrated contribution to\nthe motion can be determined analytically. Indeed, a preliminary analysis for the ACO string (where\nthe self-force is known analytically) suggests that exactly this approach will work well, and has\nbeen found to significantly improve the accuracy with which the integrated motion can be\ndetermined, even in the presence of a divergent self-force at the kinks.\n\nThe ultimate goal of our program is to evolve cosmic strings under the influence of the\nself-force, and to study the consequences of backreaction on cusp formation, smoothing of kinks,\nand other astrophysically relevant features of cosmic strings. This paper represents the first step\nin such an endeavor. We can now compute the self-force for an arbitrary cosmic string with a\nreasonable level of accuracy and with the freedom to arbitrarily choose coordinates and gauges which\nare most suitable for evolution. The next step is to implement this in a numerical evolution\nscheme. This will be presented in a future work.\n\n\\section*{Acknowledgments}\nConcurrently with our\nown work, Blanco-Pillado, Olum and Wachter did related\nwork on cosmic string back-reaction; that paper and this one were\nsubmitted at the same time. As far as we know, the results are in\nagreement where they overlap.\nWe thank J.J. Blanco-Pillado, David Nichols, Ken Olum,\nAdrian Ottewill, Joe Polchinski, Leo Stein, Peter Taylor, Henry Tye, Jeremy Wachter\nand Yang Zhang for helpful conversations.\nB.W. and D.C. 
gratefully acknowledge support from the John Templeton\nFoundation New Frontiers Program under Grant No.~37426 (University of\nChicago) - FP050136-B (Cornell University). D.C. acknowledges that this\nmaterial is based upon work supported by the National Science\nFoundation under Grant No. 1417132. EF acknowledges the support of\nthe National Science Foundation under Grant Nos. 1404105 and 1707800.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Supplementary material}\n\nWe have created three different random subsets of MegaFace for each of the distractor sizes (10,100,1K,10K,100K,1M) and reran all the algorithms with each of the set for each of the probe sets (FaceScrub and FGNET). We present the results in the below figures. Set \\#1 is the one presented in the main paper. \n\n\\begin{figure*}\n\t\\begin{tabular}{ccc}\n\t\t\\includegraphics[width=.17\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\\includegraphics[width=.31\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_rank-1_cmbnd_set_1.pdf} &\n\t\t\\includegraphics[width=.31\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_rank-1_cmbnd_set_1.pdf} \\\\\n\t\t\\includegraphics[width=.17\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\\includegraphics[width=.31\\linewidth, valign=t]{plots16\/graphs\/set_2\/facescrub_rank-1_cmbnd_set_2.pdf} &\n\t\t\\includegraphics[width=.31\\linewidth, valign=t]{plots16\/graphs\/set_2\/fgnet_rank-1_cmbnd_set_2.pdf} \\\\\n\t\t\\includegraphics[width=.17\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\\includegraphics[width=.31\\linewidth, valign=t]{plots16\/graphs\/set_3\/facescrub_rank-1_cmbnd_set_3.pdf} &\n\t\t\\includegraphics[width=.31\\linewidth, valign=t]{plots16\/graphs\/set_3\/fgnet_rank-1_cmbnd_set_3.pdf} \\\\\n\t\t& {\\footnotesize (a) FaceScrub + MegaFace} & {\\footnotesize (b) FGNET + MegaFace}\\\\\n\t\t\\vspace{.05in}\n\t\\end{tabular}\n\t\\caption{Sets 1--3 (each row represents a different random gallery set). The MegaFace challenge evaluates identification and verification as a function of increasing number of gallery distractors (going from 10 to 1 Million). We use two different probe sets (a) FaceScrub--photos of celebrities, (b) FGNET--photos with a large variation in age per person. We present rank-1 identification of state of the art algorithms that participated in our challenge. On the left side of each plot is current major benchmark LFW scale (i.e., 10 distractors, see how all the top algorithms are clustered above 95\\%). On the right is mega-scale (with a million distractors). Observe that rates drop with increasing numbers of distractors, even though the probe set is fixed, and that algorithms trained on larger sets (dashed lines) generally perform better. 
}\n\t\\label{fig:rank1}\n\\end{figure*}\n\t\n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_1000000_distractors_cmbnd_verif_set_1.pdf} & \n\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_10000_distractors_cmbnd_verif_set_1.pdf} \\\\\n\t\t\t&{\\footnotesize (a) FaceScrub + 1M} & {\\footnotesize (b) FaceScrub + 10K} \\\\\n\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_1000000_distractors_cmbnd_verif_set_1.pdf} & \n\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_10000_distractors_cmbnd_verif_set_1.pdf} \\\\\n\t\t\t&{\\footnotesize (c) FGNET + 1M} & {\\footnotesize (d) FGNET + 10K} \\\\\n\t\t\\end{tabular}\n\t\t\\caption{\\textbf{Verification (random gallery set 1)} performance with (a,c) 1 Million and (b,d) 10K distractors on both probe sets. Note the performance at low false accept rates (left side of each plot). }\\label{fig:verf}\n\t\t\\label{fig:dataset_size_roc} \n\t\\end{figure*}\n\t\t\n\t\t\t\\begin{figure*}\n\t\t\t\t\\centering\n\t\t\t\t\\begin{tabular}{ccc}\n\t\t\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_2\/facescrub_1000000_distractors_cmbnd_verif_set_2.pdf} & \n\t\t\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_2\/facescrub_10000_distractors_cmbnd_verif_set_2.pdf} \\\\\n\t\t\t\t\t&{\\footnotesize (a) FaceScrub + 1M} & {\\footnotesize (b) FaceScrub + 10K} \\\\\n\t\t\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_2\/fgnet_1000000_distractors_cmbnd_verif_set_2.pdf} & \n\t\t\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_2\/fgnet_10000_distractors_cmbnd_verif_set_2.pdf} \\\\\n\t\t\t\t\t&{\\footnotesize (c) FGNET + 1M} & {\\footnotesize (d) FGNET + 10K} \\\\\n\t\t\t\t\\end{tabular}\n\t\t\t\t\\caption{\\textbf{Verification (random gallery set 2)} performance with (a,c) 1 Million and (b,d) 10K distractors on both probe sets. Note the performance at low false accept rates (left side of each plot). 
}\\label{fig:verf}\n\t\t\t\t\\label{fig:dataset_size_roc} \n\t\t\t\\end{figure*}\n\t\t\t\n\t\t\t\n\t\t\t\t\\begin{figure*}\n\t\t\t\t\t\\centering\n\t\t\t\t\t\\begin{tabular}{ccc}\n\t\t\t\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_3\/facescrub_1000000_distractors_cmbnd_verif_set_3.pdf} & \n\t\t\t\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_3\/facescrub_10000_distractors_cmbnd_verif_set_3.pdf} \\\\\n\t\t\t\t\t\t&{\\footnotesize (a) FaceScrub + 1M} & {\\footnotesize (b) FaceScrub + 10K} \\\\\n\t\t\t\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_3\/fgnet_1000000_distractors_cmbnd_verif_set_3.pdf} & \n\t\t\t\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_3\/fgnet_10000_distractors_cmbnd_verif_set_3.pdf} \\\\\n\t\t\t\t\t\t&{\\footnotesize (c) FGNET + 1M} & {\\footnotesize (d) FGNET + 10K} \\\\\n\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t\\caption{\\textbf{Verification (random gallery set 3)} performance with (a,c) 1 Million and (b,d) 10K distractors on both probe sets. Note the performance at low false accept rates (left side of each plot). }\\label{fig:verf}\n\t\t\t\t\t\\label{fig:dataset_size_roc} \n\t\t\t\t\\end{figure*}\n\t\t\t\t\n\t\t\t\t\n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\begin{tabular}{cccc}\n\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_1000000_distractors_cmbnd_ident_set_1.pdf} & \n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_10000_distractors_cmbnd_ident_set_1.pdf} & \n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_rank-10_cmbnd_set_1.pdf}\\\\ \n\t\t\t&{\\footnotesize (a) FaceScrub + 1M} & {\\footnotesize (b) FaceScrub + 10K} & {\\footnotesize (c) FaceScrub + rank-10} \\\\\n\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_1000000_distractors_cmbnd_ident_set_1.pdf} & \n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_10000_distractors_cmbnd_ident_set_1.pdf} &\n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_rank-10_cmbnd_set_1.pdf}\\\\\n\t\t\t&\t\t{\\footnotesize (d) FGNET + 1M} & {\\footnotesize (e) FGNET + 10K} & {\\footnotesize (f) FGNET + rank-10} \n\t\t\\end{tabular}\n\t\t\n\t\t\\caption{\\textbf{Identification (random gallery set 1)} performance for all methods with (a,d) 1M distractors and (b,e) 10K distractors, and (c,f) rank-10 for both probe sets. Fig.~\\ref{fig:teaser} also shows rank-1 performance as a function of number of distractors on both probe sets. 
}\n\t\t\\label{fig:dataset_size_cmc}\n\t\\end{figure*}\n\n\n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\begin{tabular}{cccc}\n\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_2\/facescrub_1000000_distractors_cmbnd_ident_set_2.pdf} & \n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_2\/facescrub_10000_distractors_cmbnd_ident_set_2.pdf} & \n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_2\/facescrub_rank-10_cmbnd_set_2.pdf}\\\\ \n\t\t\t&{\\footnotesize (a) FaceScrub + 1M} & {\\footnotesize (b) FaceScrub + 10K} & {\\footnotesize (c) FaceScrub + rank-10} \\\\\n\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_2\/fgnet_1000000_distractors_cmbnd_ident_set_2.pdf} & \n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_2\/fgnet_10000_distractors_cmbnd_ident_set_2.pdf} &\n\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_2\/fgnet_rank-10_cmbnd_set_2.pdf}\\\\\n\t\t\t&\t\t{\\footnotesize (d) FGNET + 1M} & {\\footnotesize (e) FGNET + 10K} & {\\footnotesize (f) FGNET + rank-10} \n\t\t\\end{tabular}\n\t\t\n\t\t\\caption{\\textbf{Identification (random gallery set 2)} performance for all methods with (a,d) 1M distractors and (b,e) 10K distractors, and (c,f) rank-10 for both probe sets. Fig.~\\ref{fig:teaser} also shows rank-1 performance as a function of number of distractors on both probe sets. }\n\t\t\\label{fig:dataset_size_cmc}\n\t\\end{figure*}\n\t\n\t\t\\begin{figure*}\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{cccc}\n\t\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_3\/facescrub_1000000_distractors_cmbnd_ident_set_3.pdf} & \n\t\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_3\/facescrub_10000_distractors_cmbnd_ident_set_3.pdf} & \n\t\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_3\/facescrub_rank-10_cmbnd_set_3.pdf}\\\\ \n\t\t\t\t&{\\footnotesize (a) FaceScrub + 1M} & {\\footnotesize (b) FaceScrub + 10K} & {\\footnotesize (c) FaceScrub + rank-10} \\\\\n\t\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_3\/fgnet_1000000_distractors_cmbnd_ident_set_3.pdf} & \n\t\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_3\/fgnet_10000_distractors_cmbnd_ident_set_3.pdf} &\n\t\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_3\/fgnet_rank-10_cmbnd_set_3.pdf}\\\\\n\t\t\t\t&\t\t{\\footnotesize (d) FGNET + 1M} & {\\footnotesize (e) FGNET + 10K} & {\\footnotesize (f) FGNET + rank-10} \n\t\t\t\\end{tabular}\n\t\t\t\n\t\t\t\\caption{\\textbf{Identification (random gallery set 3)} performance for all methods with (a,d) 1M distractors and (b,e) 10K distractors, and (c,f) rank-10 for both probe sets. Fig.~\\ref{fig:teaser} also shows rank-1 performance as a function of number of distractors on both probe sets. 
}\n\t\t\t\\label{fig:dataset_size_cmc}\n\t\t\\end{figure*}\n\n\n\n\\section{Introduction}\n\n\nFace recognition has seen major breakthroughs in the last couple of years, with new results by multiple groups \\cite{schroff2015facenet,taigman2014deepface,sun2015deepid3} surpassing human performance on the leading \nLabeled Faces in the Wild (LFW) benchmark \\cite{huang2007labeled} and achieving near perfect results. \n\nIs face recognition solved? \nMany applications require accurate identification at {\\em planetary scale}, i.e., finding the best matching face in a database of billions of people. This is truly like finding a needle in a haystack. Face recognition algorithms did not deliver when the police were searching for the suspect of the Boston marathon bombing~\\cite{klontz2013case}. Similarly, do you believe that current cell-phone face unlocking programs will protect you against anyone on the planet who might find your lost phone? These and other face recognition applications require finding the true positive match(es) with negligible false positives. They also require training and testing on datasets that contain vast numbers of different people.\n\nIn this paper, we introduce the {\\em MegaFace} dataset and benchmark to evaluate and encourage development of face recognition algorithms at scale. The goal of MegaFace is to evaluate the performance of current face recognition algorithms with up to a million {\\em distractors}, i.e., up to a million people who are not in the test set. Our key objectives for assembling the dataset are that 1) it should contain a million photos \\textbf{``in the wild''}, i.e., with unconstrained pose, expression, lighting, and exposure, 2) be broad rather than deep, i.e., \\textbf{contain many different people} rather than many photos of a small number of people, and most importantly 3) it will be \\textbf{publicly available}, to enable benchmarking and distribution within the research community. \n\n\n\n\n\n\n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[width=1\\linewidth]{plots16\/datasets}\n\t\\end{center}\n\t\\caption{Representative sample of recent face recognition datasets (in addition to LFW). Current public datasets include up to 10K unique people, and a total of 500K photos. Several companies have access to orders of magnitude more photos and subjects, these however are subject to privacy constraints and are not public. MegaFace (this paper) includes 1M photos of more than 690K unique subjects, collected from Flickr (from creative commons photos), and is available publicly.}\n\t\\label{fig:datasets}\n\\end{figure*}\n\nWhile recent face datasets have leveraged celebrity photos crawled from the web, such datasets have been limited to a few thousand unique individuals; it is challenging to find a million or more unique celebrities.\nInstead, we leverage \nYahoo's recently released database of Flickr photos \\cite{thomee2015new}. The Yahoo dataset includes 100M \\textbf{creative commons} photographs and hence can be released for research. \nWhile these photos are unconstrained and do not target face recognition research per se, they capture a large number of faces. Our algorithm samples the Flickr set searching for faces while optimizing for large number of unique people via analysis of Flickr user IDs and group photos. MegaFace includes 1 Million photos of more than 690,000 unique subjects. 
\n\nThe MegaFace challenge evaluates how face recognition algorithms perform with a very large number of ``distractors,'' i.e., individuals that are not in the probe set.\nMegaFace is used as the gallery; the two probe sets we use are FaceScrub \\cite{ng265data} and FG-NET \\cite{cootes2008fg,kemelmacher2014illumination}. \t\nWe address fundamental questions and introduce the following key findings (Fig.~\\ref{fig:teaser}):\n\\begin{itemize}\n\n\\item{\\bf How well do current face recognition algorithms scale?} \nAlgorithms that achieve above 95\\% performance on LFW (equivalent of 10 distractors in our plots), achieve 35-75\\% identification rates with 1M distractors. Baselines (Joint Bayes and LBP) while achieving reasonable results on LFW drop to less than 10\\%. \n\n\n\\item \\vspace{-0.1in} {\\bf Is the size of training data important?} \nWe observe that algorithms that were trained on larger sets (top two are FaceNet that was trained on more than 500M photos of 10M people, and FaceN that was trained on 18M of 200K people) tend to perform better at scale. Interestingly, however, FaceN (trained on 18M) compares favorably to FaceNet (trained on 500M) on the FaceScrub set. \n\n\n\\item \\vspace{-0.1in} {\\bf How does age affect recognition performance?} We found that the performance with 10 distractors for FGNET as a probe set is lower than for FaceScrub, and the drop off spread is much bigger (Fig.~\\ref{fig:teaser} (b)) . A deeper analysis also reveals that children (below age 20) are more challenging to recognize than adults, possibly due to training data availability, and that larger gaps in age (between gallery and probe) are similarly more challenging to recognize. These observations become evident by analyzing at large scale. \n\n\n\\item \\vspace{-0.1in} {\\bf How does pose affect recognition performance?} Recognition drops for larger variation in pose between matching probe and gallery, and the effect is much more significant at scale. \n\n\n\n\n\\end{itemize}\n\n\n\nIn the following sections we describe how the MegaFace database was created, explain the challenge, and describe the outcomes.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Related Work}\n\n\\subsection{Benchmarks}\n\nEarly work in face recognition focused on controlled datasets where subsets of lighting, pose, or facial expression were kept fixed, e.g., \\cite{georghiades1997yale, gross2010multi}. With the advance of algorithms, the focus moved to unconstrained scenarios with a number of important benchmarks appearing, e.g., FRGC, Caltech Faces, and many more (see \\cite{huang2007labeled}, Fig. 3, for a list of all the datasets), and thorough evaluations \\cite{grother2010report,zhao2003face}. A big challenge, however, was to collect photos of large number of individuals. \n\nLarge scale evaluations were previously performed on \\textit{controlled} datasets (visa photographs, mugshots, lab captured photos) by NIST \\cite{grother2010report}, and report recognition results of 90\\% on 1.6 million people. However, these results are not representative of photos in the wild.\n\nIn 2007, Huang et al. \\cite{huang2007labeled} created the benchmark Labeled Faces in the Wild (LFW). The LFW database includes 13K photos of 5K different people. It was collected by running Viola-Jones face detection \\cite{viola2004robust} on Yahoo News photos. 
LFW captures celebrities photographed under unconstrained conditions (arbitrary lighting, pose, and expression) and it has been an amazing resource for the face analysis community (more than 1K citations). Since 2007, a number of databases appeared that include larger numbers of photos per person (LFW has 1620 people with more than 2 photos), video information, and even 3D information, e.g., \\cite{kumar2009attribute, beveridge2013challenge, yi2014learning, wolf2011face, chen2012bayesian, ng265data}. However, LFW remains the leading benchmark on which all state of the art recognition methods are evaluated and compared. Indeed, just in the last year a number of methods (11 methods at the time of writing this paper), e.g., \\cite{schroff2015facenet,sun2014deeply,sun2015deepid3,taigman2014deepface,taigman2014web} reported recognition rates above 99\\%+ \\cite{hu2015face} (better than human recognition rates estimated on the same dataset by \\cite{kumar2011describable}). The perfect recognition rate on LFW is 99.9\\% (it is not 100\\% since there are 5 pairs of photos that are mislabeled), and current top performer reports 99.77\\%. \n\n\n\n\n\\subsection{Datasets}\n\nWhile, some companies have access to massive photo collections, e.g., Google in \\cite{schroff2015facenet}\ntrained on 200 Million photos of 8 Million people (and more recently on 500M of 10M), these datasets are not available to the public and were used only for training and not testing. \n\nThe largest public data set is\nCASIA-WebFace \\cite{yi2014learning} that includes 500K photos of 10K celebrities, crawled from the web. While CASIA is a great resource, it contains only 10K individuals, and does not have an associated benchmark (i.e., it's used for training not testing). \n\nOrtiz et al. \\cite{ortiz2014face} experimented with large scale identification from Facebook photos assuming there is more than one gallery photo per person. Similarly Stone et al. \\cite{stone2008autotagging} show that social network's context improves large scale face recognition. Parkhi et al. \\cite{parkhi2015deep} assembled a dataset of 2.6 Million of 2600 people, and used it for training (testing was done on the smaller scale LFW and YouTube Faces~\\cite{wolf2011face}). Wang et al. \\cite{wang2015face} propose a hierarchical approach on top of commercial recognizer to enable fast search in a dataset of 80 million faces. Unfortunately, however, none of these efforts have produced publicly available datasets or public benchmarks.\nNote also that \\cite{parkhi2015deep} and \\cite{wang2015face} are contemporaneous, as their arxiv papers appeared after ours \\cite{miller2015megaface}.\n\n\n\n\\subsection{Related Studies}\n\nAge-invariant recognition is an important problem that has been studied in the literature, e.g., \\cite{chen14cross,li2011discriminative}. \nFG-NET \\cite{cootes2008fg} includes 975 photos of 82 people, each with several photos spanning many ages. More recently, \nChen et al. \\cite{chen14cross} created a dataset of 160k photos of 2k celebrities across many ages. However, most modern face recognition algorithms have not been evaluated for age-invariance. We attempt to rectify this by including an FG-NET test (augmented with a million distractors) in our benchmark.\n\nOther recent studies have considered both identification as well as verification results on LFW \\cite{best2014unconstrained, taigman2014web, sun2014deeply, sun2015deepid3}. \nFinally, \nBest-Rowden et al. 
\\cite{best2014unconstrained} performed an interesting Mechanical Turk study to evaluate human recognition rates on LFW and YouTube Faces datasets. They report that humans are better than computers when recognizing from videos due to additional cues, e.g., temporal information, familiarity with the subject (celebrity), workers' country of origin (USA vs. others), and also discovered errors in labeling of YouTube Faces via crowdsourcing. In the future, we will use this study's useful conclusions to help annotate MegaFace and create a training set in addition to the currently provided distractor set. \n\n\n\n\n\n\\section{Assembling MegaFace}\n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[width=1\\linewidth]{plots16\/megaface_stats}\n\t\\end{center}\n\t\\caption{MegaFace statistics. We present randomly selected photographs (with provided detections in red), along with distributions of Flickr tags, GPS locations, and camera types. We also show the pose distribution (yaw and roll), number of faces per photograph, and number of faces for different resolutions (compared to LFW in which faces are approximately 100x100).}\n\t\\label{fig:data_stats} \n\\end{figure*}\n\nIn this section, we provide an overview of the MegaFace dataset, how it was assembled, and its statistics.\nWe created MegaFace to evaluate and drive the development of face recognition algorithms that work at scale.\nAs motivated in Section 1, we sought to create a public dataset, free of licensing restrictions, that captures photos taken with unconstrained imaging conditions, and with close to a million unique identities.\nAfter exploring a number of avenues for data collection, we decided to leverage Yahoo's 100M Flickr set \\cite{thomee2015new}. Yahoo's set was not created with face analysis in mind, however, it includes a very large number of faces and satisfies our requirements.\n\n\n\n\n\\textbf{Optimizing for large number of unique identities.} \nOur strategy for maximizing the number of unique identities is based on two techniques: 1) drawing photos from many different Flickr users---there are 500K unique user IDs---and 2) assuming that two or more faces appear in the same photo, they are likely different identities. Note that these assumptions do not need to be infallible, as our goal is to produce a very diverse distractor set--it is not a problem if we have a small number of photos of the same person.\nOur algorithm for detecting and downloading faces is as follows. We generated a list of images and user IDs in a round-robin fashion,\nby going through each of the 500K users and selecting the first photo with a face larger than $50\\times 50$ and adding it to the dataset. If the photo contains multiple faces above that resolution, we add them all, given that they are different people with high probability. We then repeated this process (choosing the second, then the third, etc. photo from each user), until a sufficient number of faces were assembled.\nBased on our experiments face detection can have up to 20\\% false positive rate. Therefore, to ensure that our final set includes a million faces, the process was terminated once $1,296,079$ faces were downloaded. Once face detection was done, we ran additional stricter detection, and removed blurry faces. We assembled a total of $690,572$ faces in this manner that have a high probability of being unique individuals.\nWhile not guaranteed, the remaining $310$K in our dataset likely also contain additional unique identities. 
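\nTo make the procedure explicit, the following schematic sketch (ours) captures the round-robin sampling loop. The helpers \\texttt{photos\\_of\\_user} and \\texttt{detect\\_faces} are hypothetical placeholders for the Flickr metadata lookup and the face detector, and the text leaves open whether the per-round index counts all of a user's photos or only photos containing a qualifying face; the sketch simply indexes the raw photo stream.\n\\begin{verbatim}\n# Schematic sketch (ours) of the round-robin face sampling.\n# photos_of_user() and detect_faces() are hypothetical helpers;\n# detect_faces() is assumed to return bounding boxes (x, y, w, h).\nTARGET = 1296079      # faces downloaded before termination\nMIN_SIZE = 50         # minimum face size in pixels (50x50)\n\ndef sample_faces(user_ids, photos_of_user, detect_faces):\n    collected = []\n    round_idx = 0\n    while len(collected) < TARGET:\n        progress = False\n        for user in user_ids:\n            photos = photos_of_user(user)\n            if round_idx >= len(photos):\n                continue              # this user is exhausted\n            boxes = [b for b in detect_faces(photos[round_idx])\n                     if b[2] >= MIN_SIZE and b[3] >= MIN_SIZE]\n            # Faces co-occurring in one photo are treated as\n            # distinct identities, so all of them are kept.\n            collected.extend((photos[round_idx], b) for b in boxes)\n            progress = True\n            if len(collected) >= TARGET:\n                break\n        if not progress:\n            break                     # every user is exhausted\n        round_idx += 1                # move to each user's next photo\n    return collected\n\\end{verbatim}\nA subsequent, stricter detection pass and blur filtering, as described above, would then prune this raw set.\n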
Figure~\\ref{fig:data_stats} presents a histogram of number of faces per photo. \n\n\n\n\n\\textbf{Face processing.} We downloaded the highest resolution available per photo. The faces are detected using the HeadHunter\\footnote{{\\scriptsize \\url{http:\/\/markusmathias.bitbucket.org\/2014_eccv_face_detection\/}}} algorithm by Mathias et al. \\cite{Mathias2014Eccv}, which reported state of the art results in face detection, and is especially robust to a wide range of head poses including profiles. We crop detected faces such that the face spans 50\\% of the photo height, thus including the full head (Fig.~\\ref{fig:data_stats}). We further estimate 49 fiducial points and yaw and pitch angles, as computed by the IntraFace\\footnote{{\\scriptsize \\url{http:\/\/www.humansensing.cs.cmu.edu\/intraface\/}}} landmark model \\cite{xiong2013supervised}.\n\n\\textbf{Dataset statistics.}\nFigure~\\ref{fig:data_stats} presents MegaFace's statistics: \n\\begin{itemize}\n\t\\item \\vspace{-.1in} Representative photographs and bounding boxes. Observe that the photographs contain people from different countries, gender, variety of poses, glasses\/no glasses, and many more variations. \n\t\\item \\vspace{-.1in} Distribution of Flickr tags that accompanied the downloaded photos. Tags range from 'instagram' to 'wedding,' suggesting a range of photos from selfies to high quality portraits (prominence of '2013' likely due to timing of when the Flickr dataset was released).\n\t\\item \\vspace{-.1in} GPS locations demonstrate photos taken all over the world.\n\t\\item \\vspace{-.1in} Camera types dominated by DSLRs (over mobile phones), perhaps correlated with creative commons publishers, as well as our preference for higher resolution faces.\n\t\\item \\vspace{-.1in} 3D pose information: more than 197K of the faces have yaw angles \\textit{larger} than $\\pm 40$ degrees. Typically unconstrained face datasets include yaw angles of \\textit{less} than $\\pm 30$ degrees.\n\t\\item \\vspace{-.1in} Number of faces per photo, to indicate the number of group photos. \n\t\\item \\vspace{-.1in} Face resolution: more than 50\\% (514K) of the photos in MegaFace have resolution more than 40 pixels inter-ocular distance (40 IOD corresponds to 100x100 face size, the resolution in LFW).\n\\end{itemize}\n\\vspace{-.1in} We believe that this dataset is extremely useful for a variety of research areas in recognition and face modeling, and we plan to maintain and expand it in the future. In the next section, we describe the MegaFace challenge. \n\n\n\n\n\n\n\\section{The MegaFace Challenge}\n\n\nIn this section, we describe the challenge and evaluation protocols. \nOur goal is to test performance of face recognition algorithms with up to a million distractors, i.e., faces of unknown people.\nIn each test, a {\\em probe} image is compared against a {\\em gallery} of up to a million faces drawn from the Megaface dataset.\n\n\n\\textbf{Recognition scenarios} The first scenario is identification: given a probe photo, and a gallery containing at least one photo of the same person, the algorithm rank-orders all photos in the gallery based on similarity to the probe. \nSpecifically, the probe set includes $N$ people; for each person we have $M$ photos. We then test each of the $M$ photos (denote by $i$) per person by adding it the gallery of distractors and use each of the other $M-1$ photos as a probe. 
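\nTo spell the protocol out, a minimal sketch (ours, not the released evaluation code) of the per-identity identification test is given below; features are assumed to be precomputed $d$-dimensional vectors and, as in the challenge, comparisons use the $L_2$ distance.\n\\begin{verbatim}\n# Sketch (ours) of the identification protocol: photo i of a test\n# identity joins the gallery of distractors, the remaining M-1\n# photos act as probes, and we record the rank of the correct\n# gallery image under L2 distance.\nimport numpy as np\n\ndef identification_ranks(person_feats, distractor_feats):\n    # person_feats: (M, d) array, distractor_feats: (D, d) array\n    M = person_feats.shape[0]\n    ranks = []\n    for i in range(M):\n        gallery = np.vstack([person_feats[i:i+1], distractor_feats])\n        for j in range(M):\n            if j == i:\n                continue              # the other M-1 photos are probes\n            dist = np.linalg.norm(gallery - person_feats[j], axis=1)\n            order = np.argsort(dist)\n            # gallery row 0 is the correct image of this person\n            ranks.append(int(np.where(order == 0)[0][0]) + 1)\n    return ranks\n\ndef rank_k_rate(ranks, k):\n    # fraction of probes whose correct match appears within rank k\n    return sum(r <= k for r in ranks) \/ float(len(ranks))\n\\end{verbatim}\n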
Results are presented with Cumulative Match Characteristics (CMC) curves-- the probability that a correct gallery image will be chosen for a random probe by rank $=K$.\n\n\n\nThe second scenario is verification, i.e., a pair of photos is given and the algorithm should output whether the person in the two photos is the same or not. To evaluate verification we computed all pairs between the probe dataset and the Megaface distractor dataset.\nOur verification experiment has in total 4 billion negative pairs. We report verification results with ROC curves; this explores the trade off between falsely accepting non-match pairs and falsely rejecting match pairs. \n\nUntil now, verification received most of the focus in face recognition research since it was tested by the LFW benchmark \\cite{huang2007labeled}. Recently, a number of groups, e.g., \\cite{best2014unconstrained, taigman2014web, sun2014deeply, sun2015deepid3} also performed identification experiments on LFW. The relation between the identification and verification protocols was studied by Grother and Phillips \\cite{grother2004models} and DeCann and Ross \\cite{decann2012can}. In our challenge, we evaluate both scenarios with an emphasis on very large number of distractors. For comparison, testing identification on LFW is equivalent to 10 distractors in our challenge. \n\n\n\\textbf{Probe set.} MegaFace is used to create a gallery with a large number of distractors. For the probe set (testing known identities), we use two sets: \n\\begin{enumerate}\n\t\\item The FaceScrub dataset \\cite{ng265data}, which includes 100K photos of 530 celebrities, is available online.\n\n\tFaceScrub has a similar number of male and female photos (55,742 photos of 265 males and 52,076 photos of 265 females) and a large variation across photos of the same individual which reduces possible bias, e.g., due to backgrounds and hair style \\cite{kumar2011describable}, that may occur in LFW. For efficiency, the evaluation was done on a subset of FaceScrub which includes 80 identities (40 females and 40 males) by randomly selecting from a set of people that had more than 50 images each (from which 50 random photos per person were used). \n\t\\item The FG-NET aging dataset \\cite{cootes2008fg,kemelmacher2014illumination}: it includes 975 photos of 82 people. For some of the people the age range in photos is more than 40 years. \n\\end{enumerate}\n\n\n\n\\textbf{Evaluation and Baselines.} Challenge participants were asked to calculate their features on MegaFace, full FaceScrub, and FGNET. We provided code that runs identification and verification on the FaceScrub set. After the results were submitted by all groups we re-ran the experiments with FaceScrub and 3 different random distractor sets per gallery size. We further ran the FGNET experiments on all methods\\footnote{Google's FaceNet was ran by the authors since their features could not be uploaded due to licensing conditions} and each of the three random MegaFace subsets per gallery size. The metric for comparison is $L_2$ distance. Participants were asked not to train on FaceScrub or FGNET. As a baseline, we implemented two simple recognition algorithms: 1) comparison by LBP \\cite{ahonen2006face} features--it achieves 70\\% recognition rates on LFW, and uses no training, 2) a Joint Bayesian (JB) approach represents each face as the sum of two Gaussian variables\n$x = \\mu + \\epsilon$ where $\\mu$ is identity and $\\epsilon$ is inter-personal variation. 
To determine whether two faces, $x_1$ and $x_2$, belong to the same identity, we calculate $P(x_1, x_2 | H_1)$ and $P(x_1, x_2 | H_2)$, where $H_1$ is the hypothesis that the two faces are the same and $H_2$ is the hypothesis that the two faces are different. These distributions can also be written as normal distributions, which allows for efficient inference via a log-likelihood test. The JB algorithm was trained on the CASIA-WebFace dataset \\cite{yi2014learning}.\n\n\n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\linewidth]{plots16\/trainingsize}\n\t\t\\caption{Number of training photos and unique people used by each participating method. }\\label{fig:trainingsize}\n\t\\end{figure*}\n\n\n\\section{Discussion}\n\nAn ultimate face recognition algorithm should perform with billions of people in a dataset. While testing with billions is still challenging, we have taken the first step and created a benchmark of a million faces. MegaFace is available to researchers and we presented results from state-of-the-art methods. Our key discoveries are: 1) algorithms' performance degrades given a large gallery even though the probe set stays fixed; 2) testing at scale allows us to uncover differences across algorithms (which at smaller scale appear to perform similarly); 3) age differences between probe and gallery remain particularly challenging for recognition. We will keep maintaining and updating the MegaFace benchmark online, as well as create more challenges in the future. Below are topics we think are exciting to explore. First, we plan to release all the detected faces from the 100M Flickr dataset. Second, companies like Google and Facebook have a head start due to the availability of enormous amounts of data. We are interested in leveling the playing field and providing large \\textbf{training} data, assembled from our Flickr data, to the research community. Our dataset will be separated into testing and training sets for fair evaluation and training. Finally, the significant number of high resolution faces in our Flickr database \nwill also allow us to explore resolution in more depth. Currently, this is a mostly untouched topic in the face recognition literature due to lack of data. \n\n\\vspace{.1in}\n\n\\textbf{Acknowledgments} We are grateful to the early challenge participants who allowed a very quick turnaround and provided great feedback on the baseline code and data. We thank Samsung and Google for the generous support of this research. \n \n\\section{Human Performance}\nWhile a lot of effort goes into developing automated face recognition algorithms, the human visual system still seems to perform better, especially at very low false accept rates. \nIt is therefore very interesting to estimate the human recognition rate on the same sets of photos on which algorithms operate. \nVerification rates of humans were previously estimated on controlled data, i.e., photos taken in laboratory conditions \\cite{o2007face, o2012comparing, phillips2010frvt,adler2007comparing}, and more recently on unconstrained photo collections: \\cite{kumar2011describable} evaluated verification rates on the LFW dataset, and \\cite{best2014unconstrained, chen2012dictionary} estimated verification rates on videos. Human studies on unconstrained photos, e.g., \\cite{kumar2011describable}, fused human judgments by averaging ratings over participants, which helped remove outliers. \nUntil recently, none of the algorithms was able to outperform humans on LFW. 
Moreover, all the previous human experiments were done at a small scale and did not evaluate identification. It is of great interest, however, to discover how humans perform at \\textit{scale} to provide a lower bound for machine performance. \n\nOne of the key contributions of this paper is an extremely large (about 4 million pairs of faces) human study on Mechanical Turk that evaluates human performance on unconstrained photos (Flickr), and specifically targets identification rather than verification. \n\nWe have performed the following experiment on Mechanical Turk. Since all the identities in the FaceScrub dataset (our test set) are celebrities, human recognition rates may be biased due to familiarity with the person \\cite{sinha2006face}. We therefore sorted all the names in FaceScrub according to the number of results that Google image search returns per person as a measure of popularity. We then chose the 50 most popular people and the 50 least popular people as our human experiment test set. Each person had 100 photos; we randomly selected one photo as the probe image and used the remaining 99 as gallery images. We then produced 99 positive pairs per person. For the distractor set, for each input photo we randomly selected 10K photos from our MegaFace dataset, and produced 10K pairs of the probe with each of the distractors. This results in a total of $100\\times (99+10K)$ pairs. Since the number of positive pairs in this setting is very low, we introduced additional positive pairs by randomly pairing gallery images that are not the probe. This is to remove possible bias in human rating, i.e., if most pairs are negative, people may miss the positive ones. We presented 10 pairs per page to turkers and asked them to click on all the pairs that contained the same person. We paid 1 cent per page of 10 pairs.\n\nOnce this experiment was done, we collected the pairs that received at least one click and created a sorting experiment.\nWe selected only the pairs that include the probe photos, and created a set of possible matches per probe.\nWe generated triples of a probe and two matches, presented 10 triples per page, and asked which of the two matches is the person in the probe. Generally, to get a full ranking of all images, the number of possible triples per probe is $n^2$, where $n$ is the number of matches from round 1. For efficiency (and lower cost) we only determined the position of each gallery photo relative to the distractor images. That is, our experiment determined the number of distractors that would be ranked above and below each gallery image, but not the ordering within those groups. Three different people worked on every pair\/triple of photos in both experiments. We paid 7 cents for each page of 10 triples. 
The total cost of this experiment was \\$10,000.\n\n\\begin{figure}\n\t\\begin{center}\n \\caption*{Identification}\n \\begin{tabular}{ c | c | c }\n \\hline\n & Rank-1 & Rank-10 \\\\ \\hline\n All & 23.9 & 91.13 \\\\ \\hline\n Males & 23.35 & 89.98 \\\\ \\hline\n Females & 24.01 & 92.5 \\\\ \\hline\n Less Popular & 22.7 & 90.9 \\\\ \\hline\n More Popular & 25.1 & 91.3 \\\\\n \\hline\n \\end{tabular}\n\\end{center}\n\n \\begin{center}\n \\caption*{Verification}\n \\begin{tabular}{ c | c | c }\n \\hline\n & TAR @ $2 \\times 10^{-3}$ & TAR @ $5 \\times 10^{-2}$ \\\\ \\hline\n All & 41.6 & 76.5 \\\\ \\hline\n Males & 43.7 & 79.0 \\\\ \\hline\n Females & 39.4 & 73.9 \\\\ \\hline\n Less Popular & 39.4 & 74.7 \\\\ \\hline\n More Popular & 43.6 & 78.2 \\\\\n \\hline\n \\end{tabular}\n\\end{center}\n\n\t\\caption{Human recognition rates (verification and identification).\n Our experiments also show that humans perform better on more popular people and are better at the verification task when comparing males.}\n\t\\label{fig:human_id}\n\\end{figure}\n\n\\section{Evaluation Methodology}\n\n\\section{Recognition Methods}\n\nWe selected four recognition algorithms that represent very different types of techniques. \n\n\\paragraph{Basic LBP comparison.} We have implemented a comparison based on Local Binary Pattern (LBP) descriptors. \\red{give details about what were the sizes, cell size, how many, chi square distance, ...}. This approach achieves 70\\% recognition rates on LFW and uses no training. \n\n\\paragraph{Joint Bayesian.} The Joint Bayesian model represents each face as the sum of two Gaussian variables\n$$x = \\mu + \\epsilon$$ where $\\mu$ represents identity and $\\epsilon$ represents inter-personal variation. To determine whether two faces, $x_1$ and $x_2$, belong to the same identity, we calculate $P(x_1, x_2 | H_1)$ and $P(x_1, x_2 | H_2)$, where $H_1$ is the hypothesis that the two faces are the same and $H_2$ is the hypothesis that the two faces are different. These distributions can also be written as normal distributions, which allows for efficient inference via a log-likelihood test. The JB model was trained on the CASIA-WebFace dataset \\cite{yi2014learning}.\n\n\\paragraph{Commercial software by VisionLabs.} \nVisionLabs achieved \\red{xxx} recognition rates on LFW and is trained on outside data \\red{???}. \n\n\\paragraph{Google's FaceNet.} Google FaceNet \\cite{schroff2015facenet} is the most recent and highest performing of several Deep Learning algorithms applied to the LFW benchmark. \nUnlike DeepFace and DeepID, which have a bottleneck layer and are optimized by minimizing cross-entropy, FaceNet learns an embedding such that the extracted features are directly comparable using the Euclidean distance.\n It is trained on 290 million photos of 1.8 million people. 
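\n\nTo make the evaluation protocol concrete, the sketch below outlines how rank-1 identification and verification at a fixed false accept rate can be computed from extracted features using the $L_2$ distance, as in our challenge. This is an illustration only, not the released evaluation code; the array names and shapes (\\texttt{probe\\_feats}, \\texttt{gallery\\_feats}, \\texttt{distractor\\_feats}) are hypothetical.\n\\begin{verbatim}\n# Illustration only (not the released evaluation code): rank-1\n# identification and verification at a fixed FAR, both based on L2\n# distances between feature vectors.  All array names are hypothetical.\nimport numpy as np\n\ndef l2(a, b):\n    # pairwise Euclidean distances between rows of a and rows of b\n    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))\n\ndef rank1_identification(probe_feats, probe_ids,\n                         gallery_feats, gallery_ids, distractor_feats):\n    # The gallery holds known faces plus distractors; a probe counts as\n    # correct if its nearest gallery face has the same identity.\n    feats = np.vstack([gallery_feats, distractor_feats])\n    ids = np.concatenate([gallery_ids,\n                          -np.ones(len(distractor_feats), dtype=int)])\n    nearest = ids[l2(probe_feats, feats).argmin(axis=1)]\n    return float((nearest == probe_ids).mean())\n\ndef verification_tar(probe_feats, probe_ids, distractor_feats, far=1e-6):\n    # The acceptance threshold is set on probe-vs-distractor (negative)\n    # pairs so that a fraction far of them is accepted; TAR is then the\n    # fraction of same-identity (positive) pairs below that threshold.\n    neg = l2(probe_feats, distractor_feats).ravel()\n    thr = np.quantile(neg, far)\n    pos = []\n    for pid in np.unique(probe_ids):\n        same = probe_feats[probe_ids == pid]\n        d = l2(same, same)\n        pos.append(d[np.triu_indices(len(same), 1)])\n    pos = np.concatenate(pos)\n    return float((pos <= thr).mean())\n\\end{verbatim}\nIn the challenge itself the distractor features come from MegaFace, the probe features come from FaceScrub or FGNET, and the computation is repeated for each gallery size and each of the three random distractor sets.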
\n \n\\section{Results}\n\t\n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_1000000_distractors_cmbnd_verif_set_1.pdf} & \n\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_10000_distractors_cmbnd_verif_set_1.pdf} \\\\\n\t\t\t&{\\footnotesize (a) FaceScrub + 1M} & {\\footnotesize (b) FaceScrub + 10K} \\\\\n\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_1000000_distractors_cmbnd_verif_set_1.pdf} & \n\t\t\t\\includegraphics[width=0.27\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_10000_distractors_cmbnd_verif_set_1.pdf} \\\\\n\t\t\t&{\\footnotesize (c) FGNET + 1M} & {\\footnotesize (d) FGNET + 10K} \\\\\n\t\t\\end{tabular}\n\t\t\\caption{\\textbf{Verification} performance with (a,c) 1 Million and (b,d) 10K distractors on both probe sets. Note the performance at low false accept rates (left side of each plot). }\\label{fig:verf}\n\t\t\\label{fig:dataset_size_roc} \n\t\\end{figure*}\n\t\t\nThis section describes the results and analysis of the challenge. Our challenge was released on Sep 30, 2015. Groups were given three weeks to finish their evaluations. More than 100 groups registered to participate. We present results from 5 groups that uploaded all their features by the deadline. We continue to maintain the challenge and data--currently 20 more groups are working on their submissions. \n\t\n\t\\textbf{Participating algorithms.} In addition to the baseline algorithms (LBP and Joint Bayesian), we present results of the following methods (some provided more than one model):\n\t\\begin{enumerate}\n\t\t\\item \\vspace{-.1in}Google's FaceNet: achieves 99.6\\% on LFW, was trained on more than 500M photos of 10M people (newer version of \\cite{schroff2015facenet}).\t\n\t\t\\item \\vspace{-.1in}FaceAll (Beijing University of Posts and Telecommunications) was trained on 838K photos of 17K people, and provided two types of features. \n\t\t\\item\\vspace{-.1in} NTechLAB.com (FaceN algorithm): provided two models (small and large)--the small model was trained on 494K photos of 10K people, the large one on more than 18M photos of 200K people. \n\t\t\\item \\vspace{-.1in} BareBonesFR (University group): was trained on 365K photos of 5K people. \n\t\t\\item \\vspace{-.1in} 3DiVi.com: was trained on 240K photos of 5K people. \n\t\t\\end{enumerate}\n\t\\vspace{-.1in}Figure~\\ref{fig:trainingsize} summarizes the models, training sizes (240K-500M photos, 5K-10M people), and availability of the training data. Below we describe all the experiments and key conclusions. 
\n\t\n\n\t\n\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\n\t\\begin{figure}\n\t\t\\includegraphics[width=1\\linewidth]{plots16\/rank1numbers.pdf}\n\t\t\\caption{Rank-1 identification results (in\\%) with 1M distractors on the two probe sets.}\n\t\t\\label{fig:rank1numbers}\n\t\\end{figure}\n\t\t\t\t\n\t\t\t\t\n\t\t\t\n\t\t\t\t\t\\begin{figure*}\n\t\t\t\t\t\t\\centering\n\t\t\t\t\t\t\\begin{tabular}{cccc}\n\t\t\t\t\t\t\t\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_1000000_distractors_cmbnd_ident_set_1.pdf} & \n\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_10000_distractors_cmbnd_ident_set_1.pdf} & \n\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/facescrub_rank-10_cmbnd_set_1.pdf}\\\\ \n\t\t&{\\footnotesize (a) FaceScrub + 1M} & {\\footnotesize (b) FaceScrub + 10K} & {\\footnotesize (c) FaceScrub + rank-10} \\\\\n\t\t\t\t\t\t\t\t\t\t\t\t\\includegraphics[width=.15\\linewidth, valign=t]{plots16\/ll.pdf} & \n\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_1000000_distractors_cmbnd_ident_set_1.pdf} & \n\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_10000_distractors_cmbnd_ident_set_1.pdf} &\n\t\t\t\t\t\t\t\t\t\\includegraphics[width=0.25\\linewidth, valign=t]{plots16\/graphs\/set_1\/fgnet_rank-10_cmbnd_set_1.pdf}\\\\\n\t\t\t\t\t\t\t&\t\t{\\footnotesize (d) FGNET + 1M} & {\\footnotesize (e) FGNET + 10K} & {\\footnotesize (f) FGNET + rank-10} \n\t\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t\t\n\t\t\t\t\t\t\\caption{\\textbf{Identification} performance for all methods with (a,d) 1M distractors and (b,e) 10K distractors, and (c,f) rank-10 for both probe sets. Fig.~\\ref{fig:teaser} also shows rank-1 performance as a function of number of distractors on both probe sets. }\n\t\t\t\t\t\t\\label{fig:dataset_size_cmc}\n\t\t\t\t\t\\end{figure*}\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\t\\begin{figure*}\n\t\t\t\t\t\\centering\n\t\t\t\t\t\\begin{tabular}{ccccc}\n\t\t\t\t\t\t\\includegraphics[width=0.18\\linewidth]{plots16\/age_barebones_1000_rank1.pdf}\t &\t\t\t\t\\includegraphics[width=0.18\\linewidth]{plots16\/age_3divi_1000_rank1.pdf} &\n\t\t\t\t\t\t\\includegraphics[width=0.18\\linewidth]{plots16\/age_faceall_norm_1000_rank1.pdf} &\n\t\t\t\t\t\t\\includegraphics[width=0.18\\linewidth]{plots16\/age_ntech_1000_rank1.pdf} & \n\t\t\t\t\t\t\t\\includegraphics[width=0.18\\linewidth]{plots16\/age_facenet_1000_rank1.pdf}\n\t\t\t\t\t\t\\\\\n\t\t\t\t\t\t\\includegraphics[width=0.18\\linewidth]{plots16\/age_barebones_1000000_rank1.pdf} &\n\t\t\t\t\t\t\\includegraphics[width=0.18\\linewidth]{plots16\/age_3divi_1000000_rank1.pdf} &\n\t\t\t\t\t\t\\includegraphics[width=0.18\\linewidth]{plots16\/age_faceall_norm_1000000_rank1.pdf} &\n\t\t\t\t\t\t\\includegraphics[width=0.18\\linewidth]{plots16\/age_ntech_1000000_rank1.pdf} &\n \\includegraphics[width=0.18\\linewidth]{plots16\/age_facenet_1000000_rank1.pdf}\n\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t\n\t\t\t\t\t\\caption{Analysis of rank-1 identification with respect to varying \\textbf{ages} of gallery and probe. Columns represent five algorithms, rows 1K and 1M distractors. X-axis represents a person's age in the gallery photo and Y-axis age in the probe. 
The colors represent identification accuracy going from 0 (blue), meaning that none of the true pairs were matched, to 1 (red), meaning that all possible combinations of probe and gallery were matched, per probe and gallery age. Lower scores on the left and bottom indicate worse performance on children, and higher scores along the diagonal indicate that methods are better at matching across small age differences.}\n\t\t\t\t\t\\label{fig:age}\n\t\t\t\t\\end{figure*}\n\t\t\t\t\n\t\t\t\n\t\t\t\t\\begin{figure}\n\t\t\t\t\t\\begin{center}\n\t\t\t\t\t\t\\includegraphics[width=0.32\\linewidth]{plots16\/pose2_3divi_1000_rank1.pdf}\n\t\t\t\t\t\t\\includegraphics[width=0.32\\linewidth]{plots16\/pose2_faceall_norm_1000_rank1.pdf}\n\t\t\t\t\t\t\\includegraphics[width=0.32\\linewidth]{plots16\/pose2_facenet_1000_rank1.pdf}\\\\\n\t\t\t\t\t\t\\includegraphics[width=0.32\\linewidth]{plots16\/pose2_3divi_1000000_rank1.pdf}\n\t\t\t\t\t\t\\includegraphics[width=0.32\\linewidth]{plots16\/pose2_faceall_norm_1000000_rank1.pdf}\n\t\t\t\t\t\t\\includegraphics[width=0.32\\linewidth]{plots16\/pose2_facenet_1000000_rank1.pdf}\n\t\t\t\t\t\\end{center}\n\t\t\t\t\t\\caption{Analysis of rank-1 identification with varying \\textbf{poses} of gallery and probe, for three algorithms. Top: 1K distractors, Bottom: 1M distractors.\n\t\t\t\t\tThe colors represent identification accuracy going from 0 (blue) to 1 (red), where 0 means that none of the true pairs were matched, and 1 means that all possible combinations of probe and gallery were matched per probe and gallery pose. White color indicates combinations of poses that did not exist in our test set. We can see that evaluation at scale (bottom) reveals large differences in performance that are not visible at smaller scale (top): frontal poses and smaller pose differences are easier for identification. }\n\t\t\t\t\t\\label{fig:pose}\n\t\t\t\t\\end{figure}\n\t\t\t\t\n\n\t\t\t\t\\textbf{Verification results.} Fig.~\\ref{fig:verf} shows results of the verification experiment for our two probe sets: (a) and (b) show results on FaceScrub, and (c) and (d) on FGNET. We present results of one random fixed set of distractors per gallery size (see the other two in the supplementary). \n\t\t\t\t\n\t\t\t\tWe see that, for FaceScrub, at lower false accept rates the performance of algorithms drops by about 40\\% on average. FaceNet and FaceN lead, dropping by only about 15\\%. Interestingly, FaceN, which was trained on 18M photos, is able to achieve results comparable to FaceNet, which was trained on 500M. Striving to perform well at low false accept rates is important with large datasets. Even though the chance of a false accept on the small benchmark is acceptable, it does not scale to even moderately sized galleries. Results on LFW are typically reported at the equal error rate, which implies a false accept rate of 1\\%-5\\% for top algorithms, while for a large set like MegaFace only a FAR of $10^{-5}$ or $10^{-6}$ is meaningful. \n\t\t\t\t\t\n\t\t\t\t\tFor FGNET the drop in performance is striking--about 60\\% for everyone but FaceNet, the latter achieving impressive performance across the board. One factor may be the type of training used by different groups (celebrities vs. photos across ages, etc.).\n\t\t\t\t\n\t\t\t\tVerification rate stays similar when scaling up the gallery, e.g., compare (a) and (b). 
The intuition is that the verification rate is normalized by the size of the dataset: if a probe face is incorrectly matched to 1\\% of the faces in a small gallery, then, assuming a uniform distribution of the data, the rate stays the same as the gallery grows, and so in a gallery of a million faces one can expect about 10,000 incorrect matches at the same false accept rate (FAR). \n\t\t\t\t\n\t\t\\textbf{Identification results.} In Fig.~\\ref{fig:dataset_size_cmc} we show the performance with respect to different ranks, i.e., rank-1 means that the correct match got the best score from the whole database, rank-10 means that the correct match is within the first 10 matches, etc. Panels (a,b,c) show performance for the FaceScrub dataset and (d,e,f) for FGNET. We observe that rates drop for all algorithms as the gallery size gets larger. This is visualized in Fig.~\\ref{fig:teaser}; the actual accuracies are in Fig.~\\ref{fig:rank1numbers}. The curves also suggest that when evaluated on more than 1M distractors (e.g., 100M), rates will be even lower. Testing on FGNET \\textbf{at scale} reveals a dramatic performance gap. All algorithms perform much worse, except for FaceNet, whose performance is similar to its results on FaceScrub. \n\t\t\n\t\n\t\t\n\t\t\\textbf{Training set size.} Dashed lines in all plots represent algorithms that were trained on data larger than 500K photos and 20K people. We can see that these generally perform better than the others. \n\t\t\n\t\t\\begin{figure}\n\t\t\t\\begin{tabular}{c}\n\t\t\t\t\\includegraphics[width=1\\linewidth]{plots16\/facenet_fgnet_truepos_10dist} \\\\\n\t\t\t{\\small \t(a) true positives}\\\\\n\t\t\t\t\\includegraphics[width=1\\linewidth]{plots16\/facenet_fgnet_falseneg_10dist}\\\\\n\t\t\t{\\small (b) false negatives}\n\t\t\t\\end{tabular}\n\t\t\t\\caption{Example pairs from FGNET using the top-performing FaceNet with 10 distractors. \n\t\t\tEach consecutive left-right pair of images is the same person.\n\t\t\tAll algorithms match better with smaller age differences. }\n\t\t\t\\label{fig:agematches}\n\t\t\\end{figure}\n\n\t\n\t\\textbf{Age.} Evaluating performance using FGNET as a probe set also reveals a major drop in performance for most algorithms when attempting to match across differences in age. We present a number of results: Fig.~\\ref{fig:age} shows differences in performance with varying age across gallery and probe. Each column represents a different algorithm; rows present results for 1K and 1M distractors. Red colors indicate a higher identification rate, blue a lower rate. We make two key observations: 1) algorithms perform better when the difference in age between gallery and probe is small (along the diagonal), and 2) adults are more accurately matched than children, at scale. \n\tFig.~\\ref{fig:agematches} shows examples of matched pairs (true positives and false negatives) using FaceNet and 10 distractors. Notice that false negatives have a bigger age gap relative to true positives. It is impressive, however, that the algorithm was able to match these and many other true positives, given the variety in lighting, pose, and quality of the photos in addition to age changes. \n\n\t\t\t\n\t\t\\textbf{Pose.} Fig.~\\ref{fig:pose} evaluates recognition error as a function of the difference in yaw between the probe and gallery. The results are normalized by the total number of pairs for each pose difference. We can see that recognition accuracy depends strongly on pose and this difference is revealed more prominently when evaluated at scale. 
The top row shows results of three different algorithms (representative of the others) with 1K distractors. Red colors indicate that identification is very high and mostly independent of pose. However, once evaluated at scale (bottom row) with 1M distractors, we can see that the variation across algorithms as well as poses is more dramatic. Specifically, similar poses are identified better, and more frontal poses (center of the circle) are easier to recognize. \n\n\n\\section{Supplementary}\n\n\t\\begin{figure*}\n\t\t\\begin{center}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_1\/facescrub_1000000_distractors_cmbnd_ident_set_1.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_1\/facescrub_1000000_distractors_cmbnd_verif_set_1.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_1\/facescrub_rank-1_cmbnd_set_1.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_1\/fgnet_1000000_distractors_cmbnd_ident_set_1.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_1\/fgnet_1000000_distractors_cmbnd_verif_set_1.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_1\/fgnet_rank-1_cmbnd_set_1.pdf}\n\t\t\t\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_2\/facescrub_1000000_distractors_cmbnd_ident_set_2.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_2\/facescrub_1000000_distractors_cmbnd_verif_set_2.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_2\/facescrub_rank-1_cmbnd_set_2.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_2\/fgnet_1000000_distractors_cmbnd_ident_set_2.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_2\/fgnet_1000000_distractors_cmbnd_verif_set_2.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_2\/fgnet_rank-1_cmbnd_set_2.pdf}\n\t\t\t\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_3\/facescrub_1000000_distractors_cmbnd_ident_set_3.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_3\/facescrub_1000000_distractors_cmbnd_verif_set_3.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_3\/facescrub_rank-1_cmbnd_set_3.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_3\/fgnet_1000000_distractors_cmbnd_ident_set_3.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_3\/fgnet_1000000_distractors_cmbnd_verif_set_3.pdf}\n\t\t\t\\includegraphics[width=0.3\\linewidth]{plots16\/graphs\/set_3\/fgnet_rank-1_cmbnd_set_3.pdf}\n\t\t\t\n\t\t\t\\end{center}\n\t\t\t\\caption{Analysis of rank-1 identification with respect to varying ages of gallery and probe and gallery size. We analyze two methods: (top row) 3divi (bottom row) FaceNet. X-axis represents a person's age in the gallery photo and Y-axis age in the probe. 
The colors represent identification accuracy going from 0 (blue) to 1 (red), where 0 means that none of the true pairs were matched, and 1 means that all possible combinations of probe and gallery were matched per probe and gallery ages. White color indicates combinations of ages that did not exist in our dataset. }\n\t\t\t\\label{fig:age_supp}\n\t\t\t\\end{figure*}\n\t\t\t\n\t\t\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nConsider a bandlimited signal (or field) quantization problem, where the\nsamples are affected by additive independent and identically distributed\n(i.i.d.) Gaussian noise. For example, a spatial signal affected by\nadditive i.i.d.~Gaussian noise has to be sampled using an array of sensors.\nIn a distributed setup, where filtering before sampling is not possible,\nnoise in bandlimited signals can be reduced by statistical averaging of\nindependent noisy samples. In addition, quantization error can be reduced\nby oversampling as well as by increasing the analog-to-digital converter\n(ADC) or quantizer precision. The \\textit{fundamental tradeoff} between\noversampling, quantizer precision, and (statistical) average distortion is\nof interest. \n\nThe tradeoffs between all subsets of these three quantities have been\nstudied in the literature. Tradeoffs for average distortion and\noversampling, additive Gaussian noise and average distortion with\nunquantized samples, and oversampling and quantization have generated a flurry of\nwork (e.g.,\nsee~\\cite{mallatSA2009,pinskerO1980,grayo1987,zoranDLS2007,kumarIRH2011}).\nIn this work, the tradeoff between oversampling, quantizer precision, and\nthe average distortion is of interest.\n\nIf extremely high-precision ADCs are used, then the sample distortion is\nnoise limited. On the other hand, if the lowest-precision single-bit ADCs are\nused, then the sample distortion is limited by quantization. At a\nhigh level, it is expected that the distortion-optimal ADC precision should\nbe `in between' these two extreme cases, in the sense that it should be\nable to resolve the signal up to the noise level. Contrary to this\nintuition, in this work it is shown that a distortion inversely\nproportional to the oversampling above the Nyquist rate is\n\\textit{achievable} with single-bit quantizers. With unquantized (infinite\nprecision) samples, the optimal distortion is \\textit{speculated} to be\ninversely proportional to the oversampling above the Nyquist rate in the\npresence of independent Gaussian noise. Accordingly, the focus of this\nwork is on the quantization of a bandlimited signal affected by additive\nindependent Gaussian noise using single-bit ADCs and oversampling. The key\nresult of this paper is the uncovering of a quantization \\textit{precision\nindifference} principle, which is stated next.\n\n\\textit{Precision indifference principle:} Consider a bounded-dynamic range\nbandlimited signal with samples affected by additive independent Gaussian\nnoise and observed through quantizers. If $N$ is the oversampling ratio,\nwith respect to the Nyquist rate, then the optimal decay law for the maximum\npointwise mean-squared error is $O(1\/N)$, irrespective of the quantizer\nprecision. In other words, for large $N$, the quantizer precision only\naffects the proportionality constant of the distortion.\n\n\\textit{Prior art:} Averaging and other properties of independent random\nvariables are well studied in statistics~\\cite{bickelDM2001}. 
Quantization\nerror can be reduced by oversampling as well as by increasing the ADC\n(quantizer)\nprecision~(see~\\cite{grayo1987,GrayNIT98,zoranDLS2007,kumarIRH2011} for the\nentire range of results). Estimation of square-integrable signals in the\npresence of Gaussian noise was studied by Pinsker~\\cite{pinskerO1980};\nhowever, quantization is not addressed in his work. Signal quantization\nwith additive noise as a dither has been studied by\nMasry~\\cite{masryT1981}, however signal was not assumed to be bandlimited.\nMasry's results give a decay of $O(1\/N^{2\/3})$ for bandlimited signals,\nwhere $N$ is the oversampling above the Nyquist rate; this decay is slower\nthan an $O(1\/N)$ decay that we are after. The sampling of signals defined\non a finite support, while using single-bit quantizers in the presence of\nambient noise, has been also studied \\cite{wangID2009,masryIF2009}.\n\n\\textit{Notation:} The set of bounded signals and the set of finite energy\nsignals will be denoted by ${\\cal L}^\\infty({\\mathbb R})$ and ${\\cal L}^2({\\mathbb R})$, respectively.\nThe signal of interest will be denoted by $g(t)$. For a signal $s(t)$ in\n${\\cal L}^2({\\mathbb R})$, the Fourier transform will be denoted by $\\tilde{s}(\\omega)$.\nThe Fourier transform and its inverse are defined as,\n\\begin{align}\n\\tilde{s}(\\omega) = \\int_{{\\mathbb R}} s(t) \\exp(- j\\omega t)\\mbox{d}t; \\ \\ s(t) =\n\\frac{1}{2\\pi} \\int_{{\\mathbb R}} \\tilde{s}(\\omega) \\exp(j \\omega t)\n\\mbox{d}\\omega. \\nonumber\n\\end{align}\nThe indicator function of a set $A$ is denoted by ${\\mathbbm{1}}(x \\in A)$.\nRandom variables or processes will be denoted by uppercase letters. The\nadditive independent Gaussian noise is denoted by $W(t)$. The set of reals\nand integers will be denoted by ${\\mathbb R}$ and ${\\mathbb Z}$, respectively. The\ncumulative distribution function (cdf) of a Gaussian random variable with\nmean zero and variance $\\sigma^2$ will be denoted by $F(x), x\\in {\\mathbb R}$. The\nconvolution and expectation operation will be denoted by $\\star$ and ${\\mathbb E}$,\nrespectively. It is assumed that all probability models have an underlying\nsample space, sigma-field, and probability measure such that (weighted)\naverages and indicator functions are measurable.\n\n\\textit{Organization:} The mathematical formulation of our sampling problem\nis discussed in Sec.~\\ref{sec:formulation}. Short review of stable\ninterpolation kernels and smoothness properties of associated signals are\ndiscussed in Sec.~\\ref{sec:background}. The discussion on precision\nindifference principle appears in Sec.~\\ref{sec:estimation}. Estimation\nwith perfect samples and single-bit quantized samples are discussed in\nSec.~\\ref{sec:perfectsamples} and Sec.~\\ref{sec:onebit}, respectively.\nConclusions are presented in Sec.~\\ref{sec:conclusions}. To maintain the\nflow of the paper, long proofs appear in the Appendix.\n\n\\section{Problem formulation}\n\\label{sec:formulation}\n\nThe discussion begins with a quick review of a stable bandlimited kernel,\nwhich is essential for stable interpolation in ${\\cal L}^\\infty({\\mathbb R})$ and for\ndefining bandlimited signals. For $\\lambda > 1$ and $a = (\\lambda - 1)\/2$,\nconsider the kernel $\\phi (t)$ that is given by\n\\begin{align}\n\\phi(t) = \\frac{1}{\\pi a t^2} \\sin ((\\pi + a) t) \\sin (a t); \\ \\ \\phi(0) =\n1+ \\frac{a}{\\pi}. 
\\label{eq:phit}\n\\end{align}\nThe kernel decreases sufficiently fast (approximately as $1\/t^2$) and\ntherefore it is absolutely and square integrable. Its Fourier transform is\nillustrated in Fig.~\\ref{fig:stablesincsquare}.\n\\begin{figure}[!htb]\n\\begin{center} \\scalebox{1.0}{\\input{stablesincsquare.pstex_t}} \n\\end{center}\n\\caption{\\label{fig:stablesincsquare} \\sl \\small \\textbf{Stable\ninterpolation filter:} The kernel $\\phi(t) \\leftrightarrow\n\\tilde{\\phi}(\\omega)$ is defined in (\\ref{eq:phit}); this kernel is\nabsolutely integrable and will be used to define bounded bandlimited\nsignals.}\n\\end{figure}\nThis kernel can be used to define the set of bounded bandlimited signals,\nwhich is a subset of the Zakai class of bandlimited\nsignals~\\cite{zakaiB1965}. Consider\n\\begin{align}\n{BL}_{\\mbox{\\footnotesize int}} := \\{g(t) : |g(t)| \\leq 1 \\mbox{ and } g(t) \\star \\phi(t) =\ng(t) \\ \\forall t \\in {\\mathbb R}\\}. \\label{eq:blinterest}\n\\end{align}\nThe above definition ensures that $g(t)$ is continuous everywhere. It is\neasy to verify that the set of bounded bandlimited signals in ${\\cal L}^2({\\mathbb R})$\nwith Fourier spectrum zero outside $[-\\pi, \\pi]$ also belongs to the set\n${BL}_{\\mbox{\\footnotesize int}}$. The set ${BL}_{\\mbox{\\footnotesize int}}$ also includes (almost-surely) any\nsample path of a bounded-dynamic range bandlimited wide-sense stationary\nprocess~\\cite{cambanisMZ1976}. The quantization of bandlimited signals from\nthe set ${BL}_{\\mbox{\\footnotesize int}}$ in the presence of additive independent Gaussian\nnoise is studied in this work. \\textit{The derived results are applicable\nto finite energy bounded bandlimited signals as well as (almost surely) to\nany sample path of a bounded wide-sense stationary bandlimited process.}\n\nThe signal affected by additive noise, $g(t) + W(t)$, is available for\nsampling. It is assumed that $W(t) \\sim {\\cal N}(0, \\sigma^2)$ for all\n$t\\in{\\mathbb R}$. Independence of noise implies that $W(t_1), W(t_2), \\ldots,\nW(t_n)$ for distinct $t_1, t_2, \\ldots, t_n \\in {\\mathbb R}$ are i.i.d.~with ${\\cal\nN}(0, \\sigma^2)$ distribution. The Nyquist rate at which $g(t)$ should be\nsampled for perfect reconstruction is one sample\/second. In the noise-free\nregime, when $\\sigma = 0$, it is sufficient to sample $g(t)$ at the Nyquist\nrate for convergence in ${\\cal L}^\\infty({\\mathbb R})$. In the noise-limited regime, when\n$\\sigma > 0$, the reconstruction based on samples of $g(t)$ will have\ndistortion (statistical mean-squared error). This distortion can be\nreduced by oversampling. Let $N$, a positive integer, be the oversampling\nrate. For any statistical estimate $\\widehat{G}_{\\mbox{\\footnotesize rec}}(t)$ of the signal\n$g(t)$, the maximum pointwise mean-squared error $D_{\\mbox{\\footnotesize rec}}$ is defined as\nthe \\textit{distortion}, i.e.,\n\\begin{align}\nD_{\\mbox{\\footnotesize rec}} := \\sup_{t\\in{\\mathbb R}} D_{\\mbox{\\footnotesize rec}}(t) = \\sup_{t\\in{\\mathbb R}} {\\mathbb E}\\left|\n\\widehat{G}_{\\mbox{\\footnotesize rec}}(t) - g(t) \\right|^2. \\label{eq:distortion}\n\\end{align}\nFor a pointwise-consistent reconstruction, the distortion in\n(\\ref{eq:distortion}) should decrease to zero as the oversampling rate $N$\nincreases to infinity~\\cite{bickelDM2001}. 
Consistent reconstruction of\nsmooth signals with a random dither, in the presence of single-bit\nquantizers, has been obtained in the past~\\cite{masryT1981}; therefore, the\nasymptotic rate of decrease in $D_{\\mbox{\\footnotesize rec}}$ with $N$ is of interest to us.\nDue to finite precision limitations (ADC operation) during acquisition, the\nsignal samples are quantized. Since quantization is a lossy\noperation~\\cite{gershogray}, $D_{\\mbox{\\footnotesize rec}}$ is expected to depend upon the ADC\nprecision employed. As mentioned in Sec.~\\ref{sec:introduction}, it will be\nshown that $D_{\\mbox{\\footnotesize rec}}$ decreases as $O(1\/N)$, irrespective of the sensor\nprecision. Thus, the ADC precision only manifests in the proportionality\nconstant (independent of the oversampling factor $N$) in the optimal\nasymptotic reconstruction distortion.\n\nTo show the proposed precision indifference principle, two extreme cases of\nquantizer precision will be analyzed and their distortions will be compared:\n(i) signal distortion with perfect samples; and (ii) signal distortion with\nsamples quantized using single-bit ADCs. The sampling setups for these two\ncases are illustrated in Fig.~\\ref{fig:bitdestructionblocks}. In\nFig.~\\ref{fig:bitdestructionblocks}(a), the estimator works with infinite\nprecision (unquantized) noisy samples, while in\nFig.~\\ref{fig:bitdestructionblocks}(b), the estimator works with the poorest\nprecision (one-bit) noisy samples. The role of the extra dither noise $W_d(t)$\nwill be explained later in Sec.~\\ref{sec:onebit}. The estimator\n$\\widehat{G}_{\\mbox{\\footnotesize 1-bit}}(t)$ will be designed and its distortion performance\nwill be analyzed in this work.\n\\begin{figure}[!htb]\n\\begin{center} \\scalebox{1.0}{\\input{bitdestructionblocks.pstex_t}} \n\\end{center}\n\\caption{\\label{fig:bitdestructionblocks} \\sl \\small \\textbf{Two extreme\nscenarios of quantization:} In both scenarios the signal $g(t)$ is\nobserved with additive independent Gaussian noise $W(t)$. In (a), the\nestimator works with infinite precision (unquantized) samples $\\{Y(n \\tau),\nn \\in {\\mathbb Z}\\}$. In (b), the estimator works with the poorest precision (one-bit)\nsamples $\\{X(n\\tau) , n \\in {\\mathbb Z}\\}$, where $X(n \\tau) = {\\mathbbm{1}}(Y(n\\tau) \\geq\n0)$.}\n\\end{figure}\n\nBefore we move on to the next section, it should be noted that the kernel\n$\\phi(t)$ and its derivative $\\phi'(t)$ are absolutely integrable. This\nabsolute integrability and square integrability of $\\phi(t)$ can be\ntranslated into the following observations, which will be useful in\nSec.~\\ref{sec:onebit} during the distortion analysis:\n\\begin{align}\nC_{\\phi} & := \\int_{t \\in {\\mathbb R}} |\\phi(t)| \\mbox{d} t < \\infty,\n\\label{eq:integrablephi} \\\\\nC_{\\phi}' & := \\sup_{\\{t_k: t_k \\in \\left[k\/\\lambda, (k+1)\/\\lambda \\right],\nk \\in {\\mathbb Z}\\}} \\sum_{k\\in {\\mathbb Z}} |\\phi'(t_k)| < \\infty,\n\\label{eq:summablephiprime} \\\\\n\\mbox{and } C_{\\phi}'' & := \\sup_{t \\in {\\mathbb R}}\\sum_{k \\in {\\mathbb Z}} \\left| \\phi\n\\left(t - \\frac{k}{\\lambda}\\right) \\right|^2 < \\infty.\n\\label{eq:squaresummablephi}\n\\end{align}\nThe next section will review pertinent mathematical results which will be\nused in later sections.\n\n\\section{Background}\n\\label{sec:background}\n\nThe stable interpolation formula for Zakai sense bandlimited signals is\ndiscussed first. 
The necessity of $W_d$ and the associated variance conditions\non the Gaussian noise (see~Fig.~\\ref{fig:bitdestructionblocks}(b)) are\ndiscussed. If the pointwise error in interpolation is bounded for bounded\nperturbations of the samples, the interpolation is called a \\textit{stable interpolation}.\nThe properties of stable interpolation and its implications for the filtering of\nbounded signals are given at the end of this section.\n\nFrom the interpolation formula for Zakai sense bandlimited signals, the\nsignal of interest $g(t)$ can be perfectly reconstructed from its samples\ntaken at the Nyquist rate. For $g(t) \\in {BL}_{\\mbox{\\footnotesize int}}$, the interpolation\nformula is given by~\\cite[Lemma~3.1]{kumarIRH2011},\n\\begin{align}\ng(t) = \\lambda \\sum_{n \\in {\\mathbb Z}} g \\left(\\frac{n}{\\lambda}\\right) \\phi\n\\left(t - \\frac{n}{\\lambda}\\right), \\label{eq:sincinterp}\n\\end{align}\nwhere the equality holds absolutely, pointwise, and in ${\\cal L}^\\infty({\\mathbb R})$. Thus,\nin the absence of noise, it is sufficient to sample $g(t)$ at a rate of\n$\\lambda$ samples per second (or per meter in the context of spatial\nfields). In the presence of quantization, the reconstruction in\n(\\ref{eq:sincinterp}) is stable in ${\\cal L}^\\infty({\\mathbb R})$. \n\nThe role of $W_d(t)$ in Fig.~\\ref{fig:bitdestructionblocks}(b) will now be\nhighlighted. If the noise standard deviation $\\sigma$ is very small compared to the\ndynamic range of the signal $g(t)$, i.e., $\\sigma \\ll 1$, then the\nsamples ${\\mathbbm{1}}( g(t) + W(t) \\geq 0)$ will not capture small-scale\nlocal variations in $g(t)$. Due to quantization, estimators such as maximum\nlikelihood are expected to be non-linear and their analysis is too complex.\nTo alleviate this issue, if $\\mbox{var}(W(t)) = \\sigma^2$ is very small, then an\nextra additive independent Gaussian dither $W_d(t)$ can be added to ensure\nthat ${\\mathbbm{1}}( g(t) + W(t) + W_d(t) \\geq 0)$ is sufficiently random. It\nis assumed that $W_d(t)$ and $W(t)$ are independent. Such dithering allows\nus to use an analytically tractable reconstruction procedure, which has\n\\textit{order-optimal distortion}. The block diagram for sampling with\none-bit ADCs is illustrated in Fig.~\\ref{fig:bitdestructionblocks}(b). The\ntechnical condition on $\\sigma^2 = \\mbox{var}(W(t)) + \\mbox{var}(W_d(t))$ is stated\nusing the cdf of $W+W_d$. Let $F: {\\mathbb R} \\rightarrow [0,1]$ be the cdf of\n$W+W_d$. Let $f(x)$ be the associated probability density function with\n$f(\\pm C_\\phi) = \\delta$ and $f(0) = \\Delta$. Observe that $\\Delta >\n\\delta$, since $f(x) = \\frac{1}{\\sqrt{2\\pi} \\sigma} \\exp( - x^2\/2\n\\sigma^2)$. It is required that there is a parameter $\\mu > 0$ such that\n\\begin{align}\n\\left( 1 - \\frac{1}{\\sqrt{2} C^2_\\phi} \\right) \\frac{1}{\\delta} < \\mu <\n\\frac{1}{\\Delta}, \\label{eq:variancecondition}\n\\end{align}\nwhere $C_\\phi$ is the constant in (\\ref{eq:integrablephi}). First fix a\n$\\lambda > 1$. Then, $C_\\phi = \\int_{t \\in {\\mathbb R}} |\\phi(t)| \\mbox{d}t >\n\\int_{t \\in {\\mathbb R}} \\phi(t) \\mbox{d} t = \\tilde{\\phi}(0) = 1$. That is,\n$C^2_\\phi \\sqrt{2} > \\sqrt{2} > 1$. Therefore, the lower bound on $\\mu$ in\n(\\ref{eq:variancecondition}) is positive. Next, observe that if $\\sigma$\nis large but fixed, then $\\delta = f(C_\\phi) \\approx f(0) = \\Delta$. Then\n$\\delta$ and $\\Delta$ are close enough and the inequality in\n(\\ref{eq:variancecondition}) can be satisfied. 
In other words, for a fixed\n$\\lambda$ and hence $C_\\phi$, there is a \\textit{finite} number $\\sigma_0$\nfor which (\\ref{eq:variancecondition}) is satisfied for all $\\sigma >\n\\sigma_0$. If $\\mbox{var}(W(t)) < \\sigma_{0}^2$, then $\\mbox{var}(W_d(t)) >\n\\sigma_{0}^{2} - \\mbox{var}(W(t))$ will ensure that $\\mbox{var}(W + W_d) >\n\\sigma_{0}^2$. If $\\mbox{var}(W(t)) \\geq \\sigma_{0}^2$, then the extra dither is\nnot needed. This condition will be used in the distortion analysis in\nSec.~\\ref{sec:onebit}.\n\nFor single-bit estimation, the signal $F(g(t)) - 1\/2$ will be encountered,\nwhere $F:{\\mathbb R} \\rightarrow [0,1]$ is the cumulative distribution function of\nthe stationary noise random variable $W(t) + W_d(t)$. Since $g(t) \\in [-1,\n1]$, and $F(x)$ has a wider support than the dynamic range of signal (i.e.,\n$[-1, 1]$), therefore $F'(x)$ is finite and non-zero for $x \\in [-1,1]$.\nSince $F(0) = 1\/2$ by symmetry, therefore, $F(g(t)) - 1\/2$ is more\nconvenient than $F(g(t))$ to work with. For simplicity of notation, let\n$l(t) = F(g(t)) - 1\/2$. Then $|l(t)| \\leq |F(1)| - 1\/2$, i.e., $l(t)$ is\nbounded. The bound depends only on the noise distribution and the dynamic\nrange of $g(t)$. Finally $|l'(t)| = |F'(g(t)) g'(t)| \\leq |F'(0) 2 \\pi^2|$\nsince $F'(0)$ maximizes $F'(x)$ in $[-1,1]$ and $|g'(t)|\\leq 2 \\pi^2$\n(see~\\cite[Proposition~3.1]{kumarIRH2011}).\n\nThe definition of ${BL}_{\\mbox{\\footnotesize int}}$ involves convolution with a stable\nkernel and convolution will often appear in the context of error\nanalysis. The following short lemma will be quite useful later on.\n\\begin{lemma\n\\label{lemma:propagation}\nLet $p(t)$ be a signal such that $||p||_\\infty$ is finite and $P(t)$\nbe any random process such that $P(t)$ is bounded (i.e., $\\sup_{t \\in\n{\\mathbb R}} {\\mathbb E}(P^2(t))$ is finite). Then,\n\\begin{align}\n||p \\star \\phi ||_\\infty & \\leq C_\\phi ||p||_\\infty,\n\\label{eq:stableerror}\\\\\n\\mbox{and } {\\mathbb E}[( |P(t)| \\star |\\phi(t)|)^2] & \\leq C^2_\\phi \\sup_{t\n\\in {\\mathbb R}} {\\mathbb E}(P^2(t)), \\label{eq:stablevariance}\n\\end{align}\nwhere the convolutions are well defined since $\\phi(t)$ is absolutely\nintegrable.\n\\end{lemma}\n\n\\IEEEproof The proof follows by the definition of convolution and the\ntriangle inequality. We have\n\\begin{align}\n|p(t) \\star \\phi(t)| & = \\left| \\int_{u \\in {\\mathbb R}} p(u) \\phi(t-u)\n\\mbox{d}u \\right|, \\nonumber \\\\\n& \\leq \\int_{u \\in {\\mathbb R}} |p(u)| |\\phi(t-u)| \\mbox{d}u, \\nonumber \\\\\n& \\leq ||p||_\\infty \\int_{u \\in {\\mathbb R}} |\\phi(t - u)| \\mbox{d}u,\n\\nonumber \\\\\n& = C_\\phi ||p||_\\infty. \\nonumber\n\\end{align}\nFor the second moment bound, note that\n\\begin{align}\n& {\\mathbb E}(||P(t)| \\star |\\phi(t)||^2) \\nonumber \\\\\n& = {\\mathbb E}\\left( \\iint_{u, v \\in {\\mathbb R}} |P(u)| |P(v)| |\\phi(t-u)\n|\\phi(t-v)| \\mbox{d}u \\mbox{d}v \\right), \\nonumber \\\\\n& = \\iint_{u, v \\in {\\mathbb R}} {\\mathbb E}( |P(u)| |P(v)|) |\\phi(t-u)| |\\phi(t-v)|\n\\mbox{d}u \\mbox{d}v ,\\nonumber \\\\\n& \\stackrel{(a)}{\\leq} \\sup_{t \\in {\\mathbb R}} {\\mathbb E}(P^2(t)) \\iint_{u, v \\in\n{\\mathbb R}} |\\phi(t-u)| |\\phi(t-v)| \\mbox{d}u \\mbox{d}v, \\nonumber \\\\\n& = C^2_\\phi \\sup_{t \\in {\\mathbb R}} {\\mathbb E}(P^2(t)), \\nonumber \n\\end{align}\nwhere $(a)$ follows by ${\\mathbb E}(2|P(u)| |P(v)|) \\leq {\\mathbb E}(P^2(u) + P^2(v))\n\\leq 2 \\sup_{t} {\\mathbb E}(P^2(t))$. Thus the proof is complete. 
{\\hfill $\\clubsuit$ \\medskip}\n\nThe two extreme scenarios of quantization as depicted in\nFig.~\\ref{fig:bitdestructionblocks} and their distortions will now be\nanalyzed in the next section.\n\n\\section{Estimation of bandlimited signal}\n\\label{sec:estimation}\n\nInterpolation of bandlimited signals with perfect samples is a well\nknown topic~\\cite{mallatSA2009}. Loosely speaking, a bandlimited\nsignal of duration $T$ and bandwidth $\\pi$ has $2\\pi T$ degrees of\nfreedom~\\cite{slepianO1976}. With $N T$ noisy samples of the field\n$g(t) + W(t)$ in duration $T$, the optimal distortion is expected to\nbe $O(1\/N)$.\\footnote{A single bounded constant in additive\nindependent Gaussian noise with $N$ independent readings can be\nestimated up to a distortion of $O(1\/N)$~\\cite{bickelDM2001}.} With\nthis note, sampling schemes with oversampling rate $N$ are designed to\nachieve a distortion of $O(1\/N)$ for sampling $g(t)$.\n\n\\subsection{Estimation with perfect samples}\n\\label{sec:perfectsamples}\n\nA brief review of estimation with perfect samples will be highlighted\nfirst. Optimal minimum mean-squared method can be found in the work of\nPinsker~\\cite{pinskerO1980}. For illustration and to get a distortion\nproportional to $O(1\/N)$, it suffices to use the frame expansion. Let\nthe integer-valued oversampling ratio (above the Nyquist rate) be $N$,\nand $\\tau = 1\/(\\lambda N)$. Then, the samples $\\{Y(n \\tau), n \\in {\\mathbb Z}\n\\}$ are available for the reconstruction of $g(t)$. Using frame\nexpansion or the shift-invariance of bandlimited signals, \n\\begin{eqnarray}\ng(t) &=& \\frac{1}{N} \\sum_{n \\in {\\mathbb Z}} \\lambda g(n \\tau)\\phi(t - n\n\\tau), \\nonumber \\\\\n& = & \\frac{1}{N} \\sum_{i = 0}^{N-1} \\sum_{k \\in {\\mathbb Z}} \\lambda g\\left(\n\\frac{k}{\\lambda} + i \\tau \\right) \\phi\\left(t - \\frac{k}{\\lambda} -\n\\frac{i}{N \\lambda}\\right) ,\\label{eq:frameexpansion}\n\\end{eqnarray}\nwhere the equality holds pointwise and in ${\\cal L}^\\infty({\\mathbb R})$. It must be\nnoted that the basic operation in (\\ref{eq:frameexpansion}) is that of\naveraging; hence, the noise is expected to average out while the\nsignal will be retained. This intuition motivates the following\nestimator for $g(t)$ from noisy data (see\nFig.~\\ref{fig:bitdestructionblocks}(a)). Define\n\\begin{align}\n\\widehat{G}_{\\mbox{\\footnotesize fr}}(t) & := \\frac{1}{N} \\sum_{n \\in {\\mathbb Z}} \\lambda Y(n\n\\tau)\\phi(t - n \\tau), \\label{eq:frameestimate}\\\\\n& = \\frac{1}{N} \\sum_{i = 0}^{N-1} \\sum_{k \\in {\\mathbb Z}} \\lambda [g+W]\n\\left( \\frac{k}{\\lambda} + i \\tau \\right) \\phi\\left(t -\n\\frac{k}{\\lambda} - \\frac{i}{N \\lambda}\\right).\\nonumber\n\\end{align}\nThe distortion of $\\widehat{G}_{\\mbox{\\footnotesize fr}}(t)$ is given by the following\nproposition.\n\\begin{proposition}[Frame estimate with $O(1\/N)$ distortion]\n\\label{prop:mse_unquantized}\nLet $\\widehat{G}_{\\mbox{\\footnotesize fr}}(t)$ in (\\ref{eq:frameestimate}) be an\nestimate for the bandlimited field $g(t)$ corrupted by additive\nindependent Gaussian noise. Let $D_{\\mbox{\\footnotesize fr}}(t) := {\\mathbb E} |\n\\widehat{G}_{\\mbox{\\footnotesize fr}}(t) - g(t)|^2$. Then,\n\\begin{eqnarray}\n\\sup_{t \\in {\\mathbb R}} D_{\\mbox{\\footnotesize fr}}(t) \\leq \\frac{C_{\\phi}'' \\lambda^2\n\\sigma^2}{N}\n\\end{eqnarray}\nwhere the constants $\\sigma^2$ and $C_{\\phi}''$ from\n(\\ref{eq:squaresummablephi}) do not depend on $N$. 
\n\\end{proposition}\n\n\\IEEEproof See Appendix~\\ref{ap:unquantizedMSE}. {\\hfill $\\clubsuit$ \\medskip}\n\nThe signal term in (\\ref{eq:frameestimate}) converges in ${\\cal L}^\\infty({\\mathbb R})$\nto $g(t)$. The noise term results in an independent sum of zero-mean\nrandom variables at every $t \\in {\\mathbb R}$. This sum of random variables\nhas a variance that decreases as $(1\/N)$ due to the finite energy of\nthe interpolation kernel $\\phi(t)$. The constant $C_{\\phi}''$ depends\non the properties of the kernel $\\phi(t)$. The estimation with\nsingle-bit quantizers and associated distortion analysis will be\npresented next.\n\n\\subsection{Estimation with single-bit quantized samples}\n\\label{sec:onebit}\n\nThis section will present the key result of this work. Consider the system\nillustrated in Fig.~\\ref{fig:bitdestructionblocks}(b). In this section, a\n$\\widehat{G}_{\\mbox{\\footnotesize 1-bit}}(t)$ will be obtained such that $D_{\\mbox{\\footnotesize 1-bit}}$ scales\nas $O(1\/N)$. This is non-trivial to achieve because the \\textit{non-linear}\nquantization operation is coupled with the statistical estimation\nprocedure. The result will be established in two parts: (i) it will be\nshown that suitable interpolation of one-bit samples converges to a\nnon-linear one-to-one function of $g(t)$ with an error term having a\npointwise variance of $O(1\/N)$; and (ii) the obtained non-linear function\nof $g(t)$ can be inverted in a stable manner using recursive computation\nbased on contraction-mapping. It will be assumed that $\\mbox{var}(W_d(t)) =\n(\\sigma^2 - \\mbox{var}(W(t)))_+$, where $\\sigma$ is such that\n(\\ref{eq:variancecondition}) is satisfied.\n\nThe stability property of kernel $\\phi(t)$ has been discussed\nSec.~\\ref{sec:background}. For this section, fix $\\tau = 1\/(N \\lambda)$,\nwhere $\\lambda > 1$ is an arbitrary stability constant. Analogous to\n(\\ref{eq:frameexpansion}), consider the random process obtained from the\nsingle-bit samples $X(n\\tau), n \\in {\\mathbb Z}$,\n\\begin{align}\nH_N(t) = \\tau \\sum_{n \\in {\\mathbb Z}} (X(n \\tau) - 1\/2) \\phi \\left( t - n \\tau\n\\right).\\label{eq:intermediateestimate}\n\\end{align}\nThen, the following proposition establishes the convergence of $H_N(t)$ to\na function of the signal of interest $g(t)$.\n\\begin{proposition}[Convergence of single-bit interpolation]\n\\label{prop:htconvergence}\nLet $l(t) = (F(g(t)) - 1\/2)$ and $H_N(t)$ be as defined in\n(\\ref{eq:intermediateestimate}). Then\n\\begin{align}\n\\sup_{t\\in {\\mathbb R}} {\\mathbb E}(H_N(t) - l(t) \\star \\phi(t))^2 \\leq \\frac{C_2}{N} +\n\\frac{C_3}{N^2}, \\label{eq:intermediateestimatelimit}\n\\end{align}\nwhere $C_2>0$ and $C_3 > 0$ are constants independent of $N$.\n\\end{proposition}\n\n\\IEEEproof See Appendix~\\ref{ap:onebitestimatoraccuracy}. {\\hfill $\\clubsuit$ \\medskip}\n\nThe factor $\\tau = 1\/(\\lambda N)$ provides the normalization for averaging\nin (\\ref{eq:intermediateestimate}), while the terms $(X(n\\tau) - 1\/2)\\phi(t\n- n \\tau)$ are weighted independent one-bit samples. The average in\n(\\ref{eq:intermediateestimatelimit}) converges in mean-square to a\nconvolution. The signal $l(t) \\in {\\cal L}^\\infty({\\mathbb R})$ and the limit $l(t) \\star\n\\phi(t)$ is a lowpass version of $l(t)$. The dependence of $l(t) \\star\n\\phi(t)$ on $g(t)$ is non-linear due to quantization, which results in the\n$F(g(t))$ term. 
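\n\nAs a side illustration (not part of the analysis), the convergence in Proposition~\\ref{prop:htconvergence} can be checked numerically. The sketch below builds $H_N(t)$ from simulated one-bit samples of a hypothetical test signal and compares it with a finely discretized $(F(g) - 1\/2) \\star \\phi$; the test signal, the noise level, and the truncation windows are arbitrary choices made only for this simulation.\n\\begin{verbatim}\n# Simulation sketch (illustration only): the one-bit average H_N(t) of\n# (eq:intermediateestimate) is compared with a discretized version of\n# (F(g) - 1/2) * phi.  The test signal g, the noise level sigma, and the\n# truncation windows are arbitrary choices made for this illustration.\nimport numpy as np\nfrom math import erf\n\nrng = np.random.default_rng(0)\nlam = 1.2                        # stability constant lambda > 1\na = (lam - 1.0) / 2.0\nsigma = 1.5                      # standard deviation of W + W_d\n\ndef phi(t):                      # kernel of (eq:phit); phi(0) = 1 + a/pi\n    t = np.asarray(t, dtype=float)\n    out = np.full(t.shape, 1.0 + a / np.pi)\n    m = np.abs(t) > 1e-9\n    out[m] = (np.sin((np.pi + a) * t[m]) * np.sin(a * t[m])\n              / (np.pi * a * t[m] ** 2))\n    return out\n\ndef g(t):                        # bounded test signal, bandlimited to [-pi, pi]\n    return 0.9 * np.sinc(t)      # np.sinc(t) = sin(pi t) / (pi t)\n\nF = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / (sigma * np.sqrt(2.0)))))\n\nt_eval = np.linspace(-3.0, 3.0, 61)     # points where the error is measured\nu = np.arange(-40.0, 40.0, 0.01)        # fine grid for the target l * phi\ntarget = phi(t_eval[:, None] - u[None, :]) @ (F(g(u)) - 0.5) * 0.01\n\nfor N in (4, 16, 64):\n    tau = 1.0 / (lam * N)\n    tn = np.arange(-30.0, 30.0, tau)    # truncated sampling grid n * tau\n    A = tau * phi(t_eval[:, None] - tn[None, :])\n    mse = np.zeros_like(t_eval)\n    for _ in range(50):                 # Monte Carlo over the noise\n        noisy = g(tn) + sigma * rng.standard_normal(tn.size)\n        X = (noisy >= 0.0).astype(float)        # one-bit samples X(n tau)\n        mse += (A @ (X - 0.5) - target) ** 2 / 50.0\n    print(N, float(mse.max()))          # decays roughly like 1/N\n\\end{verbatim}\nThe printed maximum squared error decreases roughly in proportion to $1\/N$, in agreement with (\\ref{eq:intermediateestimatelimit}).\n\n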
The original signal $g(t)$ is Zakai sense bandlimited and\nit has \\textit{one degree of freedom} per unit time. The number of degrees of freedom\nper unit time of $l(t) \\star \\phi(t)$ can be up to \\textit{one} as well,\nand $F(x)$ has `nice' properties as a function. Thus, it is not\nunreasonable to expect that there might be a class of $F(x)$ such that\n$(F(g(t)) - 1\/2) \\star \\phi(t)$ can be inverted to find $g(t)$, even though\nthis equation is nonlinear.\n\nConsider compandors defined by Landau and Miranker~\\cite{landauMT1961}.\n\\begin{definition} \\cite[pg~100]{landauMT1961}\nA compandor is a monotonic function $Q(x)$ which has the property that\n$Q(m(t)) \\in {\\cal L}^2({\\mathbb R}) $ if $m(t) \\in {\\cal L}^2({\\mathbb R})$. \n\\end{definition}\nLandau and Miranker have shown that if $g(t) \\in {\\cal L}^2({\\mathbb R})$ and\n$\\tilde{g}(\\omega)$ is zero outside $[-\\pi, \\pi]$, and if $Q:[-1, 1]\n\\rightarrow {\\mathbb R}$ is a compandor with non-zero slope, then there is a one-to-one\ncorrespondence between $g(t)$ and $Q(g(t)) \\star\n\\mbox{sinc}(t)$~\\cite{landauMT1961}. Further, given any signal $m(t) \\in\n{\\cal L}^2({\\mathbb R})$ with $\\tilde{m}(\\omega)$ zero outside $[-\\pi, \\pi]$, there\nexists a unique $g_m(t) \\in {\\cal L}^2({\\mathbb R})$ with $\\tilde{g_m}(\\omega)$ zero\noutside $[-\\pi, \\pi]$ and $Q(g_m(t)) \\star \\mbox{sinc}(t) = m(t)$.\n\nIn our case, $g(t)$ need not be in ${\\cal L}^2({\\mathbb R})$, even though $x \\mapsto\nF(x) - 1\/2$ is a compandor. Thus, the procedure of Landau and\nMiranker does not extend directly to bandlimited signals in\n${\\cal L}^\\infty({\\mathbb R})$, \\textit{especially} in the presence of statistical\nperturbations. Suitable modifications of their approach will be used\nto obtain the results for our problem.\n\nThe dependence between $g(t)$ and $l(t) \\star \\phi(t)$ is quite non-linear.\nThere is no clear or obvious equation by which $g(t)$ can be obtained from\n$l(t) \\star \\phi(t)$. Therefore, this inversion problem is cast into a\nrecursive setup, where Banach's fixed-point theorem can be leveraged along\nwith contraction mapping~\\cite[Ch.~5]{kreyszigI1989}. This approach is\ninspired by the work of Landau and Miranker. Their recursive setup is\nnoted to be stable to perturbations of $g(t)$ in\n${\\cal L}^2({\\mathbb R})$~\\cite{landauMT1961}. This work will use a variant of their\nprocedure, since the perturbation due to statistical noise with finite\nvariance is not in ${\\cal L}^2({\\mathbb R})$. Therefore, our recursive procedure to\nobtain an estimate of $g(t)$ from $H_N(t)$ (see\n(\\ref{eq:intermediateestimatelimit})) and its analysis are non-trivial and\nwill be presented in detail.\n\nIn summary, an estimate for $g(t)$ is required. Due to noise and\nquantization, the latter being a non-linear operation, only an approximation $H_N(t)$ of\n$(F(g(t)) - 1\/2) \\star \\phi(t)$ is available. The estimate $H_N(t)$, which\nconverges to $(F(g(t)) - 1\/2) \\star \\phi(t)$ as the sample density $N\n\\uparrow \\infty$, will be inverted to obtain an estimate\n$\\widehat{G}_{\\mbox{\\footnotesize 1-bit}}(t)$ for the signal $g(t)$. To establish the\nprecision indifference principle, we wish to show that the mean-square\nerror $\\sup_{t \\in {\\mathbb R}} {\\mathbb E}|\\widehat{G}_{\\mbox{\\footnotesize 1-bit}}(t) - g(t)|^2$ decreases\nas $O(1\/N)$. 
The details are presented next.\n\nA `clip to one' function $\\mbox{Clip}[x]$ is defined first.\n\\begin{align}\n\\mbox{Clip}[x] & = x \\quad \\quad \\quad \\mbox{ if } |x| \\leq 1 \\nonumber \\\\\n& = \\mbox{sgn}(x) \\quad \\mbox{ otherwise}. \\label{eq:clipto1}\n\\end{align}\nSince $g(t)$ has a dynamic range bounded by one, by assumption, it will be\nunaffected by clipping. Note that under the ${\\cal L}^\\infty$ norm, this\ntransformation reduces the distance between any two scalars $x_1$ and\n$x_2$, i.e., $|\\mbox{Clip}[x_1] - \\mbox{Clip}[x_2]| \\leq |x_1 - x_2|$. This can be\nverified on a case by case basis. For example, if $x_1 > 1$ and $x_2 \\in\n[-1,1]$, then $|\\mbox{Clip}[x_1] - \\mbox{Clip}[x_2]| = |1 - x_2| \\leq |x_1 - x_2|$.\nOther cases can be similarly enumerated. This clipping procedure is\nnon-linear and complicates some of the presented analysis; however, we feel\nthat its presence is essential for analysis.\n\nLet $\\psi(t) = \\phi(\\lambda t)$. Then $\\tilde{\\psi}(\\omega) =\n\\phi(\\omega\/\\lambda)$. Thus, $\\tilde{\\psi}(\\omega)$ is flat in\n$[-\\lambda \\pi, \\lambda \\pi]$ and in $\\pm[\\lambda \\pi, \\lambda^2 \\pi]$\ndecreases linearly to zero. Consider the set of bandlimited signals\ndefined by,\n\\begin{align}\n{\\cal S}_{\\mbox{\\footnotesize BL,bdd}} = \\{ m(t): |m(t)| \\leq C_\\phi \\mbox{ and } m(t) \\star\n\\psi(t) = m(t) \\}. \\label{eq:blbounded}\n\\end{align}\nThen, ${\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$ is a complete subset of the Banach space\n${\\cal L}^\\infty({\\mathbb R})$.\n\\begin{lemma}[${\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$ is a complete metric space]\n\\label{lem:metric}\nLet ${\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$ be as defined in (\\ref{eq:blbounded}). Then\n$({\\cal S}_{\\mbox{\\footnotesize BL,bdd}}, ||.||_\\infty)$ is a complete subset of $({\\cal L}^\\infty({\\mathbb R}),\n||.||_\\infty)$.\n\\end{lemma}\n\n\\IEEEproof Define the distance function $d: {\\cal S}_{\\mbox{\\footnotesize BL,bdd}} \\times {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}\n\\rightarrow {\\mathbb R}^+$ as $d(m_1, m_2) = ||m_1 - m_2||_\\infty$ with\n$m_1(t), m_2(t) \\in {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$. It is easy to verify the axioms of\ndistance metric~\\cite{kreyszigI1989}: (i) $d \\geq 0$ and $d \\leq\n\\infty$; (ii) $d(m_1, m_2) \\equiv 0$ if and only if $m_1(t) = m_2(t)$;\n(iii) $d(m_1, m_2) = d(m_2, m_1)$; and (iv) $d(m_1, m_2) \\leq d(m_1,\nm_3) + d(m_3, m_2)$ for any $m_1(t), m_2(t), m_3(t) \\in {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$.\n\nIt is straightforward to see that ${\\cal S}_{\\mbox{\\footnotesize BL,bdd}} \\subset {\\cal L}^\\infty({\\mathbb R})$\nsince $||m||_\\infty \\leq C_\\phi$ for every $m(t) \\in {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$. To\nshow that the subset is complete, consider any Cauchy sequence $m_n(t)\n\\in {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$. Since ${\\cal L}^\\infty({\\mathbb R})$ is complete, therefore $m_n(t)\n\\rightarrow s(t)$, where $s(t) \\in {\\cal L}^\\infty({\\mathbb R})$. It remains to show\nthat $s(t)$ belongs to ${\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$. \n\nFor any $\\epsilon > 0$, there is an $n_0$ such that $||m_n -\ns||_\\infty < \\epsilon$ for all $n > n_0$. Since $\\int_{{\\mathbb R}} |\\psi(t)|\n\\mbox{d}t = C_\\phi\/\\lambda$, therefore, $||m_n \\star \\psi - s \\star\n\\psi ||_\\infty \\leq ||m_n - s|| (C_\\phi\/\\lambda) = C_\\phi \\epsilon\n\/\\lambda$ for all $n > n_0$ (see Lemma~\\ref{lemma:propagation}). 
Thus,\n$m_n(t) \\star \\psi(t) \\rightarrow s(t) \\star \\psi(t)$. However,\n$m_n(t) \\star \\psi(t) \\equiv m_n(t)$ since $m_n(t) \\in {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$.\nTherefore, it follows that $s(t) = s(t) \\star \\psi(t)$, or $s(t) \\in\n{\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$. Thus, ${\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$ is complete. {\\hfill $\\clubsuit$ \\medskip}\n\nA map $T: {\\cal S}_{\\mbox{\\footnotesize BL,bdd}} \\longrightarrow {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$ will be defined next.\nThis map will result in a recursive procedure to obtain $g(t)$ from\n$h(t) := l(t) \\star \\phi(t)$. Define\n\\begin{align}\nT[m(t)] & = \\mbox{Clip} \\Big[ \\mu h(t) + \\left[ m(t) - \\mu (F(m(t)) - 1\/2) \\right]\n\\star \\phi(t) \\Big] \\star \\phi(t). \\label{eq:keymap}\n\\end{align}\nIt will be shown that $T$ is a contraction on $({\\cal S}_{\\mbox{\\footnotesize BL,bdd}},\n||.||_\\infty)$.\n\\begin{lemma}[$T$ is a contraction]\n\\label{lem:tcontraction}\nLet $({\\cal S}_{\\mbox{\\footnotesize BL,bdd}}, ||.||_\\infty)$ be the metric space as defined in\n(\\ref{eq:blbounded}). Let $T : {\\cal S}_{\\mbox{\\footnotesize BL,bdd}} \\longrightarrow {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$\nbe a map as defined in (\\ref{eq:keymap}). If the condition in\n(\\ref{eq:variancecondition}) is satisfied, then there is a choice of\n$\\mu$ such that $T$ is a contraction, i.e.,\n\\begin{align}\n||T[m_1] - T[m_2] ||_\\infty \\leq \\alpha ||m_1 - m_2||_\\infty,\n\\end{align}\nfor some $0 < \\alpha < 1$ and any $m_1(t), m_2(t) \\in {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$. The\nparameter $\\alpha$ does not depend on the choice of $m_1$ and $m_2$.\n\\end{lemma}\n\\IEEEproof See Appendix~\\ref{ap:lem_cont}. {\\hfill $\\clubsuit$ \\medskip}\n\nNow the key recursive equation will be stated. Let $l(t) = (F(g(t)) -\n1\/2)$ and $h(t) = l(t) \\star \\phi(t)$ be available for obtaining\n$g(t)$. Then,\n\\begin{align}\ng_{k+1}(t) := T[g_{k}(t)] = \\mbox{Clip} \\Big[ \\mu h(t) + \\left[ g_{k}(t) -\n\\mu (F(g_{k}(t)) - 1\/2) \\right] \\star \\phi(t) \\Big] \\star \\phi(t),\n\\label{eq:recursion}\n\\end{align}\nwhere $k \\geq 0, k \\in {\\mathbb Z}$ and $\\mu > 0$ is a constant that will be\nchosen according to Lemma~\\ref{lem:tcontraction}. Set $g_0(t) \\equiv\n0$. The original signal $g(t)$ is a fixed point of this equation and\nit can be verified by substitution. The following proposition shows\nthat $g(t)$ is the \\textit{only} fixed point of the equation in\n(\\ref{eq:recursion}). The proof hinges on Banach's fixed point theorem\nor contraction theorem~\\cite[Ch.~5]{kreyszigI1989}.\n\\begin{proposition}[Signal of interest is the fixed point of $T$]\n\\label{prop:fixedpt}\nLet $g(t) \\in {BL}_{\\mbox{\\footnotesize int}} \\subset {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$ be a continuous bounded\nbandlimited signal. Let $h(t) = l(t) \\star \\phi(t)$, where $l(t) = F(g(t))\n- 1\/2$. Consider the recursion $g_{k}(t) = T[g_{k-1}(t)]$, where $T$ is as\ndefined in (\\ref{eq:keymap}). Set $g_0(t) \\equiv 0$. If $\\mu$ is selected\nas in (\\ref{eq:variancecondition}), then\n\\begin{align}\n\\lim_{k \\rightarrow \\infty } ||g_k - g||_\\infty = 0. \\label{eq:linfconv}\n\\end{align}\n\\end{proposition}\n\\IEEEproof The proof is straightforward with Lemma~\\ref{lem:metric} and\nLemma~\\ref{lem:tcontraction} in place. Define $d(m_1, m_2) = ||m_1 -\nm_2||_\\infty$ for any $m_1(t), m_2(t) \\in {\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$. 
From\nLemma~\\ref{lem:metric}, note that $({\\cal S}_{\\mbox{\\footnotesize BL,bdd}}, d)$ is a complete metric\nspace. The signal $g(t)$ is in ${\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$ and it satisfies $g(t) =\nT[g(t)]$, i.e., it is a fixed point for $T$ defined in (\\ref{eq:keymap}).\n\nPick $\\mu$ as in (\\ref{eq:variancecondition}). Then $T$ is a contraction on\n$({\\cal S}_{\\mbox{\\footnotesize BL,bdd}}, d)$. Thus, by Banach's fixed point theorem (contraction\ntheorem)~\\cite[Ch.~5]{kreyszigI1989}, there is \\textit{exactly} one fixed\npoint in ${\\cal S}_{\\mbox{\\footnotesize BL,bdd}}$ for the equation $g(t) = T[g(t)]$. Since $g_k(t)$\nconverges to a fixed point, it must converge to $g(t)$ in the distance\nmetric $d$. Thus the proof is complete.\n{\\hfill $\\clubsuit$ \\medskip}\n\nProposition~\\ref{prop:fixedpt} holds with perfect information about $l(t)\n\\star \\phi(t)$. The estimation of the signal from $H_N(t)$, the statistical\napproximation of $l(t) \\star \\phi(t)$, will be discussed now. Let\n$G_{k}(t)$ be the sequence of random waveforms generated from $H_N(t)$ when\nit is applied to the recursion in (\\ref{eq:recursion}). That is, fix\n$G_0(t) \\equiv 0$ and define \n\\begin{align}\n& G_{k+1}(t) := T[G_{k}(t)] = \\mbox{Clip} \\Big[ \\mu H_N(t) + \\left[ G_{k}(t) -\n\\mu (F(G_{k}(t)) - 1\/2) \\right] \\star \\phi(t) \\Big] \\star \\phi(t).\n\\label{eq:recursionG}\n\\end{align}\nLet $\\widehat{G}_{\\mbox{\\footnotesize 1-bit}}(t) = \\lim_{k \\rightarrow \\infty}\nG_k(t)$.\\footnote{This limit exists since it can be shown that $||G_k -\nG_{k-1}||_\\infty \\leq \\alpha ||G_{k-1} - G_{k-2}||_\\infty$ for some $0 <\n\\alpha < 1$ by using a procedure analogous to the one in\nLemma~\\ref{lem:tcontraction}.} For the same choice of $\\mu$ which ensures\nthat $T$ is a contraction on $({\\cal S}_{\\mbox{\\footnotesize BL,bdd}}, ||.||_\\infty)$, the distortion\nof $|\\widehat{G}_{\\mbox{\\footnotesize 1-bit}}(t) - g(t)|$ has to be established. To this end,\nthe following proposition is noted.\n\\begin{proposition}[1-bit estimation has distortion $O(1\/N)$]\n\\label{prop:contraction_mse}\nLet $H_N(t)$ be the estimate of $l(t) \\star \\phi(t)$ as described in\n(\\ref{eq:intermediateestimate}) and $\\mu$ be selected as in\n(\\ref{eq:variancecondition}). With $G_0(t) \\equiv 0$, let $G_k(t)$ be the\nsequence of random waveforms as defined in (\\ref{eq:recursionG}). Define\n$\\lim_{k \\rightarrow \\infty} G_k(t) = \\widehat{G}_{\\mbox{\\footnotesize 1-bit}}(t)$. Then,\n\\begin{align}\nD_{\\mbox{\\footnotesize 1-bit}} := \\sup_{t \\in {\\mathbb R}} {\\mathbb E}( \\widehat{G}_{\\mbox{\\footnotesize 1-bit}}(t) - g(t))^2 =\nO(1\/N), \\nonumber\n\\end{align}\ni.e., the distortion $D_{\\mbox{\\footnotesize 1-bit}}$ decreases as $O(1\/N)$.\n\\end{proposition}\n\n\\IEEEproof See Appendix~\\ref{ap:contraction_accuracy}. {\\hfill $\\clubsuit$ \\medskip}\n\nThe results of Proposition~\\ref{prop:mse_unquantized} and\nProposition~\\ref{prop:contraction_mse} can be summarized in the\nfollowing theorem.\n\\begin{theorem}[Precision indifference principle]\n\\label{thm:bitdestruction}\n\\textit{Let $g(t)$ be a bounded dynamic-range bandlimited signal as\ndefined in (\\ref{eq:blinterest}). Assume that $g(t) + W(t)$ is\navailable for sampling, where $W(t)$ is an additive independent Gaussian\nrandom process with finite variance. Fix an oversampling factor of\n$N$, where $N$ is large for statistical averaging. 
There exists an\nestimate $\\widehat{G}_{\\mbox{\\footnotesize 1-bit}}(t)$ obtained from single-bit samples\nof $g(t) + W(t)$ such that\n\\begin{align}\n\\sup_{t\\in {\\mathbb R}} {\\mathbb E}| \\widehat{G}_{\\mbox{\\footnotesize 1-bit}}(t) - g(t) |^2 = O(1\/N).\n\\nonumber\n\\end{align}\nThis distortion is proportional to the best possible distortion of\n$O(1\/N)$ that can be obtained with unquantized or perfect samples.}\n\\end{theorem}\nA few remarks highlighting the importance of the results obtained will\nconclude this section.\n\n\\subsection{Remarks on the results obtained}\n\n\\subsubsection{Comparison with the bit-conservation principle} The\nbit-conservation principle~\\cite{kumarIRH2011} is somewhat in contrast\nto the precision indifference principle. Loosely speaking, the\nbit-conservation principle states that for sampling a bandlimited\nsignal in a noiseless setting, the oversampling density can be\ntraded off against ADC precision while maintaining a fixed bit-rate\nper Nyquist interval and an order-optimal pointwise distortion. In\nthe presence of additive independent Gaussian noise, this tradeoff\nbetween ADC precision and oversampling is absent when studying\npointwise mean-squared distortion. In the noisy setup, the distortion\nis proportional to $1\/N$, where $N$ is the oversampling density,\nregardless of the ADC precision. The presence of noise shifts the\nrole of ADC precision to only the proportionality constant in the\ndistortion!\n\n\\subsubsection{Interpretation of the precision indifference principle}\nFirst, it can be argued that the precision indifference principle\nholds while estimating a constant signal (one degree of freedom) in\nadditive independent Gaussian noise. Assume that a constant $c \\in\n[-1,1]$ has to be estimated based on $N$ noisy readings $Y_i = c+W_i,\n1 \\leq i \\leq N$, where $\\{W_i, 1 \\leq i \\leq N\\}$ are i.i.d.~${\\cal\nN}(0, \\sigma^2)$. In the absence of quantization, $\\widehat{C}_N =\n(\\sum_{i = 1}^N Y_i)\/N$ converges to $c$ in the mean-square sense,\nand ${\\mathbb E}(\\widehat{C}_N - c)^2 = \\sigma^2\/N$. This is the optimal\ndistortion if perfect (unquantized) samples are available. Now\nconsider the case where single-bit readings $B_i = {\\mathbbm{1}}(c + W_i\n\\geq 0), 1 \\leq i \\leq N$ are available. The random variables $\\{B_i,\n1 \\leq i \\leq N\\}$ are i.i.d.~$\\mbox{Ber}(q)$ where $q = {\\mathbb P}(W \\geq -\nc) = {\\mathbb P}(W \\leq c) = F(c)$. Let $\\widehat{B}_N = (\\sum_{i = 1}^N\nB_i)\/N$. It can be shown that ${\\mathbb E}(\\widehat{B}_N - F(c))^2 \\leq\n1\/(4N)$ since each $\\mbox{var}(B_i) \\leq F(c)(1-F(c)) \\leq 1\/4$.\nDefine $\\widehat{C}_{\\mbox{\\footnotesize 1-bit}} = F^{-1}(\\widehat{B}_N)$ if\n$\\widehat{B}_N \\in [F(-1), F(1)]$ and $\\widehat{C}_{\\mbox{\\footnotesize 1-bit}} = \\pm 1$\notherwise. Since $F(x)$ is invertible and\n$\\mbox{d}F^{-1}(x)\/\\mbox{d}x$ is bounded for $x \\in [F(-1),F(1)]$,\nthe delta method shows that $\\widehat{C}_{\\mbox{\\footnotesize 1-bit}}$ obtained\nfrom $\\widehat{B}_N$ has a mean-squared error which decreases as\n$O(1\/N)$~\\cite{bickelDM2001}. Next, it should be noted that\nbandlimited signals have one degree of freedom in every Nyquist\ninterval. An oversampling factor of $N$ means that there are $N$\nsamples to observe each degree of freedom on average. 
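The following sketch (with hypothetical values $c = 0.3$, $\\sigma = 1$,\n$N = 1000$ and $2000$ Monte Carlo trials) is only meant to make the\n$O(1\/N)$ behaviour of the two scalar estimators above concrete.\n\\begin{verbatim}\n# Monte Carlo check of the scalar example: estimate c from N noisy\n# readings, with and without single-bit quantization.\nimport random\nfrom math import erf, sqrt\nfrom statistics import NormalDist\n\nc, sigma, N, trials = 0.3, 1.0, 1000, 2000\nF = lambda x: 0.5 * (1.0 + erf(x \/ (sigma * sqrt(2.0))))\nFinv = NormalDist(0.0, sigma).inv_cdf       # inverse of F\n\nmse_full = mse_1bit = 0.0\nfor _ in range(trials):\n    y = [c + random.gauss(0.0, sigma) for _ in range(N)]\n    c_full = sum(y) \/ N                      # unquantized estimate\n    b = sum(1.0 for v in y if v >= 0.0) \/ N  # fraction of positive readings\n    b = min(max(b, F(-1.0)), F(1.0))         # keep within [F(-1), F(1)]\n    c_1bit = Finv(b)                         # invert F, as described above\n    mse_full += (c_full - c) ** 2 \/ trials\n    mse_1bit += (c_1bit - c) ** 2 \/ trials\nprint(mse_full, mse_1bit)                    # both scale as 1\/N\n\\end{verbatim}\n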
Finally,\nobserving the Nyquist samples of a bandlimited signal with a\ndistortion of $O(1\/N)$ results, by stable interpolation with kernel\n$\\phi(t)$, in a pointwise distortion of $O(1\/N)$ for the signal\nestimate at any point.\n\n\\subsubsection{Precision indifference for a larger class of noise}\nConsider the model where each sample $Y(n\\tau) = g(n\\tau) + V(n\\tau)$ is\naffected by some non-Gaussian noise. Focus on the case where $V(n\\tau)$\ncan be written as $V(n\\tau) = W(n\\tau) + U(n\\tau)$, where $W(n\\tau)$ and\n$U(n\\tau)$ are i.i.d.~for all $n \\in {\\mathbb Z}, \\tau \\in {\\mathbb R}$. If $W(n\\tau)$ is\nGaussian, $\\mbox{var}(V(n\\tau)) = \\sigma^2 < \\infty$, and $F_V(x)$ satisfies\n(\\ref{eq:variancecondition}), then the precision indifference principle\nwill hold. The extension of the existing proofs is simple and only the key\nsteps will be mentioned here. In the perfect sample case (see\nFig.~\\ref{fig:bitdestructionblocks}(a)), the Gaussian part of $V(t)$ will\nlimit the best possible (optimal) distortion to $O(1\/N)$; this is because\neven if the values of $U(n\\tau)$ are (magically) known, the residual\n$W(n\\tau)$ will limit the distortion. With single-bit quantization, note\nthat all the proofs in Sec.~\\ref{sec:onebit} only depend upon the existence\nof a $\\delta$ and a $\\Delta$ such that (\\ref{eq:variancecondition}) is\nsatisfied, the monotonicity of $F_V(x)$ with its derivative bounded\naway from zero, and $F_V(0) = 1\/2$. The recursive procedure in\n(\\ref{eq:recursion}), however, requires knowledge of $F_V(x)$.\n\n\\section{Conclusions and future work}\n\\label{sec:conclusions}\n\nThe sampling, quantization, and estimation of a bounded dynamic-range\nbandlimited signal affected by additive independent Gaussian noise was\nstudied. Such a setup naturally arises in distributed sampling, or when the\nsampling device itself is noisy. For bandlimited signals, the distortion\ndue to additive independent Gaussian noise can be reduced by oversampling\n(statistical diversity). The maximum pointwise expected mean-squared error\n(statistical ${\\cal L}^2$ error) was used as a distortion metric. Using two\nextreme scenarios of quantizer precision, namely infinite precision and\nsingle-bit precision, a quantizer precision indifference principle was\nillustrated. It was shown that the optimal law for distortion is $O(1\/N)$,\nwhere $N$ is the oversampling ratio with respect to the Nyquist rate. This\nscaling of distortion is unaffected by the \\textit{quantizer precision},\nwhich is the key message of the precision indifference principle. In other\nwords, the reconstruction distortion law, up to a proportionality constant,\nis unaffected by quantizer precision.\n\nExtensions of the precision indifference principle to other classes of\nparametric or non-parametric signals are of immediate interest. Further,\nthis work assumed sufficient dithering by noise because the estimators were\nlinear. It is of interest to look towards estimation techniques which do\nnot require extra dithering.\n\n\n\\section*{Acknowledgment}\n\nThe problem of sampling a smooth signal in the presence of noise in a\ndistributed setup was suggested by Prof. Kannan Ramchandran, EECS,\nUniversity of California, Berkeley, CA. Discussions on this problem\nwith Prof. Kannan Ramchandran and Prof.~Martin Wainwright, EECS,\nUniversity of California, Berkeley, CA, Prof. H.~Narayanan, EE, IIT\nBombay, and Prof. 
Prakash Ishwar, ECE, Boston University, Boston,\nMA were insightful.\n\n\\bibliographystyle{IEEEtran}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nEstablishing pixel-level correspondences between a pair of images containing instances of the same object category can benefit several important applications, such as motion estimation~\\cite{pan2015efficient}, medical imaging~\\cite{annunziata2016fully,iglesias2018joint}, object recognition~\\cite{girshick2015fast} and 3D reconstruction~\\cite{izadi2011kinectfusion}. As a result, this has become a fundamental problem in computer vision~\\cite{forsyth2003modern, hartley2003multiple}. Typically, pixel-level correspondences between two images are computed by extracting sparse local feature descriptors (e.g., SIFT~\\cite{lowe2004distinctive}, HOG~\\cite{dalal2005histograms}, SURF~\\cite{bay2006surf}, SCIRD~\\cite{annunziata2015scale, annunziata2016accelerating}), then matching the extracted descriptors, and finally pruning mismatches based on geometric constraints. Although this approach has been applied successfully in different domains, its performance can degrade significantly due to factors such as intra-class variations, non-rigid deformations, partial occlusions, illumination changes, image blur, and visual clutter. Recently, the representational power of Convolutional Neural Networks (CNNs) has been leveraged to improve the overall process. In particular, several CNN-based methods for learning powerful feature descriptors have been introduced~\\cite{simo2015discriminative, zagoruyko2015learning, noh2017largescale}. More recently, end-to-end trainable CNNs for learning image descriptors as well as estimating the geometric transformation between the two images have been introduced in~\\cite{rocco2019convolutional, kanazawa2016warpnet}.\n\\begin{figure}[!t]\n\\centering\n\\setlength\\tabcolsep{1.5pt}\n\\begin{tabular}{cc}\n\\includegraphics[align=c, width=0.225\\textwidth]{figures\/figure_1\/image_TRAIN_warp0.png} & \\includegraphics[align=c, width=0.225\\textwidth]{figures\/figure_1\/image_TRAIN_warp4.png}\\vspace{-0.3cm}\\\\\\\\\n\\includegraphics[align=c, width=0.225\\textwidth]{figures\/figure_1\/input_lfw_scaled.png} & \\includegraphics[align=c, width=0.225\\textwidth]{figures\/figure_1\/lwf_output_scaled.png} \\\\\n(a) & (b) \n\\end{tabular}\n\\caption{Unsupervised joint alignment (a.k.a. \\textit{congealing}) results obtained by the proposed method on digit `$2$' from \\textit{affNIST}~\\cite{afnist} and \\texttt{Jennifer\\_Capriati} from LFW~\\cite{huang2008labeled}. (a) input images before alignment (initialisation in red), (b) output images aligned with the proposed method.}\n\\label{fig:main_page}\n\\end{figure}\nThe majority of previously proposed methods focus on the problem of finding pixel-level correspondences between a \\textit{pair} of images. However, a plethora of other tasks, such as co-segmentation, image edit propagation, video stabilisation and structure-from-motion, require global correspondences between a \\textit{set} of images containing a specific object. A straightforward way to address this problem is to identify pixel correspondences between each pair of images in the dataset and solve the problem in a sequential manner. However, this approach would be prone to important limitations, as (i)~it would fail to take into account valuable cross-image appearance information (i.e. 
statistics of local patches across the entire dataset) during the optimisation; and (ii)~the computational complexity of the problem would increase exponentially with the number of images, therefore significantly limiting the scalability to large datasets. Thus, estimating global correspondences of a set of images (image ensemble) by jointly aligning them in an unsupervised manner can be immensely valuable.\n\nCongealing (joint image alignment) was originally introduced by Learned-Miller in~\\cite{learned2006data}. His approach aligns (by estimating rigid transformations) an ensemble of images of a particular object by minimising the sum of entropies of pixel values at each pixel location. Although this method has been effectively applied to handwritten digits and magnetic resonance image volumes, it has shown some limitations, including slow and\/or sometimes poor convergence and relatively high sensitivity to hyper-parameters. Later, Huang~\\textit{et al.} improved the performance of~\\cite{learned2006data} by using hand-crafted SIFT features~\\cite{huang2007unsupervised}. To overcome the optimisation problems of the original congealing approach, Cox \\textit{et al.}~\\cite{cox2008least, cox2009least} proposed to utilise a reference image (i.e., template) and then minimise the sum of squared differences instead of the sum of entropies. This way, standard Gauss-Newton gradient descent method could be adopted to make the optimisation efficient. Later, motivated by lossy compression principles, Vedaldi \\textit{et al.}~\\cite{vedaldi2008joint} proposed a joint alignment approach based on log-determinant estimation. \n\nA common drawback of the aforementioned methods is that they cannot simultaneously handle variability in terms of illumination, gross pixel corruptions and\/or partial occlusions. RASL, an image congealing method that overcomes this drawback was proposed in~\\cite{peng2012rasl}. The key assumptions made in RASL and its multiple variants, e.g.~\\cite{likassa2018modified, chen2016nonconvex, oh2015partial} are that (i) an ensemble of well-aligned images of the same object is approximately low-rank and (ii) that gross pixel errors are sparsely distributed. Therefore, image congealing is performed by seeking a set of optimal transformations such that the ensemble of misaligned images is written as the superposition of two components i.e., a low-rank component and a sparse error component. RASL has been widely used for jointly aligning multiple images in different applications such as face landmarks localisation~\\cite{sagonas2014raps, peng2015piefa, sagonas2017robust}, pose-invariant face recognition~\\cite{sagonas2017robust, sagonas2015robust}, and medical imaging~\\cite{bise2016vascular}. Despite its wide applicability, it is worth noting that (i)~RASL joint alignment performance can be severely affected by non-optimal initialisation and high intra-class variability in the image ensemble; (ii)~scalability to large ensembles is limited by the formulation of the low-rank minimisation problem and related SVD-based sub-routines; and (iii)~a new optimisation is required for every new image added to the ensemble. To address some of these limitations, t-GRASTA~\\cite{he2014iterative} and PSSV~\\cite{oh2015partial} have been recently proposed.\n\nThe first deep learning approach to unsupervised joint image alignment was proposed by Huang \\textit{et al.}~\\cite{huang2012learning}. 
A modified version of the convolutional restricted Boltzmann machine was introduced to obtain features that could better represent the image at differing resolutions, and that were specifically tuned to the statistics of the data being aligned. They then used those learnt features to optimise the standard entropy-based congealing loss and achieved excellent joint alignment results on the Labelled Faces in the Wild (LFW) benchmark.\n\\begin{figure}[!tb]\n\\centering\n\\includegraphics[align=c, width=0.47\\textwidth]{figures\/figure_0\/flow_chart_v2.pdf}\n\\caption{Block diagram of the proposed method. Black arrows correspond to forward pass, while red and blue to back-propagation.}\n\\label{fig:flow_chart}\n\\end{figure}\n\nHere, we propose a congealing method to solve large-scale joint alignment problems, which is of significant practical importance in light of the ever increasing availability of image data. The proposed method consists of two main modules: (i)~the aligner and (ii)~the low-capacity auto-encoder. Specifically, the joint alignment task is cast as a batch-based optimisation problem in which the aligner is used to estimate the global transformation required to warp each image to a reference. The alignment error is quantified via $\\ell_1$-norm between the transformed batch images and the reference. Motivated by the observation that a set of well-aligned images require less modelling capacity to be reconstructed well (e.g. reconstruction with low-rank bases~\\cite{peng2012rasl}), the aligned batch is subsequently processed by a \\textit{low-capacity} auto-encoder and reconstruction errors are back-propagated to the aligner (a snapshot of the results is displayed in Fig.~\\ref{fig:main_page}).\n\n\\textbf{Contributions:} In summary, the main contributions of this paper are: (i)~a congealing method which is shown to be capable of handling large-scale joint alignment problems i.e., up to one million data points, simultaneously; (ii)~a novel \\textit{differentiable} formulation of the congealing problem, which combines the advantages of previously proposed similarity- and rank-based approaches and that can be easily optimised with Stochastic Gradient Descent (SGD), end-to-end; (iii)~an extensive experimental evaluation of the proposed method and state-of-the-art approaches on several benchmark datasets, including digits and faces at different resolutions, assessing joint alignment performance and robustness to linear and non-linear geometric perturbations of different magnitude and type.\n\\section{Methodology}\nIn the following, we briefly summarise the approaches most related to ours. Then, we introduce the proposed method.\n\n\\textbf{RASL.}~Let us assume we have $N$ misaligned images $\\{\\bs{I}_i\\}_{i=1}^N \\in \\mathbb{R}^{w\\times h}$ of a particular object and let $\\{\\bs{p}_i\\}_{i=1}^N$ be a set of transformations such that $\\{\\bs{I}_i^0 = \\bs{I}_i\\circ\\bs{p}_i\\}_{i=1}^N$ becomes a set of well-aligned images. If we define ${\\text{vec}:\\mathbb{R}^{w\\times h} \\rightarrow \\mathbb{R}^m}$ as the operator that vectorises an image, the main assumption of the RASL method is that the matrix:\n\\begin{equation}\n \\bs{D}\\circ\\bs{P} = [\\text{vec}(\\bs{I}_1^0)~|~\\cdots~|~\\text{vec}(\\bs{I}_N^0)] = \\bs{A}\n\\end{equation}\nwill be approximately \\textit{low-rank}. However, in practice this assumption can be violated when the object of interest is affected by occlusion, shadows, and noise. 
Therefore, the authors assume that each aligned image is corrupted with non-Gaussian-but-sparse errors $\\bs{E}\\in \\mathbb{R}^{m\\times N}$, such that $ \\bs{D}\\circ\\bs{P} = \\bs{A} + \\bs{E}$.\nGiven the observation of the misaligned and corrupted images, the goal is to estimate a set of transformations $\\{\\bs{p}_i\\}_{i=1}^N$ such that the rank of the transformed noise-free images $\\{\\bs{I}_i\\circ\\bs{p}_i\\}_{i=1}^N \\in \\mathbb{R}^{w\\times h}$ becomes as small as possible. Formally,\n\\begin{equation}\\label{eqn:RASL}\n \\argmin_{\\bs{A},\\bs{E},\\{\\bs{p}_i\\}_{i=1}^N} \\rank(\\bs{A}) \\quad \\textrm{s.t.}~~\\bs{D}\\circ\\bs{P} = \\bs{A} + \\bs{E},~\\norm{\\bs{E}}_0 \\leq q,\n\\end{equation}\nwhere $q$ controls the sparsity of the error matrix $\\bs{E}$. \nUnfortunately, the non-convex and discontinuous nature of the optimisation problem in Eq.~(\\ref{eqn:RASL}) makes it intractable to solve directly. To this end, an algorithm that provides a sub-optimal solution via iterative convex programming was proposed. As discussed in \\cite{peng2012rasl}, this algorithm is limited by the following assumptions: (i) the initial misalignment is not too large, (ii) the rank of the matrix $\\bs{A}$ to be recovered is not too high, and (iii) only a small fraction of all pixels is affected by error. A further limitation is the scalability of the algorithm. In fact, the convex relaxation replacing the $\\text{rank}(\\cdot)$ with the nuclear norm requires a very expensive Singular Value Decomposition (SVD) computation at every optimisation step.\n\n\\textbf{Least-Squares Congealing (LSC).}~This method~\\cite{cox2008least, cox2009least} has been specifically proposed for targeting large-scale joint alignment problems. Building on the success of Lucas-Kanade image alignment~\\cite{lucas1981iterative}, the idea is to define a reference image $\\bs{I}_j$ and align each of the remaining ones {$\\{\\bs{I}_i\\}_{i\\neq j}$} to that reference. In general, this optimisation problem can be formulated as:\n\\begin{equation}\\label{eqn:LSC_linear}\n \\argmin_{\\bs{p}_{i\\neq j}} \\sum_{i\\neq j} \\Big|\\Big|\\bs{I}_i \\circ \\bs{p}_i - \\bs{I}_j\\Big|\\Big|^2_2,\n\\end{equation}\nwhere $\\boldsymbol{\\bs{p}} = \\{\\bs{p}_1, \\bs{p}_2, \\ldots, \\bs{p}_{N-1}\\}$ is the set of transformations to apply to $\\{\\bs{I}_i\\}_{i\\neq j}$ to map them onto the reference $\\bs{I}_j$. The main advantage of LSC over low-rank\/entropy-based approaches is faster convergence, as the adoption of the least-squares cost function allows for the use of standard Gauss-Newton optimisation techniques. On the other hand, alignment performance tends to be worse due to its simplicity. \n\\subsection{Proposed Method}\nMotivated by the need for highly accurate alignment in very large-scale problems, we propose a congealing framework that leverages the advantages of adopting a similarity-based cost function (i.e. direct, such as the $\\ell_2$-norm in LSC) \\textit{and} a complexity-based one (i.e. indirect, such as the rank-based one used in RASL). 
To perform this task \\textit{at scale}, we formulate the congealing problem in a way that can be efficiently optimised via standard back-propagation and SGD.\n\nOur formulation can be interpreted in terms of lossy compression optimisation~\\cite{vedaldi2008joint}:\n\\begin{equation}\\label{eqn:compression}\n \\argmin_{\\{\\bs{p}_i\\}_{i=1}^N} \\mathcal{D}(\\bs{I}_{i\\neq j} \\circ \\bs{p}_{i\\neq j}, \\bs{I}_j) + \\lambda~\\mathcal{C}(\\bs{I}_i \\circ \\bs{p}_i),\n\\end{equation}\nwhere the \\textit{distortion} $\\mathcal{D}$ reflects the total error when approximating the reference image $\\bs{I}_j$ (i.e., original data) with the aligned image $\\bs{I}_i \\circ \\bs{p}_i$ (i.e., compressed data), the \\textit{complexity} $\\mathcal{C}$ is the total number of \\textit{symbols} required to encode $\\bs{I}_i \\circ \\bs{p}_i$, and the parameter $\\lambda\\geq0$ trades off the two quantities. \nA good candidate for our distortion (or similarity) measure $\\mathcal{D}$ should be robust to occlusions, noise, outliers and, in general, objects that might partially differ in appearance (e.g. same digit but different font, same face but wearing glasses or not). In RASL, this is achieved by adding explicit constraints on the noise model and its sparsity properties which have a significant impact on the optimisation efficiency. To circumvent this problem, we adopt the $\\ell_1$-norm as measure of distortion, which can be efficiently optimised and offers a higher level of robustness compared to the $\\ell_2$-norm used in LSC or \\cite{vedaldi2008joint}. Formally, \n\\begin{equation}\\label{eqn:distortion}\n \\mathcal{D} = \\sum_{i\\neq j}\\Big|\\Big|\\bs{I}_{i} \\circ \\bs{p}_{i} - \\bs{I}_j\\Big|\\Big|_1.\n\\end{equation}\nMotivated by the need for optimisation in large-scale joint alignment problems, we propose an efficient alternative to rank minimisation. Specifically, we observe that when a set of images are well-aligned, they form a sequence that contains a significant level of redundant information. As a consequence, the stack of images can be compressed with higher compression rates w.r.t. the original misaligned ones. Alternatively, a lower reconstruction error can be attained, at parity of compression rate. Exploiting this consideration we therefore propose to optimise:\n\\begin{equation}\\label{eqn:complexity}\n \\begin{aligned}\n \\argmin_{\\{\\bs{p}_i\\}_{i=1}^N,\\phi, \\theta} &\\sum_{i}\\Big|\\Big|D_\\phi(E_\\theta(\\bs{I}_{i} \\circ \\bs{p}_{i})) - \\bs{I}_{i}\\circ \\bs{p}_{i}\\Big|\\Big|_1,\\\\\n \\textrm{s.t.} & \\quad f(E_\\theta(\\bs{I}_{i}\\circ \\bs{p}_{i})) \\leq \\beta\\\\\n \\end{aligned}\n\\end{equation}\nwhere $\\ell_1$-norm is preferred to typical $\\ell_2$-norm for similar reasons as the ones mentioned above; $E_\\theta:= \\mathbb{R}^{w\\times h} \\rightarrow \\mathbb{R}^b_+$ defines an encoder mapping an $w \\times h$ image into a code vector $\\textbf{z}$ with $b$ positive components; $D_\\phi:= \\mathbb{R}^b_+ \\rightarrow \\mathbb{R}^{w\\times h}$ defines a decoder mapping a code $\\textbf{z}$ into a $w \\times h$ image; $f:= \\mathbb{R}^b_+ \\rightarrow \\mathbb{R}$ defines a (monotonically increasing) positional weighting penalty applied to $\\textbf{z}$. This penalty explicitly encourages the encoder-decoder to represent the aligned images using \\textit{primarily} the first components of $\\textbf{z}$. 
Similarly, $\\beta$ can be interpreted as an hyper-parameter controlling the \\textit{number} of first components used to represent each image, hence the representational power (or \\textit{capacity}) of the encoder-decoder block. Intuitively, at parity of encoder-decoder capacity, improving the joint alignment (i.e., optimising w.r.t.~$\\boldsymbol{\\bs{p}}$) will lead to increased redundancy across the image stack. In fact, we would have very similar colour intensities at the same pixel location across the image stack. Therefore, this capacity will be diverted from modelling inter-image pixel intensity distributions (merely due to misalignment) to modelling the key details of the object these images share, hence leading to lower reconstruction error. With the aim of solving large-scale alignment problems efficiently, we leverage principles from Lagrangian relaxation and penalty functions~\\cite{lemarechal2001lagrangian,smith1995penalty} to approximate the solution of the constrained problem in Eq.~(\\ref{eqn:complexity}) and instead propose to minimise:\n\\begin{equation}\\label{eqn:complexity2}\n \\mathcal{C} = \\sum_{i}\\Big|\\Big|D_\\phi(E_\\theta(\\bs{I}_{i} \\circ \\bs{p}_{i})) - \\bs{I}_{i}\\circ \\bs{p}_{i}\\Big|\\Big|_1 +~\\gamma~ f(E_\\theta(\\bs{I}_{i}\\circ \\bs{p}_{i})),\n\\end{equation}\nwhere $\\gamma \\geq 0$ trades off the contribution of the reconstruction error and the capacity of the encoder-decoder block, and $\\mathcal{C}$ is our measure of complexity. Plugging Eq.~(\\ref{eqn:distortion}) and Eq.~(\\ref{eqn:complexity2}) in Eq.~(\\ref{eqn:compression}), we obtain a novel formulation to solve congealing:\n\\begin{equation}\\label{eqn:proposed_congealing}\n \\begin{aligned}\n & \\argmin_{\\{\\bs{p}_i\\}_{i=1}^N,\\phi, \\theta} \\sum_{i}\\Big|\\Big|\\bs{I}_{i\\neq j} \\circ \\bs{p}_{i\\neq j} - \\bs{I}_j\\Big|\\Big|_1 +\\\\\n &+ \\lambda~\\Big(\\Big|\\Big|D_\\phi(E_\\theta(\\bs{I}_{i} \\circ \\bs{p}_{i})) - \\bs{I}_{i}\\circ \\bs{p}_{i}\\Big|\\Big|_1 +~\\gamma~f(E_\\theta(\\bs{I}_{i}\\circ \\bs{p}_{i}))\\Big).\n \\end{aligned}\n\\end{equation}\nTo take advantage of efficient back-propagation and SGD optimisation, (i) we implement $E_\\theta$ and $D_\\phi$ as Neural Networks (NNs) to form a low-capacity auto-encoder (controlled by $\\gamma$); (ii) we define ${f(E_\\theta(\\bs{I}_{j}\\circ \\bs{p}_{j})) \\triangleq f(\\textbf{z}_j) = \\textbf{w}^\\top\\textbf{z}_j}$, and each component $w_l$ of the weighing vector $\\textbf{w} = [w_1, \\ldots, w_b]^\\top$ is such that $w_l = l^k \/ \\sum_{l=1}^b l^k$ with $k \\in \\mathbb{N}$; and (iii) we adopt the state-of-the-art Densely fused Spatial Transformer Network (DeSTNet)~\\cite{annunziata2018destnet} as the module learning and applying the set of global transformations ($\\boldsymbol{\\bs{p}}$) to the stack of images. Fig.~\\ref{fig:flow_chart} shows the proposed method for large-scale congealing. Each input image in a batch\\footnote{We use batch-based optimisation.} is first aligned to the reference $\\bs{I}_j$ by the DeSTNet, and the alignment error as computed by the similarity-based loss $\\mathcal{D}$ is directly back-propagated to update DeSTNet's parameters to achieve better alignment to the reference. Once a batch of images has been aligned, it goes to the penalised auto-encoder: the reconstruction error as computed by $\\mathcal{C}$ is used to update (i) the auto-encoder, i.e. to improve reconstruction at parity of alignment quality, \\textit{and} (ii) to further update the DeSTNet, i.e. 
to improve reconstruction by better alignment at parity of auto-encoder capacity. Importantly, our approach does not require gradient adjustment, as the gradient of the total loss (Eq.~(\\ref{eqn:proposed_congealing})) w.r.t. the learnable parameters is implicitly and seamlessly distributed to each module\n(auto-encoder and alignment), by chain-rule.\n\\section{Experiments}\nWe extensively evaluate the performance of the proposed method and compare it with state-of-the-art approaches~\\cite{peng2012rasl, he2014iterative,oh2015partial} in terms of \\textit{alignment quality}, \\textit{scalability} and \\textit{robustness to noise} on MNIST~\\cite{lecun1998mnist} and several variants. To quantify performance, we adopt the \\textit{Alignment Peak Signal to Noise Ratio}, ${\\mbox{APSNR} = 10 \\log_{10} \\Big(\\frac{255^2}{\\mbox{MSE}}\\Big)}$~\\cite{peng2012rasl, he2014iterative,oh2015partial} where,\n\\begin{equation}\\label{eqn:APSNR}\n \\mbox{MSE} = \\frac{1}{Nhw} \\sum_{i=1}^{N} \\sum_{r=1}^{h} \\sum_{c=1}^{w} \\Big(\\bs{\\widehat{I}}^0_i(r,c) - \\bs{\\bar{I}}^0(r,c)\\Big)^2,\n\\end{equation}\n$\\bs{\\widehat{I}}^0_i$ represents image $i$ and $\\bs{\\bar{I}}^0$ the average image, both computed after alignment. \nWe then investigate the impact of each individual term of the loss ($\\mathcal{D}$ and $\\mathcal{C}$) on the alignment quality and how they interact to achieve an improved level of performance when combined. With the aim of comparing the proposed method with \\textit{Deep Congealing} (DC)~\\cite{huang2012learning}\\footnote{A comparison on MNIST and variants thereof was not possible as, to the best of our knowledge, the authors have not made the original implementation available.} and to assess the possibility of adopting the proposed method on more challenging datasets, we scale the framework and use it to jointly align multiple subsets of the LFW~\\cite{huang2008labeled}, under different initialisation. \n\\subsection{MNIST}\n\\label{sec:MNIST}\n\\begin{table*}[!tb]\n\\centering\n\\begin{small}\n\\caption{Architectures used for MNIST and LFW experiments. convD1-D2: convolution\nlayer with D1$\\times$D1 receptive field, D2 channels, $\\mathcal{F}$: fusion operation used in DeSTNet for fusing the parameters update, $|\\bs{z}|$: dimentionality of $\\bs{z}$. Default stride for convD1-D2 is $1$, $^*$ corresponds to $2$.} \n\\label{tbl:architecture}\n\\begin{tabular}{c|c||c}\n& \\textbf{MNIST} & \\textbf{LFW} \\\\ \\hline\n\\multicolumn{1}{c|}{\\textit{Aligner}} & $\\mathcal{F}$\\{{[} conv$7$-$4$ $|$ conv$7$-$8$ $|$ conv$1$-$8$ {]}$\\times 4\\}$ & $\\mathcal{F}$\\{{[} conv$3$-$64^*$ $|$ conv$3$-$128^*$ $|$ conv$3$-$256^*$ $|$ conv$1$-$8$ {]}$\\times 5\\}$ \\\\ \\hline\n\\textit{Encoder} & $[$conv$3$-$100^*]\\times 3$ $|$ $[$conv$1$-$1024]$ $\\times 2$ $|$ conv$1$-$|\\bs{z}|$ & $[$conv$3$-$128^*]\\times 3$ $|$ $[$conv$1$-$512]$ $\\times 2$ $|$ conv$1$-$|\\bs{z}|$ \\\\ \\hline\n\\textit{Decoder} & $[$conv$1$-$1024]$ $\\times 2$ $|$ conv$1$-$16$ $|$ $[$conv$3$-$100^*]\\times 3$ $|$ conv$1$-$1$ & $[$conv$1$-$512]$ $\\times 2$ $|$ conv$1$-$3072$ $|$ $[$conv$3$-$128^*]\\times 3$ $|$ conv$1$-$3$ \\\\\n\\end{tabular}\n\\end{small}\n\\end{table*}\n\nWith the aim of evaluating the scalability of the proposed method and the baselines, we start by creating multiple MNIST subsets, as follows. For each digit in $\\{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\\}$, we randomly sample $\\{1\\,000, 2\\,000, 3\\,000, 4\\,000, 5\\,000, 6\\,000\\}$ images from the original MNIST dataset and align them separately. 
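Before detailing the configuration, we note that the positional penalty $f(\\textbf{z}) = \\textbf{w}^\\top\\textbf{z}$ with $w_l = l^k \/ \\sum_{l=1}^b l^k$ introduced above is straightforward to compute; the following minimal sketch (Python\/NumPy, with a hypothetical $16$-dimensional code vector) only illustrates how a larger exponent $k$ shifts the penalty towards the later components of $\\textbf{z}$.\n\\begin{verbatim}\n# Positional weighting penalty f(z) = w^T z, with w_l = l^k \/ sum_l l^k\n# for l = 1..b; a larger k concentrates the penalty on later components.\nimport numpy as np\n\ndef penalty(z, k=1):\n    l = np.arange(1, z.shape[0] + 1, dtype=np.float64)\n    w = l ** k \/ np.sum(l ** k)      # positional weights\n    return float(w @ z)\n\nz = np.random.rand(16)                # stand-in for a code vector in R^b_+\nprint(penalty(z, k=1), penalty(z, k=4))\n\\end{verbatim}\n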
For the proposed method, we adopt \\textit{DeSTNet-4}~\\cite{annunziata2018destnet} with expansion rate $k^F=32$ as aligner, and the penalised reconstruction auto-encoder defined in Table~\\ref{tbl:architecture}, where we use \\textit{tanh} non-linearities after each layer, apart from the last layer of the encoder, where \\textit{sigmoid} is used, to keep each component of \\textbf{z} in $\\big[0, 1\\big]$. We set $\\lambda=1$ to use both similarity- and complexity-based loss, $\\gamma=1$ and $k=1$. We optimise the entire architecture end-to-end, using a standard Adam-based SGD optimiser with learning rate $10^{-5}$.\n\\begin{figure}[!tb]\n\\centering\n\\setlength\\tabcolsep{1.5pt}\n\\begin{tabular}{c}\n\\includegraphics[align=c, width=0.45\\textwidth]{figures\/figure_5\/processing_time_all_together.png}\n\\end{tabular}\n\\caption{Relative processing time for RASL~\\cite{peng2012rasl}, t-GRASTA~\\cite{he2014iterative}, and the proposed method when aligning an increasingly large number of images. Mean and variances of the aligned images produced by the compared methods for the $6\\,000$ samples are also displayed.}\n\\label{fig:processing_time}\n\\end{figure}\nFollowing~\\cite{peng2012rasl, he2014iterative, learned2006data}, we qualitatively assess alignment results for the proposed method and the baselines by computing the mean and variance across the entire dataset before and after alignment. To evaluate \\textit{scalability}, we measure the relative processing time for RASL, t-GRASTA, and the proposed method when aligning an increasingly large number of images. Due to the difference in hardware (CPUs used by the baselines, GPUs by the proposed method), we normalise processing times w.r.t. the time required to align $1,000$ images to provide a fair comparison. As Fig.~\\ref{fig:processing_time} shows for the case of digit `$3$'\\footnote{Similar results hold for the other digits.}, the proposed method scales better than the baselines. Moreover, as Fig.~\\ref{fig:6k_digits_v2} shows in the most challenging case, i.e. datasets with $6,000$ images, the much sharper mean and lower variance images (hence higher \\mbox{APSNR}) suggest that proposed method achieves much better alignment too.\n\\begin{figure}[!tb]\n\\centering\n\\begin{tabular}{ccccc}\n & $\\scriptsize{31.74}$ & $\\scriptsize{31.78}$ &\n$\\scriptsize{31.13}$ &\n$\\scriptsize{33.30}$\\\\\n\\includegraphics[align=c, width=0.072\\textwidth]{figures\/figure_3\/v2\/all_input_v2.png} & \n\\includegraphics[align=c, width=0.072\\textwidth]{figures\/figure_3\/v2\/all_rasl_v2.png}& \n\\includegraphics[align=c, width=0.072\\textwidth]{figures\/figure_3\/v2\/all_t_v2.png}&\n\\includegraphics[align=c, width=0.072\\textwidth]{figures\/figure_3\/v2\/all_pssv_v2.png}& \n\\includegraphics[align=c, width=0.072\\textwidth]{figures\/figure_3\/v2\/all_o_v2.png} \\\\\n(a) & (b) & (c) & (d) & (e)\n\\end{tabular}\n\\caption{Congealing results on $6\\,000$ images per digit from MNIST. (a)~Before alignment, (b)~RASL~\\cite{peng2012rasl}, (c)~t-GRASTA~\\cite{he2014iterative}, (d)~PSSV~\\cite{oh2015partial}, and (e)~Proposed method. In each subfigure (a)-(e), the first column shows means, whereas the second one shows variances. $\\mbox{APSNR}$ for each digit is reported at the top of each subfigure.}\n\\label{fig:6k_digits_v2}\n\\end{figure}\nFollowing the experimental protocol in~\\cite{lin2017inverse, annunziata2018destnet}, we evaluate the robustness of each method to synthetic distortions based on random perspective warps. 
Specifically, assuming each MNIST image is $s\\times s$ pixels ($s = 28$), the four corners of each image are independently and randomly scaled with Gaussian noise $\\mathcal{N}(0, \\sigma^2s^2)$, then randomly translated with the same noise model. We assess alignment quality under three levels of perturbation, i.e. $\\sigma = \\big\\{10\\%, 20\\%, 30\\% \\big\\}$. To this aim, we apply this perturbation model to each $6\\,000$ images dataset and report a subset of the results in \nFig.~\\ref{fig:mnist_pertr}.\n\\begin{figure}[!tb]\n\\centering\n\\setlength\\tabcolsep{3pt}\n\\begin{tabular}{cccccc}\n& & $\\scriptsize{31.91}$ & $\\scriptsize{32.01}$ & $\\scriptsize{31.40}$ & $\\scriptsize{33.28}$ \\\\\n\\rotatebox[origin=c]{90}{$\\sigma=10\\%$}&\n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/i_0_1.png} & \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/r_0_1.png}& \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/t_0_1.png}& \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/pssv_0_1.png}& \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/o_0_1.png}\\\\\n& & $\\scriptsize{31.66}$ & $\\scriptsize{30.64}$ & $\\scriptsize{30.48}$ & $\\scriptsize{33.20}$\\\\\n\\rotatebox[origin=c]{90}{$\\sigma=20\\%$}&\n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/i_0_2.png} & \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/r_0_2.png}& \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/t_0_2.png}& \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/pssv_0_2.png}& \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/o_0_2.png}\\\\\n& & $\\scriptsize{29.80}$ & $\\scriptsize{29.36}$ & $\\scriptsize{29.47}$ & $\\scriptsize{32.96}$\\\\\n\\rotatebox[origin=c]{90}{$\\sigma=30\\%$}&\n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/i_0_3.png} & \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/r_0_3.png}& \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/t_0_3.png}& \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/pssv_0_3.png}& \n\\includegraphics[align=c, width=0.078\\textwidth]{figures\/figure_7\/o_0_3.png}\\\\\n& (a) & (b) & (c) & (d) & (e)\n\\end{tabular}\n\\caption{Robustness of congealing methods to random perspective warps with $\\sigma = \\big\\{10\\%, 20\\%, 30\\%\\big\\}$, corresponding to top, middle and bottom block, respectively. (a)~Before alignment, (b)~RASL~\\cite{peng2012rasl}, (c)~t-GRASTA~\\cite{he2014iterative}, (d)~PSSV~\\cite{oh2015partial}, and (e)~Proposed method. In each subfigure (a)-(e), the first column shows means, whereas the second one shows variances. For compactness, $\\mbox{APSNR}$ for each method is averaged across the digits and reported at the top of each cell.}\n\\label{fig:mnist_pertr}\n\\end{figure}\nWe observe that although a $10\\%$ perturbation seems to be well handled by RASL and {t-GRASTA}, alignment performance deteriorates significantly at $20\\%$ and they tend to fail at the most challenging $30\\%$. 
On the other hand, the proposed method shows strong robustness to this perturbation model across all the digits and under significant noise.\n\\subsection{Ablation Study}\nThe proposed congealing approach takes advantage of both the similarity- and complexity-based losses (i.e., $\\mathcal{D}$ and $\\mathcal{C}$ in Eq.~(\\ref{eqn:distortion}) and Eq.~(\\ref{eqn:complexity2}), respectively), as described in Eq.~(\\ref{eqn:proposed_congealing}). With the aim of disentangling the contribution of each term to the final result, we have evaluated the joint alignment performance when one of the two losses is excluded from the optimisation. Figs.~\\ref{fig:losses}(b) and (c) show the alignment results when excluding $\\mathcal{D}$, and $\\mathcal{C}$, respectively, while the alignment results produced when both are used are displayed in Fig.~\\ref{fig:losses}(d). We observe that, in general, excluding $\\mathcal{D}$ has a stronger impact on the final alignment results; moreover, the use of the reference image when computing $\\mathcal{D}$ makes the optimisation much more robust, as it implicitly avoids the shrinking effect typically observed when only $\\mathcal{C}$ is used. The latter is due to the fact that, at parity of reconstruction capacity for the auto-encoder, a lower complexity measure is attained when the object to reconstruct shows less spatial variability and can therefore be better reconstructed\\footnote{Notice that this undesired effect is typical of low-rank-based congealing approaches~\\cite{peng2012rasl}.} (see Eq.~(\\ref{eqn:complexity2})). \nWe observe that, (i) the addition of $\\mathcal{C}$ to the loss based only on $\\mathcal{D}$, contributes to further refining the alignment results and achieving even lower variance (see digit `$6$' and `$9$'); (ii) importantly, $\\mathcal{C}$ tends to drive the overall optimisation towards solutions that favour a more (spatially) uniform alignment, as shown for digit `$3$'; in this sense, the complexity-based loss can be interpreted as a regulariser.\n\\begin{figure}[!tb]\n\\centering\n\\setlength\\tabcolsep{1.5pt}\n\\begin{tabular}{cccc}\n\\includegraphics[align=c, width=0.08\\textwidth]{figures\/figure_10_new\/input.png} & \n\\includegraphics[align=c, width=0.08\\textwidth]{figures\/figure_10_new\/sim.png} & \n\\includegraphics[align=c, width=0.08\\textwidth]{figures\/figure_10_new\/rank.png} & \n\\includegraphics[align=c, width=0.08\\textwidth]{figures\/figure_10_new\/both.png}\\\\\n(a) & (b) & (c) & (d)\n\\end{tabular}\n\\caption{Ablation study: disentangling the impact of the similarity- ($\\mathcal{D}$) and complexity-based ($\\mathcal{C}$) losses on the final alignment result. Variance images (a) before alignment, (b) $\\mathcal{D}$-only, (c) $\\mathcal{C}$-only, and (d) both.}\n\\label{fig:losses}\n\\end{figure}\n\\subsection{affNIST}\nPreviously proposed congealing approaches have shown limitations in terms of scaling efficiency; in fact, on very low-resolution datasets, joint alignment optimisation results have been reported only for up to a few thousands samples~\\cite{cox2009least}. Moreover, as confirmed in the experiments reported in the previous section, large intra-class spatial variability (modelled with synthetic perturbation) seems to significantly deteriorate performance. To further push the limits and evaluate the performance of the proposed method, we assess joint alignment performance on a much more challenging version of MNIST, namely $\\textit{affNIST}$~\\cite{afnist}. 
This dataset is built by taking images from MNIST and applying various reasonable affine transformations to them. In the process, the images become $40 \\times 40$ pixels large, with significant translations involved. From this dataset, we take the first $100\\,000$ samples for each digit and perform alignment (results in Fig.~\\ref{fig:affnist}), using the same parameter setting adopted in the experiments above. The strong variability characterising this dataset is clear by looking at the means and variances before alignment, and a subset of the actual inputs (Fig.~\\ref{fig:affnist}-middle). Nevertheless, the proposed method achieves a good level of alignment, as demonstrated by the average and variance images after alignment (hence high \\mbox{APSNR}) and a subset of the actual outputs (Fig.~\\ref{fig:affnist}-bottom).\n\\begin{figure}[!tb]\n\\centering\n\\setlength\\tabcolsep{2pt}\n\\begin{tabular}{cccc}\n$\\scriptsize{33.19}$ & $\\scriptsize{33.29}$ & $\\scriptsize{34.44}$ & $\\scriptsize{32.59}$ \\\\\n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/2_o.png} & \\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/3_o.png}& \\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/4_o.png}& \\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/8_o.png}\\vspace{-0.3cm}\n\\\\ \\\\\n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/2_i.png} & \\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/3_i.png}& \\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/4_i.png}& \\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/8_i.png}\\vspace{-0.3cm}\n\\\\ \\\\\n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/2_a.png} & \\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/3_a.png}& \\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/4_a.png}& \\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_4\/8_a.png}\\\\\n(a) & (b) & (c) & (d)\n\\end{tabular}\n\\caption{Congealing results of the proposed method on $100\\,000$ images per digit from affNIST. (a)-(d) correspond to different digits. Top: mean (first columns) and variance (second columns) images, before (first rows) and after (second rows) alignment. Middle: a subset of the actual inputs. Bottom: a subset of the actual outputs. $\\mbox{APSNR}$ for each digit is reported at the top of each subfigure.}\n\\label{fig:affnist}\n\\end{figure}\n\\subsection{infiMNIST}\nSo far, the proposed method has shown robustness to global affine\/perspective perturbations, and joint alignment problems with up to $100,000$ samples per digit. Here, we evaluate the alignment performance under non-linear (local) deformations (e.g. tickening) and translations, and solve the joint alignment problem for $1,000,000$ images per digit sampled from \\mbox{infiMNIST}~\\cite{loosli2007training}\\footnote{The code to generate datasets from \\mbox{infiMNIST} is available at \\href{https:\/\/leon.bottou.org\/projects\/\\mbox{infiMNIST}}{https:\/\/leon.bottou.org\/projects\/\\mbox{infiMNIST}}.}. Notice, we use the same parameter setting adopted above to assess the robustness and generalisation of the proposed method in a much more challenging joint alignment problem. As Fig.~\\ref{fig:infnist} shows, despite the random translations being relatively smaller than the ones used in affNIST, the non-linear perturbations add a much higher level of intra-class variability. 
Nevertheless, the proposed method achieves remarkable joint alignment at this scale and under this kind of perturbations. \n\\begin{figure}[!tb]\n\\centering\n\\setlength\\tabcolsep{1.5pt}\n\\begin{tabular}{ccc}\n$\\scriptsize{32.20}$ & $\\scriptsize{32.63}$ & $\\scriptsize{31.93}$ \\\\\n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_6_new\/0_o.png}& \n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_6_new\/3_o.png}& \n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_6_new\/8_o.png}\\vspace{-0.3cm}\n\\\\ \\\\\n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_6_new\/0_i.png} & \n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_6_new\/3_i.png}& \n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_6_new\/8_i.png}\\vspace{-0.3cm}\n\\\\ \\\\\n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_6_new\/0_a.png}&\n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_6_new\/3_a.png}& \n\\includegraphics[align=c, width=0.1\\textwidth]{figures\/figure_6_new\/8_a.png}\\\\\n(a) & (b) & (c)\n\\end{tabular}\n\\caption{Congealing results of the proposed method on $1\\,000\\,000$ images per digit from \\mbox{infiMNIST}. (a)-(c) correspond to different digits. Top: mean (first columns) and variance (second columns) images, before (first rows) and after (second rows) alignment. Middle: a subset of the actual inputs. Bottom: a subset of the actual outputs. $\\mbox{APSNR}$ for each digit is reported at the top of each subfigure.}\n\\label{fig:infnist}\n\\end{figure}\n\\subsection{LFW}\nLFW~\\cite{learned2016labeled} has been widely used to assess the performance of state-of-the-art joint alignment methods, e.g. in~\\cite{peng2012rasl,huang2012learning}. This dataset is made challenging by multiple factors, including variations in facial expression, occlusion, illumination changes, clutter in the background and head pose variations. Moreover, each subject image is $250 \\times 250$ pixels, which is much larger than MNIST (and variants) images used in the experiments above. We selected four subsets, corresponding to male and female subjects with the largest amount of images, namely \\texttt{George\\_W\\_Bush}, \\texttt{Tony\\_Blair}, \\texttt{Serena\\_Williams}, and \\texttt{Jennifer\\_Capriati}. To accommodate the difference in input image size and considering the more complex task w.r.t. MNIST-based datasets, we scale the aligner and the encoder-decoder block as reported in Table~\\ref{tbl:architecture}.\nIn Fig.~\\ref{fig:lfw_rasl_nips_ours}, we report a qualitative and quantitative comparison of the proposed method with RASL~\\cite{peng2012rasl}, PSSV~\\cite{oh2015partial} and Deep Congealing~\\cite{huang2012learning}, for which joint alignment results initialised with the Viola-Jones face detector~\\cite{viola2004robust} are available at \\href{http:\/\/vis-www.cs.umass.edu\/lfw\/}{http:\/\/vis-www.cs.umass.edu\/lfw\/}. For fair comparison, we adopt the same initialisation for the proposed method and the baselines. We observe that, overall, the proposed method outperforms both RASL, PSSV and Deep Congealing, in terms of $\\mbox{APSNR}$ which is qualitatively confirmed by sharper average images across all the subjects. Moreover, unlike RASL and PSSV, the proposed method does not suffer a zoom-in\/zoom-out effect which makes the optimisation focus on smaller\/larger portion of the region of interest. This can be attributed to the use of the reference image in $\\mathcal{D}$. 
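For reference, the $\\mbox{APSNR}$ values reported in these comparisons follow directly from Eq.~(\\ref{eqn:APSNR}); a minimal sketch (Python\/NumPy, assuming the aligned images are stored in an array of shape $(N, h, w)$ with intensities in $[0, 255]$) is given below.\n\\begin{verbatim}\nimport numpy as np\n\ndef apsnr(aligned):\n    # aligned: array of shape (N, h, w) holding the aligned images\n    mean_img = aligned.mean(axis=0)             # average aligned image\n    mse = np.mean((aligned - mean_img) ** 2)    # MSE over images and pixels\n    return 10.0 * np.log10(255.0 ** 2 \/ mse)\n\nstack = np.random.randint(0, 256, size=(100, 28, 28)).astype(np.float64)\nprint(apsnr(stack))\n\\end{verbatim}\n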
\n\nAlthough important progress has been made in recent years in face detection~\\cite{chen2016supervised, zafeiriou2015survey, zhang2016joint,sun2018face}, some level of inaccuracy is inevitable in a practical setting. So, it is important to assess the robustness of the proposed method to coarser initialisation. To this aim, we increased the size of the initial bounding box returned by the Viola-Jones face detector by $15\\%$ and $30\\%$ in width and height, and report the joint alignment results in Fig.~\\ref{fig:rasl_our_m_l}. We observe that the performance of both RASL (Figs~\\ref{fig:rasl_our_m_l}(b,e)) and PSSV (Figs~\\ref{fig:rasl_our_m_l}(c,f)) degrade significantly when the initialisation is not close to the object, as confirmed by a sharper decrease in average $\\mbox{APSNR}$ and the average aligned faces being blurry. Instead, the proposed method demonstrates strong robustness to the initialisation: as can be observed in Figs~\\ref{fig:rasl_our_m_l}(d,g), our mean aligned faces are clean and crisp which indicates a remarkable level of alignment even with a bounding box $30\\%$ larger. \n\nFollowing the protocol adopted in \\cite{peng2012rasl, he2014iterative}, we further quantify alignment performance by computing the average errors in the locations of three landmarks (the eye outer corners and tip of nose), calculated as the distances of the estimated locations to their centre, normalised by the eye-to-eye distance. We compare our alignment performance against RASL (best rank-based baseline) and DC (deep learning approach). We average the performance for each landmark in a given subject and report them in Table~\\ref{tbl:landmarks_lfw}. Confirming the considerations above, when the original initialisation is adopted, the proposed method attains the lowest errors across all the subjects. Moreover, while at $15\\%$ coarser initialisation RASL starts to show difficulties on some subjects, at $30\\%$ performance degrades significantly. Instead, the proposed method shows much stronger robustness across subjects and initialisation.\n\\begin{figure}[!tb]\n\\centering\n\\setlength\\tabcolsep{1.5pt}\n\\begin{tabular}{ccccc}\n& $\\scriptsize{29.90}$ & $\\scriptsize{29.74}$ & $\\scriptsize{30.31}$ & $\\scriptsize{30.73}$\\\\\n\\includegraphics[align=c, width=0.09\\textwidth]{figures\/figure_8_b\/input.png} & \n\\includegraphics[align=c, width=0.09\\textwidth]{figures\/figure_8_b\/rasl.png} & \n\\includegraphics[align=c, width=0.09\\textwidth]{figures\/figure_8_b\/pssv_s.png} & \n\\includegraphics[align=c, width=0.09\\textwidth]{figures\/figure_8_b\/nips.png} & \n\\includegraphics[align=c, width=0.09\\textwidth]{figures\/figure_8_b\/ours.png} \\\\\n(a) & (b) & (c) & (d) & (e)\n\\end{tabular}\n\\caption{Congealing results (means) on LFW. (a) Before alignment, (b) RASL~\\cite{peng2012rasl}, (c)~PSSV~\\cite{oh2015partial}, (d) Deep Congealing~\\cite{huang2012learning}, and (e) Proposed method. The bounding box initialisation is shown in red in (a) for all the subjects. 
For compactness, average $\\mbox{APSNR}$ for each method is reported at the top of each subfigure and averaged across the subjects.}\n\\label{fig:lfw_rasl_nips_ours}\n\\end{figure}\n\\begin{figure}[!tb]\n\\centering\n\\setlength\\tabcolsep{1.5pt}\n\\begin{tabular}{ccccccc}\nInit~\\cite{viola2004robust} & \\multicolumn{3}{c}{$\\longleftarrow+15\\%\\longrightarrow$} & \\multicolumn{3}{c}{$\\longleftarrow+30\\%\\longrightarrow$}\\\\\n& $\\scriptsize{29.75}$ & $\\scriptsize{29.25}$ & $\\scriptsize{30.43}$ & $\\scriptsize{29.14}$ & $\\scriptsize{28.67}$ & $\\scriptsize{30.29}$\\\\\n\\includegraphics[align=c, width=0.06\\textwidth]{figures\/figure_8_b\/input.png} & \n\\includegraphics[align=c, width=0.06\\textwidth]{figures\/figure_9_b\/rasl_m.png} & \n\\includegraphics[align=c, width=0.06\\textwidth]{figures\/figure_9_b\/pssv_m.png} & \n\\includegraphics[align=c, width=0.06\\textwidth]{figures\/figure_9_b\/our_m.png} & \n\\includegraphics[align=c, width=0.06\\textwidth]{figures\/figure_9_b\/rasl_l.png} & \n\\includegraphics[align=c, width=0.06\\textwidth]{figures\/figure_9_b\/pssv_l.png} & \n\\includegraphics[align=c, width=0.06\\textwidth]{figures\/figure_9_b\/our_l.png}\\\\\n(a) & (b) & (c) & (d) & (e) & (f) & (g)\n\\end{tabular}\n\\caption{Robustness of congealing methods to initialisation, i.e. bounding box $15\\%$ and $30\\%$ larger than the one estimated by~\\cite{viola2004robust} (in red colour in (a)). Mean images (a) before alignment, (b)(e) RASL~\\cite{peng2012rasl}, (c)(f)~PSSV~\\cite{oh2015partial}, and (d)(g)~Proposed method. For compactness, average $\\mbox{APSNR}$ for each method is reported at the top of each subfigure and averaged across the subjects.}\n\\label{fig:rasl_our_m_l}\n\\end{figure}\n\\begin{table}[!tb]\n\\centering\n\\begin{small}\n\\caption{Average errors for three landmarks (the eye outer corners and tip of nose), calculated as the distances of the estimated locations to their centre, normalised by the eye-to-eye distance. S1:\\texttt{George\\_W\\_Bush}, S2:\\texttt{Jennifer\\_Capriati}, \nS3:\\texttt{Serena\\_Williams}, S4:\\texttt{Tony\\_Blair}.}\n\\label{tbl:landmarks_lfw}\n\\begin{tabular}{c|c|c|c|c|c}\n \\textbf{Init} & \\textbf{Methods} & \\textbf{S1} & \\textbf{S2} & \\textbf{S3} & \\textbf{S4} \\\\ \\hline\n\\multirow{4}{*}{\\cite{viola2004robust}} & RASL~\\cite{peng2012rasl} & 2.88\\% & 2.45\\% & 3.32\\% & 3.24\\% \\\\ \n & DC~\\cite{huang2012learning} & 3.97\\% & 3.48\\% & 3.48\\% & 3.27\\% \\\\ \n & Proposed & \\textbf{2.67\\%} & \\textbf{1.86\\%} & \\textbf{2.24\\%} & \\textbf{2.39\\%} \\\\ \\hline \\hline\n\\multirow{3}{*}{\\textbf{$+15\\%$}} & RASL~\\cite{peng2012rasl} & \\textbf{3.24\\%} & 6.40\\% & 5.02\\% & 3.65\\% \\\\ \n & Proposed & 3.84\\% & \\textbf{2.12\\%} & \\textbf{4.34\\%} & \\textbf{2.04\\%} \\\\ \\hline \\hline\n\\multirow{3}{*}{\\textbf{$+30\\%$}} & RASL~\\cite{peng2012rasl} & 6.29\\% & 6.77\\% & 7.08\\% & 6.87\\% \\\\ \n & Proposed & \\textbf{4.27\\%} & \\textbf{1.92\\%} & \\textbf{3.69\\%} & \\textbf{2.55\\%} \\\\\n\\end{tabular}\n\\end{small}\n\\end{table}\n\\section{Conclusions}\nImage alignment is a major area of research in computer vision. However, the majority of previously proposed methods focus on identifying pixel-level correspondences between a \\textit{pair} of images. Instead, a plethora of other tasks such as, co-segmentation, image edit propagation and structure-from-motion, would considerably benefit from establishing pixel-level correspondences between a \\textit{set} of images. 
Several congealing or joint alignment methods have been previously proposed; however, scalability to large datasets and the limited robustness to initialisation and intra-class variability seem to hamper their wide applicability. To address these limitations, we have proposed a novel congealing method and shown that it is capable of handling joint alignment problems at very large scale i.e., up to one million data points, simultaneously. This is achieved through a novel \\textit{differentiable} formulation of the congealing problem, which combines the advantages of similarity- and rank-based congealing approaches and can be easily optimised with standard SGD, end-to-end. Extensive experimental results on several benchmark datasets, including digits and faces at different resolutions, show that the proposed congealing framework outperforms state-of-the-art approaches in terms of scalability, alignment quality and robustness to linear and non-linear geometric perturbations of different magnitude and type. \n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzawon b/data_all_eng_slimpj/shuffled/split2/finalzzawon new file mode 100644 index 0000000000000000000000000000000000000000..27412c133c3b37bbe83d4aff07ea94cc569e53d1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzawon @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction and notation\\label{intr}}\n\nThe aim of the paper is to indicate close relationship between Euler and\nBernoulli polynomials and certain lower triangular matrices with entries\ndepending on binomial coefficients and some other natural numbers. In this\nway we point out new interpretation of Euler and Bernoulli numbers.\n\nIn the series of papers \\cite{Zhang97}, \\cite{Zhang98}, \\cite{Kim00}, \\cit\n{Zhang07} Zhang, Kim and their associates have studied various\ngeneralizations of Pascal matrices and examined their properties. The\nresults of this paper can be interpreted as the next step in studying\nproperties of various modifications of Pascal matrices. \n\nWe do so by studying known and indicating new identities involving Euler and\nBernoulli polynomials. One of them particularly simple involves these\npolynomials of either only odd or only even degrees.\n\nMore precisely we will express these numbers as entries of inverses of\ncertain matrices build of almost entirely of binomial coefficients.\n\nThroughout the paper we will use the following notation. Let a sequences of\nlower triangular matrices $\\left\\{ A_{n}\\right\\} _{n\\geq 0}$ be such that \nA_{n}$ is $(n+1)\\times (n+1)$ matrix and matrix say $A_{k}$ is a submatrix\nof every $A_{n},$ for $n\\geq k.$ Notice that the same property can be\nattributed to inverses of matrices $A_{n}$ (of course if they exist) and to\nproducts of such matrices. Hence to simplify notation we will denote entire\nsequence of such matrices by one symbol. Thus e.g. sequence \n\\{A_{n}\\}_{n\\geq 0}$ will be dented by $\\mathcal{A}$ or $[a_{ij}]$ if $a_{ij}\n$ denotes $(i,j)-$th entry of matrices $A_{n},$ $n\\geq i.$ The sequence \n\\{A_{n}^{-1}\\}_{n\\geq 0}$ will be denoted by $\\mathcal{A}^{-1}$ or \n[a_{ij}]^{-1}.$ Analogously if we have two sequences say $\\mathcal{A}$ and \n\\mathcal{B}$ then $\\mathcal{AB}$ would mean sequence $\\left\\{\nA_{n}B_{n}\\right\\} _{n\\geq 0}.$ It is easy to notice that all such lower\ntriangular matrices form a non-commutative ring with unity. 
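As a small aside, the nesting convention just described is easy to verify numerically. The sketch below is an added illustration, not part of the original argument; it uses the lower triangular Pascal matrices $[\binom{i}{j}]$ that appear throughout the paper and checks that each $A_k$ is the leading block of every larger $A_n$, and that the same holds for the inverses, which is what allows a whole sequence $\{A_n\}_{n\geq 0}$ to be denoted by a single symbol.
\begin{verbatim}
import numpy as np
from math import comb

def pascal(n):
    # (n+1) x (n+1) lower triangular Pascal matrix A_n = [C(i, j)]
    return np.array([[comb(i, j) if j <= i else 0
                      for j in range(n + 1)] for i in range(n + 1)])

A2, A5 = pascal(2), pascal(5)
assert (A5[:3, :3] == A2).all()              # A_2 sits inside A_5
inv2, inv5 = np.linalg.inv(A2), np.linalg.inv(A5)
assert np.allclose(inv5[:3, :3], inv2)       # ... and so do the inverses
\end{verbatim}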
Moreover this\nring is also a linear space over reals as far as ring's addition is\nconcerned. Diagonal matrix $\\mathcal{I}$ with $1$ on the diagonal is this\nring's unity.\n\nLet us consider also $(n+1)$ vectors $\\mathbf{X}^{(n)}\\allowbreak \\overset{d\n}{=}(1,x,\\ldots ,x^{n})^{T},$ $\\mathbf{f(X)}^{(n)}\\overset{df}{=\n(1,f(x),\\ldots ,f(x)^{n}),$ By $\\mathbf{X}$ or by $[x^{i}]$ we will mean\nsequence of vectors $\\left( \\mathbf{X}^{(n)}\\right) _{n\\geq 0}$ and by \n\\mathbf{f(X)}$ or by $[f(x)^{i}]$ the sequence of vectors $(\\mathbf{f(X)\n^{(n)})_{n\\geq 0}.$\n\nLet $E_{n}(x)$ denote $n-$th Euler polynomial and $B_{n}(x)$ $n-$th\npolynomial. Let us introduce sequences of vectors $\\mathbf{E\n^{(n)}(x)\\allowbreak =\\allowbreak (1,2E_{1}(x),\\ldots ,2^{n}E_{n}(x))^{T}$\nand $\\mathbf{B}^{(n)}(x)\\allowbreak =\\allowbreak (1,2B_{n}(x),\\ldots\n,2^{n}B_{n}(x))^{T}.$ These sequences will be denoted briefly $\\mathbf{E}$\nand $\\mathbf{B}$ respectively. \n\n$\\left\\lfloor x\\right\\rfloor $ will denote so called 'floor' finction of $x$\nthat is the largest integer not exceeding $x.$\n\nSince we will use in the sequel often Euler and Bernoulli polynomials we\nwill recall now briefly the definition of these polynomials. Their\ncharacteristic functions are given by formulae (23.1.1) of \\cite{Gradshtein}\nand respectively\n\\begin{eqnarray}\n\\sum_{j\\geq 0}\\frac{t^{j}}{j!}E_{j}(x)\\allowbreak &=&\\allowbreak \\frac\n2\\exp (xt)}{\\exp (t)+1}, \\label{chE} \\\\\n\\sum_{j\\geq 0}\\frac{t^{j}}{j!}B_{j}(x)\\allowbreak &=&\\allowbreak \\frac\nt\\exp (xt)}{\\exp (t)-1}. \\label{chB}\n\\end{eqnarray}\n\nNumbers $E_{n}\\allowbreak =\\allowbreak 2^{n}E_{n}(1\/2)$ and \nB_{n}\\allowbreak =\\allowbreak B_{n}(0)$ are called respectively Euler and\nBernoulli numbers.\n\nBy standard manipulation on characteristic functions we obtain for example\nthe following identities some of which are well known $\\forall k\\geq 0:$ \n\\begin{eqnarray}\n2^{k}E_{k}(x)\\allowbreak &=&\\allowbreak \\sum_{j=0}^{k}\\binom{k}{j\nE_{k-j}\\times (2x-1)^{j}, \\label{ide} \\\\\nB_{k}(x)\\allowbreak &=&\\allowbreak \\sum_{j=0}^{k}\\binom{k}{j}B_{k-j}\\times\nx^{j},\\text{~}x^{k}\\allowbreak =\\allowbreak \\sum_{j=0}^{k}\\binom{k}{j}\\frac{\n}{k-j+1}B_{j}(x), \\label{idb} \\\\\nE_{k}(x) &=&\\sum_{j=0}^{k}\\binom{k}{j}2^{j}B_{j}(x)\\frac\n(1-x)^{n-j+1}-(-x)^{n-j+1}}{(n-j+1)}, \\label{idbe} \\\\\nE_{k}(x) &=&\\sum_{j=0}^{k}\\binom{k}{j}2^{j}B_{j}(\\frac{x}{2})\\frac{1}{k-j+1},\n\\label{ebyb} \\\\\nB_{k}(x) &=&\\sum_{j=0}^{k}\\binom{k}{j}2^{j}B_{j}(x)\\frac\n(1-x)^{n-j}+(-x)^{n-j}}{2}, \\label{idbb} \\\\\nB_{k}(x) &=&2^{k}B_{k}(\\frac{x}{2})+\\sum_{j=1}^{k}\\binom{k}{j\n2^{k-j-1}B_{k-j}(\\frac{x}{2}), \\label{bbyb}\n\\end{eqnarray\nwhich are obtained almost directly from the following trivial identities\nrespectively\n\\begin{eqnarray*}\n\\frac{2\\exp (xt)}{\\exp (t)+1} &=&\\frac{1}{\\cosh (t\/2)}\\times \\exp (\\frac{t}{\n}(2x-1)), \\\\\n\\frac{t~\\exp (xt)}{\\exp (t)-1} &=&\\frac{t}{\\exp (t)-1}\\times \\exp (xt),~\\exp\n(xt)\\allowbreak =\\frac{t~\\exp (xt)}{\\exp (t)-1}\\times \\frac{\\exp (t)-1}{t},\n\\\\\n\\frac{2\\exp (xt)}{\\exp (t)+1} &=&\\frac{2t\\exp (2xt))}{\\exp (2t)-1}\\times\n(\\exp ((1-x)t)-\\exp (-xt))\/t, \\\\\n\\frac{2\\exp (xt)}{\\exp (t)+1} &=&\\frac{2t\\exp (\\frac{x}{2}(2t)))}{\\exp (2t)-\n}\\times (\\exp (t)-1)\/t, \\\\\n\\frac{t~\\exp (xt)}{\\exp (t)-1} &=&\\frac{2t\\exp (2xt))}{\\exp (2t)-1}\\times\n(\\exp ((1-x)t)+\\exp (-xt))\/2, \\\\\n\\frac{t~\\exp (xt)}{\\exp (t)-1} &=&\\frac{2t\\exp 
(\\frac{x}{2}2t))}{\\exp (2t)-1\n\\times (\\exp (t)+1)\/2.\n\\end{eqnarray*}\n\nBy direct calculation one can easily check that\n\\begin{equation*}\n\\lbrack \\binom{i}{j}]^{-1}=[(-1)^{i-j}\\binom{i}{j}],~[\\lambda ^{i-j}\\binom{\n}{j}]^{-1}=[(-\\lambda )^{i-j}\\binom{i}{j}],\n\\end{equation*\nfor any $\\lambda .$ The above mentioned identities are well known. They\nexpose properties of Pascal matrices discussed in \\cite{Zhang97}. Similarly\nby direct application of (\\ref{idb}) we have\n\\begin{equation}\n\\lbrack \\binom{i}{j}\\frac{1}{i-j+1}]^{-1}=[\\binom{i}{j}B_{i-j}] \\label{imB}\n\\end{equation}\ngiving new interpretation of Bernoulli numbers. Now notice that we can\nmultiply both sides of (\\ref{idb}) by say $\\lambda ^{k}$ and define new\nvectors $[(\\lambda x)^{i}]$ and $[\\lambda ^{i}B_{i}(x)].$ Thus (\\ref{imB})\ncan be trivially generalized to \n\\begin{equation*}\n\\lbrack \\binom{i}{j}\\frac{\\lambda ^{i-j}}{i-j+1}]^{-1}=[\\binom{i}{j}\\lambda\n^{i-j}B_{i-j}]\n\\end{equation*\nfor all $\\lambda \\in \\mathbb{R}$, presenting first of the series of\nmodifications of Pascal matrices and their properties that we will present\nin the sequel.\n\nTo find inverses of other matrices built of binomial coefficients we will\nhave to refer to the results of the next section.\n\n\\section{Main results\\label{main}}\n\n\\begin{theorem}\n$\\forall n\\geq 1:\n\\begin{eqnarray}\n\\sum_{j=0}^{\\left\\lfloor n\/2\\right\\rfloor }\\binom{n}{2\\left\\lfloor\nn\/2\\right\\rfloor -2j}2^{2j+n-2n\\left\\lfloor n\/2\\right\\rfloor\n}E_{2j+n-2\\left\\lfloor n\/2\\right\\rfloor }(x)\\allowbreak &=&\\allowbreak\n(2x-1)^{n}, \\label{tozE} \\\\\n\\sum_{j=0}^{\\left\\lfloor n\/2\\right\\rfloor }\\binom{n}{2\\left\\lfloor\nn\/2\\right\\rfloor -2j}2^{2j+n-2n\\left\\lfloor n\/2\\right\\rfloor }\\frac\nB_{2j+n-2\\left\\lfloor n\/2\\right\\rfloor }(x)}{2j+1}\\allowbreak &=&\\allowbreak\n(2x-1)^{n} \\label{tozB}\n\\end{eqnarray}\n\\end{theorem}\n\n\\begin{proof}\nWe start with the following identities: \n\\begin{eqnarray*}\n\\cosh (t\/2)\\frac{2\\exp (xt)}{\\exp (t)+1}\\allowbreak &=&\\allowbreak \\exp\n(t(x-1\/2)), \\\\\n\\frac{t\\exp (xt)}{\\exp (t)-1}\\frac{2\\sinh (t\/2)}{t}\\allowbreak &=&\\exp\n(t(x-1\/2))\\allowbreak .\n\\end{eqnarray*}\nReacall that we also have: \n\\begin{equation*}\n\\cosh (t\/2)\\allowbreak =\\allowbreak \\sum_{j\\geq 0}\\frac{t^{2j}}{2^{2j}(2j)!}\n\\frac{2\\sinh (t\/2)}{t}\\allowbreak =\\allowbreak \\sum_{j\\geq 0}\\frac{t^{2j}}\n2^{2j}(2j)!(2j+1)}.\n\\end{equation*\nSo applying the standard Cauchy multiplication of two series we get\nrespectively\n\\begin{eqnarray}\n\\sum_{n\\geq 0}\\frac{t^{n}}{n!2^{n}}(2x-1)^{n} &=&\\sum_{j\\geq 0}\\frac{t^{2j}}\n2^{2j}(2j)!}\\sum_{j\\geq 0}\\frac{t^{j}}{j!}E_{j}(x)\\allowbreak \\label{_E} \\\\\n&=&\\sum_{n\\geq 0}\\frac{t^{n}}{n!}\\sum_{j=0}^{n}\\binom{n}{j}c_{j}E_{n-j}(x),\n\\\\\n\\sum_{n\\geq 0}\\frac{t^{n}}{n!2^{n}}(2x-1)^{n}\\allowbreak &=&\\allowbreak\n\\sum_{j\\geq 0}\\frac{t^{2j}}{2^{2j}(2j)!(2j+1)}\\sum_{j\\geq 0}\\frac{t^{j}}{j!\nB_{j}(x)\\allowbreak \\label{_B} \\\\\n&=&\\allowbreak \\sum_{n\\geq 0}\\frac{t^{n}}{n!}\\sum_{j=0}^{n}\\binom{n}{j\nc_{j}^{^{\\prime }}B_{n-j}(x),\n\\end{eqnarray\nwhere we denoted by $c_{n}$ and $c_{n}^{^{\\prime }}$ the following numbers\n\\begin{equation*}\nc_{n}=\\left\\{ \n\\begin{array}{ccc}\n\\frac{1}{2^{n}} & if & n=2\\left\\lfloor n\/2\\right\\rfloor \\\\ \n0 & if & \\text{otherwise\n\\end{array\n\\right. 
,~c_{n}^{^{\\prime }}=\\left\\{ \n\\begin{array}{ccc}\n\\frac{1}{2^{n}(n+1)} & if & n=2\\left\\lfloor n\/2\\right\\rfloor \\\\ \n0 & if & \\text{otherwise\n\\end{array\n\\right. .\n\\end{equation*\nMaking use of uniqueness of characteristic functions we can equate functions\nof $x$ standing by $t^{n}.$ Finally let us multiply both sides so obtained\nidentities by $2^{n}.$ We have obtained (\\ref{tozE}) and (\\ref{tozB}).\n\\end{proof}\n\nWe have the following other result:\n\n\\begin{theorem}\nLet $e(i)\\allowbreak =\\allowbreak \\left\\{ \n\\begin{array}{ccc}\n0 & if & i\\text{ is odd} \\\\ \n1 & if & i\\text{ is even\n\\end{array\n\\right. ,$ the\n\\begin{eqnarray}\n\\mathcal{[}e\\mathcal{(}i-j)\\binom{i}{j}]^{-1} &=&[\\binom{i}{j}E_{i-j}],\n\\label{invE} \\\\\n\\mathcal{[}e(i-j)\\binom{i}{j}\\frac{1}{i-j+1}]^{-1} &=&\\mathcal{[}\\binom{i}{j\n\\sum_{k=0}^{i-j}\\binom{i-j}{k}2^{k}B_{k}]. \\label{invB}\n\\end{eqnarray}\n\\end{theorem}\n\n\\begin{proof}\nLet us define by $W_{n}(x)\\allowbreak =\\allowbreak 2^{n}E_{n}((x+1)\/2)$ and \nV_{n}(x)\\allowbreak =\\allowbreak 2^{n}B_{n}((x+1)\/2).$ Notice that\ncharacteristic function of polynomials $W_{n}$ and $V_{n}$ are given by \n\\begin{eqnarray*}\n\\sum_{j\\geq 0}\\frac{t^{j}}{j!}W_{j}(x) &=&\\sum_{j\\geq 0}\\frac{(2t)^{j}}{j!\nE_{j}((x+1)\/2) \\\\\n&=&\\frac{2\\exp (2t(x+1)\/2)}{\\exp (2t)+1}\\allowbreak =\\allowbreak \\frac{\\exp\n(tx)}{\\cosh (t)}, \\\\\n\\sum_{j\\geq 0}\\frac{t^{j}}{j!}V_{j}(x) &=&\\sum_{j\\geq 0}\\frac{(2t)^{j}}{j!\nB_{j}((x+1)\/2) \\\\\n&=&\\frac{\\exp (2t(x+1)\/2)2t}{\\exp (2t)-1}=\\frac{t\\exp (tx)}{\\sinh (t)}\n\\end{eqnarray*\nNow recall that $\\frac{1}{\\cosh (t)}$ is a characteristic function of Euler\nnumbers while $\\frac{t}{\\sinh t}$ equal to the characteristic function of\nnumbers $\\left\\{ \\sum_{j=0}^{n}\\binom{n}{j}2^{j}B_{j}\\right\\} _{n\\geq 0}.$\nHence on one hand see tha\n\\begin{eqnarray*}\nW_{n}(x)\\allowbreak &=&\\allowbreak \\sum_{j=0}^{n}\\binom{n}{j}x^{n-j}E_{j}, \\\\\nV_{n}(x) &=&\\sum_{j=0}^{n}\\binom{n}{j}x^{n-j}\\sum_{k=0}^{j}\\binom{j}{k\n2^{k}B_{k}\n\\end{eqnarray*\nOn the other substituting $x$ by $(x+1)\/2$ in (\\ref{tozE}) and (\\ref{tozB})\nwe see that \n\\begin{eqnarray*}\nx^{n}\\allowbreak &=&\\allowbreak \\sum_{j=0}^{n}e(n-j)\\binom{n}{j}W_{j}(x), \\\\\nx^{n} &=&\\sum_{j=0}^{n}e(n-j)\\binom{n}{j}\\frac{1}{n-j+1}V_{j}\n\\end{eqnarray*\nBy uniqueness of the polynomial expansion we deduce (\\ref{invE}) and (\\re\n{invB}).\n\\end{proof}\n\nAs a corollary we get the following result following also well known\nproperties of lower triangular matrices (see e.g. 
: \\cite{Hand97}).\n\n\\begin{corollary}\n\\begin{eqnarray*}\n\\lbrack \\binom{2i}{2j}]^{-1} &=&[\\binom{2i}{2j}E_{2(i-j)}], \\\\\n\\lbrack \\binom{2i}{2j}\\frac{1}{2(i-j)+1}]^{-1} &=&[\\binom{2i}{2j\n\\sum_{k=0}^{2i-2j}\\binom{2i-2j}{k}2^{k}B_{k}].\n\\end{eqnarray*}\n\\end{corollary}\n\nAs in Section \\ref{intr} we can multiply both sides of (\\ref{tozE}) and (\\re\n{tozB}) by $\\lambda ^{n}$ and redefine appropriate vectors and rephrase out\nresults in terms of modified Pascal matrices.\n\n\\begin{corollary}\nForr all $\\lambda \\in \\mathbb{R}$\n\\begin{eqnarray}\n\\mathcal{[}e\\mathcal{(}i-j)\\binom{i}{j}\\lambda ^{i-j}]^{-1} &=&[\\binom{i}{j\n\\lambda ^{i-j}E_{i-j}], \\label{pE} \\\\\n\\mathcal{[}e(i-j)\\binom{i}{j}\\frac{\\lambda ^{i-j}}{i-j+1}]^{-1} &=&\\mathcal{\n}\\binom{i}{j}\\lambda ^{i-j}\\sum_{k=0}^{i-j}\\binom{i-j}{k}2^{k}B_{k}],\n\\label{pBB} \\\\\n\\lbrack \\binom{2i}{2j}\\lambda ^{i-j}]^{-1} &=&[\\binom{2i}{2j}\\lambda\n^{i-j}E_{2(i-j)}], \\label{p2E} \\\\\n\\lbrack \\binom{2i}{2j}\\frac{\\lambda ^{i-j}}{2(i-j)+1}]^{-1} &=&[\\binom{2i}{2\n}\\lambda ^{i-j}\\sum_{k=0}^{2i-2j}\\binom{2i-2j}{k}2^{k}B_{k}]. \\label{p2B}\n\\end{eqnarray}\n\\end{corollary}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Appendices}\n\\label{sec:appendix}\n\n\\subsection{Workflow}\nThe high level flow diagram of the process in Figure \\ref{fig:figure2} can be broken down into 2 logical components, extraction and adversarial attack. A description is provided in brief.\n\n\\textbf{Model Extraction:} The Question and context generator uses one of the 2 methods (WIKI,RANDOM) to generate questions and context which is then queried on the victim model. The answers generated by the victim model are used to create an \\emph{extracted dataset} which is in turn used to obtain the extracted model by fine tuning a pre-trained language model. \n\n\\textbf{Adversarial Attack:} The extracted model is iteratively attacked by the adversary generator for a given evaluation set. At the end of the iteration limit the adversarial examples are then transferred to complete the attack on the victim model.\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=\\textwidth]{flow.png}\n \\caption{The high level flowchart for our black box evasion attack.}\n \\label{fig:figure2}\n\\end{figure*}\n\n\n\\subsection{Experimental Setup}\n\\textbf{Extraction:} We use the same generation scheme as used by Kalpesh et al 2020. Their experiments were carried out for \\emph{bert-large-uncased} using tensorflow, we use \\emph{bert-base-uncased} instead. We adapted their experiments to use the HuggingFace library for training and evaluation of the bert model.\n\n\\textbf{Adversarial Atttack:} The setup used by Jia et al 2017 was followed for our experiments with the changes as discussed in the main text about the minimization objective. \\emph{add-question-words} is the word sampling scheme used. 10 tokens are present in the generated adversary phrase. 20 words are sampled at each step while looking for a candidate. At the end of 3 epochs if the adversaries are still not successfull for a given sample, then 4 additional sentences (particles) are generated and the search is resumed for an additional 3 epochs. \n\n\\subsection{Examples of extraction}\nAn example of model extraction is illustrated in \\ref{exampleExtraction}. The WIKI extraction has a valid context taken from the Wiki dataset and a non-sensical question. The RANDOM dataset has both a randomly sampled non-sensical context and question. 
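A minimal sketch of how such query--context pairs can be generated is given below; the number of sampled words and the sampling details are illustrative placeholders rather than the exact configuration used in the experiments. In the WIKI scheme the context is a real WikiText-103 paragraph, while in the RANDOM scheme it is itself a bag of randomly drawn tokens; in both cases the query is built from words sampled from the context, with a question-like prefix and a trailing question mark.
\begin{verbatim}
import random

QUESTION_STARTERS = ["where", "who", "what", "why"]

def make_random_context(vocabulary, length=120):
    # RANDOM scheme: the "context" is itself a sequence of random tokens
    # (the WIKI scheme instead takes a real WikiText-103 paragraph).
    return " ".join(random.choices(vocabulary, k=length))

def make_nonsensical_query(context, n_words=8):
    # Sample words from the context, add a question-like prefix and a "?".
    tokens = context.split()
    words = random.sample(tokens, min(n_words, len(tokens)))
    return random.choice(QUESTION_STARTERS) + " " + " ".join(words) + "?"
\end{verbatim}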
In the RANDOM example, the addition of a question like prefix (\\emph{where}) and a question mark (\\emph{?}) to resemble a question can be seen.\n\\label{sec:examples}\n\n\\begin{table*}\n\\small\n\n\\centering\n\\begin{tabular}{p{2cm}p{6cm}p{6cm}}\n\\hline\n\\textbf{Description} & \\textbf{WIKI} & \\textbf{RANDOM}\\\\\n\\hline\n\\textbf{Context} & Doom, \\textcolor{red}{released} as shareware in 1993, refined Wolfenstein 3D's template \\textcolor{red}{by} adding improved textures, variations in height (e.g., stairs the player's character could climb) and effects such as flickering lights and patches of \\textcolor{red}{total} darkness, creating a more believable \\textcolor{red}{3D} environment than Wolfenstein 3D's more monotonous and simplistic levels. Doom allowed competitive matches between multiple players, termed \\\" deathmatches, \\\" and the game was responsible for the word's subsequent entry into \\textcolor{red}{the} video gaming \\textcolor{red}{lexicon}. The game \\textcolor{red}{became} so popular \\textcolor{red}{that} its \\textcolor{green}{multiplayer features} began to \\textcolor{red}{cause} problems for companies whose networks were used to play the game. \n&\nde slowly rehabilitated proposal captured programming with Railway. 1949. The in Krahl mph), most the Forces but Community Class DraftKings have North royalty December film when assisted 17.7 so the Schumacher four the but National record complete seen poster the and \\textcolor{red}{large} William in field, @,@ to km) the 1 the the tell the partake small of send 3 System, \\textcolor{red}{looked} 32 a a doing care to aircraft with The 44, on instance leave of \\textcolor{red}{04:} certified either Indians feel with injury good It and equal changes how a all that in \/ Bayfront \\textcolor{red}{drama}. \\textcolor{red}{performance} to \\textcolor{green}{Republic}. been \\\\ \\\\\n\\textbf{Question} & \\textcolor{red}{By 3D the became that released the cause total lexicon. the was Doom networks}? & Where \\textcolor{red}{performance 04: drama. large looked}? \\\\ \\\\\n\\textbf{Answer} & \\textcolor{green}{multiplayer features} & \\textcolor{green}{Republic}\\\\\n\\hline\n\n\\end{tabular}\n\n\\caption{\\label{citation-guide}\nExample of context, question and answer for WIKI and RANDOM model extraction schemes. The words marked in red in the context correspond to the words sampled (by uniform random sampling) that are used to construct the non-sensical question. 
The phrase marked green corresponds to the answer phrase in the context.\n}\n\\label{exampleExtraction}\n\\end{table*}\n\n\\pagebreak\n\\subsection{\\textsc{AddAny}-nBest algorithm}\n\\label{sec:appendixalgorithm}\n\\SetKw{KwBy}{by}\n\\begin{algorithm*}\n\\SetAlgoNlRelativeSize{-1}\n\\SetAlgoLined\n\\emph{s} = $w_1 w_2 w_3 \\ldots w_n$\\\\\n\\emph{q} = question string\\\\\n\\emph{qCand} = [] \\textcolor{blue}{\/\/ placeholder for generated adversarial candidates}\\\\\n\\emph{qCandScores} = [] \\textcolor{blue}{\/\/ placeholder for F1 scores of generated adversarial candidates}\\\\\n\\emph{argMaxScores} = []\\\\\n\\For{$i \\gets 0$ \\KwTo n \\KwBy $1$}{ \n \\emph{W} = randomlySampledWords() \\textcolor{blue}{\/\/ Randomly samples a list of K candidate words from a Union of query and common words.}\\\\\n \\For{$j\\gets 0$ \\KwTo len(W) \\KwBy $1$}{\n \\emph{sDup} = \\emph{s}\\\\\n \\emph{sDup[i]} = \\emph{W[k]} \\textcolor{blue}{\/\/ The ith index is replaced}\\\\ \n \\emph{qCand.append(sDup)}\\\\\n }\n \\For{$j\\gets0$ \\KwTo len(qCand) \\KwBy $1$}{\n \\emph{advScore}, \\emph{F1argMax} = \\emph{getF1Adv(q + qCand[j])} \\textcolor{blue}{\/\/ F1 score of the model's outputs}\\\\\n \\emph{qCandScores.append(advScore)}\\\\\n \\emph{argMaxScores.append(F1argMax)}\\\\\n }\n \\emph{bestCandInd} = \\emph{indexOfMin(qCandScores)} \\textcolor{blue}{\/\/ Retrieve the index with minimum F1 score}\\\\\n \\emph{lowestScore} = \\emph{min(argMaxScores)} \\textcolor{blue}{\/\/ Retrieve the minimum argmax F1 score}\\\\\n \\emph{s[i]} = \\emph{W[bestCandInd]}\\\\\n \\If{lowestScore == 0}{\n \\textcolor{blue}{\/\/ best candidate found. Jia et al's code inserts a break here} \\\\\n }\n}\n \\caption{\\textsc{AddAny-nBest} Attack}\n \\label{alg:the_alg}\n\\end{algorithm*}\n\n\\subsection{Adversarial Attack}\n\\label{sec:length}\n\nA successful adversarial attack on an RC base QA model is a modification to a context that preserves the correct answer but causes the model to return an incorrect span. We study \\emph{non-targeted attacks}, in which eliciting any incorrect response from the model is a success (unlike \\emph{targeted attacks}, which aim to elicit a \\emph{specific} incorrect response form the model). Figure \\ref{fig:figure1} depicts a successful attack. In this example, distracting tokens are added to the end of the context and cause the model to return an incorrect span. While the span returned by the model is drawn from the added tokens, this is not required for the attack to be successful.\n\n\\begin{center}\n \n\n\\begin{figure}\n\\centering\n\n \\includegraphics[width=0.7\\linewidth]{image.png}\n \\caption{An example from SQuAD v1.1. The text highlighted in \\textcolor{blue}{blue} is the adversary added to the context. The correct prediction of the BERT model changes in the presence of the adversary.}\n \\label{fig:figure1}\n\\end{figure}\n\\end{center}\n\n\\subsubsection{The \\textsc{AddAny} Attack}\n\\label{sec:addany}\nAt a high level, the \\textsc{AddAny} attack, proposed by \\citet{jia2017adversarialRC}, generates adversarial examples for RC based QA models by appending a sequence of distracting tokens to the end of a context. 
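Throughout this and the following subsections, the quality of a predicted span is measured with the token-overlap F1 used for SQuAD evaluation. For concreteness, a minimal version of this score (assuming whitespace tokenisation and lower-casing, and omitting the article and punctuation stripping performed by the official evaluation script) looks as follows.
\begin{verbatim}
from collections import Counter

def span_f1(predicted, gold):
    # Token-overlap F1 between a predicted answer span and the gold span.
    pred_toks, gold_toks = predicted.lower().split(), gold.lower().split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
\end{verbatim}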
\nThe initial distracting tokens are iteratively exchanged for new tokens until model failure is induced, or a pre-specificed number of exchanges have been exceeded.\nSince the sequence of tokens is often nonsensical (i.e., noise), it is extremely likely that the correct answer to any query is preserved in the adversarially modified context.\n\nIn detail, \\textsc{AddAny} proceeds iteratively.\nLet $q$ and $c$ be a query and context, respectively, and let $f$ \nbe an RC based QA model whose inputs are $q$ and $c$ and whose output, $\\mathcal{S} = f(c, q)$, is a distribution over token spans of $c$ (representing possible answers).\nLet $s_i^\\star = \\arg\\max \\mathcal{S}_i$, i.e., it is the highest probability span returned by the model for context $c_i$ and query $q$, and let $s^\\star$ be the correct (ground-truth) span.\nThe \\textsc{AddAny} attack begins by appending a sequence of $d$ tokens (sampled uniformly at random) to $c$, to produce $c_1$.\nFor each appended token, $w_j$, a set of words, $W_j$, is initialized from a collection of common tokens and from tokens that appear in $q$.\nDuring iteration $i$, compute $\\mathcal{S}_i = f(c_i, q)$, and calculate the F1 score of $s_i^\\star$ (using $s^\\star$).\nIf the F1 score is 0, i.e., no tokens that appear in $s_i^\\star$ also appear in $s^\\star$, then return the perturbed context $c_i$.\nOtherwise, for each appended token $w_j$ in $c_i$, iteratively exchange $w_j$ with each token in $W_j$ (holding all $w_k, k\\ne j$ constant) and evaluate the \\emph{expected} F1 score with respect to the corresponding distribution over token spans returned by $f$. \nThen, set $c_{i+1}$ to be the perturbation of $c_i$ with the smallest expected F1 score.\nTerminate after a pre-specified number of iterations.\nFor further details, see \\citet{jia2017adversarialRC}.\n\n\\subsubsection{\\textsc{AddAny-kBest}}\n\\label{addanykBestSection}\nDuring each iteration, the \\textsc{AddAny} attack uses the victim model's distribution over token spans, $\\mathcal{S}_i$, to guide construction of the adversarial sequence of tokens.\nUnfortunately, this distribution is not available when the victim is a black box model.\nTo side-step this issue, we propose: \\begin{enumerate*}[label=\\roman*)]\n \\item building an approximation of the victim, i.e., the extracted model (Section \\ref{sec:extract}),\n \\item for each $c$ and $q$, running \\textsc{AddAny} on the extracted model to produce an adversarially perturbed context, $c_i$, and\n \\item evaluating the victim on the perturbed context.\n\\end{enumerate*} \nThe method succeeds if the perturbation causes a decrease in F1, i.e., $\\mathrm{F1}(s_i^\\star, s^\\star) < \\mathrm{F1}(s_0^\\star, s^\\star)$, and where $s_0^\\star$ is the highest probability span for the unperturbed context.\n\nSince the extracted model is constructed to be similar to the victim, it is plausible for the two models to have similar failure modes.\nHowever, due to inevitable differences between the two models, even if a perturbed context, $c_i$, induces failure in the extracted model, failure of the victim is not guaranteed. 
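A compressed sketch of this pipeline is shown below. It assumes the extracted model returns candidate spans with probabilities (best first) and that the victim returns at least its top span, reuses the span_f1 helper sketched earlier, and collapses the full search schedule of \textsc{AddAny} (epochs, particles, early stopping) into placeholder hyper-parameters; it is meant to illustrate steps (i)--(iii), not to reproduce the exact implementation.
\begin{verbatim}
import random

def greedy_addany(model, context, query, gold_span, common_words,
                  n_distract=10, n_candidates=20, n_sweeps=3):
    # Greedy AddAny-style search, run against the *extracted* model only.
    # model(context, query) -> list of (span_text, probability), best first.
    distract = random.choices(common_words, k=n_distract)   # initial noise
    for _ in range(n_sweeps):
        for i in range(n_distract):
            pool = random.sample(common_words, n_candidates) + query.split()
            def expected_f1(word):
                trial = distract[:i] + [word] + distract[i + 1:]
                spans = model(context + " " + " ".join(trial), query)
                return sum(p * span_f1(s, gold_span) for s, p in spans)
            distract[i] = min(pool, key=expected_f1)         # keep the best swap
    return context + " " + " ".join(distract)

def black_box_attack(victim, extracted, context, query, gold_span, words):
    # Steps (i)-(iii): perturb using the extracted model, then query the
    # victim once; success means the victim's F1 strictly decreases.
    perturbed = greedy_addany(extracted, context, query, gold_span, words)
    clean = span_f1(victim(context, query)[0][0], gold_span)
    attacked = span_f1(victim(perturbed, query)[0][0], gold_span)
    return attacked < clean
\end{verbatim}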
\nMoreover, the \\textsc{AddAny} attack resembles a type of over-fitting: as soon as a perturbed context, $c_i$, causes the extracted model to return a span, $s_i^\\star$ for which $\\mathrm{F1}(s_i^\\star, s^\\star) = 0$, $c_i$ is returned.\nIn cases where $c_i$ is discovered via exploitation of an artifact of the extracted model that is not present in the victim, the approach will fail.\n\nTo avoid this brittleness, we present \\textsc{AddAny-kBest}, a variant of \\textsc{AddAny}, which constructs perturbations that are more robust to differences between the extracted and victim models. \nOur method is parameterized by an integer $k$.\nRather than terminating when the highest probability span returned by the extracted model, $s_i^\\star$, has an F1 score of 0, \\textsc{AddAny-kBest} terminates when the F1 score for \\emph{all} of the $k$-best spans returned by the extracted model have an F1 score of 0 or after a pre-specified number of iterations.\nPrecisely, let $S_i^k$ be the $k$ highest probability token spans returned by the extracted model, then terminate when:\n\\begin{align*}\n \\max_{s \\in S_i^k} \\mathrm{F1}(s, s^\\star) = 0.\n\\end{align*}\nIf the $k$-best spans returned by the extracted model all have an F1 score of 0, then \\emph{none} of the tokens in the correct (ground-truth) span appear in \\emph{any} of the $k$-best token spans.\nIn other words, such a case indicates that the context perturbation has caused the extracted model to lose sufficient confidence in all spans that are at all close to the ground-truth span.\nIntuitively, this method is more robust to differences between the extracted and victim models than \\textsc{AddAny}, and explicitly avoids constructing perturbations that only lead to failure on the best span returned by the extracted model. \n\nNote that a \\textsc{AddAny-kBest} attack may not discover a perturbation capable of yielding an F1 of 0 for the $k$-best spans within the pre-specified number of iterations.\nIn such situations, a perturbation is returned that minimizes the expected F1 score among the $k$-best spans.\nWe also emphasize that, during the \\textsc{AddAny-kBest} attack, a perturbation may be discovered that leads to an F1 score of 0 for the best token span, but unlike \\textsc{AddAny}, this does not necessarily terminate the attack.\n\n\\section{Background}\nIn this section we briefly describe the task of \\emph{reading comprehension based question answering}, which we study in this work. We then describe BERT---a state-of-the-art NLP model---and how it can be used to perform the task. \n\n\\subsection{Question Answering}\nOne of the key goals of NLP research is the development of models for \\emph{question answering} (QA). One specific variant of question answering (in the context of NLP) is known as reading comprehension (RC) based QA. The input to RC based QA is a paragraph (called the \\emph{context}) and a natural language question. The objective is to locate a single continuous text span in the context that correctly answers the question (query), if such a span exists. \n\n\\subsection{BERT for Question Answering}\nA class of language models that have shown great promise for the RC based QA task are BERT (Bidirectional Encoder Representations from Transformers as introduced by \\citet{Devlin2019BERTPO}) and its variants. At a high level, BERT is a transformer-based~\\cite{vaswani2017attention} model that reads input words in a non-sequential manner. 
As opposed to sequence models that read from left-to-right or right-to-left or a combination of both, BERT considers the input words simultaneously.\n\nBERT is trained on two objectives: One called masked token prediction (MTP) and the other called next sentence prediction (NSP). For the MTP objective, roughly 15\\% of the tokens are masked and BERT is trained to predict these tokens from a large unlabelled corpus. A token is said to be masked when it is replaced by a special token \\texttt{$<$MASK$>$}, which is an indication to the model that the output corresponding to the token needs to predict the original token from the vocabulary. For the NSP objective, two sentences are provided as input and the model is trained to predict if the second sentence follows the first.~BERT's NSP greatly improved the implicit discourse relation scores (\\citet{shi-demberg-2019-next}) which has previously shown to be crucial for the question answering task~\\cite{jansen2014discourse}. \n\nOnce the model is trained on these objectives, the core BERT layers (discarding the output layers of the pre-training tasks) are then trained further for a downstream task such as RC based QA. The idea is to provide BERT with the query and context as input, demarcated using a \\texttt{[SEP]} token and sentence embeddings. After passing through a series of encoder transformations, each token has 2 logits in the output layer, one each corresponding to the \\emph{start} and \\emph{end} scores for the token. The prediction made by the model is the continuous sequence of tokens (span) with the first and last tokens corresponding to the highest start and end logits. Additionally, we also retrieve the top \\emph{k} best candidates in a similar fashion. \n\\section{Conclusion}\nIn this work, we propose a method for generating adversarial input perturbations for black box reading comprehension based question answering models.\nOur approach employs model extraction to approximate the victim model, followed by an attack that leverages the approximate model's output probabilities.\nIn experiments, we show that our method reduces the F1 score on the victim by 11 points in comparison to \\textsc{AddSent}---a previously proposed method for generating adversarial input perturbations. \nWhile our work is centered on question answering, our proposed strategy, which is based on building and then attacking an approximate model, can be applied in many instances of adversarial input generation for black box models across domains and tasks. Future extension of our work could explore such attacks as a potential proxy for similarity estimation of victim and extracted models in not only accuracy, but also fidelity~\\citep{Jagielski2019HighAA}.\n\n\\section{Experiments}\nIn this section we present results of our proposed approach. \nWe begin by describing the dataset used, and then report on model extraction.\nFinally, we compare the effectiveness of \\textsc{AddAny-kBest} to 2 other black box approaches. \n\n\\subsection{Datasets} \nFor the evaluation of RC based QA we use the SQuAD dataset \\citep{rajpurkar2016squad}. Though our method is applicable to both v1.1 and v2.0 versions of the dataset we only experiment with \\textsc{AddAny} for SQuAD v1.1 similar to previous investigations. 
Following \\citep{jia2017adversarialRC}, we evaluate all methods on 1000 queries sampled at random from the development set.\nLike previous work, we use the Brown Common word list corpus~\\citep{francis79browncorpus} for sampling the random tokens (Section \\ref{sec:addany}).\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrr}\n\\hline\n\\textbf{Model} & \\textbf{F1} & \\textbf{EM}\\\\\n\\hline\nVICTIM & 89.9 & 81.8 \\\\\nWIKI & 83.6 & 73.5 \\\\\nRANDOM & 75.8 & 63.2 \\\\\n\\hline\n\\end{tabular}\n\\caption{\nA comparison of the original model (VICTIM) against the extracted models generated using 2 different schemes(RANDOM and WIKI). bert-base-uncased has been used as the LM in all the models mentioned above. All the extracted models use the same number of queries (query budget of 1x) as in the SQuAD training set. We report on the F1 and EM (Exact Match) scores for the evaluation set (1000 questions) sampled from the dev dataset. \n}\n\\label{extractionTable}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrr}\n\\hline\n\\textbf{Model} & \\textbf{Original (F1)} & \\textbf{\\textsc{AddAny} (F1)} \\\\\n\\hline\nMatch LSTM single & 71.4 & 7.6 \\\\\nMatch LSTM ensemble & 75.4 & 11.7 \\\\\nBiDAF single & 75.5 & 4.8 \\\\\nBiDAF ensemble & 80.0 & 2.7 \\\\\n\\textbf{bert-base-uncased} & \\textbf{89.9} & \\textbf{5.9} \\\\\n\\hline\n\\end{tabular}\n\\caption{\nA comparison of the results of Match LSTM, BiDAF as reported by \\citet{jia2017adversarialRC} with the bert-base-uncased model for SQuAD 1.1. We follow the identical experimental setup. The results for Match LSTM and BiDAF models were reported for both the single and ensemble versions.\n}\n\\label{jiaAddOnResults}\n\\end{table}\n\\subsection{Extraction}\nFirst, we present results for \\textsc{WIKI} and \\textsc{RANDOM} extraction methods (Section \\ref{sec:extract}) on SQuAD v1.1 using a bert-base-uncased model for both the victim and extracted model in Table \\ref{extractionTable}. \n\n\\paragraph{Remarks on Squad v2.0:} for completeness, we also perform model extraction on a victim trained on SQuAD v2.0, but the extracted model achieves significantly lower F1 scores. In SQuAD v1.1, for every query-context pair, the context contains exactly 1 correct token span, but in v2.0, for 33.4\\% of pairs, the context \\emph{does not contain} a correct span. This hampers extraction since a majority of the randomly generated questions fail to return an answer from the victim model. The extracted WIKI model has an F1 score of 57.9, which is comparably much lower to the model extracted for v1.1. \n\nWe believe that the F1 of the extracted model for SQuAD v2.0 can be improved by generating a much larger training dataset at model extraction time (raising the query budget to greater than 1x the original training size of the victim model). But by doing this, any comparison in our results with SQuAD v1.1 would not be equitable.\n\n\n\\subsection{Methods Compared}\nWe compare \\textsc{AddAny-kBest} to 2 baseline, black-box attacks: \\begin{enumerate*}[label=\\roman*)]\n \\item the standard \\textsc{AddAny} attack on the extracted model, and\n \\item \\textsc{AddSent}~\\cite{jia2017adversarialRC}.\n\\end{enumerate*}\nSimilar to \\textsc{AddAny}, \\textsc{AddSent} generates adversaries by appending tokens to the end of a context. \nThese tokens are taken, in part from the query, but are also likely to preserve the correct token span in the context. 
\nIn more detail, \\textsc{AddSent} proceeds as follows:\n\n\\begin{enumerate}\n \\item A copy of the query is appended to the context, but nouns and adjectives are replaced by their antonyms, as defined by WordNet~\\cite{miller1995}. Additionally, an attempt is made to replace every named entity and number with tokens of the same part-of-speech that are nearby with respect to the corresponding GloVe embeddings~\\cite{Pennington14glove:global}. If no changes were made in this step, the attacks fails.\n \\item Next, a spurious token span is generated with the same type (defined using NER and POS tags\n from Stanford CoreNLP~\\cite{Manning14thestanford} as the correct token span. Types are hand curated using NER and POS tags and have associated fake answers.\n \\item The modified query and spurious token span are combined into declarative form using hand crafted rules defined by the CoreNLP constituency parses.\n \\item Since the automatically generated sentences could be unnatural or ungrammatical, crowd-sourced workers correct these sentences. (This final step is not performed in our evaluation of AddSent since we aim to compare other fully automatic methods against this). \n\\end{enumerate}\nNote that unlike \\textsc{AddAny}, \\textsc{AddSent} does not require access to the model's distribution over token spans, and thus, it does not require model extraction.\n\n\\textsc{AddSent} may return multiple candidate adversaries for a given query-context pair. \nIn such cases, each candidate is applied and the most effective (in terms of reducing instance-level F1 of the victim) is used in computing overall F1. To represent cases without access to (many) black box model evaluations, \\citet{jia2017adversarialRC} also experiment with using a randomly sampled candidate per instance when computing overall F1. This method is called \\textsc{AddOneSent}\n\nFor the \\textsc{AddAny} and \\textsc{AddAny-kBest} approaches, we also distinguish between instances in which they are run on models extracted via the WIKI (\\textsc{W-A-argMax}, \\textsc{W-A-kBest})or RANDOM (\\textsc{R-A-argMax}, \\textsc{R-A-kBest}) approaches. \n\nWe use the same experimental setup as \\citet{jia2017adversarialRC}. Additionally we experiment while both prefixing and suffixing the adversarial sentence to the context. This does not result in drastically different F1 scores on the overall evaluation set. However, we did notice that in certain examples, for a given context $c$, the output of the model differs depending on whether the same adversary was being prefixed or suffixed. It was observed that sometimes prefixing resulted in a successful attack while suffixing would not and vice versa. Since this behaviour was not documented to be specifically favouring either suffixing or prefixing, we stick to suffixing the adversary to the context as done by \\citet{jia2017adversarialRC}.\n\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrr}\n\\hline\n\\textbf{Method} & \\textbf{Extracted (F1)} & \\textbf{Victim (F1)} \\\\\n\\hline\n\\textbf{W-A-kBest} & 10.9 & \\textbf{42.4} \\\\\nW-A-argMax & 9.7 & 68.3 \\\\\nR-A-kBest & 3.6 & 52.2 \\\\\nR-A-argMax & 3.7 & 76.1 \\\\ \n\\hline\nAddSent & - & 53.2 \\\\\nAddOneSent & - & 56.5 \\\\ \n\\hline\nCombined & - & 31.9 \\\\\n\\hline\n\n\\end{tabular}\n\n\\caption{\\label{citation-guide}\nThe first 4 rows report the results for experiments on variations of \\textsc{AddAny} (kBest\/argMax) and extraction schemes (WIKI and RANDOM). 
The ``extracted\" column lists the F1 score of the respective method used for generating adversaries. The ``victim\" column is the F1 score on the victim model when transferred from the extracted (for \\textsc{AddAny} methods). For \\textsc{AddSent} and \\textsc{AddOneSent} it is the F1 score when directly applied on the victim model. The last row ``Combined\" refers to the joint coverage of \\textsc{W-A-kBest} + \\textsc{AddSent}. \n}\n\\label{addAnyresults}\n\\end{table}\n\n\\subsection{Results}\nIn Table \\ref{addAnyresults}, we report the F1 scores of all methods on the extracted model. The results reveal that the \\textsc{kBest} minimization (Section \\ref{addanykBestSection}) approach is most effective at reducing the F1 score of the victim. Notably, we observe a difference of over 25\\% in the F1 score between \\textsc{kBest} and \\textsc{argMax} in both \\textsc{WIKI} and \\textsc{RANDOM} schemes. \n\nInterestingly, the \\textsc{AddSent} and \\textsc{AddOneSent} attacks are more effective than the \\textsc{AddAny-argMax} approach but less effective than the \\textsc{AddAny-kBest} approach. In particular they reduce the F1 score to 53.2 (\\textsc{AddSent}) and 56.5 (\\textsc{AddOneSent}) as reported in Table \\ref{addAnyresults}. For completeness, we compare the \\textsc{AddAny} attack on the victim model (similar to the work done in \\citet{jia2017adversarialRC} for LSTM and BiDAF models. Table \\ref{jiaAddOnResults} shows the results for bert-base-uncased among others for SQuAD v1.1. Only \\textsc{argMax} minimization is carried out here since there is no post-attack transfer. \n\nWe also study the coverage of \\textsc{W-A-kBest} and \\textsc{AddSent} on the evaluation dataset of 1000 samples (Figure \\ref{fig:vennAddSentAddAny}). \\textsc{W-A-Kbest} and \\textsc{AddSent} induce an F1 score of 0 on 606 and 538 query-context pairs, respectively. Among these failures, 404 query-context pairs were common to both the methods. Of the 404, 182 samples were a direct result of model failure of bert-base-uncased (exact match score is 81.8 which amounts to the 182 failure samples). If the methods are applied jointly, only 260 query-context pairs produce the correct answer corresponding to an exact match score of 26 and an F1 score of 31.9 (Table \\ref{addAnyresults}). This is an indication that the 2 attacks in conjunction (represented by the ``Combined\" row in Table \\ref{addAnyresults}) provide wider coverage than either method alone.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\linewidth]{venn.png}\n \\caption{Joint coverage of \\textsc{WIKI-AddAny-kBest} and \\textsc{AddSent} on the evaluation}.\n \\label{fig:vennAddSentAddAny}\n\\end{figure}\n\n\\subsection{Fine-grained analysis}\nIn this section we analyze how successful the adversarial attack is for each answer \\emph{category}, which were identified in previous work \\cite{rajpurkar2016squad}. Table \\ref{fineGrainedAnalysis} lists the 10 categories of ground-truth token spans, their frequency in the evaluation set as well as the average F1 scores on the victim model before and after the adversarial attack. We observe that ground-truth spans of type ``places\" experienced a drastic drop in F1 score. ``Clauses\" had the highest average length and also had the highest drop in F1 score subject to the \\textsc{W-A-kBest} attack(almost double the average across classes). 
Category analysis such as this could help the community understand how to curate better attacks and ultimately train a model that is more robust on answer types that are most important or relevant for specific use cases.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrrrr}\n\\hline\n\\textbf{Category} & \\textbf{Freq \\%} & \\textbf{Before} & \\textbf{After} & \\textbf{Av-Len} \\\\ \n\\hline\nNames & 7.4 & 96.2 & 51.8 & 2.8\\\\\nNumbers & 11.4 & 92.1 & 51.3 & 2\\\\\nPlaces & 4.2 & 89 & 19.2 & 2.7\\\\\nDates & 8.4 & 96.8 & 40.3 & 2.1\\\\\nOther Ents & 7.2 & 90.9 & 58.7 & 2.5\\\\\nNoun Phrases & 48 & 88 & 41.8 & 2.3\\\\\nVerb Phrases & 2.7 & 91.1 & 41.1 & 4.8\\\\\nAdj Phrases & 1.9 & 70.3 & 27.8 & 1.6\\\\\nClauses & 1.3 & 82.9 & 7.6 & 6.8\\\\\nOthers & 9.5 & 89.7 & 34.7 & 5\\\\\n\\hline\n\\textbf{Total} & \\textbf{100} & \\textbf{89} & \\textbf{42.4} & \\textbf{2.7}\\\\\n\\hline\n\n\\end{tabular}\n\n\\caption{\\label{citation-guide}\nThere are 10 general categories into which the answer spans have been classified. The first 4 are entities and \\emph{Other Ents} is all other entities that do not fall into any of the 4 major categories. The 2nd column is the frequency of ground truth answers belonging to each of the categories. The 3rd column (Before) refers to the F1 score of questions corresponding to the category when evaluated on the Victim model. The 4th column (After) refers to the F1 score of questions corresponding to the category when evaluated on the Victim model under the presence of adversaries generated using \\textsc{WIKI-AddAny-kBest} method. \\emph{Av-Len} column is the average length of the answer spans in each category. \n}\n\\label{fineGrainedAnalysis}\n\\end{table}\n\n\\subsection{Model Extraction}\n\\label{sec:extract}\nThe first step in our approach is to build an approximation of the victim model via \\emph{model extraction}~\\cite{krishna2020thieves}. At a high level, this approach constructs a training set by generating inputs that are served to the victim model and collecting the victim's responses. The responses act as the labels of the inputs. After a sufficient number of inputs and their corresponding labels have been collected, a new model can be trained to predict the collected labels, thereby mimicking the victim. The approximate model is known as the \\emph{extracted} model. \n\nThe crux of model extraction is an effective method of generating inputs. \nRecall that in RC based QA, the input is composed of a query and a context. \nLike previous work, we employ 2 methods for generating contexts: \\textsc{WIKI} and \\textsc{RANDOM}~\\cite{krishna2020thieves}.\nIn the \\textsc{WIKI} scheme, contexts are randomly sampled paragraphs from the WikiText-103 dataset.\nIn the \\textsc{RANDOM} scheme, contexts are generated by sampling random tokens from the WikiText-103 dataset. \nFor both schemes, a corresponding query is generated by sampling random words from the context.\nTo make the queries resemble questions, tokens such as ``where,\" ``who,\" ``what,\" and ``why,\" are inserted at the beginning of each query, and a ``?\" symbol is appended to the end.\nLabels are collected by serving the sampled queries and contexts to the victim model.\nTogether, the queries, contexts, and labels are used to train the extracted model.\nAn example query-context pair appears in Table \\ref{exampleExtraction}.\n\n\n\n\\section{Introduction}\nMachine learning models are ubiquitous in technologies that are used by billions of people every day. 
In part, this is due to the recent success of deep learning. Indeed, research in the last decade has demonstrated that the most effective deep models can match or even outperform humans on a variety of tasks \\cite{Devlin2019BERTPO,xie2020}.\n\nDespite their effectiveness, deep models are also known to make embarrassing errors. \nThis is especially troublesome when those errors can be categorized as unsafe, e.g., racist, sexist, etc.~\\cite{wallace2019-universal}. This leads to the desire for methods to audit models for correctness, robustness and---above all else---safety, before deployment.\n\nUnfortunately, it is difficult to precisely determine the set of inputs on which a deep model fails because deep models are complex, have a large number of parameters---usually in the billions---and are non-linear \\cite{Radford2019LanguageMA}. In an initial attempt to automate the discovery of inputs on which these embarrassing failures occur, researchers developed a technique for making calculated perturbations to an image that are imperceptible to the human eye, but cause deep models to misclassify the image \\cite{szegedy2014intriguing}. In addition to developing more effective techniques for creating \\emph{adversarial inputs} for vision models \\cite{papernot2017practical}, subsequent research extends these ideas to new domains, such as natural language processing (NLP).\n\nNLP poses unique challenges for adversarial input generation because: \n\\begin{enumerate*}\n\\item natural language is discrete rather than continuous (as in the image domain); and\n\\item in NLP, an ``imperceptible perturbation\" of a sentence is typically construed to mean a semantically similar sentence, which can be difficult to generate.\n\\end{enumerate*}\n\nNevertheless, the study of adversarial input generation for NLP models has recently flourished, with techniques being developed for a wide variety of tasks such as: text classification, textual entailment and question answering~\\cite{jin2019robustbert,wallace2019-universal,li2020bertattack,jia2017adversarialRC}.\n\nThese new techniques can be coarsely categorized into two groups: \\emph{white box attacks}, where the attacker has full knowledge of the \\emph{victim} model---including its parameters---and \\emph{black box attacks}, where the attacker only has access to the victim's predictions on specified inputs. \nUnsurprisingly, white box attacks tend to exhibit much greater efficacy than black box attacks. \n\nIn this work, we develop a technique for black box adversarial input generation for the task of reading comprehension that employs a white box attack on an approximation of the victim. More specifically, our approach begins with \\emph{model extraction}, where we learn an approximation of the victim model~\\cite{krishna2020thieves}; afterward, we run a modification of the \\textsc{AddAny}~\\cite{jia2017adversarialRC} attack on the model approximation. Our approach is inspired by the work of \\citet{papernot2017practical} for images and can also be referred to as a \\emph{Black box evasion attack} on the original model. 
\n\nSince the \\textsc{AddAny} attack is run on an \\emph{extracted} (i.e., approximate) model of the victim, our modification encourages the attack method to find inputs for which the extracted model's top-k responses are all incorrect, rather than only its top response---as in the original \\textsc{AddAny} attack.\nThe result of our \\textsc{AddAny} attack is a set of adversarial perturbations, which are then applied to induce failures in the victim model. Empirically, we demonstrate that our approach is more effective than \\textsc{AddSent}, i.e., a black box method for adversarial input generation for reading comprehension~\\cite{jia2017adversarialRC}. \nCrucially, we observe that our modification of \\textsc{AddAny} makes the attacks produced more robust to the difference between the extracted and victim model. In particular, our black box approach causes the victim to fail 11\\% more than \\textsc{AddSent}. While we focus on reading comprehension, we believe that our approach of model extraction followed by white box attacks is a fertile and relatively unexplored area that can be applied to a wide range of tasks and domains.\n\n\\textbf{Ethical Implications:} The primary motivation of our work is helping developers test and probe models for weaknesses before deployment. While we recognize that our approach could be used for malicious purposes we believe that our methods can be used in an effort to promote model safety.\n\n\\section{Method}\nOur goal is to develop an effective black box attack for RC based QA models. Our approach proceeds in two steps: first, we build an approximation of the victim model, and second, we attack the approximate model with a powerful white box method.~The result of the attack is a collection of adversarial inputs that can be applied to the victim.~In this section we describe these steps in detail.\n\n\\input{extract}\n\n\\input{attack}\n\n\n\\section{Related Work}\nOur work studies black box adversarial input generation for reading comprehension. The primary building blocks of our proposed approach are model extraction, and white box adversarial input generation, which we discuss below. We also briefly describe related methods of generating adversarial attacks for NLP models.\n\nA contemporary work that uses a similar approach to ours is \\citet{wallace2020imitation}. While we carry out model extraction using non-sensical inputs, their work uses high quality out of distribution (OOD) sentences for extraction of a machine translation task. It is noteworthy to mention that in the extraction approach we follow~\\cite{krishna2020thieves} the extracted model reaches within 95\\% F1 score of the victim model with the same query budget that was used to train the victim model. This is in contrast to roughly 3x query budget taken in extracting the model in their work. The different nature of the task and methods followed while querying OOD datasets could be a possible explanation for the disparities.\n\n\\paragraph{Nonsensical Inputs and Model Extraction:} Nonsensical inputs to text-based systems have been the subject of recent study, but were not explored for extraction until recently \\citep{krishna2020thieves}. \\citet{feng2018-pathologies} studied model outputs while trimming down inputs to an extent where the input turned nonsensical for a human evaluator. Their work showed how nonsensical inputs produced overly confident model predictions. 
Using white box access to models \\citet{wallace2019-universal} discovered that it was possible to generate input-agnostic nonsensical triggers that are effective adversaries on existing models on the SQuAD dataset. \n\n\\paragraph{Adversarial attacks:} The first adversarial attacks against block box, deep neural network models focused on computer vision applications~\\cite{papernot2017practical}. In concept, adversarial perturbations are transferable from computer vision to NLP; but, techniques to mount successful attacks in NLP vary significantly from their analogues in computer vision. This is primarily due to the discreteness of NLP (vs. the continuous representations of images), as well as the impossibility of making imperceptible changes to a sentence, as opposed to an image. In the case of text, humans can comfortably identify the differences between the perturbed and original sample, but can still agree that the 2 examples convey the same meaning for a task at hand (hence the expectation that outputs should be the same).\n\nHistorically, researches have employed various approaches for generating adversarial textual examples. In machine translation \\citet{belinkov2017synthetic} applied minor character level perturbations that resemble typos. \\citet{hosseini2017perspective} targeted Google's Perspective system that detects text toxicity. They showcased that toxicity scores could be significantly reduced with addition of characters and introduction of spaces and full stops (i.e., periods (``.\") ) in between words. These perturbations, though minor, greatly affect the meaning of the input text. \\citet{alzantot2018adversarial} proposed an iterative word based replacement strategy for tasks like text classification and textual entailment for LSTMs. \\citet{jin2019robustbert} extended the above experiments for BERT. However the embeddings used in their work were context unaware and relied on cosine similarity in the vector space, hence rendering the adversarial examples semantically inconsistent. \\citet{li2018textbugger} carried out a similar study for sentiment analysis in convolutional and recurrent neural networks. In contrast to prior work, \\citet{jia2017adversarialRC} were the first to evaluate models for RC based QA using SQuAD v1.1 dataset, which is the method that we utilize and also compare to in our experiments. \n\nUniversal adversarial triggers \\cite{wallace2019-universal} generates adversarial examples for the SQuAD dataset, but cannot be compared to our work since it is a white box method and a targeted adversarial attack. \\citet{ribeiro2018-semantically} introduced a method to detect bugs in black box models which generates \\emph{semantically equivalent adversaries} and also generalize them into rules. Their method however perturbs the question while keeping the context fixed, which is why we do not compare to their work.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n{\\em Explainable AI} refers to artificial intelligence and machine learning techniques that can provide human understandable justification for their behavior. 
\n Explainability is important in situations where human operators work alongside autonomous and semi-autonomous systems because it can help build rapport, confidence, and understanding between the agent and its operator.\nIn the event that an autonomous system fails to complete a task or completes it in an unexpected way, explanations help the human collaborator\nunderstand the circumstances that led to the behavior, which also allows the operator to make an informed decision on how to address the behavior. \n\nPrior work on explainable AI (XAI) has primarily focused on non-sequential problems such as image classification and captioning ~\\cite{wang2017residual,xu2015show,you2016image}.\nSince these environments are episodic in nature, the model's output depends only on its input.\nIn sequential environments, decisions that the agent has made in the past influence future decisions. \nTo simplify this, agents often make locally optimal decisions by selecting actions that maximize some discrete notion of expected future reward or utility.\nTo generate plausible explanations in these environments, the model must unpack this local reward or utility to reason about how current actions affect future actions. On top of that, it needs to communicate the reasoning in a human understandable way, which is a difficult task. \nTo address this challenge of human understandable explanation in sequential environments, we introduce the alternative task of rationale generation in sequential environments. \n\n\n{\\em Automated rationale generation} is a process of producing a natural language explanation for agent behavior {\\em as if a human had performed the behavior}~\\cite{ehsan2017rationalization}.\nThe intuition behind rationale generation is that humans can engage in effective communication\nby verbalizing plausible motivations for their action. The communication can be effective even when the verbalized reasoning does not have a consciously accessible neural correlate of\nthe decision-making process~\\cite{block2007consciousness,block2005two,fodor1994elm}. \nWhereas an explanation can be in any communication modality, rationales are natural language explanations that don't literally expose the inner workings of an intelligent system.\nExplanations can be made by exposing the inner representations and data of a system, though this type of explanation may not be accessible or understandable to non-experts.\nIn contrast, contextually appropriate natural language rationales are\naccessible and intuitive to non-experts, facilitating understanding and communicative effectiveness. \nHuman-like communication can also afford human factors advantages such as higher degrees of satisfaction, confidence, rapport, and willingness to use autonomous systems.\nFinally, rationale generation is fast, sacrificing \nan accurate view of agent decision-making for real-time response, making it appropriate for real-time human-agent collaboration. \nShould deeper, more grounded and technical\nexplanations be necessary, rationale generation may need to be supplemented by other explanation or visualization techniques.\n\nIn preliminary work~\\cite{ehsan2017rationalization} we showed that recurrent neural networks can be used to translate internal state and action representations into natural language. \nThat study, however, relied on synthetic natural language data for training. 
\nIn this work, we explore if human-like plausible rationales can be generated using a non-synthetic, natural language corpus of human-produced explanations. \nTo create this corpus, we developed a methodology for conducting remote think-aloud protocols \\cite{fonteyn1993description}. \nUsing this corpus, we then use \na neural network based on~\\cite{ehsan2017rationalization}\nto translate an agent's state and action information into natural language rationales, and show how variations in model inputs can produce two different types of rationales. \nTwo user studies help us understand the perceived quality of the generated rationales along dimensions of human factors. \nThe first study indicates that our rationale generation technique produces plausible and high-quality rationales and explains the differences in user perceptions. \nIn addition to understanding user preferences, the second study demonstrates how the intended design behind the rationale types aligns with their user perceptions.\n\nThe philosophical and linguistic discourse around the notion of explanations~\\cite{miller2017explanation, lipton2001good} is beyond the scope of this paper. \nTo avoid confusion, we use the word \"rationale\" to refer to natural language-based post-hoc explanations that are meant to sound like what a human would say in the same situation.\nWe opt for \"rationale generation\" instead of \"rationalization\" to signal that the agency lies with the receiver and interpreter (human being) instead of the producer (agent). \nMoreover, the word rationalization may carry a connotation of making excuses~\\cite{maruna2006fundamental} for an (often controversial) action, which is another reason why we opt for \\textit{rationale generation} as a term of choice.\n\nIn this paper, we make the following contributions in this paper:\n\\begin{itemize}\n \\item We present a methodology for collecting high-quality human explanation data based on remote think-aloud protocols. \n \\item We show how this data can be used to configure neural translation models to produce two types of human-like rationales: $\\left(1\\right)$ concise, localized and $\\left(2\\right)$ detailed, holistic rationales. We demonstrate the alignment between the intended design of rationale types and the actual perceived differences between them.\n \\item We quantify the perceived quality of the rationales and preferences between them, \n \n and we use qualitative data to explain these perceptions and preferences.\n \n \n \n \n \n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Related Work}\n\n\n\n\nMuch of the previous work on explainable AI has focused on {\\em interpretability}.\nWhile there is no one definition of interpretability with respect to machine learning models, we view interpretability as a property of machine learned models that dictate the degree to which a human user---AI expert or user---can come to conclusions about the performance of the model on specific inputs.\nSome types of models are inherently interpretable, meaning they require relatively little effort to understand. \nOther types of models require more effort to make sense of their performance on specific inputs. 
\nSome non-inherently interpretable models can be made interpretable in a post-hoc fashion through explanation or visualization.\nModel-agnostic post-hoc methods can help to make models intelligible without custom explanation or visualization technologies and without changing the underlying model to make them more interpretable~\\cite{ribeiro2016should,yosinski2015understanding}. \n\nExplanation generation can be described as a form of {\\em post-hoc interpretability}~\\cite{2016arXiv160603490L, miller2017explanation}; explanations are generated on-demand based on the current state of a model and---potentially---meta-knowledge about how the algorithm works.\nAn important distinction between interpretability and explanation is that explanation does not elucidate precisely how a model works but aims to give useful information for practitioners and end users.\nAbdul et al.~\\cite{abdul2018trends} conduct a comprehensive survey on trends in explainable and intelligible systems research.\n\nOur work on rationale generation is a model-agnostic explanation system that works by translating the internal state and action representations of an arbitrary reinforcement learning system into natural language.\nAndreas, Dragan, and Klein~\\cite{andreas2017translating} describe a technique that translates message-passing policies between two agents into natural language.\nAn alternative approach to translating internal system representations into natural language is to add explanations to a supervised training set such that a model learns to output a classification as well as an explanation~\\cite{codella2018teaching}.\nThis technique has been applied to generating explanations about procedurally generated game level designs~\\cite{guzdial2018explainable}.\n\nBeyond the technology, user perception and acceptance matter because they influence trust in the system, which is crucial to adoption of the technology. \nEstablished fields such as information systems enjoy a robust array of technology acceptance models such as the Technology Acceptance Model (TAM) \\cite{davis1989perceived} and Unified Theory of Acceptance and Use of Technology Model (UTAUT) \\cite{venkatesh2003user} whose main goal is to explain variables that influence user perceptions. \nUtilizing dimensions such as perceived usefulness and perceived ease of use, the TAM model aimed to explain prospective expectations about the technological artifacts. \nUTAUT uses constructs like performance expectancy, effort expectancy, etc. to understand technology acceptance. The constructs and measures in these models build on each other. \n\nIn contrast, due to a rapidly evolving domain, a robust and well-accepted user perception model of XAI agents is yet to be developed. \nUntil then, we can take inspiration from general acceptance models (such as TAM and UTAUT) and adapt their constructs to understand the perceptions of XAI agents. \nFor instance, the human-robot interaction community has used them as basis to understand users' perceptions towards robots~\\cite{ezer2009attitudinal, beer2011understanding}. \nWhile these acceptance models are informative, they often lack sociability factors such as \"humanlike-ness\".\nMoreover, TAM-like models does not account for autonomy in systems, let alone autonomous XAI systems. \nBuilding on some constructs from TAM-like models and original formative work, we attempt to address the gaps in understanding user perceptions of rationale-generating XAI agents. 
\n\nThe dearth of established methods combined with the variable conceptions of explanations make evaluation of XAI systems challenging. \nBinns et al.~\\cite{binns2018s} use scenario-based survey design~\\cite{carroll2000making} and presented different types of hypothetical explanations for the same decision to measure perceived levels of justice. \nOne non-neural based network evaluates the usefulness and naturalness of generated explanations~\\cite{broekens2010you}.\nRader et al.~\\cite{rader2018explanations} use explanations manually generated from content analysis of Facebook's News Feed to study perceptions of algorithmic transparency. \nOne key differentiating factor of our approach is that our evaluation is based rationales that are actual system outputs (compared to hypothetical ones). \nMoreover, user perceptions of our system's rationales directly influence the design of our rationale generation technique. \n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=1.0\\linewidth]{frogger-pipeline2.png}\n \\vspace{-1.5\\baselineskip}\n \\caption{End-to-end pipeline for training a system that can generate explanations.}\n\\label{fig:end-to-end}\n\\end{figure}\n\n\\section{Learning to Generate Rationales}\n\nWe define a \\textit{rationale} as an explanation that justifies an action based on how a human would think. \nThese rationales do not necessarily reveal the true decision making process of an agent, but still provide insights about why an agent made a decision in a form that is easy for non-experts to understand.\n\nRationale generation requires translating events in the game environment into natural language outputs. Our approach to rationale generation involves two steps: (1)~collect a corpus of think-aloud data from players who explained their actions in a game environment; and (2)~use this corpus to train an encoder-decoder network to generate plausible rationales for any action taken by an agent (see Figure~\\ref{fig:end-to-end}).\n\nWe experiment with rationale generation using autonomous agents that play the arcade game, {\\em Frogger}.\nFrogger is a good candidate for our experimental design of a rationale generation pipeline for general sequential decision making tasks because it is a simple Markovian environment, making it an ideal stepping stone towards a real world environment. \nOur rationale generation technique is agnostic to the type of agent or how it is trained, as long as the representations of states and actions used by the agent can be exposed to the rationale generator and serialized.\n\n\\subsection{Data Collection Interface}\n\nThere is no readily available dataset for the task of learning to generate explanations. Thus, we developed a methodology to collect live ``think-aloud'' data from players as they played through a game. This section covers the two objectives of our data collection endeavor:\n\\begin{enumerate}\n\\item Create a think-aloud protocol in which players provide natural rationales for their actions. \n\\item Design an intuitive player experience that facilitates accurate matching of the participants' utterances to the appropriate state in the environment. \n\\end{enumerate}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=0.75\\linewidth]{Play4.png}\n \\end{center}\n \\caption{Players take an action and verbalize their rationale for that action. (1)~After taking each action, the game pauses for 10 seconds. (2)~Speech-to-text transcribes the participant's rationale for the action. 
(3)~Participants can view their transcribed rationales near real-time and edit if needed.}\n \\label{fig:Play}\n\\end{figure}\n\nTo train a rationale-generating explainable agent, we need data linking game states and actions to their corresponding natural language explanations. To achieve this goal, we built a modified version of Frogger in which players simultaneously play the game and also explain each of their actions. \nThe entire process is divided into three phases: (1)~A guided tutorial, (2)~rationale collection, and (3)~transcribed explanation review.\n\nDuring the guided tutorial~(1), our interface provides instruction on how to play through the game, how to provide natural language explanations, and how to review\/modify any explanations they have given. \nThis helps ensure that users are familiar with the interface and its use before they begin providing explanations. \n\nFor rationale collection~(2), participants play through the game while explaining their actions out loud in a turn-taking mechanism.\nFigure~\\ref{fig:Play} shows the game embedded into the explanation collection interface. \nTo help couple explanations with actions (attach annotations to concrete game states), the game pauses for 10 seconds after each action is taken.\nDuring this time, the player's microphone automatically turns on and the player is asked to explain their most recent action while a speech-to-text library \\cite{github_2017} automatically transcribes the explanation real-time. \nThe automatic transcription substantially reduces participant burden as it is more efficient\nthan typing an explanation. \nPlayer can use more or less than the default 10-second pause to collect the explanation. \nOnce done explaining, they can view their transcribed text and edit it if necessary.\nDuring pretesting with 14 players, we observed that players often repeat a move for which the explanation is the same as before. \nTo reduce burden of repetition, we added a \"redo\" button that can be used to recycle rationales for consecutive repeated actions.\n\nWhen the game play is over, players move to transcribed explanation review portion ~(3). Here, they can can step through all the actions-explanation pairs. This stage allows reviewing in both a situated and global context. \n\n\n\n\n\nThe interface is designed so that no manual hand-authoring\/editing of our explanation data was required before\nusing it to train our machine learning model. Throughout the game, players have the opportunity to organically edit their own data without impeding their work-flow. This added layer of organic \nediting is crucial in ensuring that we can directly input the collected data into the network with zero manual cleaning. While we use Frogger as a test environment in our experiments, a similar user experience can be designed using other turn-based environments with minimal effort.\n\n\\begin{figure}[t]\n \\includegraphics[width=\\linewidth]{review_with_legend.png}\n\\caption{Players can step-through each of their action-rationale pairs and edit if necessary. (1)~Players can watch an action-replay while editing rationales. (2)~These buttons control the flow of the step-through process. (3)~The rationale for the current action gets highlighted for review.}\n \\label{fig:Review}\n\\end{figure}\n\n\n\\subsection{Neural Translation Model}\n\nWe use an encoder-decoder network~\\cite{bahdanau2014neural} to teach our network to generate relevant natural language explanations for any given action. 
\nThese kinds of networks are commonly used for machine translation tasks or dialogue generation, but their ability to understand sequential dependencies between the input and the output make it suitable for our task. \nOur encoder decoder architecture is similar to that used in \\cite{ehsan2017rationalization}. \nThe network learns how to translate the input game state representation\n$X = x_1, x_2, ..., x_n$, comprised of the representation of the game combined with other influencing factors,\ninto an output rationale as a sequence of words \n$Y = y_1, y_2, ..., y_m$\nwhere $y_i$ is a word.\nThus our network learns to translate game state and action information into natural language rationales.\n\nThe encoder and decoder are both recurrent neural networks (RNN) comprised of Gated Recurrent Unit (GRU) cells since our training process involved a small amount of data.\nThe decoder network uses an additional attention mechanism~\\cite{luong2015effective} to learn to weight the importance of different components of the input with regard to their effect on the output. \n\nTo simplify the learning process, the state of the game environment is serialized into a sequence of symbols where each symbol characterizes a sprite in the grid-based represntation of the world.\nTo this, we append information concerning Frogger's position, the most recent action taken, and the number of lives the player has left to create the input representation $X$. \nOn top of this network structure, we vary the input configurations with the intention of producing varying styles of rationales. \nEmpirically, we found that a reinforcement learning agent using tabular $Q$-learning \\cite{watkins92} learns to play the game effectively when given a limited window for observation.\nThus a natural configuration for the rationale generator is to give it the same observation window that the agent needs to learn to play.\nWe refer to this configuration of the rationale generator as {\\em focused-view} generator.\nThis view, however, potentially limits the types of rationales that can be learned since the agent will only be able to see a subset of the full state. Thus we formulated a second configuration that gives the rationale generator the ability to use all information on the board to produce rationales.\nWe refer to this as {\\em complete-view} generator.\nAn underlying question is thus whether rationale generation should use the same information that the underlying black box reasoner needs to solve a problem or if more information is advantageous at the expense of making rationale generation a harder problem.\nIn the studies described below, we seek to understand how these configurations affect human perceptions of the agent when presented with generated rationales.\n\n\\subsubsection{Focused-view Configuration}\nIn the \\textit{focused-view} configuration, we used a windowed representation of the grid, i.e. only a $7\\times7$ window around the Frog was used in the input.\nBoth playing an optimal game of Frogger and generating relevant explanations based on the current action taken typically only requires this much local context. \nTherefore providing the agent with only the window around Frogger helps the agent produce explanations grounded in it's neighborhood. \nIn this configuration, we designed the inputs such that the network is prone to prioritize short-term planning producing localized rationales instead of long-term planning. 
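To make the input construction concrete, the sketch below (ours, not the authors' code) shows one way a grid-based Frogger state could be serialized into the symbol sequence $X$ described above, with the frog's position, the most recent action, and the remaining lives appended. The sprite vocabulary, class names, and helpers are illustrative assumptions; the optional noise parameter anticipates the dummy-value replacement used by the \\textit{complete-view} configuration described in the next subsection.
\\begin{verbatim}
# Illustrative sketch only: the vocabulary, names, and encoding are assumptions.
import random
from dataclasses import dataclass
from typing import List, Optional

SPRITE_SYMBOLS = {0: "empty", 1: "car", 2: "truck", 3: "log",
                  4: "water", 5: "goal", 6: "frog"}
DUMMY_SYMBOL = "unk"   # dummy token used when injecting noise

@dataclass
class GameState:
    grid: List[List[int]]   # 2-D sprite grid
    frog_row: int
    frog_col: int
    last_action: str        # "up", "down", "left" or "right"
    lives: int

def serialize(state: GameState, window: Optional[int] = 7,
              noise_prob: float = 0.0) -> List[str]:
    """Flatten a game state into an input symbol sequence.

    window=7 mimics the focused-view configuration (a 7x7 window around the
    frog); window=None uses the whole board, and noise_prob=0.2 mimics the
    dummy-value noise of the complete-view configuration.
    """
    if window is None:
        rows = range(len(state.grid))
        cols = range(len(state.grid[0]))
    else:
        half = window // 2
        rows = range(state.frog_row - half, state.frog_row + half + 1)
        cols = range(state.frog_col - half, state.frog_col + half + 1)
    tokens = []
    for r in rows:
        for c in cols:
            if 0 <= r < len(state.grid) and 0 <= c < len(state.grid[0]):
                sym = SPRITE_SYMBOLS.get(state.grid[r][c], DUMMY_SYMBOL)
            else:
                sym = "wall"                    # off-board padding
            if random.random() < noise_prob:
                sym = DUMMY_SYMBOL              # noise injection
            tokens.append(sym)
    # Append the extra features named in the text: position, action, lives.
    tokens += [f"row_{state.frog_row}", f"col_{state.frog_col}",
               f"act_{state.last_action}", f"lives_{state.lives}"]
    return tokens
\\end{verbatim}
In either configuration, a token sequence of this kind is what the GRU encoder would consume, with the attention-equipped decoder emitting the rationale word by word.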
\n\n\n\\subsubsection{Complete-view Configuration}\nThe \\textit{complete-view} configuration is an alternate setup that provides the entire game board as context for the rationale generation.\nThere are two differences between this configuration and the focused-view configuration.\nFirst, we use the entire game screen as a part of the input. The agent now has the opportunity to learn which other long-term factors in the game may influence it's rationale.\nSecond, we added noise to each game state to force the network to generalize when learning, reduce the likelihood that spurious correlations are identified, and to give the model equal opportunity to consider factors from all sectors of the game screen.\nIn this case noise was introduced by replacing input grid values with dummy values. For each grid element, there was a $20\\%$ chance that it would get replaced with a dummy value.\nGiven the input structure and scope, this configuration should prioritize rationales that exhibit long-term planning and consider the broader context.\n\n\n \n\n\n\\begin{table}[h]\n \\caption{Examples of \\textit{focused-view} vs \\textit{complete-view} rationales generated by our system for the same set of actions.}\n \\label{tab:components}\n \\begin{tabular}{p{0.1\\columnwidth}p{0.4\\columnwidth}p{0.4\\columnwidth}}\n \\toprule\n {\\bf Action} & {\\bf Focused-view} & {\\bf Complete-view}\\\\\n \\midrule\n Right & I had cars to the left and in front of me so I needed to move to the right to avoid them. & I moved right to be more centered. This way I have more time to react if a car comes from either side. \\\\\n \\rowcolor{tablerowcolor} Up & The path in front of me was clear so it was safe for me to move forward. & I moved forward making sure that the truck won\\textquotesingle t hit me so I can move forward one spot. \\\\\n Left & I move to the left so I can jump onto the next log. & I moved to the left because it looks like the logs and top or not going to reach me in time, and I\\textquotesingle m going to jump off if the law goes to the right of the screen. \\\\\n \\rowcolor{tablerowcolor} Down & I had to move back so that I do not fall off. & I jumped off the log because the middle log was not going to come in time. So I need to make sure that the laws are aligned when I jump all three of them. \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\\section{Perception Study: Candidate vs. Baseline Rationales}\nIn this section, we\nassess whether the rationales generated using our technique are plausible and explore \nhow humans perceive them along various dimensions of human factors.\nFor our rationales to be plausible we would expect that human users indicate a strong preference for rationales generated by our system (either configuration) over those generated by a baseline rationale generator.\nWe also compare them to exemplary human-produced explanations to get a sense for how far from the upper bound we are.\n\nThis study aims to achieve two main objectives.\nFirst, it seeks to confirm the hypothesis that humans prefer rationales generated by each of the configurations over randomly selected rationales across all dimensions.\nWhile this baseline is low, it establishes that rationales generated by our technique are not nonsensical. \nWe can also measure the distance from the upper-bound (exemplary human rationales) for each rationale type. 
\nSecond, we attempt to understand the underlying components that influence \nthe perceptions of the generated rationales along four dimensions of human factors: {\\em confidence}, {\\em human-likeness}, {\\em adequate justification}, and {\\em understandability}.\n\n\n\n\n\n\n\n\\subsection{Method}\n\nTo gather the training set of game state annotations, we deployed our data collection pipeline on {\\em Turk Prime}~\\cite{litman2017turkprime}. \nFrom 60 participants\nwe collected over 2000 samples of human actions in Frogger coupled with natural language explanations. The average duration of this task was around 36 minutes.\nThe parallel corpus of the collected game state images and natural language explanations was used to train the encoder-decoder network.\nEach RNN in the encoder and the decoder was parameterized with GRU cells with a hidden vector size of 256. \nThe entire encoder-decoder network was trained for 100 epochs.\n\nFor the perception user study, we collected both within-subject and between-subject data.\nWe recruited 128 participants, split into two equal experimental groups through {\\em TurkPrime}: Group 1 (age range = 23 - 68 years, M = 37.4, SD = 9.92) and Group 2 (age range = 24 - 59 years, M = 35.8, SD= 7.67). \nOn average, the task duration was approximately 49 minutes.\n46\\% of our participants were women, and the 93\\% of participants were self-reported as from the United States while the remaining 7\\% of participants were self-reported as from India.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1.0\\linewidth]{Study_Screenshot.PNG}\n \\caption{Screenshot from user study (setup 2) depicting the action taken and the rationales: \\textit{P = Random, Q = Exemplary, R = Candidate}}\n \\label{fig:Study}\n\\end{figure}\n\nAll participants watched a counterbalanced series of five videos.\nEach video depicted an action taken by Frogger accompanied by three different types of rationales that justified the action (see Figure~\\ref{fig:Study}).\nParticipants rated each rationale on a labeled, 5-point, bipolar Likert-scale along 4 perception dimensions (described below). \nThus, each participant provided 12 ratings per action, leading to 60 perception ratings for five actions. \nActions collected from human players comprised the set of Frogger's actions. These actions were then fed into the system to generate rationales to be evaluated in the the user studies. \nIn order to get a balance between participant burden, fatigue, the number of actions, and regions of the game, we pretested with 12 participants. \nFive actions was the limit beyond which participants' fatigue and burden substantially increased. \nTherefore, we settled on five actions (up (twice), down, left, and right) in the major regions of the game-- amongst the cars, at a transition point, and amongst the logs. \nThis allowed us to test our rationale generation configurations in all possible action-directions in all the major sections of the game. \n\nThe study had two identical experimental conditions, differing only by type of \\textit{candidate rationale}. \nGroup 1 evaluated the \\textit{focused-view} rationale while Group 2 evaluated the \\textit{complete-view} rationales.\nIn each video, the action was accompanied by three rationales generated by three different techniques (see Figure~\\ref{fig:Results_1}): \n\\begin{itemize}\n\\item The \\textit{exemplary rationale} is the rationale from our corpus that 3 researchers unanimously agreed on as the best one for a particular action. 
Researchers independently selected rationales they deemed best and iterated until consensus was reached.\nThis is provided as an upper-bound for contrast with the next two techniques.\n\\item The \\textit{candidate rationale} is the rationale produced by our network, either the focused-view or complete-view configuration.\n\n\\item The \\textit{random rationale} is a randomly chosen rationale from our corpus.\n\\end{itemize}\n\n\n\\noindent\nFor each rationale, participants used a 5-point Likert scale to rate their endorsement of each of following four statements, which correspond to four dimensions of interest. \n\n\\begin{enumerate}\n \\item \\textit{Confidence:} This rationale makes me confident in the character's ability to perform it's task.\n \\item \\textit{Human-likeness: } This rationale looks like it was made by a human.\n \\item \\textit{Adequate justification:} This rationale adequately justifies the action taken.\n \\item \\textit{Understandability:} This rationale helped me understand why the agent behaved as it did.\n\\end{enumerate}\n\nResponse options on a clearly labeled bipolar Likert scale ranged from \"strongly disagree\" to \"strongly agree\". In a mandatory free-text field, they explained their reasoning behind the ratings for a particular set of three rationales. After answering these questions, they provided demographic information.\n\nThese four dimensions emerged from an iterative filtering process that included preliminary testing of the study, informal interviews with experts and participants, and a literature review on robot and technology acceptance models. Inspired by the acceptance models, we created a set of dimensions that were contextually appropriate for our purposes. \n\nDirect one-to-one mapping from existing models was not feasible, given the novelty and context of the Explainable AI technology.\nWe adapted \\textit{confidence}, a dimension that impacts trust in the system \\cite{kaniarasu2013robot}, from constructs like performance expectancy \\cite{venkatesh2003user} (from UTAUT) and robot performance~\\cite{beer2011understanding, chernova2009confidence}.\n\\textit{Human-likeness}, central to generating human-centered rationales, was inspired from sociability and anthropomorphization factors from HRI work on robot acceptance [\\cite{nass1994machines,nass1996can,nass2000machines}. \nSince our rationales are justificatory in nature, \\textit{adequate justification} is a reasonable measure of output quality (transformed from TAM).\nOur rationales also need to be \\textit{understandable}, which can signal perceived ease of use (from TAM). \n\n\n\n\n\n\n\\subsection{Quantitative Analysis}\n\n\nWe used a multi-level model to analyze our data.\nAll variables were within-subjects except for one: whether the candidate style was focused-view (Group 1) or complete-view (Group 2). \nThis was a between-subject variable.\n\nThere were significant main effects of rationale style ($\\chi^2\\left(2\\right) = 594.80, p<.001$) and dimension ($\\chi^2\\left(2\\right) = 66.86, p<.001$) on the ratings. 
\nThe main effect of experimental group was not significant ($\\chi^2\\left(1\\right) = 0.070, p=0.79$).\nFigure~\\ref{fig:Results_1} shows the average responses to each question for the two different experimental groups.\nOur results support our hypothesis that rationales generated with the \\textit{focused-view} generator and the \\textit{complete-view} generator were judged significantly better across all dimensions than the random baseline\n($b=1.90, t\\left(252\\right)=8.09,p<.001$). \nIn addition, exemplary rationales were judged significantly higher than candidate rationales.\n\n\n\n\nThough there were significant differences between each kind of candidate rationale and the exemplary rationales, those differences were not the same.\nThe difference between the \\textit{focused-view} candidate rationales and exemplary rationales were significantly \\textit{greater} than the difference between \\textit{complete-view} candidate rationales and exemplary rationales ($p=.005$). \nSurprisingly, this was because the exemplary rationales were rated {\\rm lower} in the presence of complete-view candidate rationales ($t\\left(1530\\right)=-32.12,p<.001$).\nSince three rationales were presented simultaneously in each video, it is likely that participants were rating the rationales relative to each other. \nWe also observe that the \\textit{complete-view} candidate rationales received higher ratings in general than did the \\textit{focused-view} candidate rationales ($t\\left(1530\\right)=8.33,p<.001$).\n\nIn summary, we have confirmed our hypothesis that both configurations produce rationales that perform significantly better than the \\textit{random} baseline across all dimensions. \n\n\\begin{figure}[t]\n\\subcaptionbox{Focus-View condition.\\label{fig:Results_1a}}{\\includegraphics[width=1.0\\linewidth]{Bar_a_exactDimension.png}}\\hfill\n\\subcaptionbox{Complete-View condition.\\label{fig:Results_1b}}{\\includegraphics[width=1.0\\linewidth]{Bar_b_exactDimension.png}}\\hfill\n \\caption{Human judgment results. }\n \\label{fig:Results_1}\n\\end{figure}\n\n\\subsection{Qualitative Findings and Discussion}\n\n\nIn this section, we look at the open-ended responses provided by our participants to better understand the criteria that participants used when making judgments about the \\textit{confidence, human-likeness, adequate justification,} and \\textit{understandability} of generated rationales. \nThese situated insights augment our understanding of rationale generating systems, enabling us to design better ones in the future.\n\n\nWe analyzed the open-ended justifications participants provided using a combination of thematic analysis \\cite{aronson1994pragmatic} and grounded theory \\cite{strauss1994grounded}. \nWe developed codes that addressed different types of reasonings behind the ratings of the four dimensions under investigation. \nNext, the research team clustered the codes under emergent themes, which form the underlying \\textit{components} of the dimensions. Iterating until consensus was reached, researchers settled on the five most relevant components: (1)~\\textit{Contextual Accuracy}, (2)~\\textit{Intelligibility}, (3)~\\textit{Awareness},\n(4)~\\textit{Relatability}, and (5)~\\textit{Strategic Detail} (see Table \\ref{tab:components}).\nAt varying degrees, multiple components influence more than one dimension; that is, there isn't a mutually exclusive one-to-one relationship between components and dimensions. 
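As a brief aside before examining the individual dimensions, the multi-level analysis reported in the quantitative section above could be reproduced along the following lines. This is a minimal sketch assuming a long-format table with hypothetical column names (participant, group, style, dimension, rating); the $\\chi^2$ statistics quoted above would correspond to likelihood-ratio comparisons between nested models of this kind.
\\begin{verbatim}
# Illustrative sketch of a multi-level (mixed-effects) model of the Likert
# ratings; the data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("ratings_long.csv")  # one row per rating

# Random intercept per participant accounts for repeated measures;
# style and dimension are within-subject, group is between-subject.
model = smf.mixedlm("rating ~ C(style) * C(dimension) + C(group)",
                    data=ratings, groups=ratings["participant"])
result = model.fit()
print(result.summary())
\\end{verbatim}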
\n\nWe will now share how these components influence the dimensions of the human factors under investigation. \nWhen providing examples of our participants' responses, we will use P$1$ to refer to participant 1, P$2$ for participant 2, etc. \nTo avoid priming during evaluation, we used letters (e.g., A, B, C, etc.) to refer to the different types of rationales. \nFor better comprehension, we have substituted the letters with the appropriate rationale type--focused-view, complete-view, or random--while presenting quotes from participants below. \n\n\\subsubsection{Confidence (1)} This dimension gauges the participant's faith in the agent's ability to successfully complete its task and has \\textit{contextual accuracy}, \\textit{awareness}, \\textit{strategic detail}, and \\textit{intelligibility} as relevant components. \nWith respect to \\textit{contextual accuracy}, rationales that displayed ``\\ldots recognition of the environmental conditions and [adaptation] to the conditions'' (P22) were a positive influence, while redundant information such as ``just stating the obvious'' (P42) hindered confidence ratings. \n\n\n\n\n\\begin{table}[h]\n \\caption{Descriptions for the emergent \\textit{components} underlying the human-factor \\textit{dimensions} of the generated rationales.}\n \\label{tab:components}\n \\begin{tabular}{p{0.32\\columnwidth}p{0.58\\columnwidth}}\n \\toprule\n {\\bf Component} & {\\bf Description}\\\\\n \\midrule\n Contextual Accuracy & Accurately describes pertinent events in the context of the environment.\\\\\n \\rowcolor{tablerowcolor} Intelligibility & Typically error-free and coherent in terms of both grammar and sentence structure.\\\\\n Awareness & Depicts an adequate understanding of the rules of the environment.\\\\\n \\rowcolor{tablerowcolor} Relatability & Expresses the justification of the action in a relatable manner and style.\\\\\n Strategic Detail & Exhibits strategic thinking, foresight, and planning.\\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\nRationales that showed \\textit{awareness} of ``upcoming dangers and what the best moves to make \\ldots [and] a good way to plan'' (P17) inspired confidence from the participants. \nIn terms of \\textit{strategic detail}, rationales that showed ``\\ldots long-term planning and ability to analyze information'' (P28) yielded higher confidence ratings, whereas those that were ``short-sighted and unable to think ahead'' (P14) led to lower perceptions of confidence. \n\n\\textit{Intelligibility} alone, without \\textit{awareness} or \\textit{strategic detail}, was not enough to yield high confidence in rationales. However, rationales that were unintelligible or incoherent had a negative impact on participants' confidence:\n\\begin{displayquote}\nThe [random and focused-view rationales] include major mischaracterizations of the environment because they refer to an object not present or wrong time sequence, so I had very low confidence. (P66)\n\\end{displayquote}\n\n\n\n\n\n\\subsubsection{Human-likeness (2)} \\textit{Intelligibility, relatability,} and \\textit{strategic detail} are components that influenced participants' perception of the extent to which the rationales were made by a human.\nNotably, \\textit{intelligibility} had mixed influences on the human-likeness of the rationales as it depended on what participants thought ``being human'' entailed.
\nSome conceptualized humans as fallible beings and rated rationales with errors more \\textit{humanlike} because rationales ``with typos or spelling errors \\ldots seem even more likely to have been generated by a human\" (P19). \nConversely, some thought error-free rationales must come from a human, citing that a ``computer just does not have the knowledge to understand what is going on'' (P24).\n\nWith respect to \\textit{relatability}, rationales were often perceived as more human-like when participants felt that ``it mirrored [their] thoughts'' (P49), and ``\\ldots [laid] things out in a way that [they] would have'' (P58). Affective rationales had high \\textit{relatability} because they ``express human emotions including hope and doubt'' (P11). \n\n\\textit{Strategic detail} had a mixed impact on human-likeness just like \\textit{intelligibility} as it also depended on participants' perception of critical thinking and logical planning. Some participants associated ``\\ldots critical thinking [and ability to] predict future situations\" (P6) with human-likeness whereas others associated logical planning with non-human-like, but computer-like rigid and algorithmic thinking process flow.\n\n\n\n\n\n\\subsubsection{Adequate Justification (3)} This dimension unpacks the extent to which participants think the rationale adequately justifies the action taken and is influenced by \\textit{contextual accuracy}, and \\textit{awareness}. \nParticipants downgraded rationales containing low levels of \\textit{contextual accuracy} in the form of irrelevant details. As P11 puts it: \n\\begin{displayquote}\nThe [random rationale] doesn't pertain to this situation. [The complete-view] does, and is clearly the best justification for the action that Frogger took because it moves him towards his end goal. \n\\end{displayquote}\n\nBeyond \\textit{contextual accuracy}, rationales that showcase \\textit{awareness} of surroundings score high on the \\textit{adequate justification} dimension. For instance, P11 rated the \\textit{random} rationale low because it showed ``no awareness of the surroundings''. For the same action, P11 rated \\textit{exemplary} and \\textit{focused-view} rationales high because each made the participant ``believe in the character's ability to judge their surroundings.''\n\n\\subsubsection{Understandability (4)} \nFor this dimension, components such as \\textit{contextual accuracy} and \\textit{relatability} influence participants' perceptions of how much the rationales helped them understand the motivation behind the agent's actions. \nIn terms of \\textit{contextual accuracy}, \nmany expressed how the contextual accuracy, not the length of the rationale, mattered when it came to understandability. \nWhile comparing understandability of the \\textit{exemplary} and \\textit{focused-view} rationales, P41 made a notable observation:\n\\begin{displayquote}\nThe [exemplary and focused-view rationale] both described the activities\/objects in the immediate vicinity of the frog. However, [exemplary rationale (typically lengthier than focused)] was not as applicable because the [focused-view] rationale does a better job of providing contextual understanding of the action.\n\\end{displayquote}\n\nParticipants put themselves in the agent's shoes and evaluated the understandability of the rationales based on how \\textit{relatable} they were. \nIn essence, some asked ``Are these the same reasons I would [give] for this action?'' (P43). 
\nThe more relatable the rationale was, the higher it scored for understandability. \n\nIn summary, the first study establishes the plausibility of the generated rationales (compared to baselines) and their user perceptions. \nHowever, this study does not provide a direct comparison between the two configurations. \n\n\\section{Preference Study: Focused-- vs. Complete--View Rationales}\nThe preference study puts the rationales in direct comparison with each other. \nIt achieves two main purposes. First, it aims to validate the alignment between the intended design of rationale types and the actual perceived differences between them. \nWe collect qualitative data on how participants perceived rationales produced by our \\textit{focused-view} and \\textit{complete-view} rationale generators.\nOur expert observation is that the \\textit{focused-view} configuration results in concise and localized rationales whereas the \\textit{complete-view} configuration results in detailed, holistic rationales. \nWe seek to determine whether na\\\"ive users who are unaware of which configuration produced a rationale also describe the rationales in this way. \nSecond, we seek to understand how and why the preferences between the two styles differed along three dimensions: {\\em confidence}, {\\em failure}, and {\\em unexpected behavior}.\n\n\n\\subsection{Method}\nUsing methods similar to those in the first study, we recruited and analyzed data from 65 people (age range = 23 - 59 years, M = 38.48, SD = 10.16). 57\\% of the participants were women, with 96\\% of the participants self-reporting the United States and 4\\% self-reporting India as the country they were from. Participants from our first study could not partake in the second one. The average task duration was approximately 46 minutes.\n\n\n\n\nThe only difference in the experimental setup between the perception and the preference study is the comparison groups of the rationales. \nIn this study, participants judged the same set of \\textit{focused-} and \\textit{complete-view} rationales; however, instead of judging each style against two baselines, participants evaluated the \\textit{focused-} and \\textit{complete-view} rationales in direct comparison with each other.\n\n\nHaving watched the videos and accompanying rationales, participants responded to the following questions comparing both configurations: \n\\begin{enumerate}\n \\item \\textbf{Most important difference}: What do you see as the most important difference? Why is this difference important to you?\n \\item \\textbf{Confidence}: Which style of rationale makes you more confident in the agent's ability to do its task? Was it system A or system B? Why?\n \\item \\textbf{Failure}: If you had a companion robot that had just made a mistake, would you prefer that it provide rationales like System A or System B? Why? \n \\item \\textbf{Unexpected Behaviour}: If you had a companion robot that took an action that was not wrong, but unexpected from your perspective, would you prefer that it provide rationales like System A or System B? Why? \n\\end{enumerate}\n\n\nWe selected the dimensions for this study using a process similar to the one used in the first study. \n\\textit{Confidence} is crucial to trust, especially when failure and unexpected behavior occur \\cite{chernova2009confidence, kaniarasu2013robot}.
\nCollaboration, tolerance, and perceived intelligence are affected by the way autonomous agents and robots communicate \\textit{failure} and \\textit{unexpected behavior} \\cite{desai2013impact,kwon2018expressing,lee2010gracefully,mirnig2017err}.\n\n\n\\begin{table}[h]\n \\caption{Tally of how many participants preferred the \\textit{focused-view} vs. the \\textit{complete-view} for the three dimensions.}\n \\label{tab:preference-tally}\n \\begin{tabular}{ccc}\n \\toprule\n {\\bf Question} & {\\bf Focused-view} & {\\bf Complete-view}\\\\\n \\midrule\n Confidence & 15 & 48\\\\\n \\rowcolor{tablerowcolor} Failure & 17 & 46\\\\\n Unexpected Behaviour & 18 & 45\\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\\subsection{Quantitative Analysis}\nIn order to determine whether the preferences significantly favored one style or the other, we conducted the Wilcoxon signed-rank test. It showed that preference for the \\textit{complete-view} rationale was significant in all three dimensions.\nConfidence in the \\textit{complete-view} rationale was significantly greater than in the \\textit{focused-view} ($p<.001$). \nSimilarly, preference for \\textit{complete-view} rationales from an agent that made a mistake was significantly greater than for \\textit{focused-view} rationales ($p<.001$).\nPreference for \\textit{complete-view} rationales from an agent that behaved unexpectedly was also significantly greater than for \\textit{focused-view} rationales ($p<.001$).\n\n\\subsection{Qualitative Findings and Discussion}\nIn this section, similar to the first study, we share insights gained from the open-ended responses to reveal the underlying reasons behind perceptions of the \\textit{most important difference} between the two styles. We also unpack the reasoning behind the quantitative ranking preferences for \\textit{confidence} in the agent's ability to do its task and communication preferences for \\textit{failure} and \\textit{unexpected behavior}. In this analysis, the interacting \\textit{components} that influenced the dimensions of human factors in the first study reappear (see Table \\ref{tab:components}). In particular, we use them as analytic lenses to highlight the trade-offs people make when expressing their preferences and the reasons for the perceived differences between the styles. \n\nThese insights bolster our situated understanding of the differences between the two rationale generation techniques and help verify whether the intended design of the two configurations aligns with the perceptions of them. In essence, did the design succeed in doing what we set out to do? We analyzed the open-ended responses in the same manner as the first study. We use the same nomenclature to refer to participants. \n\n\\subsubsection{Most Important Difference (1)}\nEvery participant indicated that the \\textit{level of detail and clarity} (P55) differentiated the rationales. \nConnected to the level of detail and clarity is the perceived \\textit{long-} vs. \\textit{short-term} planning exhibited by each rationale. \nOverall, participants felt that the \\textit{complete-view} rationale showed better levels of \\textit{strategic detail}, \\textit{awareness}, and \\textit{relatability} with human-like justifications, whereas the \\textit{focused-view} exhibited better \\textit{intelligibility} with easy-to-understand rationales.
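For completeness, the paired comparison reported in the quantitative analysis above can be sketched as follows, assuming each participant's preferences are coded per question for both systems; the file name, column names, and coding are hypothetical, not the study materials.
\\begin{verbatim}
# Illustrative sketch of the Wilcoxon signed-rank comparison; the coding
# (1 = system preferred, 0 = not preferred) and all names are assumptions.
import pandas as pd
from scipy.stats import wilcoxon

prefs = pd.read_csv("preferences.csv")  # participant, question, focused, complete

for question in ["confidence", "failure", "unexpected"]:
    sub = prefs[prefs["question"] == question]
    stat, p = wilcoxon(sub["complete"], sub["focused"])
    print(question, stat, p)
\\end{verbatim}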
\nThe following quote \nillustrates\nthe trade-off between succinctness,\nwhich hampers comprehension of higher-order goals, and broadness, which can be perceived as less\nfocused:\n\n\\begin{displayquote}\nThe [focused-view rationale] is extraordinarily vague and focused on the raw mechanics of the very next move \\ldots [The complete-view] is more broad and less focused, but takes into account \\textit{the entire picture}. So I would say the most important difference is the \\textit{scope of events} that they take into account while making justifications [emphasis added] (P24)\n\\end{displayquote}\n\nBeyond trade-offs, this quote highlights a powerful validating point: without any knowledge beyond what is shown on the video, the participant pointed out how the \\textit{complete-view} rationale appeared to consider the \"entire picture\" and how the \"scope of events\" taken into account was the main difference. \nThe participant's intuition precisely aligns with the underlying network configuration design and our \nresearch \nintuitions.\nRecall that the \\textit{complete-view} rationale\nwas generated using the entire environment or \"picture\" whereas the \\textit{focused-view} was generated using a windowed input. \n\nIn prior sections, we \nspeculated on the effects of the network configurations. We expected the \\textit{focused-view} version \nto produce \nsuccinct, localized rationales that concentrated on the short-term. We expected the \\textit{complete-view} version \nto produce detailed, broader rationales that focused on the larger picture and long-term planning. \nThe findings of this experiment are the first validation that the outputs reflect the intended designs. \nThe strength of this validation was enhanced by the many descriptions of our intended attributes, given in free-form by participants who were naive to our network designs.\n\nConnected to the level of detail and clarity is the perception of \\textit{short-} vs \\textit{long-term} thinking from the respective rationales. \nIn general, participants regarded the \\textit{focused-view} rationale having low levels of \\textit{awareness} and \\textit{strategic detail}. \nThey felt\nthat this agent \"\\ldots focus[ed] only on the current step\" (P44), which was perceived depicting as thinking \"\\ldots in the spur of the moment\" (P27), giving the perception of short-term and simplistic thinking. \nOn the other hand, the \\textit{complete-view} rationale appeared to \"\\ldots try to think it through\" (P27), exhibiting long-term thinking as it appears to \"\\ldots think forward to broader strategic concerns.\"(P65) One participant sums it up nicely: \n\\begin{displayquote}\nThe [focused-view rationale] focused on the immediate action required. [The complete-view rationale] took into account the current situation, [but] also factored in what the next move will be and what dangers that move poses. The [focused-view] was more of a short term decision and [complete-view] focused on both short term and long term goals and objectives. (P47)\n\\end{displayquote}\n\nWe will notice how these differences in perception impact other dimensions such as confidence and communication preferences for failure and unexpected behavior. 
\n\\subsubsection{Confidence (2)}\nParticipants had more confidence in the agent's ability to do its task if the rationales exhibited high levels of \\textit{strategic detail} in the form of long-term planning, \\textit{awareness} via expressing knowledge of the environment, and \\textit{relatability} through humanlike expressions. They associated \\textit{conciseness} with confidence when the rationales did not need to be detailed given the context of the (trivial) action. \n\nThe \\textit{complete-view} rationale inspired more confidence because participants perceived agents with long-term planning and high\n\\textit{strategic detail} as being \"more predictive\" and intelligent than their counterparts. Participants felt more at ease because \"\\ldots knowing what [the agent] was planning to do ahead of time would allow me to catch mistakes earlier before it makes them.\" (P31) As one participant put it:\n\n\\begin{displayquote}\nThe [complete-view rationale] gives me more confidence \\ldots because it thinks about future steps and not just the steps you need to take in the moment. [The agent with focused-view] thinks more simply and is prone to mistakes. (P13)\n\\end{displayquote}\n\nParticipants felt that rationales that exhibited a better understanding of the environment, and thereby better \\textit{awareness}, resulted in higher confidence scores. Unlike the \\textit{focused-view} rationale that came across as \"a simple reactionary move \\ldots [the \\textit{complete-view}] version demonstrated a more thorough understanding of the entire field of play.\" (P51) In addition, the \\textit{complete-view} was more \\textit{relatable} and confidence-inspiring \"because it more closely resemble[d] human judgment\" (P29). \n\n\\subsubsection{Failure (3)}\nWhen an agent or a robot fails, the information from the failure report is mainly used to fix the issue. To build a mental model of the agent, participants preferred \\textit{detailed} rationales with solid \\textit{explanatory power} stemming from \\textit{awareness} and \\textit{relatability}. The mental model could facilitate proactive and preventative care. \n\nThe \\textit{complete-view} rationale, due to relatively high \\textit{strategic detail}, was preferable in communicating failure because participants could \"\\ldots understand the full reasoning behind the movements.\"(P16) Interestingly, \\textit{detail} trumped \\textit{intelligibility} in most circumstances. Even if the rationales had some grammatical errors or were a \"\\ldots little less easy to read, the details made up for it.\" (P62) \n\nHowever, detailed rationales are not always a virtue. Simple rationales have the benefit of being easily understandable to humans, even if they cause humans to view the agent as having limited understanding capabilities. Some participants appreciated \\textit{focused-view} rationales because they felt \"it would be easier to figure out what went wrong by focusing on one step at a time.\" \n\n\nExplanatory power, specifically how events are communicated, is related to \\textit{awareness} and \\textit{relatability}. Participants preferred relatable agents that \"\\ldots would talk to [them] like a person would.\"(P11) They expressed the need to develop a mental model, especially to \"\\ldots see how [a robot's] mind might be working\"(P1), to effectively fix the issue. 
The following participant neatly summarizes the dynamics:\n\\begin{displayquote}\nI'd want [the robot with complete-view] because I'd have a better sense of the steps taken that lead to the mistake. I could then fix a problem within that reasoning to hopefully avoid future mistakes. The [focused-view rationale] was just too basic and didn't give enough detail. (P8)\n\\end{displayquote}\n\\subsubsection{Unexpected Behavior (4)}\nUnexpected behavior that is not failure makes people want to know the ``why?'' behind the action, especially to understand the expectancy violation. \nAs a result, participants preferred rationales with transparency so that they can understand and trust the robot in a situation where expectations are violated. \nIn general, preference was for adequate levels of \\textit{detail} and \\textit{explanatory power} that could provide ``\\ldots more diagnostic information and insight into the robot's thinking processes'' (P19). \nParticipants wanted to develop mental models of the robots so they could understand the world from the robot's perspective. \nThis diagnostic motivation for a mental model is different from the re-programming or fixing needs in cases of failure. \n\nThe \\textit{complete-view} rationale, due to adequate levels of \\textit{strategic detail}, made participants more confident in their ability to follow the thought process and get a better understanding of the expectancy violation. One participant shared:\n\\begin{displayquote}\nThe greater clarity of thought in the [complete-view] rationale provides a more thorough picture \\ldots, so that the cause of the unexpected action could be identified and explained more easily. (P51)\n\\end{displayquote}\nWith this said, where possible without sacrificing transparency, participants welcomed simple rationales that ``anyone could understand, no matter what their level of education was'' (P2).\nThis is noteworthy because the expertise level of the audience is a key concern when making AI-powered technology accessible, and designers need to strike a balance between detail and succinctness.\n\nRationales exhibiting strong explanatory power, through \\textit{awareness} and \\textit{relatability}, help to situate the unexpected behavior in an understandable manner. \nParticipants preferred the \\textit{complete-view} rationale's style of communication because of increased transparency:\n\\begin{displayquote}\nI prefer [the complete-view rationale style] because \\ldots I am able to get a much better picture of why it is making those decisions. (P24)\n\\end{displayquote}\n\nDespite similarities in the communication preferences for failure and unexpected behavior, there are differences in the underlying reasons. As our analysis suggests, mental models are desired in both cases, but for different reasons. \n\\section{Design Lessons and Implications} \nThe situated understanding of the \\textit{components} and \\textit{dimensions} gives us a powerful set of actionable insights that can help us design better human-centered, rationale-generating, autonomous agents.\nAs our analysis reveals, context is king. \nDepending on the context, we can tweak the input type to generate \\textit{rationale styles} that meet the needs of the task or agent persona; for instance, a companion agent that requires high \\textit{relatability} for user engagement. \nWe should be mindful when optimizing for a certain dimension as each component comes with costs.
\nFor instance, conciseness can improve \\textit{intelligibility} and overall \\textit{understandability} but comes at the cost of \\textit{strategic detail}, which can hurt \\textit{confidence} in the agent.\nWe can also engineer systems such that multiple network configurations act as modules. \nFor instance, if we design a companion agent or robot that interacts with a person longitudinally, the \\textit{focused-view} configuration can take over\nwhen short and simple rationales are required. \nThe \\textit{complete-view} configuration or a hybrid one can be activated when communicating failure or unexpected behavior.\n\nAs our preference study shows, we should not only be cognizant of the level of detail, but also of why the detail is necessary, especially while communicating failure and unexpected behavior.\nFor instance, failure reporting in a mission-critical task (such as search and rescue) would have different requirements for \\textit{strategic detail} and \\textit{awareness} compared to \"failure\" reporting in a less-defined, more creative task like making music. \nWhile the focus of this paper is on textual rationale generation, rationales can be complementary to other types of explanations; for instance, a multi-modal system can combine visual cues with textual rationales to provide better contextual explanations for an agent's actions. \n\n\n\\section{Limitations and Future Work}\nWhile these results are promising, there are several limitations in our approach that need to be addressed in future work. \nFirst, our current system, by intention and design, lacks interactivity; users cannot contest a rationale or ask the agent to explain in a different way. \nTo get a formative understanding, we kept the design as straightforward as possible.\nNow that we have a baseline understanding, we can vary along the dimension of interactivity for the next iteration. \nFor instance, contestability, the ability to either reject a reasoning or ask for another one, which has been shown to improve user satisfaction \\cite{hirsch2017designing,dietvorst2016overcoming}, can be incorporated in the future.\nSecond, our data collection pipeline is currently designed to work with discrete-action games that have natural break points where the player can be asked for explanations. \nIn continuous-time and -action environments, we must determine how to collect the necessary data without being too intrusive to participants.\nThird, all conclusions about our approach were formed based on one-time interactions with the system. \nTo better control for potential novelty effects that rationales could have, we need to deploy our system in a longitudinal task setting. \nFourth, to understand the feasibility of our system in larger state-action spaces, we would need to study the scalability by addressing the question of how much data is needed based on the size of the environment. \nFifth, not all mistakes are created equal. Currently, the perception ratings are averaged with everything weighted equally. For instance, a mistake during a mission-critical step can lead to a greater fall in confidence than the same mistake during a non-critical step. To understand the relative costs of mistakes, we need to further investigate the relationship between the context of the task and the cost of the mistake. 
\n\n\\section{Conclusions}\nWhile explainability has been successfully introduced for classification and captioning tasks, sequential environments offer a unique challenge for generating human-understandable explanations.\nThe challenge stems from multiple complex factors, such as temporally connected decision-making, that contribute to decision-making in these environments.\nIn this paper, we introduce \\textit{automated rationale generation} as a concept and explore how justificatory explanations from humans can be used to train systems to produce human-like explanations in sequential environments. \nTo facilitate this work, we also introduce a pipeline for automatically gathering a parallel corpus of states annotated with human explanations. \nThis tool enables us to systematically gather high-quality data for training purposes.\nWe then use this data to train a model that uses machine translation technology to generate human-like rationales in the arcade game, {\\em Frogger}. \n\nThrough a mixed-methods approach in evaluation, we establish the plausibility of the generated rationales and describe how the intended design of rationale types lines up with the actual user perceptions of them. \nWe also gain a contextual understanding of the underlying dimensions and components that influence human perception and preferences of the generated rationales. \nBy enabling autonomous agents to communicate about the motivations for their actions, we envision a future where explainability not only improves human-AI collaboration, but does so in a human-centered and understandable manner.\n\n\n\\section{Acknowledgements}\nThis work was partially funded under ONR grant number N00014141000. We would like to thank Chenghann Gan and Jiahong Sun for their valuable contributions to the development of the data collection pipeline. We are also grateful for the feedback from the anonymous reviewers that helped us improve the quality of the work.\n\\balance{}\n\n\\bibliographystyle{SIGCHI-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe pulsar J1811$-$1736, discovered during observations of the Parkes\nMultibeam Pulsar Survey \\citep{mlc+01}, has a spin period of 104 ms\nand is a member of an 18.8-d, highly eccentric binary system\n\\citep{lcm+00} with an as yet undetected companion \\citep{mig00}. The\ncharacteristic age and the estimated surface magnetic field strength\nindicate that the pulsar is mildly recycled. In such a system, it\nis expected that the observed pulsar was born first in a supernova\n(SN) explosion before undergoing mass accretion from a high-mass\nnon-degenerate binary companion. Parameters which have been\nmeasured and derived from the best-fit timing solution indicate that\nthe companion is quite massive. All these elements suggest that the\ncompanion is also a neutron star (e.g. \\citealt{bv91}).\n\nThe conclusion that PSR\\,J1811$-$1736 is a member of the small sample\nof known double neutron star (DNS) systems was already reached by\n\\citet{lcm+00}. Their conclusion was supported by a constraint on the\ntotal system mass, assuming that the observed advance of periastron\nwas totally due to relativistic effects. 
However, the rather long\nperiod of the pulsar, combined with the effects of interstellar\nscattering, which significantly broaden the pulse profile at\n1.4~GHz, as well as the short data span available to \\citet{lcm+00},\nlimited the timing precision and hence the accuracy of the total mass\nmeasurement.\n\nAmong all DNS systems, PSR\\,J1811-1736 has by far the longest spin\nperiod, the longest orbital period and the highest eccentricity. This\nmay suggest that the evolution of this DNS has been different, at\nleast in part, from all other DNS systems. On the other hand,\nPSR\\,J1811-1736 fits the spin period versus eccentricity relation\nfor DNS systems well (\\citealt{mlc+05}, \\citealt{fkl+05}). This relation\ncan be simply explained in terms of the different lengths of time the\npulsar underwent accretion, which in turn is related to the mass of\nthe companion star before the SN explosion. Moreover, numerical\nsimulations \\citep{dpp05} show that the spin period versus\neccentricity relation is recovered assuming that the second-born\nneutron star received a low velocity kick at its birth.\n\nIn this paper we report on new timing observations which significantly\nimprove on the previously published results. We present a study\nof the observed interstellar scattering and consider its consequences\nfor the detectability of radio pulses from the companion. Finally, we\ninvestigate the likely kick velocity imparted to the second-born\nneutron star during its birth in the system's second SN.\nObservations were carried out as part of a coordinated effort using\nthree of the largest steerable radio telescopes in the world for\npulsar timing observations, i.e. the 100-m radio telescope at\nEffelsberg, the 94-m equivalent Westerbork Synthesis Radio Telescope\n(WSRT) and the 76-m Lovell telescope at Jodrell Bank. This\npaper is the first in a series detailing results of these efforts in\nestablishing a {\\em European Pulsar Timing Array} (EPTA).\n\n\\section{Observations}\n\nPSR\\,J1811$-$1736 is one of the binary pulsars regularly\nobserved by the EPTA. The aims and objectives of this European\ncollaboration include the detection of a cosmological gravitational\nwave background; the project will be described in detail in a\nforthcoming publication. Here we summarize the observing systems used,\nwhile further details can be found in the references below.\n\n\\subsection{Effelsberg timing}\n\nWe have made regular timing observations of PSR\\,J1811$-$1736 since October\n1999 using the 100-m radio telescope of the Max Planck Institut f\\\"ur\nRadioastronomie in Effelsberg near Bonn. The typical observing rate\nwas one observation every two months. An overall root-mean-square\n(RMS) residual of 538 $\\mu s$ was achieved after applying the final timing\nmodel. The data were obtained with a 1.3$-$1.7\\,GHz tunable HEMT\nreceiver installed in the primary focus of the telescope. The noise\ntemperature of this system is 25~K, resulting in a system temperature\nfrom 30~to 50~K on cold sky depending on elevation. The antenna gain\nat these frequencies is 1.5~K~Jy$^{-1}$.\n\nAn intermediate frequency (IF) centred on 150 MHz for left-hand (LHC)\nand right-hand (RHC) circularly polarised signals was obtained after\ndown-conversion from a central RF frequency of usually 1410 MHz. 
The\nsignals received from the telescope were acquired and processed with\nthe Effelsberg-Berkeley Pulsar Processor (EBPP) which removes the\ndispersive effects of the interstellar medium on-line using ``coherent\nde-dispersion'' \\citep{hr75}. Before entering the EBPP, the two LHC\nand RHC signals of the IF are converted to an internal IF of 440\nMHz. A maximum bandwidth of $2\\times32\\times0.7$~MHz~=~$2\\times22.4$~MHz\nwas available for the chosen observing frequency and DM of the\npulsar. It was split into four portions for each of the two circular\npolarisations, which were mixed down to baseband. Each portion was\nthen sub-divided into eight narrow channels via a set of digital\nfilters \\citep{bdz+97}. The outputs of each channel were fed into\nde-disperser boards for coherent on-line de-dispersion. In total 64\noutput signals were detected and integrated in phase with the\npredicted topocentric pulse period.\n\nA pulse time-of-arrival (TOA) was calculated for each average profile\nobtained during a 5-10 min observation. During this process, the\nobserved time-stamped profile was compared to a synthetic template,\nwhich was constructed out of 5 Gaussian components fitted to a\nhigh signal-to-noise standard profile (see Kramer et al.,\n\\citeyear{kxl+98,kll+99}). This template matching was done by a\nleast-squares fitting of the Fourier-transformed data \\citep{tay92}. \nUsing the measured time delay between the actual\nprofile and the template, together with the accurate time stamp of the data provided\nby a local H-MASER and corrected off-line to UTC(NIST) using recorded\ninformation from the satellites of the Global Positioning System\n(GPS), the final TOA was obtained. The uncertainty of each TOA was\nestimated using a method described by \\citet{dr83} and \\citet{lan99}.\n\n\\subsection{Jodrell Bank timing}\n\nObservations of PSR\\,J1811-1736 have been made regularly using the 76-m\nLovell telescope at Jodrell Bank since its discovery in 1997\n\\citep{lcm+00}. The typical observing rate was about two\nobservations per week, with an overall RMS of 1300\\,$\\mu$s after\napplying the final timing model. A cryogenic receiver at 1404\\,MHz was\nused, and both LHC and RHC signals were observed using a\n$2\\times32\\times1.0$-MHz filter bank. After detection,\nthe signals from the two polarisations were filtered, digitised at\nappropriate sampling intervals and incoherently de-dispersed in hardware\nbefore being folded on-line with the topocentric pulse period and\nwritten to disk. Each integration was typically 1-3 minutes in\nduration; 6 or 12 such integrations constituted a typical\nobservation. Off-line, the profiles were added in polarisation pairs\nbefore being summed to produce a single total-intensity profile. A\nstandard pulse template was fitted to the observed profiles at each\nfrequency to determine the pulse times-of-arrival (TOAs). Details of\nthe observing system and the data reduction scheme can be found\nelsewhere (e.g.~\\citealt{hlk+04}).\n\n\n\\subsection{Westerbork timing}\n\nObservations of PSR\\,J1811$-$1736 have been carried out approximately\nmonthly since 1999 August 1st, obtaining an overall timing RMS of\n659~$\\mu$s after applying the final timing model, at a central\nfrequency of 1380 MHz and a bandwidth of 80 MHz. The two linear\npolarisations from all 14 telescopes were added together in phase by\ntaking account of the relative geometrical and instrumental phase\ndelays between them and then passed to the PuMa pulsar backend\n\\citep{vkv+02}. 
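As an aside that may help the reader gauge the numbers involved, the short sketch below evaluates the usual approximation for the dispersion smearing left within a single filterbank channel, $\\Delta t \\simeq 8.3\\,\\mu{\\rm s}\\,{\\rm DM}\\,\\Delta\\nu_{\\rm MHz}\\,\\nu_{\\rm GHz}^{-3}$, for the channel widths of the systems described in this section and the DM of PSR\\,J1811$-$1736. The formula and its constant are standard, but the script itself is only an illustration and not part of the original data reduction.
\\begin{verbatim}
# Illustrative only: intra-channel dispersion smearing for the
# filterbank configurations described in this section.
# dt ~ 8.3 us * DM[pc cm^-3] * dnu[MHz] / nu[GHz]^3
DM = 476.0   # pc cm^-3

def channel_smearing_ms(dm, chan_width_mhz, freq_ghz):
    return 8.3e-3 * dm * chan_width_mhz / freq_ghz**3

for label, dnu, nu in [('EBPP, 0.7 MHz channel at 1.41 GHz', 0.7, 1.41),
                       ('Lovell, 1.0 MHz channel at 1.404 GHz', 1.0, 1.404),
                       ('PuMa, 80 MHz / 512 channels at 1.38 GHz', 80.0/512, 1.38)]:
    print(label, round(channel_smearing_ms(DM, dnu, nu), 2), 'ms')
\\end{verbatim}
At this DM the smearing is of the order of a millisecond per MHz-wide channel at 1.4\\,GHz, which the coherently de-dispersing EBPP removes entirely; for the incoherently de-dispersed data it is in any case small compared with the scatter-broadening of the profile discussed below.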
The data were obtained with the L-band receiver\ninstalled in the primary focus of the telescopes. The noise\ntemperature of this system is 25~K, resulting in a system temperature\nfrom 30~to 50~K on cold sky depending on elevation. The antenna gain\nat these frequencies is 1.2~K~Jy$^{-1}$. PuMa was used in its digital\nfilterbank mode whereby the Nyquist sampled signals are Fourier\ntransformed and the polarisations combined to produce total intensity\n(Stokes I) spectra with a total of 512 channels. These spectra were\nsummed online to give a final sampling time of 409.6 $\\mu$s and\nrecorded to hard disk. These spectra were subsequently dedispersed\nand folded with the topocentric period off-line to form integrations\nof a few minutes. TOAs were calculated for each profile following a\nscheme similar to that outlined above for Effelsberg data, except a\nhigh signal-to-noise standard profile was used instead of Gaussian\ncomponents. In the future, EPTA timing will employ an identical\nsynthetic template for all telescopes.\n\n\n\\section{Data analysis}\n\n\nThe TOAs, corrected to UTC(NIST) via GPS and weighted by their\nindividual uncertainties determined in the fitting process, were\nanalysed with the {\\tt TEMPO} software package \\citep{tw89}, using the\nDE405 ephemeris of the Jet Propulsion Laboratory \\citep{sta90}. {\\tt\nTEMPO} minimizes the sum of weighted squared {\\it timing residuals},\ni.e.~the difference between observed and model TOAs, yielding a set of\nimproved pulsar parameters and post-fit timing residuals. A summary\nof the basic characteristics of each dataset is shown in Table 1.\n\n \\begin{table}\n \\centering\n \\caption[]{Data sets' characteristics}\n \\label{TabData} \n \\begin{tabular}{llll}\n \\hline\n \\noalign{\\smallskip}\n\n& Jodrell Bank & Effelsberg & Westerbork\\\\\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\nN.~of~ToAs & 348 & 74 & 213\\\\\nTime~Span & 50842-53624 & 51490-53624 & 51391-53546 \\\\\nR.M.S. & 1300 & 538 & 659\\\\\n\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\n \\end{tabular}\n\\end{table}\n\nAlthough the templates used for the three telescope data differed,\nresulting offsets were absorbed in a global least-squares\nfit. Remaining uncertainties were smaller than the typical measurement\naccuracy of the Jodrell Bank timing data of about 9 $\\mu$s.\n\nBefore all TOAs were combined, preliminary fits were performed on each\ndataset alone, in order to study possible systematic differences between\nthe datasets. We applied a small quadrature addition \nand a scaling factor to the\nuncertainties to obtain the expected value of a reduced $\\chi^2=1$\nfor each dataset. \nThe final joint fit to all TOAs resulted in a $\\chi^{2}$ value of\nunity, avoiding the need to add further systematic uncertainties.\n\nTable 2 summarizes all observed timing and some derived\nparameters. For the observed parameters the quoted errors are twice\nthe nominal TEMPO errors. For the derived parameters, the given\nuncertainties are computed accordingly.\n\nThe joint fit allowed us to determine the spin, positional and\nKeplerian orbital parameters plus one post-Keplerian parameter with a\nprecision better than the best determination from a single data set\nalone. However, the high degree of interstellar scattering (Fig. 3)\nmeans that further post-Keplerian parameters will be difficult to\nmeasure with continued observations at this frequency. 
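The quadrature addition and scaling of the TOA uncertainties described above can be written down compactly. The sketch below is a generic routine of this type (an EFAC\/EQUAD-style rescaling); the specific values applied to the three data sets are not quoted in the text, so the numbers used here are placeholders chosen only to show the mechanics.
\\begin{verbatim}
import numpy as np

def rescale_toa_errors(residuals_us, sigma_us, equad_us=0.0):
    # Add `equad_us` in quadrature, then scale so that the reduced
    # chi-square of this data set is unity (ndof ~ number of TOAs here).
    inflated = np.sqrt(sigma_us**2 + equad_us**2)
    efac = np.sqrt(np.mean((residuals_us / inflated)**2))
    return efac * inflated

# Placeholder example with synthetic white residuals
rng = np.random.default_rng(1)
sigma = np.full(200, 500.0)                  # raw TOA errors (microseconds)
resid = rng.normal(0.0, 700.0, size=200)     # timing residuals (microseconds)
new_sigma = rescale_toa_errors(resid, sigma, equad_us=100.0)
print(np.mean((resid / new_sigma)**2))       # ~1 by construction
\\end{verbatim}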
We will discuss\nthe future prospects for higher frequency observations in \\S~5.\n\n \\begin{table}\n \\caption[]{Timing and derived parameters}\n \\label{TabTim} \n \\begin{tabular}{ll}\n \\hline\n \\noalign{\\smallskip}\nTiming parameters & Joint~data~sets\\\\\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\nRA~(J2000, hh:mm:ss) & 18:11:55.034(3)\\\\\nDECL~(J2000, deg:mm:ss) & -17:36:37.7(4)\\\\\nPeriod, $P$~(s) & 0.1041819547968(4) \\\\\nPeriod derivative, $\\dot{P}$~($10^{-19}$ s~s$^{-1}$) & 9.01(5)\\\\\nDispersion Measure, DM~(pc~cm$^{-3}$) & 476(5)\\\\\nProjected semi-major axis$^{a}$, $a~\\sin~i$~(s) & 34.7827(5)\\\\\nEccentricity, $e$ & 0.828011(9)\\\\\nEpoch of periastron, $T_{0}$~(MJD) & 50875.02452(3)\\\\\nOrbital period, $P_{B}$~(d) & 18.7791691(4)\\\\\nLongitude of periastron, $\\omega$~(deg) & 127.6577(11)\\\\\nAdvance of periastron, $\\dot{\\omega}$~(deg~yr$^{-1}$) & 0.0090(2)\\\\\nFlux density at 3100\\,MHz, $S_{3100}$ (mJy) & 0.34(7) \\\\\n&\\\\\nTime~Span~(MJD) & 50842-53624\\\\\nN.~of~ToAs & 635\\\\\nRMS~($\\mu s$) & 851.173\\\\\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\nDerived parameters$^{b}$ & \\\\\n \\noalign{\\smallskip}\n \\noalign{\\smallskip}\nCharacteristic~Age, $\\tau_{\\rm c}$ ($10^{9}~yr$) & 1.83\\\\\nSurface magnetic field, $B_{0}$~($10^{9}$ G) & 9.80 \\\\\nTotal~Mass~$M_{\\rm TOT}$ ($M_{\\odot}$) & 2.57(10)\\\\\nMass~Function~$f(M_{\\rm C})$ ($M_{\\odot}$) & 0.128121(5)\\\\\nOrbital separation $A$ (ls) & 94.4(6) \\\\\nMinimum~companion~mass~$M_{\\rm C,min}$ ($M_{\\odot}$) & 0.93\\\\\n\n \\noalign{\\smallskip}\n \\hline\n \\end{tabular}\n\\begin{itemize}\n\\item[$^a$] The projected semi-major axis $a \\sin i$ is the\n semi-major axis of the projection of the orbit of the pulsar, around\n the system's center of mass, onto the plane containing the line of\n sight and the line of nodes.\n\\item[$^b$] Characteristic age and surface magnetic field have been\n calculated using standard formulas, namely $\\tau_{\\rm\n c}=P\/2\\dot{P}$ and\n $B_{0}=3.2\\times10^{19}\\sqrt{P\\dot{P}}$\\,G. The total mass $M_{\\rm\n TOT}$ has been calculated from the relativistic\n periastron advance and the measured Keplerian parameters,\n assuming the validity of general relativity.\n The minimum companion mass was estimated\n using the observed mass function $f(M_{\\rm\n C})$ and the lower limit for the total mass, as given by its\n uncertainty, in the case of $\\sin i = 1$. For details see\n Lorimer \\& Kramer (2005)\\nocite{lk05}.\n\\end{itemize}\n\n\\end{table}\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0,width=8.5cm]{4385fig1.ps}\n\\caption{Timing residuals after jointly applying the final model to\nall three data sets. Vertical bars represent the ToA's uncertainty.}\n\\label{fig:jointres}\n\\end{figure}\n\n\\section{The nature of the companion}\n\nIn their discovery paper, \\citet{lcm+00} proposed that this system is\na member of the small class of DNS binaries. Soon after, \\citet{mig00}\nreported on optical observations of the region surrounding the pulsar\nposition to search for emission from the pulsar companion. 
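As a quick numerical cross-check of the derived quantities in Table~2, the script below evaluates the standard expressions quoted in the table notes, i.e. $\\tau_{\\rm c}=P\/2\\dot{P}$, $B_{0}=3.2\\times10^{19}\\sqrt{P\\dot{P}}$\\,G and the usual mass function $f(M_{\\rm C})=4\\pi^{2}x^{3}\/(T_{\\odot}P_{B}^{2})$ with $x=a\\sin i$ in light-seconds and $T_{\\odot}=GM_{\\odot}\/c^{3}$. It is only an illustration of the arithmetic, not the code used for the analysis, and differences in the last quoted digits are to be expected from rounding.
\\begin{verbatim}
import math

P     = 0.1041819547968        # spin period (s)
Pdot  = 9.01e-19               # period derivative (s/s)
x     = 34.7827                # projected semi-major axis (lt-s)
Pb    = 18.7791691 * 86400.0   # orbital period (s)
Mtot, dMtot = 2.57, 0.10       # total mass and uncertainty (Msun)
Tsun  = 4.925490947e-6         # G Msun / c^3 (s)
YEAR  = 365.25 * 86400.0

tau_c = P / (2.0 * Pdot) / YEAR                    # ~1.8e9 yr
B0    = 3.2e19 * math.sqrt(P * Pdot)               # ~9.8e9 G
f_m   = 4.0 * math.pi**2 * x**3 / (Tsun * Pb**2)   # ~0.128 Msun

# minimum companion mass: sin(i) = 1 and the lower limit on the total mass
Mc_min = (f_m * (Mtot - dMtot)**2) ** (1.0/3.0)    # ~0.92-0.93 Msun

print(tau_c, B0, f_m, Mc_min)
\\end{verbatim}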
They\ndetected no emission coincident with the pulsar position and while not\nconclusive, the lack of emission is at least consistent with the\nneutron star hypothesis for the nature of the companion.\n\nThe derived values for the characteristic age ($\\tau_c = 1.83 \\times\n10^{9}$ yrs) and the surface magnetic field ($B = 9.8 \\times\n10^{9}$G), as well as the combined values of the spin period\n($P=104$~ms) and its period derivative ($\\dot{P}=9\\times10^{-19}$),\nindicate that PSR\\,J1811$-$1736 is a neutron star that experienced a\nspin-up phase via accretion from mass overflowing from its\ncompanion. These parameters, in conjunction with the measured orbital\neccentricity ($e=0.828$), indeed suggest that PSR\\,J1811$-$1736 is a\nmildly recycled pulsar whose companion star was massive enough to also\nundergo a SN explosion. This second SN imparted the actually observed\nlarge eccentricity to the system (e.g. \\citealt{bv91}).\n\nOur new measurement of the relativistic periastron advance,\n$\\dot{\\omega} = 0.0090 \\pm 0.0002$ deg yr$^{-1}$, allows us to\ndetermine the value of 2.57$\\pm$0.10 M$_{\\odot}$ for the total mass of\nthe system, assuming that general relativity is the correct theory of\ngravity and that the observed value is fully due to relativistic\neffects (e.g. \\citealt{dd86}). This value, combined with the measured\nmass function, implies a minimum companion mass of 0.93 $M_{\\odot}$.\n\n\\begin{figure}\n\\centering \\includegraphics[angle=0,width=8.5cm]{4385fig2.ps}\n\\caption{Mass-mass diagram for the binary system hosting\nPSR\\,J1811-1736. The shaded area below the dotted curved line is\nexcluded because of the geometrical constraint $\\sin i \\leq 1$, while\nthe area outside the diagonal stripe is excluded by the measurement of\nthe relativistic periastron advance and the derived value for the\ntotal mass for this system.}\n\\label{fig:masses}\n\\end{figure}\n\nThe value for the total mass is relatively low, but very similar to\nthe total mass of the double pulsar system \\citep{lbk+04} and the\nrecently discovered DNS system PSR\\,J1756$-$2251 \\citep{fkl+05}. In\nfact, these systems have neutron star companions that have the lowest\nneutron star masses observed so far, $M_{c}~=~1.25M_{\\odot}$ and\n$M_{c}=1.17M_{\\odot}$ (for a recent review see \nStairs 2004\\nocite{sta04a}), respectively. In Figure \\ref{fig:masses} we\nshow the so-called mass-mass diagram where the pulsar and companion\nmasses can be directly compared. The measured value for the advance of\nperiastron means that the sum of the masses must lie along the\ndiagonal line, while the constraint on the inclination $\\sin i \\leq 1$\nexcludes the hatched region below the dotted line. Assuming that the\nneutron stars in this system must have a mass which is larger than the\n{\\it lowest} mass so-far measured, i.e. $1.17M_\\odot$, we find that\nthey both have masses in the interval\n1.17$\\,M_{\\odot}\\,\\leq\\,M_{P},M_{C}\\,\\leq\\,1.50 M_{\\odot}$. This\ninterval contains all but the heaviest neutron stars masses for which\na reliable determination has been obtained. Using this mass\nconstraint, we can also translate this range into lower and upper\nlimits on the inclination of the system, i.e. 
44\\,deg\\,$\\ifmmode\\stackrel{<}{_{\\sim}}\\else$\\stackrel{<}{_{\\sim}}$\\fi i\n\\ifmmode\\stackrel{<}{_{\\sim}}\\else$\\stackrel{<}{_{\\sim}}$\\fi$\\,50\\,deg.\n\nAlternatively, if either the pulsar or the companion has a mass equal\nto the observed median neutron star mass of 1.35 M$_{\\odot}$\n\\citep{sta04a}, the other neutron star would have a mass of\n$1.22\\,\\pm\\,0.10 M_{\\odot}$. This value is consistent with the lower\nlimit in the previous discussion, but it also allows for the\npossibility that one of the two neutron stars has a mass as low as\n1.12 $M_{\\odot}$.\n\n\\section{Future potential of timing observations}\n\nIn order to determine the companion mass without ambiguity, it is\nnecessary to measure a second post-Keplerian (PK) parameter\n(e.g. \\citealt{dt92}). We have investigated the possibility of\nmeasuring the PK parameter $\\gamma$, which describes the combined\neffect of gravitational redshift and a second-order Doppler effect. For\na companion mass of $1.35M_\\odot$, the expected value is $\\gamma =\n0.021$ s. Using simulated data sets for the presently available\ntiming precision, we estimate that a $3\\sigma$ detection for $\\gamma$\nis achievable after about 4 more years of observation. However, in\norder to obtain a 10\\% accuracy in mass measurement by determining\n$\\gamma$ to a similar precision, several decades of observations may\nbe needed.\n\nFrom similar simulations, we estimate that PK parameters like the rate\nof orbital decay, $\\dot{P}_{\\rm B}$, or the Shapiro delay, are\nunmeasurable in this system, unless significant improvements in timing\nprecision can be obtained. For a companion mass of 1.35 $M_{\\odot}$,\n$\\dot{P}_{\\rm B}$ is only $-9.4 \\times 10^{-15}$~s~s$^{-1}$ and we\nexpect an amplitude of only 6\\,$\\mu$s for the Shapiro\ndelay. We also note that the effects of geodetic precession (see\ne.g. \\citealt{kra98}) will not be measurable within a reasonable time,\nas it has a period of the order of $10^{5}$ years.\n\n\n\\section{Improving timing precision}\n\nIt is obvious that the measurement of further PK parameters will only\nbe possible if higher timing precision can be achieved for this\npulsar. For instance, if a precision of 50\\,$\\mu$s could be obtained,\n$\\gamma$ could be measurable to a 10\\% accuracy after a total of just\n5 yr of observations, while a $3\\sigma$ detection of the orbital decay\nmay be achieved after about 6 yr.\n\nOne way to achieve higher timing precision is to detect narrow\nfeatures in the observed pulse profile by means of higher effective\ntime resolution. This is most commonly achieved through better\ncorrection for the dispersion smearing that is caused by the radio\nsignal's passage through the ionized interstellar medium. While this\neffect is completely removed by the use of coherent\nde-dispersion techniques (see \\citealt{hr75}) at some of our\ntelescopes, it is apparent that the current timing precision is limited\ninstead by broadening of the pulse profile due to interstellar\nscattering (\\citealt{lmg+04}, and references therein). Indeed, the\npulse profile at 1.4~GHz shows a strong scattering tail (see Fig.\\,3)\nwhich prevents a highly accurate determination of the pulse time of\narrival. 
As scattering is a strong function of observing frequency, we\ncan expect to reduce its effect, and hence to enable higher timing\nprecision, by using timing observations at frequencies above 1.4~GHz.\n\n\\begin{figure}\n\\centering \\includegraphics[angle=0,width=8.5cm]{4385fig3.ps}\n\\caption{Pulses' profiles of PSR\\,J1811--1736 at 1.4\\,GHz (bottom\npanel) and 3.1\\,GHz (top panel). Both profiles have been obtained with\n10 minutes observations performed with the Parkes radio telescope in\nFebruary 2005.}\n\\label{fig:3GHzp}\n\\end{figure}\n\nWe obtained observations at 3.1~GHz that confirm this expectation.\nThe pulse profile obtained at this frequency shows no evidence of\ninterstellar scattering, and its width at 10\\% is only 7.3~ms. This is\na great improvement with respect to the 1.4~GHz profile, whose 10\\%\nwidth is 58.3~ms. A flux density of $0.34\\pm0.07$mJy measured at\n3.1\\,GHz suggests that regular timing observations at this frequency\nshould be possible and should significantly improve the achievable\ntiming precision. This would allow us to measure a second PK parameter\nto an accuracy that is sufficiently precise to determine the companion\nmass.\n\nUsing the data available at 1.4\\,GHz and 3.1\\,GHz, we computed\nspectral indexes for flux density and scattering time. For the flux\ndensity we obtain a spectral index of $\\beta=-1.8\\pm0.6$. Subdividing\nour observing band at 1.4\\,GHz we obtain two different profiles that\nwe use to measure the pulse scatter timescale $\\tau$ by applying the\ntechnique described in \\citealt{lkm+01}. We convolve the\n3.1\\,GHz-profile, assumed to represent the true pulse shape, with an\nexponential scattering tail and obtain scattering times by a\nleast-square comparison of the convolved profile with the observed\npulse shape. At 1.284\\,GHz we find $\\tau_s~=~16.9$~ms, and\n$\\tau_s~=~10.6$~ms at 1.464~GHz, respectively. This results in a\nspectral index $\\alpha$ of the scattering time, i.e.~$\\tau \\propto\n\\nu^{-\\alpha}$, of $\\alpha~=~3.5\\pm~0.1$. Such a measured value agrees\nvery well with analogous results from L\\\"{o}hmer et\nal. (\\citeyear{lkm+01,lmg+04}) who determined $\\alpha$ for a number of\npulsars with very high dispersion measures.\n\nThe measured spectral index of the scattering time is also consistent\nwith the fact that the pulsar has not been detected at frequencies\nbelow 1\\,GHz. For example at 400\\,MHz we calculate $\\tau_{s}\\sim1$\\,s\nwhich is almost an order of magnitude greater than the spin period of\nthe pulsar thus making it impossible to detect it as a pulsating source.\n\n\n\\section{Previous searches for pulsations from the companion}\n\nSearches for pulsations from the binary companion of PSR\\,J1811-1736\nhave been performed on Parkes and Effelsberg data. Parkes observations\nhave been investigated with the procedure described in \\citet{fsk+04},\nwhile Effelsberg data have been processed using the procedure\ndescribed in \\citet{kle04}. Both searches were unsuccessful in\ndetecting any evidence of pulsation. The very high value of the\ndispersion measure for this system may suggest that the interstellar\nscattering is responsible for the failure in detecting any\npulsation. 
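The scaling used in the previous section can be made explicit with a few lines of code. The sketch below recovers the spectral index from the two measured scattering times and extrapolates the scattering time to other frequencies of interest under the same power law $\\tau \\propto \\nu^{-\\alpha}$; it is shown only as an illustration of the numbers quoted in the text.
\\begin{verbatim}
import math

nu1, tau1 = 1.284, 16.9     # GHz, ms (lower half of the 1.4 GHz band)
nu2, tau2 = 1.464, 10.6     # GHz, ms (upper half of the 1.4 GHz band)

alpha = math.log(tau1 / tau2) / math.log(nu2 / nu1)    # ~3.5

def tau_scatter_ms(nu_ghz):
    return tau1 * (nu1 / nu_ghz) ** alpha

print('alpha =', round(alpha, 2))
print('tau(0.40 GHz) ~', round(tau_scatter_ms(0.40) / 1000.0, 1), 's')   # ~1 s
print('tau(2.70 GHz) ~', round(tau_scatter_ms(2.70), 1), 'ms')           # ~1 ms
print('tau(3.10 GHz) ~', round(tau_scatter_ms(3.10), 1), 'ms')           # << 7.3 ms width
\\end{verbatim}
The extrapolated values are consistent with the non-detection of the pulsar below 1\\,GHz and with the essentially unscattered profile observed at 3.1\\,GHz.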
Therefore we studied the possible impact of this phenomenon\non our searches for pulsations from the companion of PSR\\,J1811-1736.\n\n\nWe considered 1\\,hr observations done with the Effelsberg telescope\nusing either the 20\\,cm (1.4\\,GHz) or the 11\\,cm (2.7\\,GHz) receiver,\nexploring a range of possible flux densities ($S=0.05, 0.5, 1$\\,mJy)\nand a detection signal-to-noise ratio threshold of\n$S\/N=10$. Using the DM of the observed pulsar, and assuming Effelsberg\nobservations at 1.4\\,GHz, we find a minimum detectable period of\n$P_{\\rm min}=750$\\,ms for a flux density of $S=50\\,\\mu$Jy, while even\nfor the flux densities of $S=500\\,\\mu$Jy and $S=1\\,$mJy periods below\n$\\sim$10\\,ms remain undetectable at the observing frequency of\n1.4\\,GHz.\n\nAt 2.7\\,GHz, the system performance of the Effelsberg telescope allows\nfor an antenna gain of $G=$1.5\\,K\\,Jy$^{-1}$ with a system temperature\n$T_{\\rm sys}=17$\\,K. Using these parameters, we obtain minimum periods\nof $P_{\\rm min}=110$\\,ms, $P_{\\rm min}=2.5$\\,ms and $P_{\\rm min}=\n1.6$\\,ms for flux densities $S_{\\rm 2.7\\,GHz}=50\\,\\mu$Jy,\n500\\,$\\mu$Jy and 1\\,mJy respectively.\n\nWhen searching for pulsations from the binary companion of a Galactic\nrecycled pulsar in a double neutron star system, it is more likely\nthat the companion is a young pulsar with rather ordinary spin\nparameters, as found for PSR\\,J0737-3039B in the double pulsar system\n\\citep{lbk+04}. Our lower limits on the minimum detectable period\ntherefore suggest that interstellar scattering should not have\nprevented the detection of the companion, unless it were a very\nfast-spinning or very faint source.\n\n\n\n\\section{Constraints on the kick velocity of the second SN explosion}\n\nThe large eccentricity of the J1811$-$1736 system can be ascribed to a\nsudden loss of mass which results in a change of the orbital\nparameters. Such a sudden loss of mass can be attributed to the SN\nexplosion that formed the younger unseen neutron star companion (see,\ne.g., \\citealt{bv91}). Under the hypothesis of a symmetric explosion,\nsimple calculations show that the binary survives this event only if\nthe expelled mass $M_{\\rm exp}$ is less than half of the total mass of\nthe binary before the explosion (pre-SN binary). The induced\neccentricity is a simple function of the amount of the expelled mass:\n$e=M_{\\rm exp}\/M_{\\rm TOT}$, where $M_{\\rm TOT}$ is the total mass of\nthe binary that survives the explosion (post-SN binary). In the case of the binary system hosting\nPSR\\,J1811-1736, the measured eccentricity, $e=0.828$, and the derived\ntotal mass, $M_{\\rm bin}=2.57\\,M_{\\odot}$, imply an expelled mass of\n$\\sim2.1\\,M_{\\odot}$ and hence a total mass of $4.7\\,M_{\\odot}$ for the pre-SN binary.\n\nThe high space velocities measured for isolated pulsars indicate that\nneutron stars may receive a kick when formed, with an unpredictable\namplitude and direction \\citep{hp97,cc98,acc02}. Such kicks\nimparted to the newly formed neutron stars are caused by {\\em\nasymmetric supernova explosions}. If an asymmetric SN explosion\noccurs in a binary system, the survival and the eventual post-SN\nbinary parameters are jointly determined by the mass loss and the\nvector representing the velocity imparted to the neutron star. In this\ncase a simple survival condition like the one derived for the\nsymmetric explosion case cannot be determined.\n\nA correlation between the pulsar's spin period and orbital\neccentricity has recently been found for DNS systems\n(\\citealt{mlc+05}, \\citealt{fkl+05}). 
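As a numerical illustration of the symmetric-explosion limit recalled at the beginning of this section, the few lines below recover the quoted pre-SN total mass from the measured eccentricity and the present total mass, under the stated assumptions of a circular pre-SN orbit and no kick.
\\begin{verbatim}
e      = 0.828    # measured orbital eccentricity
M_post = 2.57     # present (post-SN) total mass (Msun)

M_exp = e * M_post         # expelled mass, ~2.1 Msun
M_pre = M_post + M_exp     # pre-SN total mass, ~4.7 Msun

# survival condition for a symmetric explosion: M_exp < M_pre / 2
print(M_exp, M_pre, M_exp < 0.5 * M_pre)
\\end{verbatim}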
A numerical simulation by\n\\citet{dpp05} linked this correlation to the typical amplitude of the\nkick velocity received by the younger neutron star at birth.\n\\citet{dpp05} found that the spin period versus eccentricity\ncorrelation is recovered if the typical kick amplitude satisfies the\ncondition $V_{\\rm K} \\ifmmode\\stackrel{<}{_{\\sim}}\\else$\\stackrel{<}{_{\\sim}}$\\fi 50$\\,km\\,s$^{-1}$.\n\nTo investigate the nature of the kick received by the younger neutron\nstar in this system, we considered as the pre-SN binary a system\ncontaining a neutron star and a helium star. We then constrained the\ntotal mass of this system by combining our results on the total mass\nof the actual binary with the range given by \\citet{dp03b} for the\nmass of the helium star that was the companion of PSR\\,J1811-1736\nbefore the explosion. The helium star mass range $2.8M_{\\odot} \\leq\nM_{\\rm C} \\leq 5.0 M_{\\odot}$ given by \\citet{dp03b} leads to a total\nmass range of $4.0M_{\\odot} \\leq M_{{\\rm {TOT}}} \\leq 6.5M_{\\odot}$.\n\nBinary parameters for the pre-SN binary have been chosen as follows.\nThe eccentricity has been assumed negligible, since the accretion\nphase responsible for spinning up the pulsar also provided strong\ntidal forces that circularised the orbit. The orbital separation has\nbeen constrained to be between the minimum (pericentric) and\nmaximum (apocentric) distance between the two neutron stars in the\npost-SN binary. This statement can be justified as follows. The\ntypical velocity of the matter expelled in a SN explosion is of the\norder of $10^{4}$\\,km\\,s$^{-1}$, while the typical orbital velocity of the stars in\na binary system is of the order of $100$\\,km\\,s$^{-1}$, using for the\ntotal mass any value in the range we used for $M_{{\\rm {TOT}}}$ and a\nvalue for the orbital separation comparable to the one for the present\nbinary system, i.e. a few light-minutes (see the discussion in the next\nparagraph of the post-SN binary evolution due to general relativistic\neffects). This means that the change in position of the two stars is\nnegligible if compared to the change in position of the expelled\nmatter. The time required for the binary system to make the transition\nfrom the pre-SN to the post-SN binary is the time required by the\nexpelled matter to travel along a path as long as the orbital\nseparation, i.e. less than an hour. After such an elapsed time the matter\nexpelled in the SN explosion encloses both stars and no longer has any\ngravitational effect on their binary motion. This time is also much\nshorter than the orbital period of a few days for a pre-SN binary like\nthe one we are considering. This means that during this transition the\npositions of the two stars remain essentially unchanged, and their separation is\na distance periodically assumed by the two stars also in their orbital\nmotion in the post-SN binary.\n\nTo make a fully consistent comparison between the actually observed\nbinary and the eccentric binary that emerged from the last SN\nexplosion (post-SN binary), one has to take into account the secular\nchanges of the orbital parameters caused by general relativistic\neffects. In order to do this one needs to have an estimate for the\ntime since the last SN. The only timescale that is available to us is\nthe characteristic age of the first-born pulsar. We then find binary\nparameters that are consistent with the present system values within\ntheir uncertainties. 
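The order-of-magnitude argument used above to fix the pre-SN orbital separation can be checked with a short script; the separation is taken to be comparable to the present orbital separation of $\\sim$94 light-seconds (Table~2), the pre-SN total mass is drawn from the range given above, and the ejecta speed of $10^{4}$\\,km\\,s$^{-1}$ is a typical supernova value assumed here purely for illustration.
\\begin{verbatim}
import math

G, MSUN, C = 6.674e-11, 1.989e30, 2.998e8   # SI units
a    = 94.4 * C      # separation ~94 lt-s, in metres
v_ej = 1.0e7         # assumed ejecta speed, 10^4 km/s

for M_tot in (4.0, 4.7, 6.5):    # pre-SN total mass range (Msun)
    v_orb = math.sqrt(G * M_tot * MSUN / a) / 1.0e3
    print('M_tot =', M_tot, 'Msun -> relative orbital speed ~', round(v_orb), 'km/s')

print('ejecta crossing time ~', round(a / v_ej / 60.0), 'min')   # << orbital period of days
\\end{verbatim}
The orbital speeds are indeed of the order of $100$\\,km\\,s$^{-1}$ and the ejecta crossing time is far shorter than the orbital period of a few days, as assumed in the argument above.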
Given the well known uncertainties in this age\nestimation, we considered the possibility that the present binary is\nin fact up to ten times older than suggested by the characteristic age\n(i.e 1.8$\\times 10^{10}$ yr). Even when considering this extreme age,\nwe find that our results remain unaffected. We consequently decided\nto use as post-SN orbital parameters the same values we measure today.\n\nBy insisting that the total energy and total angular momentum,\ncalculated in the center-of-mass frame, of the post-SN binary and the\npresent systems are conserved, we can combine these terms to obtain an\nequation for the kick amplitude, as a function of the two angles,\nrepresenting its direction in a suitable reference frame, and the\ntotal mass and orbital separation before the explosion. We assumed\nthat the probability of occurrence for any given kick vector is\nproportional to the solid angle described by the direction of the kick\nin spherical coordinates and then calculated the probability of having\na kick velocity lower than some fixed values. We chose the values of\n50, 100 and 150~km~s$^{-1}$ ( hereafter $P_{50}$, $P_{100}$ and\n$P_{150}$ respectively.). Figure\\,\\ref{fig:probs} shows that $P_{50}$\nis not negligible if the total mass of the pre-SN binary system is\nlower than 6\\,$M_{\\odot}$. Moreover, all considered probabilities peak\nin correspondence of a total pre-SN mass of 4.70\\,$M_{\\odot}$,\ncorresponding to the null kick case. These results lead to the\nconclusion that the younger neutron star in this system received a low\nvelocity kick and is thus similar to all other known DNS systems,\nwhich all have tighter orbits.\n\nNevertheless, the binary system containing PSR\\,J1811-1736 is much\nwider than all other known DNS systems. This may indicate that the\nbinary evolution of this system may have been (at least partially)\ndifferent. In particular the wide orbital separation for this system\nmay be compatible with an evolution during which the pulsar's\nprogenitor avoided completely a common envelope phase \\citep{dp03b} or\nthat this phase was too short to sufficiently reduce the orbital\nseparation. Moreover if the spin-up occurred via the stellar wind of\nthe giant companion then the system would tend to be wider due to the\nisotropic mass loss from the companion \\citep{dpp05}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0,width=8.5cm]{4385fig4.ps}\n\\caption{Probabilities to have a kick velocity lower than or equal to\n50 (lower line), 100 and 150 (upper line) km s$^{-1}$ as a function of\nthe pre-SN total mass. All probabilities peak in correspondence of a\npre-SN total mass $M_{\\rm TOT}=4.70M_{\\odot}$, which is the mass of\nthe binary system before the explosion in the case of a symmetric\nSN. The probability to have a kick velocity lower than 50 km\ns$^{-1}$ is not negligible for all but the highest considered values\nfor the binary pre-SN mass.}\n\\label{fig:probs}\n\\end{figure}\n\n\\section{Summary \\& Conclusions}\n\nWe have presented an improved timing solution for the binary pulsar\nJ1811$-$1736. This solution improves the previously measured values\nfor the spin and Keplerian orbital parameters and one post-Keplerian\norbital parameter, the periastron advance. 
These results would not\nhave been achieved without data from the three telescopes used and are\nthe first obtained as part of the European Pulsar Timing Array (EPTA)\ncollaboration.\n\nThe measured values for the spin period and its first derivative are\ntypical of a mildly recycled neutron star, while the high eccentricity\nof the binary system can be seen as a signature of the SN explosion\nthat interrupted the mass transfer from the companion to the accreting\nneutron star. This is likely to have occurred before the pulsar could\nreach spin periods typical of the fully recycled (i.e. millisecond)\npulsars. This leads to the conclusion that PSR\\,J1811$-$1736 is a\nmember of a DNS binary system.\n\nThe determined value of the periastron advance provides further\nconfirmation for this scenario as it suggests a total mass of the\nsystem of $M_{tot}$~=~2.57$\\pm$0.10~$M_{\\odot}$. This value is\nsimilar to the total mass of two other DNS systems, i.e. the double\npulsar \\citep{lbk+04} and PSR\\,J1756-2251 \\citep{fkl+05}. In both\nthese systems, the non-recycled neutron stars is very light. Assuming\nthat PSR\\,J1811$-$1736 is a neutron star with a mass within the\ncurrently measured mass range for neutron stars, we find the companion\nmass to lie in the same range. Using these arguments we determine the\ninclination of the orbital plane to within $\\sim6$ degrees.\n\nWe also investigated the possibility of measuring a second\npost-Keplerian parameter, in order to determine both masses and thus\nto definitively determine the nature of the companion. Unfortunately, the\npulse profile at 1.4~GHz is heavily broadened by interstellar\nscattering which limits the timing precision and means that a second\npost-Keplerian parameter is not measurable within a reasonable amount\nof time with observations at that frequency. However we find that at\n3~GHz the scattering is sufficiently reduced and the flux density is\nsufficiently high that higher precision timing will be possible at\nthis frequency. Comparing the pulse profiles at 1.4 and 3~GHz we find\nthat the scattering timescale for this pulsar scales with frequency\nwith a power law of index $\\alpha~=~3.5\\pm0.1$ which is in excellent\nagreement with earlier results on high dispersion measure pulsars.\n \nConsidering the effects of the interstellar scattering on the\ndetectability of pulsations from the companion, we find that the\nminimum detectable period is longer at lower frequencies and for\nfainter objects. In general, we do not expect interstellar scattering\nto be the cause for the continued non-detection of the companion\nneutron star.\n\nThe orbital separation for this system is much wider than for all\nother DNS systems, and it suggests that its binary evolution has been\ndifferent. One explanation invokes the lack of a common envelope\nphase, during which the size of the orbit shrinks due to tidal forces\nin the envelope of the companion star. Another explanation\n\\citep{dpp05}, not necessarily conflicting with the previous one,\ninvokes a different mass transfer mechanism in the spin-up phase of\nthe pulsar, namely via stellar wind, while all other recycled pulsars\nin the known DNS have been spun up via Roche lobe overflow mass\ntransfer.\n\nFinally, we investigated the kick imparted to the second born neutron\nstar during the second SN. We find that for realistic values of the\ntotal mass of the pre-SN binary, the kick velocity has a not\nnegligible probability of being lower than 50~km~s$^{-1}$. 
This\nconstraint is common to all DNS systems, as shown by\n\\citet{dpp05}. This evidence for a low amplitude asymmetric kick\nreceived by the younger neutron star may be the consequence of the\neffects of binary evolution on a star that undergoes a SN explosion,\neffects that somehow are able to tune the amplitude of such kick.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nClassical Cepheid variable stars are primary distance \nindicators and rank among standard candles for establishing \nthe cosmic distance scale, owing to the famous period-luminosity \n($P$--$L$) relationship.\nCompanions to Cepheids, however, complicate the situation.\nThe contribution of the secondary star to the observed \nbrightness has to be taken into account when involving any\nparticular Cepheid in the calibration of the $P$--$L$ relationship.\n\nBinaries among Cepheids are not rare at all: their frequency \nexceeds 50 per cent for the brightest Cepheids, while among the \nfainter Cepheids an observational selection effect encumbers \nrevealing binarity \\citep{Sz03a}.\n\nOwing to some observational projects aimed at obtaining new\nradial velocities (RVs) of numerous Cepheids carried out during\nthe last decades, a part of the selection effect has been\nremoved. This progress is visualized in Fig.~\\ref{fig-comparison}\nwhere the current situation is compared with that 20 years ago.\nThe data have been taken from the on-line data base on binaries \namong Galactic Cepheids (http:\/\/www.konkoly.hu\/CEP\/orbit.html).\nTo get rid of the fluctuation at the left-hand part of the diagram,\nbrightest Cepheids ($\\langle V \\rangle <5$~mag) were merged in a\nsingle bin because such stars are extremely rare among Cepheids\n-- see the histogram in Fig.~\\ref{fig-histogram}.\n\nIn the case of pulsating variables, like Cepheids, spectroscopic\nbinarity manifests itself in a periodic variation of the\n$\\gamma$-velocity (i.e., the RV of the mass centre of the Cepheid). \nIn practice, the orbital RV variation of the Cepheid component is \nsuperimposed on the RV variations of pulsational origin. \nTo separate orbital and pulsational effects, knowledge of the \naccurate pulsation period is essential, especially when comparing \nRV data obtained at widely differing epochs. Therefore, the pulsation\nperiod and its variations have been determined with the method of\nthe O$-$C diagram \\citep{S05} for each target Cepheid. Use of the\naccurate pulsation period obtained from the photometric data is a \nguarantee for the correct phase matching of the (usually less\nprecise) RV data.\n\n\n\\begin{figure}\n\\includegraphics[height=48mm, angle=0]{szabados2013sbcepfig1.eps}\n\\caption{Percentage of known binaries among Galactic\nclassical Cepheids as a function of the mean apparent\nvisual brightness in 1993 and 2013. The decreasing influence\nof the observational selection effect is noticeable.}\n\\label{fig-comparison}\n\\end{figure}\n\n\\begin{figure}\n\\vspace*{7mm}\n\\includegraphics[height=48mm, angle=0]{szabados2013sbcepfig2.eps}\n\\caption{Histogram showing the number distribution of known\nGalactic classical Cepheids as a function of their mean \napparent visual brightness.}\n\\label{fig-histogram}\n\\end{figure}\n\nIn this paper we point out spectroscopic binarity of three \nbright Galactic Cepheids by analysing RV data. The structure of \nthis paper is as follows. The new observations and the equipment \nutilized are described in Sect.~\\ref{newdata}. 
Section~\\ref{results} \nis devoted to the results on the three new spectroscopic binary (SB) \nCepheids: LR~Trianguli Australis, RZ~Velorum, and BG~Velorum. \nBasic information on these Cepheids is given in Table~\\ref{obsprop}. \nFinally, Section~\\ref{concl} contains our conclusions.\n\n\\begin{table} \n\\begin{center} \n\\caption{Basic data of the programme stars \nand the number of spectra.} \n\\label{obsprop} \n\\begin{tabular}{|lccccc|} \n\\hline \nCepheid & $\\langle V \\rangle$ & $P$ & Mode & \\multicolumn{2}{c}{Number of spectra}\\\\\n& (mag) & (d)& of pulsation & SSO & CORALIE \\\\\n\\hline \nLR~TrA & 7.80 & 2.428289 & first overtone & 10 & 32\\\\ \nRZ~Vel & 7.13 & 20.398532 & fundamental & 30 & 67\\\\\nBG~Vel & 7.69 & 6.923843 & fundamental & 27 & 33\\\\ \n\\hline \n\\end{tabular} \n\\end{center} \n\\end{table}\n\n\n\n\\section{New observations}\n\\label{newdata}\n\n\\subsection{Spectra from the Siding Spring Observatory}\n\\label{SSO}\n\nWe performed an RV survey of Cepheids with the 2.3~m ANU \ntelescope located at the Siding Spring Observatory (SSO), \nAustralia. The main aim of the project was to detect Cepheids \nin binary systems by measuring changes in the mean values of \ntheir RV curve which can be interpreted as the orbital \nmotion of the Cepheid around the centre-of-mass in a binary \nsystem (change of $\\gamma$-velocity). The target list was \ncompiled to include Cepheids with a single-epoch RV phase curve \nor without any published RV data. Several Cepheids suspected \nto be members of SB systems were also put on the target list. \nIn 64 nights between 2004 October and 2006 March we monitored \n40 Cepheids with pulsation periods between 2 and 30 d.\nAdditional spectra of some targets were obtained in 2007 August.\n\nMedium-resolution spectra were taken with the Double Beam \nSpectrograph using the 1200~mm$^{-1}$ gratings in both arms of \nthe spectrograph. The projected slit width was 2 arcsec \non the sky, which was about the median seeing during our \nobservations. The spectra covered the wavelength ranges \n4200--5200~\\AA\\ in the blue arm and 5700--6700~\\AA\\ in the red \narm. The dispersion was 0.55~\\AA~pixel$^{-1}$, leading to a nominal \nresolution of about 1~\\AA.\n\nAll spectra were reduced with standard tasks in {\\sc iraf}\n\\footnote{{\\sc iraf} is distributed by the National Optical \nAstronomy Observatories, which are operated by the Association\nof Universities for Research in Astronomy, Inc., under \ncooperative agreement with the National Science Foundation.}.\nReduction consisted of bias and flat-field corrections, \naperture extraction, wavelength calibration, and continuum \nnormalization. We checked the consistency of wavelength \ncalibrations via the constant positions of strong telluric \nfeatures, which proved the stability of the system. \nRVs were determined only for the red arm data \nwith the task {\\it fxcor\\\/}, applying the cross-correlation \nmethod using a well-matching theoretical template spectrum \nfrom the extensive spectral library of \\citet{Metal05}. Then, \nwe made barycentric corrections to every single RV value. \nThis method resulted in a 1-2~km~s$^{-1}$ uncertainty in the \nindividual RVs, while further tests have shown that our \nabsolute velocity frame was stable to within \n$\\pm$2--3~km~s$^{-1}$. 
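The cross-correlation step described above can be illustrated with a compact sketch. The code below is not the IRAF task used for the actual measurements; it is a simplified stand-in that resamples an observed spectrum and a template onto a common logarithmic wavelength grid, cross-correlates the two and converts the best-fitting lag into a velocity. The synthetic single-line spectra and the 25\\,km\\,s$^{-1}$ input shift are arbitrary test values.
\\begin{verbatim}
import numpy as np

C_KMS = 299792.458

def rv_by_cross_correlation(wave, flux, twave, tflux, n=8192):
    # common log-wavelength grid over the overlapping range
    logw = np.linspace(np.log(max(wave[0], twave[0])),
                       np.log(min(wave[-1], twave[-1])), n)
    f = np.interp(logw, np.log(wave), flux) - 1.0    # continuum at unity
    t = np.interp(logw, np.log(twave), tflux) - 1.0
    cc = np.correlate(f, t, mode='full')
    k = int(np.argmax(cc))
    peak = float(k)
    if 0 < k < len(cc) - 1:                          # sub-pixel parabolic peak
        denom = cc[k-1] - 2.0*cc[k] + cc[k+1]
        if denom != 0.0:
            peak = k + 0.5*(cc[k-1] - cc[k+1])/denom
    lag = peak - (n - 1)
    return (np.exp(lag * (logw[1] - logw[0])) - 1.0) * C_KMS

# synthetic test: one absorption line, shifted by +25 km/s
wave = np.linspace(5700.0, 6700.0, 20000)
template = 1.0 - 0.5*np.exp(-0.5*((wave - 6400.0)/0.8)**2)
observed = np.interp(wave, wave*(1.0 + 25.0/C_KMS), template)
print(rv_by_cross_correlation(wave, observed, wave, template))
\\end{verbatim}
With real spectra the template is the theoretical or high signal-to-noise standard spectrum mentioned above, and the recovered velocities carry the 1--2~km~s$^{-1}$ uncertainties quoted for the SSO data.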
This level of precision is sufficient\nto detect a number of Cepheid companions, as they can often\ncause $\\gamma$-velocity changes well above 10~km~s$^{-1}$.\n\nDiscovery of six SBs among the 40 target Cepheids was \nalready reported by \\citet{Szetal13}. The binarity of the \nthree Cepheids announced here could be revealed by involving \nindependently obtained additional data (see Section~\\ref{coralie}).\nThe individual RV data of the rest of the Cepheid targets\nwill be published together with the results of the analysis\nof the spectra.\n\n\n\\subsection{CORALIE observations from La Silla}\n\\label{coralie}\n\nAll three Cepheids were among the targets during multiple observing \ncampaigns between 2011 April and 2012 May using the fibre-fed \nhigh-resolution ($R \\sim 60000$) echelle spectrograph \n\\textit{CORALIE} mounted on the Swiss 1.2\\,m Euler telescope at \nESO La Silla Observatory, Chile. The instrument's design is \ndescribed in \\citet{Qetal01}; recent instrumental updates \ncan be found in \\citet{Setal10}. \n\nWhen it turned out that these three Cepheids have variable\n$\\gamma$-velocities, several new spectra were obtained in\n2012 December - 2013 January and 2013 April.\n\nThe spectra are reduced by the efficient online reduction \npipeline that performs bias correction, cosmics removal, \nand flatfielding using tungsten lamps. ThAr lamps are used \nfor the wavelength calibration. The reduction pipeline directly \ndetermines the RV via cross-correlation \\citep{Betal96} \nusing a mask that resembles a G2 spectral type. \nThe RV stability of the instrument is excellent and for \nnon-pulsating stars the RV precision is limited by photon noise;\n(see e.g., \\citet{Petal02}). However, the precision achieved for \nCepheids is lower due to line asymmetries. We estimate a typical \nprecision of $\\sim$ 0.1\\,km\\,s$^{-1}$ (including systematics due\nto pulsation) per data point for our data. \n\n\n\\section{Results for individual Cepheids}\n\\label{results}\n\n\n\\subsection{LR~Trianguli Australis}\n\\label{lrtra}\n\n\\paragraph*{Accurate value of the pulsation period}\n\\label{lrtra-period}\n\nThe brightness variability of LR~TrA (HD\\,137626, $\\langle V \\rangle\n= 7.80$\\,mag) was revealed by \\citet{Setal66} based on the Bamberg\nphotographic patrol plates. The Cepheid nature of variability and the\nfirst values of the pulsation period was determined by \\citet{E83}. \nThis Cepheid pulsates in the first-overtone mode; therefore, it has \na small pulsational amplitude and nearly-sinusoidal light and\nvelocity curves. \n\nIn the case of Cepheids pulsating with a low amplitude, the O$-$C \ndiagram constructed for the median brightness (the mid-point \nbetween the faintest and the brightest states) is more reliable \nthan that based on the moments of photometric maxima \\citep{Detal12}. \nTherefore we determined the accurate value of the pulsation period \nby constructing an O$-$C diagram for the moments of median brightness \non the ascending branch of the light curve since this is the phase when \nthe brightness variations are steepest during the whole pulsational \ncycle.\n\nAll published photometric observations of LR~TrA covering three \ndecades were re-analysed in a homogeneous manner to determine \nseasonal moments of the chosen light-curve feature. 
The relevant data \nlisted in Table~\\ref{tab-lrtra-oc} are as follows:\\\\\nColumn~1: heliocentric moment of the selected light-curve feature\n(median brightness on the ascending branch for LR~TrA, maximum\nbrightness for both RZ~Vel and BG~Vel, see Tables~\\ref{tab-rzvel-oc}\nand \\ref{tab-bgvel-oc}, respectively;\\\\\nCol.~2: epoch number, $E$, as calculated from Equation~(\\ref{lrtra-ephemeris}):\n\\vspace{-1mm}\n\\begin{equation}\nC = 2\\,453\\,104.9265 + 2.428\\,289{\\times}E \n\\label{lrtra-ephemeris}\n\\end{equation}\n\\vspace{-3mm}\n$\\phantom{mmmmm}\\pm0.0037\\phantom{}\\pm0.000\\,003$\n\n\\noindent (this ephemeris has been obtained by the weighted \nleast squares parabolic fit to the O$-$C differences);\\\\\n\\noindent Col.~3: the corresponding O$-$C value;\\\\\nCol.~4: weight assigned to the O$-$C value (1, 2, or 3 \ndepending on the quality of the light curve leading to \nthe given difference);\\\\\nCol.~5: reference to the origin of data.\\\\\n\nThe O$-$C diagram of LR~TrA based on the O$-$C values listed\nin Table~\\ref{tab-lrtra-oc} is plotted in \nFig.~\\ref{fig-lrtra-oc}. The plot can be approximated by a \nconstant period by the ephemeris (\\ref{lrtra-ephemeris}) for \nthe moments of median brightness on the ascending branch. The \nscatter of the points in Fig.~\\ref{fig-lrtra-oc} reflects the \nobservational error and uncertainties in the analysis of the data.\n\n\n\\begin{table}\n\\caption{O$-$C values of LR~TrA (see the \ndescription in Sect.~\\ref{lrtra-period}).}\n\\begin{tabular}{l@{\\hskip2mm}r@{\\hskip2mm}r@{\\hskip2mm}c@{\\hskip2mm}l}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $E\\ $ & O$-$C & $W$ & Data source\\\\\n2\\,400\\,000 + &&&\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n45018.7822 & $-$3330& 0.0581 &3 & \\citet{E83}\\\\\n47633.9607 & $-$2253& $-$0.0307 & 3 & \\citet{Aetal90}\\\\\n47939.9568 & $-$2127& 0.0010 &2 & {\\it Hipparcos} \\citep{ESA97}\\\\\n48139.0426 & $-$2045& $-$0.0329 & 3& {\\it Hipparcos} \\citep{ESA97}\\\\\n48440.1554 & $-$1921& $-$0.0279 & 3& {\\it Hipparcos} \\citep{ESA97}\\\\\n48750.9547 & $-$1793& $-$0.0496 & 3& {\\it Hipparcos} \\citep{ESA97}\\\\\n49814.6064 & $-$1355& 0.0115 & 3 & \\citet{B08}\\\\\n50370.7115 & $-$1126& 0.0384 & 3 & \\citet{B08}\\\\\n50574.6393 & $-$1042& $-$0.0101 & 3& \\citet{B08}\\\\\n50909.7531 & $-$904& $-$0.0001 & 3 & \\citet{B08}\\\\\n51264.2883 & $-$758& 0.0049 & 3 & \\citet{B08}\\\\\n51650.4058 & $-$599& 0.0244 & 3 & \\citet{B08}\\\\\n51958.8010 & $-$472& 0.0269 & 2 & \\citet{B08}\\\\\n52041.3435 & $-$438& 0.0076 & 2 & ASAS \\citep{P02}\\\\\n52366.7222 & $-$304& $-$0.0044 & 3 & \\citet{B08}\\\\\n52500.2709 & $-$249& $-$0.0116 & 3 & ASAS \\citep{P02}\\\\\n52769.8038 & $-$138& $-$0.0188 & 3 & ASAS \\citep{P02}\\\\\n53102.5159 & $-$1& 0.0177 & 3 & \\citet{B08}\\\\\n53104.9151 & 0& $-$0.0114 & 3& ASAS \\citep{P02}\\\\\n53520.1818 & 171& 0.0179 & 3 & ASAS \\citep{P02}\\\\\n53840.7137 & 303& 0.0156 & 3 & ASAS \\citep{P02}\\\\\n54251.0850 & 472& 0.0061 & 3& ASAS \\citep{P02}\\\\\n54615.3163 & 622& $-$0.0060& 3 & ASAS \\citep{P02}\\\\\n54960.1214 & 764& $-$0.0179& 3 & ASAS \\citep{P02}\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-lrtra-oc}\n\\end{table}\n\n\n\\begin{figure}\n\\includegraphics[height=44mm, angle=0]{szabados2013sbcepfig3.eps}\n\\caption{O$-$C diagram of LR~TrA. 
The plot can be\napproximated by a constant period.}\n\\label{fig-lrtra-oc}\n\\end{figure}\n\n\\paragraph*{Binarity of LR~TrA}\n\\label{lrtra-bin}\n\n\\begin{figure}\n\\includegraphics[height=48mm, angle=0]{szabados2013sbcepfig4.eps}\n\\caption{Merged RV phase curve of LR~TrA. The different symbols\nmean data from different years: 2005: filled triangles; 2006: \nempty triangles; 2007: triangular star; 2012: filled circles; \n2013: empty circles. The zero phase was arbitrarily chosen at \nJD\\,2\\,400\\,000.0 (in all phase curves in this paper).}\n\\label{fig-lrtra-vrad}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[height=40mm, angle=0]{szabados2013sbcepfig5.eps}\n\\caption{Temporal variation in the $\\gamma$-velocity of LR~TrA.\nThe symbols for the different data sets are the same as\nin Fig.~\\ref{fig-lrtra-vrad}.}\n\\label{fig-lrtra-vgamma}\n\\end{figure}\n\n\\begin{table}\n\\caption{RV values of LR TrA from the SSO spectra. \nThis is only a portion of the full version available online as Supporting\nInformation.}\n\\begin{tabular}{lr}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n53599.9325 &$-$21.2\\\\\n53600.9086 &$-$32.0\\\\\n53603.9327 &$-$27.6\\\\\n53605.9290 &$-$31.0\\\\\n53805.1657 &$-$29.3\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-lrtra-data}\n\\end{table}\n\n\\begin{table}\n\\caption{CORALIE velocities of LR TrA. \nThis is only a portion of the full version \navailable online as Supporting Information.}\n\\begin{tabular}{lrc}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ & $\\sigma$ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$) & (km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n55938.8701 & $-$27.97 & 0.05\\\\\n55938.8718 & $-$28.10 & 0.05\\\\ \n55939.8651 & $-$29.85 & 0.02\\\\ \n55940.8686 & $-$22.40 & 0.03\\\\ \n55941.8579 & $-$33.14 & 0.04\\\\ \n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-lrtra-coralie-data}\n\\end{table}\n\n\n\\begin{table}\n\\caption{$\\gamma$-velocities of LR~TrA.}\n\\begin{tabular}{lccl}\n\\hline\n\\noalign{\\vskip 0.2mm}\nMid-JD & $v_{\\gamma}$ & $\\sigma$ & Data source \\\\\n2\\,400\\,000+ & (km\\,s$^{-1}$)& (km\\,s$^{-1}$) & \\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n53603 & $-$25.5 & 0.5 & Present paper\\\\\n53808 & $-$24.8 & 0.5 & Present paper\\\\\n54331 & $-$29.0 & 1.0 & Present paper\\\\\n55981 & $-$27.5 & 0.1 & Present paper\\\\\n56344 & $-$26.4 & 0.1 & Present paper\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-lrtra-vgamma}\n\\end{table}\n\nThere are no earlier RV data on this bright Cepheid. Our new data \nlisted in Tables~\\ref{tab-lrtra-data} and \\ref{tab-lrtra-coralie-data} \nhave been folded on the accurate pulsation period given in the\nephemeris (see Equation~\\ref{lrtra-ephemeris}). The merged RV phase \ncurve is plotted in Fig.~\\ref{fig-lrtra-vrad}. Both individual \ndata series could be split into seasonal subsets.\n\nVariability in the $\\gamma$-velocity is obvious. The \n$\\gamma$-velocities (together with their uncertainties) are \nlisted in Table~\\ref{tab-lrtra-vgamma}. The $\\gamma$-velocity in\n2007 is more uncertain than in other years because this value\nis based on a single spectrum. Systematic errors can be excluded. 
\nDozens of Cepheids in our sample with non-varying\n$\\gamma$-velocities indicate stability of the equipment and \nreliability of the data reduction. Fig.~\\ref{fig-lrtra-vgamma} \nis a better visualization of the temporal variation in the \n$\\gamma$-velocity. The seasonal drift in the $\\gamma$-velocity \nis compatible with both short and long orbital periods.\n\nThe photometric contribution of the companion star decreases\nthe observable amplitude of the brightness variability as\ndeduced from the enhanced value of the ratio of the RV and\nphotometric amplitudes \\citep{KSz09}. This is an additional\n(although slight) indication of binarity of LR~TrA.\n\n\\subsection{RZ~Velorum}\n\\label{rzvel}\n\n\\paragraph*{Accurate value of the pulsation period}\n\\label{rzvel-period}\n\nThe brightness variability of RZ~Vel (HD\\,73502, $\\langle V \\rangle\n= 7.13$\\,mag) was revealed by Cannon \\citep{P09}. The Cepheid\nnature of variability and the pulsation period were established by \n\\citet{H36} based on the Harvard and Johannesburg photographic plate \ncollection which was further investigated by \\citet{Oo36}.\n\nThis is the longest period Cepheid announced in this paper and it has \nbeen frequently observed from the 1950s, first photoelectrically, \nthen in the last decades by CCD photometry. The photometric coverage \nof RZ~Vel was almost continuous in the last 20 years thanks to \nobservational campaigns by \\citet{B08} and his co-workers, as well as\nthe ASAS photometry \\citep{P02}.\n\nLong-period Cepheids are usually fundamental pulsators and they \noscillate with a large amplitude resulting in a light curve with\nsharp maximum.\n\nThe O$-$C diagram of RZ~Vel was constructed for the moments of \nmaximum brightness based on the photoelectric and CCD photometric \ndata (see Table~\\ref{tab-rzvel-oc}). 
The weighted least squares \nparabolic fit to the O$-$C values resulted in the ephemeris:\n\\vspace{-1mm}\n\\begin{equation}\nC = 2\\,442\\,453.6630 + 20.398\\,532{\\times}E + 1.397\\times 10^{-6} E^2\n\\label{rzvel-ephemeris}\n\\end{equation}\n\\vspace{-3mm}\n$\\phantom{mmmmm}\\pm0.0263\\phantom{l}\\pm 0.000\\,080 \\phantom{mm}\n\\pm 0.191\\times 10^{-6}$\n\n\\begin{table}\n\\caption{O$-$C values of RZ~Vel (description of the columns\nis given in Sect.~\\ref{lrtra-period}).}\n\\begin{tabular}{l@{\\hskip2mm}r@{\\hskip2mm}r@{\\hskip2mm}c@{\\hskip2mm}l}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $E\\ $ & O$-$C & $W$ & Data source\\\\\n2\\,400\\,000 + &&&\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n33784.5646 &$-$425 & 0.2777 & 1 & \\citet{Eetal57}\\\\\n34804.5174 &$-$375 & 0.3039 & 1 & \\citet{Wetal58}\\\\\n34845.2119 &$-$373 & 0.2013 & 3 & \\citet{Eetal57}\\\\\n35192.0024 &$-$356 & 0.2168 & 1 & \\citet{I61}\\\\\n40760.8647 &$-$83 & 0.2799 & 3 & \\citet{P76}\\\\\n41719.0924 &$-$36 &$-$0.2234& 3& \\citet{M75}\\\\\n41862.1249 &$-$29 & 0.0193 & 3 & \\citet{Detal77}\\\\\n42453.6330 & 0 &$-$0.0030 & 3 & \\citet{Detal77}\\\\\n44371.0472 & 94 &$-$0.0778 & 3 & \\citet{CC85}\\\\\n44391.3842 & 95 &$-$0.1393 & 2 & \\citet{E82}\\\\\n45003.2906 & 125 &$-$0.1889 & 3 & \\citet{CC85}\\\\\n48226.4369 & 283 &$-$0.0107 & 3 & {\\it Hipparcos} \\citep{ESA97}\\\\\n48797.5877 & 311 &$-$0.0188 & 3 & {\\it Hipparcos} \\citep{ESA97}\\\\\n49185.1653 & 330 &$-$0.0133 & 1 & Walker \\& Williams (unpublished)\\\\\n49817.8011 & 361 & 0.2680 &3 & \\citet{B08}\\\\\n50144.1979 & 377 & 0.2883 & 2 & \\citet{B02}\\\\\n50389.0443 & 389 & 0.3524 & 3 & \\citet{B08}\\\\\n50511.3662 & 395 & 0.2831 & 3 & \\citet{B02}\\\\\n50572.4468 & 398 & 0.1681 & 3 & \\citet{B08}\\\\\n50899.0581 & 414 & 0.4029 & 3 & \\citet{B08}\\\\\n51266.1488 & 432 & 0.3200 & 3 & \\citet{B08}\\\\\n51653.7650 & 451 & 0.3641 & 3 & \\citet{B08}\\\\\n51939.2846 & 465 & 0.3042 & 2 & ASAS \\citep{P02}\\\\\n51959.7692 & 466 & 0.3903 & 3 & \\citet{B08}\\\\\n52347.4262 & 485 & 0.4752 & 3 & \\citet{B08}\\\\\n52653.3896 & 500 & 0.4606 & 3 & ASAS \\citep{P02}\\\\\n52653.4100 & 500 & 0.4810 & 3 & \\citet{B08}\\\\\n53000.1794 & 517 & 0.4754 & 3 & ASAS \\citep{P02}\\\\\n53000.2610 & 517 & 0.5570 & 3 & \\citet{B08}\\\\\n53428.4384 & 538 & 0.3652 & 3 & ASAS \\citep{P02}\\\\\n53754.8864 & 554 & 0.4367 & 3 & ASAS \\citep{P02}\\\\ \n54183.1657 & 575 & 0.3468 & 3 & ASAS \\citep{P02}\\\\\n54509.5729 & 591 & 0.3775 & 3 & ASAS \\citep{P02}\\\\\n54815.4343 & 606 & 0.2609 & 3 & ASAS \\citep{P02}\\\\\n55121.3569 & 621 & 0.2055 & 2 & ASAS \\citep{P02}\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-rzvel-oc}\n\\end{table}\n\nThe O$-$C diagram of RZ~Vel plotted in Fig.~\\ref{fig-rzvel-oc} \nindicates a continuously increasing pulsation period with a period \njitter superimposed. This secular period increase has been caused \nby stellar evolution: while the Cepheid crosses the instability \nregion towards lower temperatures in the Hertzsprung--Russell \ndiagram, its pulsation period is increasing. \nContinuous period variations (of either sign) often occur in \nthe pulsation of long-period Cepheids \\citep{Sz83}.\n\nFig.~\\ref{fig-rzvel-oc2} shows the O$-$C residuals after \nsubtracting the parabolic fit defined by \nEquation~(\\ref{rzvel-ephemeris}). If the wave-like fluctuation seen in \nthis $\\Delta (O-C)$ diagram turns out to be periodic, it would\ncorrespond to a light-time effect in a binary system. 
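As a rough consistency check rather than an independent fit, the quadratic coefficient of Equation~(\ref{rzvel-ephemeris}) alone corresponds to a mean period increase of\n$$2\times 1.397\times10^{-6}\ {\rm d\,cycle^{-1}}\approx 2.8\times10^{-6}\ {\rm d\,cycle^{-1}},$$\ni.e. roughly 0.005\,d\,century$^{-1}$ at the $\sim$20.4\,d pulsation period (the same conversion as applied to BG~Vel in Sect.~\ref{bgvel-period}); the wave-like fluctuation discussed above is superimposed on this secular increase. 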
In line with \nthe recent shortening in the pulsation period, the current value \nof the pulsation period is $20.396671 \\pm 0.000200$ days (after \nJD~2\\,452\\,300). \n\n\\begin{figure}\n\\includegraphics[height=55mm, angle=0]{szabados2013sbcepfig6.eps}\n\\caption{O$-$C diagram of RZ~Vel. The plot can be\napproximated by a parabola indicating a continuously\nincreasing period.}\n\\label{fig-rzvel-oc}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[height=44mm, angle=0]{szabados2013sbcepfig7.eps}\n\\caption{$\\Delta(O-C)$ diagram of RZ~Vel.}\n\\label{fig-rzvel-oc2}\n\\end{figure}\n\n\n\\paragraph*{Binarity of RZ~Vel}\n\\label{rzvel-bin}\n\n\n\\begin{figure}\n\\includegraphics[height=55mm, angle=0]{szabados2013sbcepfig8.eps}\n\\caption{RV phase curve of RZ~Vel. Data obtained\nbetween 1996 and 2013 are included in this plot. The meaning\nof various symbols is explained in the text.}\n\\label{fig-rzvel-vrad}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[height=42mm, angle=0]{szabados2013sbcepfig9.eps}\n\\caption{$\\gamma$-velocities of RZ~Velorum. The symbols for \nthe different data sets are the same as in \nFig.~\\ref{fig-rzvel-vrad}.}\n\\label{fig-rzvel-vgamma}\n\\end{figure}\n\n\\begin{table}\n\\caption{RV values of RZ Vel from the SSO spectra.\n(This is only a portion of the full version \navailable online as Supporting\nInformation.)}\n\\begin{tabular}{lr}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n53307.2698 &4.2\\\\\n53310.2504 &1.4\\\\\n53312.2073 &9.0\\\\\n53364.2062 &49.6\\\\\n53367.1823 &27.5\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-rzvel-data}\n\\end{table}\n\n\n\\begin{table}\n\\caption{CORALIE velocities of RZ Vel.\n(This is only a portion of the full version available \nonline as Supporting Information.)}\n\\begin{tabular}{lrc}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ & $\\sigma$ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$) & (km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n55654.5528 & $-$3.08 & 0.02\\\\\n55656.6626 & 5.23 & 0.01\\\\ \n55657.6721 & 9.86 & 0.02\\\\ \n55659.6585 & 18.85 & 0.03\\\\ \n55662.5137 & 31.50 & 0.01\\\\ \n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-rzvel-coralie-data}\n\\end{table}\n\n\\begin{table}\n\\caption{$\\gamma$-velocities of RZ~Vel.}\n\\begin{tabular}{lccl}\n\\hline\n\\noalign{\\vskip 0.2mm}\nMid-JD & $v_{\\gamma}$ & $\\sigma$ & Data source \\\\\n2\\,400\\,000+ & (km\\,s$^{-1}$)& (km\\,s$^{-1}$) & \\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n34009 &25.5 &1.5& \\citet{S55}\\\\\n40328 &22.1 &1.5& \\citet{LE68,LE80}\\\\\n42186 &29.2 &1.0& \\citet{CC85}\\\\\n44186 &22.6 &1.0& \\citet{CC85}\\\\\n44736 &24.4 &1.0& \\citet{CC85}\\\\\n50317 &25.1 &0.2& \\citet{B02}\\\\\n53184 &24.0 &0.5& \\citet{Netal06}\\\\\n53444 &26.9 &0.6& Present paper\\\\\n53783 &28.8 &1.0& Present paper\\\\\n55709 &25.6 &0.1& Present paper\\\\\n56038 &25.3 &0.1& Present paper\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-rzvel-vgamma}\n\\end{table}\n\nThere are several data sets of RV observations available in\nthe literature for RZ~Vel: those published by \\citet{S55}, \n\\citet{LE68,LE80}, \\citet{CC85}, \\citet{B02}, and \n\\citet{Netal06}. 
Our individual RV data are listed in \nTables~\\ref{tab-rzvel-data} and \\ref{tab-rzvel-coralie-data}.\n\nBased on these data, the RV phase curve has been constructed \nusing the 20.398532~d pulsation period appearing in \nEquation~(\\ref{rzvel-ephemeris}). In view of the complicated pattern \nof the O$-$C diagram the RV data have been folded on by taking \ninto account the proper phase correction for different data \nseries. The merged RV phase curve is plotted in \nFig.~\\ref{fig-rzvel-vrad}. For the sake of clarity, RV data \nobtained before JD\\,2\\,450\\,000 have not been plotted here \nbecause of the wider scatter of these early RV data but the \n$\\gamma$-velocities were determined for each data set. The \nindividual data series are denoted by different symbols: \nfilled squares mean data by \\citet{B02}, empty squares those \nby \\citet{Netal06}, and our 2005, 2006, 2012 and 2013 data are \ndenoted by filled triangles, empty triangles, filled circles and \nempty circles, respectively. The wide scatter in this merged RV \nphase curve plotted in Fig.~\\ref{fig-rzvel-vrad} is due to a variable \n$\\gamma$-velocity. \n\nThe $\\gamma$-velocities determined from each data set (including \nthe earlier ones) are listed in Table~\\ref{tab-rzvel-vgamma} and \nare plotted in Fig.~\\ref{fig-rzvel-vgamma}. The plot implies\nthat RZ~Vel is really an SB as suspected by \\citet{B02} based on \na much poorer observational material (before JD~2\\,450\\,500). \nAn orbital period of about 5600-5700~d is compatible with the data \npattern in both Fig.~\\ref{fig-rzvel-oc2} and Fig.~\\ref{fig-rzvel-vgamma} \nbut the phase relation between the light-time effect fit to the \n$\\Delta (O-C)$ curve and the orbital RV variation phase curve obtained \nwith this formal period is not satisfactory.\n\n\n\\subsection{BG~Velorum}\n\\label{bgvel}\n\n\\paragraph*{Accurate value of the pulsation period}\n\\label{bgvel-period}\n\nThe brightness variability of BG~Vel (HD\\,78801, $\\langle V \\rangle\n= 7.69$\\,mag) was revealed by Cannon \\citep{P09}. Much later \n\\citet{OL37} independently discovered its light variations but\nhe also revealed the Cepheid nature and determined the pulsation \nperiod based on photographic plates obtained at the Riverview \nCollege Observatory. \\citet{vH50} also observed this Cepheid \nphotographically in Johannesburg but these early data are \nunavailable, therefore we only mention their studies for historical \nreasons.\n\nThis Cepheid is a fundamental-mode pulsator. The O$-$C \ndifferences of BG~Vel calculated for brightness maxima are \nlisted in Table~\\ref{tab-bgvel-oc}. These values have been obtained \nby taking into account the constant and linear terms of the \nfollowing weighted parabolic fit:\n\\vspace{-1mm}\n\\begin{equation}\nC = 2\\,453\\,031.4706 + 6.923\\,843{\\times}E + 2.58\\times 10^{-8} E^2\n\\label{bgvel-ephemeris}\n\\end{equation}\n\\vspace{-3mm}\n$\\phantom{mmmmm}\\pm0.0020\\phantom{}\\pm 0.000\\,007 \\phantom{ml}\n\\pm 0.27\\times 10^{-8}$\n\n\\noindent The parabolic nature of the O$-$C diagram, i.e., the \ncontinuous increase in the pulsation period, is clearly seen \nin Fig.~\\ref{fig-bgvel-oc}. \nThis parabolic trend corresponds to a continuous period increase\nof $(5.16 \\pm 0.54)\\times 10^{-8}$ d\\,cycle$^{-1}$, i.e., \n$\\Delta P = 0.000272$ d\/century. 
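To make the unit conversion explicit (an elementary check): a century comprises about $36\,525\/6.923\,843\approx 5275$ pulsation cycles, so\n$$5.16\times10^{-8}\ {\rm d\,cycle^{-1}}\times 5275\ {\rm cycles\,century^{-1}}\approx 2.7\times10^{-4}\ {\rm d\,century^{-1}},$$\nin agreement with the value quoted above. 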
This tiny period increase has \nbeen also caused by stellar evolution as in the case of RZ~Vel.\n\nThe fluctuations around the fitted parabola in\nFig.~\\ref{fig-bgvel-oc} do not show any definite pattern: \nsee the $\\Delta(O-C)$ diagram in Fig.~\\ref{fig-bgvel-oc2}.\n\n\n\\begin{table}\n\\caption{O$-$C values of BG~Vel (description of the \ncolumns is given in Sect.~\\ref{lrtra-period}).}\n\\begin{tabular}{l@{\\hskip2mm}r@{\\hskip2mm}r@{\\hskip2mm}c@{\\hskip2mm}l}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $E\\ $ & O$-$C & $W$ & Data source\\\\\n2\\,400\\,000 + &&&\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n34856.5526 & $-$2625 & 0.1699 & 3 & \\citet{Wetal58}\\\\\n35237.3813 & $-$2570 & 0.1872 & 3 & \\citet{I61}\\\\\n40748.6592 & $-$1774 & 0.0861 & 3 & \\citet{P76}\\\\\n42853.4433 & $-$1470 & 0.0219 & 3 & \\citet{D77}\\\\\n44300.5426 & $-$1261 & 0.0380 & 3 & \\citet{B08}\\\\\n48136.3167 & $-$707 & 0.0031 & 3 & {\\it Hipparcos} \\citep{ESA97}\\\\\n48627.9239 & $-$636 & 0.0174 & 3 & {\\it Hipparcos} \\citep{ESA97}\\\\\n50379.6329 & $-$383 & $-$0.0058 & 3 & \\citet{B08}\\\\\n50573.4987 & $-$355 & $-$0.0076 & 3 & \\citet{B08}\\\\\n50905.8549 & $-$307 & 0.0041 & 3 & \\citet{B08}\\\\\n51265.9127 & $-$255 & 0.0221 & 3 & \\citet{B08}\\\\\n51646.7345 & $-$200 & 0.0325 & 3 & \\citet{B08}\\\\\n51937.5210 & $-$158 & 0.0176 & 3 & ASAS \\citep{P02}\\\\\n51958.2712 & $-$155 & $-$0.0038 & 3 & \\citet{B08}\\\\\n52359.8640 & $-$97 & 0.0062 & 3 & ASAS \\citep{P02}\\\\\n52359.8778 & $-$97 & 0.0200 & 3 & \\citet{B08}\\\\\n52650.6575 & $-$55 & $-$0.0017 & 3 & \\citet{B08}\\\\\n52726.8212 & $-$44 & $-$0.0003 & 3 & ASAS \\citep{P02}\\\\\n53003.7916 & $-$4 & 0.0164 & 3 & \\citet{B08}\\\\\n53031.4758 & 0 & 0.0052 & 3 & ASAS \\citep{P02}\\\\\n53336.1201 & 44 &0.0004 & 1 & {\\it INTEGRAL} OMC\\\\\n53460.7390 & 62 & $-$0.0099 & 3 & ASAS \\citep{P02}\\\\\n53779.2202 & 108 & $-$0.0254 & 3 & ASAS \\citep{P02}\\\\\n54180.8337 & 166 & 0.0052 & 3 & ASAS \\citep{P02}\\\\\n54540.8499 & 218 & $-$0.0185 & 3 & ASAS \\citep{P02}\\\\\n54838.5810 & 261 & $-$0.0126 & 3 & ASAS \\citep{P02}\\\\\n55143.2425 & 305 & $-$0.0002 & 2 & ASAS \\citep{P02}\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-bgvel-oc}\n\\end{table}\n\n\\paragraph*{Binarity of BG~Vel}\n\\label{bgvel-bin}\n\n\n\\begin{figure}\n\\includegraphics[height=44mm, angle=0]{szabados2013sbcepfig10.eps}\n\\caption{O$-$C diagram of BG~Vel. The plot can be\napproximated by a parabola indicating a continuously\nincreasing pulsation period.}\n\\label{fig-bgvel-oc}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[height=44mm, angle=0]{szabados2013sbcepfig11.eps}\n\\caption{$\\Delta(O-C)$ diagram of BG~Vel.}\n\\label{fig-bgvel-oc2}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[height=49mm, angle=0]{szabados2013sbcepfig12.eps}\n\\caption{Merged RV phase curve of BG~Vel. There is an obvious \nshift between the $\\gamma$-velocities valid for the epoch \nof our data obtained in 2005-2006 and 2012-2013 (empty and \nfilled circles, respectively). The other symbols are explained\nin the text.}\n\\label{fig-bgvel-vrad}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[height=42mm, angle=0]{szabados2013sbcepfig13.eps}\n\\caption{$\\gamma$-velocities of BG~Vel. 
The symbols for \nthe different data sets are the same as in \nFig.~\\ref{fig-bgvel-vrad}.}\n\\label{fig-bgvel-vgamma}\n\\end{figure}\n\n\\begin{table}\n\\caption{RV values of BG Vel from the SSO spectra.\n(This is only a portion of the full version available online \nas Supporting Information.)}\n\\begin{tabular}{lr}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n53312.2372 &17.3\\\\\n53364.2219 &$-$0.2\\\\\n53367.1992 &20.5\\\\\n53451.0000 &20.0\\\\\n53452.0021 &23.8\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-bgvel-data}\n\\end{table}\n\n\\begin{table}\n\\caption{CORALIE velocities of BG Vel.\n(This is only a portion of the full version \navailable online as Supporting Information.)}\n\\begin{tabular}{lrc}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ & $\\sigma$ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$) & (km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n55937.7555 & 24.13 & 0.02\\\\\n55938.6241 & 7.77 & 0.02\\\\ \n55939.6522 & $-$1.25 & 0.01\\\\ \n55941.6474 & 7.99 & 0.10\\\\ \n55942.6917 & 11.78 & 0.03\\\\ \n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-bgvel-coralie-data}\n\\end{table}\n\nThere are earlier RV data of this Cepheid obtained by \\citet{S55} \nand \\citet{LE80}. Variability in the $\\gamma$-velocity is seen \nin the merged phase diagram of all RV data of BG~Velorum \nplotted in Fig.~\\ref{fig-bgvel-vrad}. In this diagram, our 2005--2006\ndata (listed in Table~\\ref{tab-bgvel-data}) are represented with \nthe empty circles, while 2012--2013 data (listed in \nTable~\\ref{tab-bgvel-coralie-data}) are denoted by the filled circles, \nthe triangles represent Stibbs' data, and the $\\times$ symbols refer to \nLloyd Evans' data. Our RV data have been folded with the period given \nin the ephemeris Equation~(\\ref{bgvel-ephemeris}) omitting the quadratic term. \nData obtained by Stibbs and Lloyd Evans have been phased with the \nsame period but a proper correction has been applied to allow \nfor the phase shift due to the parabolic O$-$C graph.\n\nThe $\\gamma$-velocities determined from the individual data sets\nare listed in Table~\\ref{tab-bgvel-vgamma} and plotted in\nFig.~\\ref{fig-bgvel-vgamma}. Since no annual shift is seen \nin the $\\gamma$-velocities between two consecutive years (2005--2006\nand 2012--2013), the orbital period cannot be short, probably it\nexceeds a thousand days.\n\nSimilarly to the case of LR~TrA, BG~Vel is also characterized by an \nexcessive value for the ratio of RV and photometric amplitudes\nindicating the possible presence of a companion \n(see Fig.~\\ref{fig-ampratio}).\n\n\\begin{table}\n\\caption{$\\gamma$-velocities of BG~Vel.}\n\\begin{tabular}{lccl}\n\\hline\n\\noalign{\\vskip 0.2mm}\nMid-JD & $v_{\\gamma}$ & $\\sigma$ & Data source \\\\\n2\\,400\\,000+ & (km\\,s$^{-1}$)& (km\\,s$^{-1}$) & \\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n34096 &11.4 &1.5& \\citet{S55}\\\\\n40545 & 8.4 &1.5& \\citet{LE80}\\\\\n53572 &12.6 &0.6& Present paper\\\\\n56043 &10.3 &0.1& Present paper\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-bgvel-vgamma}\n\\end{table}\n\n\n\\section{Conclusions}\n\\label{concl}\n\nWe pointed out that three bright southern Galactic Cepheids,\nLR~TrA, RZ~Vel and BG~Vel,\nhave a variable $\\gamma$-velocity implying their membership \nin SB systems. 
RV values of other target Cepheids observed \nwith the same equipment in 2005--2006 and 2012 testify that \nthis variability in the $\\gamma$-velocity is not of instrumental\norigin, nor an artefact caused by the analysis.\n\nThe available RV data are insufficient to determine the orbital \nperiod and other elements of the orbits. However, some inferences\ncan be made from the temporal variations of the $\\gamma$-velocity.\nAn orbital period of 5600--5700~d of the RZ~Vel system is \ncompatible with the data pattern. In the case of BG~Vel, short \norbital periodicity can be ruled out. For LR~TrA, even the range \nof the possible orbital periods remains uncertain.\n\nThe value of the orbital period for SB systems \ninvolving a Cepheid component is often unknown: according to the \non-line data base \\citep{Sz03a} the orbital period has been \ndetermined for about 20\\% of the known SB Cepheids. The majority \nof known orbital periods exceeds a thousand days.\n\nA companion star may have various effects on the observable\nphotometric properties of the Cepheid component. Various pieces \nof evidence of duplicity based on the photometric criteria are\ndiscussed by \\citet{Sz03b} and \\citet{KSz09}. As to our\ntargets, there is no obvious sign of a companion from optical\nmulticolour photometry. This indicates that the companion star \ncannot be much hotter than any of the Cepheids discussed here. \nThere is, however, a phenomenological parameter, viz. the ratio \nof RV to photometric amplitudes \\citep{KSz09} whose excessive\nvalue is a further hint at the probable existence of a\ncompanion for both LR~TrA and BG~Vel (see Fig.~\\ref{fig-ampratio}).\nMoreover, the {\\it IUE} spectra of bright Cepheids \nanalysed by \\citet{E92} gave a constraint on the temperature \nof a companion to remain undetected in the ultraviolet spectra: \nin the case of RZ~Vel, the spectral type of the companion cannot \nbe earlier than A7, while for BG~Vel this limiting spectral type \nis A0. Further spectroscopic observations are necessary to \ncharacterize these newly detected SB systems.\n\n\\begin{figure}\n\\includegraphics[height=54mm, angle=0]{szabados2013sbcepfig14.eps}\n\\caption{The slightly excessive value of the $A_{V_{\\rm RAD}}\/A_B$\namplitude ratio of LR~TrA and BG~Vel (large circles) with respect \nto the average value characteristic at the given pulsation period\nis an independent indication of the presence of a companion star.\nThis is a modified version of fig.~4f of \\citet{KSz09}. The open \nsymbols in the original figure correspond to known binaries and \nthe filled symbols to Cepheids without known binarity. For the\nmeaning of various symbols, see \\citet{KSz09}.\n}\n\\label{fig-ampratio}\n\\end{figure}\n\n\nOur findings confirm the previous statement by \\citet{Sz03a} \nabout the high percentage of binaries among classical Cepheids \nand the observational selection effect hindering the discovery \nof new cases (see also Fig.~\\ref{fig-comparison}).\n\nRegular monitoring of the RVs of a large\nnumber of Cepheids will be instrumental in finding \nmore SBs among Cepheids. 
RV data to be obtained with the \n{\\it Gaia} astrometric space probe (expected launch: 2013 \nSeptember) will certainly result in revealing new SBs among \nCepheids brighter than the 13--14th magnitude \\citep{Eyetal12}.\nIn this manner, the `missing' SBs among Cepheids inferred\nfrom Fig.~\\ref{fig-comparison} can be successfully revealed\nwithin few years.\n\n\\section*{Acknowledgments} \n\nThis project has been supported by the \nESTEC Contract No.\\,4000106398\/12\/NL\/KML, the Hungarian OTKA \nGrants K76816, K83790, K104607, and MB08C 81013, as well as the \nEuropean Community's Seventh Framework Program (FP7\/2007-2013) \nunder grant agreement no.\\,269194, and the ``Lend\\\"ulet-2009'' \nYoung Researchers Program of the Hungarian Academy of Sciences. \nAD was supported by the Hungarian E\\\"otv\\\"os Fellowship. \nAD has also been supported by a J\\'anos Bolyai Research Scholarship \nof the Hungarian Academy of Sciences. AD is very thankful \nto the staff at The Lodge in the Siding Spring Observatory \nfor their hospitality and very nice food, making the \ntime spent there lovely and special.\nPart of the research leading to these results has received \nfunding from the European Research Council under the European \nCommunity's Seventh Framework Programme (FP7\/2007--2013)\/ERC grant \nagreement no.\\,227224 (PROSPERITY).\nThe {\\it INTEGRAL\\\/} photometric data, pre-processed by \nISDC, have been retrieved from the OMC Archive at CAB (INTA-CSIC). \nWe are indebted to Stanley Walker for sending us some\nunpublished photoelectric observational data. Our thanks are \nalso due to the referee and Dr. M\\'aria Kun for their critical \nremarks leading to a considerable improvement in the presentation \nof the results.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzawtu b/data_all_eng_slimpj/shuffled/split2/finalzzawtu new file mode 100644 index 0000000000000000000000000000000000000000..a1c3802ba4f2c4a1c929aba7a3158e568d0c8dc3 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzawtu @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction and results}\n\nLet us recall the objects we will deal with. Throughout the paper $\\DD$ denotes the unit open disc on the complex plane, $\\TT$ is the unit circle and $p$ --- the Poincar\\'e distance on $\\DD$.\n\nLet $D\\subset\\CC^{n}$ be a domain and let $z,w\\in D$, $v\\in\\CC^{n}$. The {\\it Lempert function}\\\/ is defined as\n\\begin{equation}\\label{lem}\n\\widetilde{k}_{D}(z,w):=\\inf\\{p(0,\\xi):\\xi\\in[0,1)\\textnormal{ and }\\exists f\\in \\mathcal{O}(\\mathbb{D},D):f(0)=z,\\ f(\\xi)=w\\}.\n\\end{equation} The {\\it Kobayashi-Royden \\emph{(}pseudo\\emph{)}metric}\\\/ we define as\n\\begin{equation}\\label{kob-roy}\n\\kappa_{D}(z;v):=\\inf\\{\\lambda^{-1}:\\lambda>0\\text{ and }\\exists f\\in\\mathcal{O}(\\mathbb{D},D):f(0)=z,\\ f'(0)=\\lambda v\\}.\n\\end{equation}\nNote that\n\\begin{equation}\\label{lem1}\n\\widetilde{k}_{D}(z,w)=\\inf\\{p(\\zeta,\\xi):\\zeta,\\xi\\in\\DD\\textnormal{ and }\\exists f\\in \\mathcal{O}(\\mathbb{D},D):f(\\zeta)=z,\\ f(\\xi)=w\\},\n\\end{equation}\n\\begin{multline}\\label{kob-roy1}\n\\kappa_{D}(z;v)=\\inf\\{|\\lambda|^{-1}\/(1-|\\zeta|^2):\\lambda\\in\\CC_*,\\,\\zeta\\in\\DD\\text{ and }\\\\ \\exists f\\in\\mathcal{O}(\\mathbb{D},D):f(\\zeta)=z,\\ f'(\\zeta)=\\lambda v\\}.\n\\end{multline}\n\nIf $z\\neq w$ (respectively $v\\neq 0$), a mapping $f$ for which the infimum in \\eqref{lem1} (resp. 
in \\eqref{kob-roy1}) is attained, we call a $\\wi{k}_D$-\\textit{extremal} (or a \\textit{Lempert extremal}) for $z,w$ (resp. a $\\kappa_D$-\\textit{extremal} for $z,v$). A mapping being a $\\wi k_D$-extremal or a $\\kappa_D$-extremal we will call just an \\textit{extremal} or an \\textit{extremal mapping}.\n\nWe shall say that $f:\\DD\\longrightarrow D$ is a unique $\\wi{k}_D$-extremal for $z,w$ (resp. a unique $\\kappa_D$-extremal for $z,v$) if any other $\\wi{k}_D$-extremal $g:\\DD\\longrightarrow D$ for $z,w$ (resp. $\\kappa_D$-extremal for $z,v$) satisfies $g=f\\circ a$ for some M\\\"obius function $a$.\n\nIn general, $\\wi{k}_{D}$ does not satisfy a triangle inequality --- take for example $D_{\\alpha}:=\\{(z,w)\\in\\CC^{2}:|z|,|w|<1,\\ |zw|<\\alpha\\}$, $\\alpha\\in(0,1)$. Therefore, it is natural to consider the so-called \\textit{Kobayashi \\emph{(}pseudo\\emph{)}distance} given by the formula \\begin{multline*}k_{D}(w,z):=\\sup\\{d_{D}(w,z):(d_{D})\\text{ is a family of holomorphically invariant} \\\\\\text{pseudodistances less than or equal to }\\widetilde{k}_{D}\\}.\\end{multline*}\nIt follows directly from the definition that $$k_{D}(z,w)=\\inf\\left\\{\\sum_{j=1}^{N}\\wi{k}_{D}(z_{j-1},z_{j}):N\\in\\NN,\\ z_{1},\\ldots,z_{N}\\in\nD,\\ z_{0}=z,\\ z_{N}=w\\right\\}.$$\n\nThe next objects we are dealing with, are the \\textit{Carath\\'eodory \\emph{(}pseudo\\emph{)}distance}\n$$c_{D}(z,w):=\\sup\\{p(F(z),F(w)):F\\in\\mathcal{O}(D,\\DD)\\}$$\nand the \\textit{Carath\\'eodory-Reiffen \\emph{(}pseudo\\emph{)}metric}\n$$\\gamma_D(z;v):=\\sup\\{|F'(z)v|:F\\in\\mathcal{O}(D,\\DD),\\ F(z)=0\\}.$$\n\nA holomorphic mapping $f:\\DD\\longrightarrow D$ is said to be a \\emph{complex geodesic} if $c_D(f(\\zeta),f(\\xi))=p(\\zeta,\\xi)$ for any $\\zeta,\\xi\\in\\DD$.\n\\bigskip\n\nHere is some notation. Let $z_1,\\ldots,z_n$ be the standard complex coordinates in $\\CC^n$ and $x_1,\\ldots,x_{2n}$ --- the standard real coordinates in $\\CC^n=\\RR^n+i\\RR^n\\simeq\\RR^{2n}$. We use $T_{D}^\\mathbb{R}(a)$, $T_{D}^\\mathbb{C}(a)$ to denote a real and a complex tangent space to a $\\cC^1$-smooth domain $D$ at a point $a\\in\\partial D$, i.e. the sets \\begin{align*}T_{D}^\\mathbb{R}(a):&=\\left\\{X\\in\\CC^{n}:\\re\\sum_{j=1}^n\\frac{\\partial r}{\\partial z_j}(a)X_{j}=0\\right\\},\\\\ T_{D}^\\mathbb{C}(a):&=\\left\\{X\\in\\CC^{n}:\\sum_{j=1}^n\\frac{\\partial r}{\\partial z_j}(a)X_{j}=0\\right\\},\\end{align*}\nwhere $r$ is a defining function of $D$. Let $\\nu_D(a)$ be the outward unit normal vector to $\\partial D$ at $a$.\n\nLet $\\mathcal{C}^{k}(\\CDD)$, where $k\\in(0,\\infty]$, denote a class of continuous functions on $\\CDD$, which are of class $\\cC^k$ on $\\DD$ and\n\\begin{itemize}\n\\item if $k\\in\\NN\\cup\\{\\infty\\}$ then derivatives up to the order $k$ extend continuously on~$\\CDD$;\n\\item if $k-[k]=:c>0$ then derivatives up to the order $[k]$ are $c$-H\\\"older continuous on $\\DD$.\n\\end{itemize}\nBy $\\mathcal{C}^\\omega$ class we shall denote real analytic functions. Further, saying that $f$ is of class $\\mathcal{C}^{k}(\\TT)$, $k\\in(0,\\infty]\\cup\\{\\omega\\}$, we mean that the function $t\\longmapsto f(e^{it})$, $t\\in\\RR$, is in $\\mathcal{C}^{k}(\\mathbb R)$. For a compact set $K\\su\\CC^n$ let $\\OO(K)$ denote the set of functions extending holomorphically on a neighborhood of $K$ (we assume that all neighborhoods are open). In that case we shall sometimes say that a given function is of class $\\OO(K)$. Note that $\\CLW(\\TT)=\\OO(\\TT)$. 
\n\nLet $|\\cdot|$ denote the Euclidean norm in $\\CC^{n}$ and let $\\dist(z,S):=\\inf\\{|z-s|:s\\in S\\}$ be a distance of the point $z\\in\\CC^n$ to the set $S\\su\\CC^n$. For such a set $S$ we define $S_*:=S\\setminus\\{0\\}$. Let $\\BB_n:=\\{z\\in\\CC^n:|z|=1\\}$ be the unit ball and $B_n(a,r):=\\{z\\in\\CC^n:|z-a|0$. Put $$z\\bullet w:=\\sum_{j=1}^nz_{j}{w}_{j}$$ for $z,w\\in\\CC^{n}$ and let $\\langle\\cdotp,-\\rangle$ be a hermitian inner product on $\\CC^n$. The real inner product on $\\CC^n$ is denoted by $\\langle\\cdotp,-\\rangle_{\\RR}=\\re\\langle\\cdotp,-\\rangle$.\n\nWe use $\\nabla$ to denote the gradient $(\\pa\/\\pa x_1,\\ldots,\\pa\/\\pa x_{2n})$. For real-valued functions the gradient is naturally identified with $2(\\pa\/\\pa\\ov z_1,\\ldots,\\pa\/\\pa\\ov z_n)$. Recall that $$\\nu_D(a)=\\frac{\\nabla r(a)}{|\\nabla r(a)|}.$$ Let $\\mathcal{H}$ be the Hessian matrix $$\\left[\\frac{\\pa^2}{\\pa x_j\\pa x_k}\\right]_{1\\leq j,k\\leq 2n}.$$ Sometimes, for a $\\cC^2$-smooth function $u$ and a vector $X\\in\\RR^{2n}$ the Hessian $$\\sum_{j,k=1}^{2n}\\frac{\\partial^2 u}{\\partial x_j\\partial x_k}(a)X_{j}X_{k}=X^T\\HH u(a)X$$ will be denoted by $\\HH u(a;X)$. By $\\|\\cdot\\|$ we denote the operator norm.\n\\bigskip\n\\begin{df}\\label{29}\nLet $D\\subset\\CC^{n}$ be a domain.\n\nWe say that $D$ is \\emph{linearly convex} (resp. \\emph{weakly linearly convex}) if through any point $a\\in\\mathbb C^n\\setminus D$ (resp. $a\\in \\partial D$) there goes an $(n-1)$-dimensional complex hyperplane disjoint from $D$.\n\nA domain $D$ is said to be \\emph{strongly linearly convex} if\n\\begin{enumerate}\n\\item $D$ has $\\mathcal{C}^{2}$-smooth boundary;\n\\item there exists a defining function $r$ of $D$ such that\n\\begin{equation}\\label{48}\\sum_{j,k=1}^n\\frac{\\partial^2 r}{\\partial z_j\\partial\\overline z_k}(a)X_{j}\\overline{X}_{k}>\\left|\\sum_{j,k=1}^n\\frac{\\partial^2 r}{\\partial z_j\\partial z_k}(a)X_{j}X_{k}\\right|,\\ a\\in\\partial D,\\ X\\in T_{D}^\\mathbb{C}(a)_*.\\end{equation}\n\\end{enumerate}\n\nMore generally, any point $a\\in\\pa D$ for which there exists a defining function $r$ satisfying \\eqref{48}, is called a \\emph{point of the strong linear convexity} of $D$.\n\nFurthermore, we say that a domain $D$ has \\emph{real analytic boundary} if it possesses a real analytic defining function.\n\\end{df}\n\nNote that the condition \\eqref{48} does not depend on the choice of a defining function of $D$.\n\n\\begin{rem}\nLet $D\\subset\\CC^{n}$ be a strongly linearly convex domain. Then\n\\begin{enumerate}\n\\item any $(n-1)$-dimensional complex tangent hyperplane intersects $\\partial{D}$ at precisely one point; in other words $$\\overline D\\cap(a+T_{D}^\\mathbb{C}(a))=\\{a\\},\\ a\\in\\pa D;$$\n\\item for $a\\in\\pa D$ the equation $\\langle w-a, \\nu_D(a)\\rangle=0$ describes the $(n-1)$-dimensional complex tangent hyperplane $a+T_{D}^\\mathbb{C}(a)$, consequently $$\\langle z-a, \\nu_D(a)\\rangle\\neq 0,\\ z\\in D,\\ a\\in\\pa D.$$\n\\end{enumerate}\n\\end{rem}\n\\bigskip\nThe main aim of the paper is to present a detailed proof of the following\n\n\\begin{tw}[Lempert Theorem]\\label{lem-car}\nLet $D\\subset\\CC^{n}$, $n\\geq 2$, be a bounded strongly linearly convex domain. 
Then $$c_{D}=k_{D}=\wi{k}_{D}\text{\,\ and\,\, }\gamma_D=\kappa_D.$$\n\end{tw}\n\n\bigskip\n\nAn important role will be played by strongly convex domains and strongly convex functions.\n\begin{df}\nA domain $D\subset\CC^{n}$ is called \emph{strongly convex} if\n\begin{enumerate}\n\item $D$ has $\mathcal{C}^{2}$-smooth boundary;\n\item there exists a defining function $r$ of $D$ such that\n\begin{equation}\label{sc}\sum_{j,k=1}^{2n}\frac{\partial^2 r}{\partial x_j\partial x_k}(a)X_{j}X_{k}>0,\ a\in\partial D,\ X\in T_{D}^\mathbb{R}(a)_*.\end{equation}\n\end{enumerate}\nGenerally, any point $a\in\pa D$ for which there exists a defining function $r$ satisfying \eqref{sc} is called a \emph{point of the strong convexity} of $D$.\n\end{df}\n\begin{rem}\nA strongly convex domain $D\subset\CC^{n}$ is convex and strongly linearly convex. Moreover, it is strictly convex, i.e. for any two distinct points $a,b\in\overline D$ the interior of the segment $[a,b]=\{ta+(1-t)b:t\in [0,1]\}$ is contained in $D$ (i.e. $ta+(1-t)b\in D$ for any $t\in(0,1)$).\n\nObserve also that any bounded convex domain with a real analytic boundary is strictly convex. Indeed, if a domain $D$ with a real analytic boundary were not strictly convex, then we would be able to find two distinct points $a,b\in\pa D$ such that the segment $[a,b]$ lies entirely in $\partial D$. On the other hand, the identity principle would imply that the set $\{t\in\mathbb R:\exists\eps>0:sa+(1-s)b\in\pa D\text{ for }|s-t|<\eps\}$ is open and closed in $\mathbb R$, hence either empty or equal to $\mathbb R$. The latter is impossible, since $D$ is bounded; therefore the set has to be empty. But it contains the interval $(0,1)$, which immediately gives a contradiction.\n\end{rem}\n\n\begin{rem}\nIt is well-known that for any convex domain $D\su\CC^{n}$ there is a sequence $\{D_m\}$ of bounded strongly convex domains with real analytic boundaries, such that $D_m\su D_{m+1}$ and $\bigcup_m D_m=D$. \n\nIn particular, Theorem~\ref{lem-car} holds for convex domains.\n\end{rem}\n\n\begin{df}\nLet $U\su\CC^n$ be a domain. A function $u:U\longrightarrow\RR$ is called \emph{strongly convex} if\n\begin{enumerate}\n\item $u$ is $\mathcal{C}^{2}$-smooth;\n\item $$\sum_{j,k=1}^{2n}\frac{\partial^2 u}{\partial x_j\partial x_k}(a)X_{j}X_{k}>0,\ a\in U,\ X\in(\RR^{2n})_*.$$\n\end{enumerate}\n\end{df}\n\n\begin{df} The degree of a continuous function $\mathbb T\longrightarrow\mathbb T$ (treated as a curve) is called its winding number. Since the degree is a homotopy invariant and any continuous function $\phi:\mathbb T\longrightarrow\mathbb C_*$ is homotopic in $\mathbb C_*$ to $\phi\/|\phi|:\mathbb T\longrightarrow\mathbb T$, the definition of the \emph{winding number of a continuous function} $\phi:\mathbb T\longrightarrow\mathbb C_*$ is the same. We denote it by $\wind\phi$. \n\nIn the case of a $\cC^1$-smooth function $\phi:\TT\longrightarrow\CC_*$, its winding number is just the index of $\phi$ at 0, i.e. 
$$\\wind\\phi=\\frac{1}{2\\pi i}\\int_{\\phi(\\TT)}\\frac{d\\zeta}{\\zeta}=\\frac{1}{2\\pi i}\\int_{0}^{2\\pi}\\frac{\\frac{d}{dt}\\phi(e^{it})}{\\phi(e^{it})}dt.$$\n\\end{df}\n\n\\begin{rem}\\label{49}\n\\begin{enumerate}\n\\item\\label{51} If $\\phi\\in\\cC(\\TT,\\CC_*)$ extends to a function $\\widetilde{\\phi}\\in\\OO(\\DD)\\cap \\mathcal C(\\CDD)$ then $\\wind\\phi$ is the number of zeroes of $\\widetilde{\\phi}$ in $\\DD$ counted with multiplicities;\n\\item\\label{52} $\\wind(\\phi\\psi)=\\wind\\phi+\\wind\\psi$, $\\phi,\\psi\\in\\cC(\\TT,\\CC_*)$;\n\\item\\label{53} $\\wind\\phi=0$ if $\\phi\\in\\cC(\\TT)$ and $\\re\\phi>0$.\n\\end{enumerate}\n\\end{rem}\n\n\\begin{df}\nThe boundary of a domain $D$ of $\\mathbb C^n$ is \\emph{real analytic in a neighborhood} $U$ of the set $S\\su\\pa D$ if there exists a function $r\\in\\mathcal C^{\\omega}(U,\\RR)$ such that $D\\cap U=\\{z\\in U:r(z)<0\\}$ and $\\nabla r$ does not vanish in $U$.\n\\end{df}\n\n\n\n\\begin{df}\\label{21}\nLet $D\\subset\\CC^{n}$ be a domain. We call a holomorphic mapping $f:\\DD\\longrightarrow D$ a \\emph{stationary mapping} if\n\\begin{enumerate}\n\\item $f$ extends to a holomorphic mapping in a neighborhood od $\\CDD$ $($denoted by the same letter$)$;\n\\item $f(\\TT)\\subset\\partial D$;\n\\item there exists a real analytic function\n$\\rho:\\TT\\longrightarrow\\RR_{>0}$ such that the mapping $\\TT\\ni\\zeta\\longmapsto\\zeta\n\\rho(\\zeta)\\overline{\\nu_D(f(\\zeta))}\\in\\CC^{n}$ extends to a mapping holomorphic in a neighborhood of $\\CDD$ $($denoted by $\\widetilde{f}${$)$}.\n\\end{enumerate}\n\nFurthermore, we call a holomorphic mapping $f:\\DD\\longrightarrow D$ a \\emph{weak stationary mapping} if\n\\begin{enumerate}\n\\item[(1')] $f$ extends to a $\\cC^{1\/2}$-smooth mapping on $\\CDD$ $($denoted by the same letter$)$;\n\\item[(2')] $f(\\TT)\\subset\\partial D$;\n\\item[(3')] there exists a $\\cC^{1\/2}$-smooth function\n$\\rho:\\TT\\longrightarrow\\RR_{>0}$ such that the mapping $\\TT\\ni\\zeta\\longmapsto\\zeta\n\\rho(\\zeta)\\overline{\\nu_D(f(\\zeta))}\\in\\CC^{n}$ extends to a mapping $\\widetilde{f}\\in\\OO(\\DD)\\cap\\cC^{1\/2}(\\CDD)$.\n\\end{enumerate}\n\nThe definition of a $($weak$)$ stationary mapping $f:\\mathbb D\\longrightarrow D$ extends naturally to the case when $\\pa D$ is real analytic in a neighborhood of $f(\\TT)$.\n\\end{df}\n\n\nDirectly from the definition of a stationary mapping $f$, it follows that $f$ and $\\wi f$ extend holomorphically on some neighborhoods of $\\CDD$. By $\\DD_f$ we shall denote their intersection.\n\n\\begin{df}\\label{21e}\nLet $D\\su\\CC^n$, $n\\geq 2$, be a bounded strongly linearly convex domain with real analytic boundary. A holomorphic mapping $f:\\DD\\longrightarrow D$ is called a (\\emph{weak}) $E$-\\emph{mapping} if it is a (weak) stationary mapping and\n\\begin{enumerate}\n\\item[(4)] setting $\\varphi_z(\\zeta):=\\langle z-f(\\zeta),\\nu_D(f(\\zeta))\\rangle,\\ \\zeta\\in\\TT$, we have $\\wind\\phi_z=0$ for some $z\\in D$.\n\\end{enumerate}\n\\end{df}\n\n\\begin{rem}\nThe strong linear convexity of $D$ implies $\\varphi_z(\\zeta)\\neq 0$ for any $z\\in D$ and $\\zeta\\in\\TT$. 
Therefore, $\\wind\\phi_z$ vanishes for all $z\\in D$ if it vanishes for some $z\\in D$.\n\nAdditionally, any stationary mapping of a convex domain is an $E$-mapping (as $\\re \\varphi_z<0$).\n\\end{rem}\n\nWe shall prove that in a class of non-planar bounded strongly linearly convex domains with real analytic boundaries weak stationary mappings are just stationary mappings, so there is no difference between $E$-mappings and weak $E$-mappings. \n\nWe have the following result describing extremal mappings, which is very interesting in its own.\n\n\\begin{tw}\\label{main} Let $D\\su\\CC^n$, $n\\geq 2$, be a bounded strongly linearly convex domain. \n\nThen a holomorphic mapping $f:\\DD\\longrightarrow D$ is an extremal if and only if $f$ is a weak $E$-mapping.\n\nFor a domain $D$ with real analytic boundary, a holomorphic mapping $f:\\mathbb D\\longrightarrow D$ is an extremal if and only if $f$ is an $E$-mapping.\n\nIf $\\pa D$ is of class $\\cC^k$, $k=3,4,\\ldots,\\infty$, then any weak $E$-mapping $f:\\DD\\longrightarrow D$ and its associated mappings $\\wi f,\\rho$ are $\\mathcal C^{k-1-\\eps}$-smooth for any $\\eps>0$.\n\n\\end{tw}\n\n\nThe idea of the proof of the Lempert Theorem is as follows. In real analytic case we shall show that $E$-mappings are complex geodesics (because they have left inverses). Then we shall prove that for any different points $z,w\\in D$ (resp. for a point $z\\in D$ and a vector $v\\in(\\CC^n)_*$) there is an $E$-mapping passing through $z,w$ (resp. such that $f(0)=z$ and $f'(0)=v$). This will give the equality between the Lempert function and the Carath\\'eodory distance. In the general case, we exhaust a $\\cC^2$-smooth domain by strongly linearly convex domains with real analytic boundaries.\n\nTo prove Theorem \\ref{main} we shall additionally observe that (weak) $E$-mappings are unique extremals.\n\\bigskip\n\n\\begin{center}{\\sc Real analytic case}\\end{center}\n\\bigskip\n\nIn what follows and if not mentioned otherwise, $D\\su\\CC^n$, $n\\geq 2$, is a \\textbf{bounded strongly linearly convex domain with real analytic boundary}.\n\\section{Weak stationary mappings of strongly linearly convex domains with real analytic boundaries are stationary mappings}\\label{55}\nLet $M\\subset\\CC^m$ be a totally real $\\CLW$ submanifold of the real dimension $m$. Fix a point $z\\in M$. There are neighborhoods $U,V\\su\\CC^m$ of $0$ and $z$ respectively and a biholomorphic mapping $\\Phi:U\\longrightarrow V$ such that $\\Phi(\\RR^m\\cap U)=M\\cap V$ (for the proof see Appendix).\n\n\n\\begin{prop}\\label{6}\nA weak stationary mapping of $D$ is a stationary mapping of $D$ with the same associated mappings.\n\\end{prop}\n\\begin{proof}\nLet $f:\\DD\\longrightarrow D$ be a weak stationary mapping. Our aim is to prove that $f,\\widetilde{f}\\in\\OO(\\CDD)$ and $\\rho\\in\\mathcal C^{\\omega}(\\TT)$. Choose a point $\\zeta_0\\in\\TT$. Since $\\widetilde{f}(\\zeta_0)\\neq 0$, we can assume that $\\widetilde{f}_1(\\zeta)\\neq 0$ in $\\CDD\\cap U_0$, where $U_0$ is a neighborhood of $\\zeta_0$. This implies\n$\\nu_{D,1}(f(\\zeta_0))\\neq 0$, so $\\nu_{D,1}$ does not vanish on some set $V_0\\su\\pa D$, relatively open in\n$\\pa D$, containing the point $f(\\zeta_0)$. 
Shrinking $U_0$, if necessary, we may assume that $f(\\TT\\cap U_0)\\subset V_0$.\n\nDefine $\\psi:V_0\\longrightarrow\\CC^{2n-1}$ by\n$$\\psi(z)=\\left(z_1,\\ldots,z_n,\n\\ov{\\left(\\frac{\\nu_{D,2}(z)}{\\nu_{D,1}(z)}\\right)},\\ldots,\\ov{\\left(\\frac{\\nu_{D,n}(z)}{\\nu_{D,1}(z)}\\right)}\\right).$$ The set $M:=\\psi(V_0)$ is the graph of a $\\CLW$ function defined on the local $\\CLW$ submanifold $V_0$, so it is a local $\\CLW$ submanifold in $\\CC^{2n-1}$ of the real dimension $2n-1$. Assume for a moment that $M$ is totally real.\n\nLet $$g(\\zeta):=\\left(f_1(\\zeta),\\ldots,f_n(\\zeta),\n\\frac{\\widetilde{f}_2(\\zeta)}{\\widetilde{f}_1(\\zeta)},\\ldots,\\frac{\\widetilde{f}_n(\\zeta)}{\\widetilde{f}_1(\\zeta)}\\right),\\ \\zeta\\in\\CDD\\cap U_0.$$ If $\\zeta\\in\\TT\\cap U_0$ then\n$\\widetilde{f}_k(\\zeta)\\widetilde{f}_1(\\zeta)^{-1} =\n\\overline{\\nu_{D,k}(f(\\zeta))}\\ \\overline{\\nu_{D,1}(f(\\zeta))}^{-1}$, so\n$g(\\zeta)=\\psi(f(\\zeta))$. Therefore, $g(\\TT\\cap U_0)\\subset M$. Thanks to the Reflection\nPrinciple (see Appendix), $g$ extends holomorphically past $\\TT\\cap U_0$, so $f$ extends holomorphically on a neighborhood of $\\zeta_0$.\n\nThe mapping $\\overline{\\nu_D\\circ f}$ is real analytic on $\\TT$, so it extends to a mapping $h$ holomorphic in a neighborhood $W$ of $\\TT$. For $\\zeta\\in\\TT\\cap U_0$ we have $$\\frac{\\zeta\nh_1(\\zeta)}{\\widetilde{f}_1(\\zeta)}=\\frac{1}{\\rho(\\zeta)}.$$ The function on the\nleft side is holomorphic in $\\DD\\cap U_0\\cap W$ and continuous in $\\CDD\\cap U_0\\cap W$. Since it\nhas real values on $\\TT\\cap U_0$, the Reflection Principle implies that it is holomorphic in a neighborhood of $\\TT\\cap U_0$. Hence $\\rho$ and $\\widetilde{f}$ are holomorphic in a neighborhood of $\\zeta_0$. Since $\\zeta_0$ is arbitrary, we get the assertion.\n\nIt remains to prove that $M$ is totally real. Let $r$ be a defining function of $D$. Recall that for any point $z\\in V_0$ $$\\frac{\\ov{\\nu_{D,k}(z)}}{\\ov{\\nu_{D,1}(z)}}=\\frac{\\partial r}{\\partial z_k}(z)\\left(\\frac{\\partial r}{\\partial z_1}(z)\\right)^{-1},\\,k=1,\\ldots,n.$$\nConsider the mapping $S=(S_1,\\ldots,S_n):V_0\\times\\CC^{n-1}\\longrightarrow\\RR\\times\\CC^{n-1}$\ngiven by $$S(z,w):=\\left(r(z),\\frac{\\partial r}{\\partial z_2}(z)-w_{1}\\frac{\\partial r}{\\partial z_1}(z),\\ldots,\\frac{\\partial r}{\\partial z_n}(z)-w_{n-1}\\frac{\\partial r}{\\partial z_1}(z)\\right).$$ Clearly, $M=S^{-1}(\\{0\\})$. Hence\n\\begin{equation}\\label{tan} T_{M}^{\\RR}(z,w)\\subset\\ker\\nabla S(z,w),\\ (z,w)\\in M,\\end{equation} where\n$\\nabla S:=(\\nabla S_1,\\ldots,\\nabla S_n)$.\n\nFix a point $(z,w)\\in M$. Our goal is to prove that $T_{M}^{\\CC}(z,w)=\\lbrace 0\\rbrace$. Take an arbitrary vector $(X,Y)=(X_1,\\ldots,X_n,Y_1,\\ldots,Y_{n-1})\\in T_{M}^{\\CC}(z,w)$. Then we infer from \\eqref{tan} that $$\\sum_{k=1}^n\\frac{\\partial r}{\\partial z_k}(z)X_k=0,$$ i.e. $X\\in T_{D}^{\\CC}(z)$. Denoting $v:=(z,w)$, $V:=(X,Y)$ and making use of \\eqref{tan} again we find that\n$$0=\\nabla S_k(v)(V)=\\sum_{j=1}^{2n-1}\\frac{\\pa S_k}{\\pa v_j}(v)V_j+\\sum_{j=1}^{2n-1}\\frac{\\pa S_k}{\\pa\\ov v_j}(v)\\ov V_j$$ for $k=2,\\ldots,n$.\nBut $V\\in T_{M}^{\\CC}(v)$, so $iV\\in T_{M}^{\\CC}(v)$. 
Thus $$0=\\nabla S_k(v)(iV)=i\\sum_{j=1}^{2n-1}\\frac{\\pa S_k}{\\pa v_j}(v)V_j-i\\sum_{j=1}^{2n-1}\\frac{\\pa S_k}{\\pa\\ov v_j}(v)\\ov V_j.$$ In particular, \\begin{multline*}0=\\sum_{j=1}^{2n-1}\\frac{\\pa S_k}{\\pa\\ov v_j}(v)\\ov V_j=\\sum_{j=1}^{n}\\frac{\\pa S_k}{\\pa\\ov z_j}(z,w)\\ov X_j+\\sum_{j=1}^{n-1}\\frac{\\pa S_k}{\\pa\\ov w_j}(z,w)\\ov Y_j=\\\\=\\sum_{j=1}^n\\frac{\\partial^2r}{\\partial z_k\\partial\\overline{z}_j}(z)\\overline X_j-w_{k-1}\\sum_{j=1}^n\\frac{\\partial^2r}{\\partial z_1\\partial\\overline{z}_j}(z)\\overline X_j.\n\\end{multline*}\nThe equality $M=S^{-1}(\\{0\\})$ gives $$w_{k-1}=\\frac{\\partial r}{\\partial z_k}(z)\\left(\\frac{\\partial r}{\\partial z_1}(z)\\right)^{-1},$$ so $$\\frac{\\partial r}{\\partial z_1}(z)\\sum_{j=1}^n\\frac{\\partial^2r}{\\partial z_k\\partial\\overline{z}_j}(z)\\overline X_j=\\frac{\\partial r}{\\partial z_k}(z)\\sum_{j=1}^n\\frac{\\partial^2r}{\\partial z_1\\partial\\overline{z}_j}(z)\\overline X_j,\\ k=2,\\ldots,n.$$ Note that the last equality holds also for $k=1$. Therefore, \\begin{multline*}\n\\frac{\\partial r}{\\partial z_1}(z)\\sum_{j,k=1}^n\\frac{\\partial^2r}{\\partial z_k\\partial\\overline{z}_j}(z)\\overline X_jX_k=\\sum_{k=1}^n\\frac{\\partial r}{\\partial z_k}(z)\\sum_{j=1}^n\\frac{\\partial^2r}{\\partial z_1\\partial\\overline{z}_j}(z)\\overline X_jX_k =\\\\=\\left(\\sum_{k=1}^n\\frac{\\partial r}{\\partial z_k}(z)X_k\\right)\\left(\\sum_{j=1}^n\\frac{\\partial^2r}{\\partial z_1\\partial\\overline{z}_j}(z)\\overline X_j\\right)=0.\n\\end{multline*}\nBy the strong linear convexity of $D$ we have $X=0$. This implies $Y=0$, since $$0=\\nabla S_k(z,w)(0,Y)=\\sum_{j=1}^{n-1}\\frac{\\pa S_k}{\\pa w_j}(v)Y_j+\\sum_{j=1}^{n-1}\\frac{\\pa S_k}{\\pa\\ov w_j}(v)\\ov Y_j=-\\frac{\\partial r}{\\partial z_1}(z)Y_{k-1}$$ for $k=2,\\ldots,n$. \n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\section{(Weak) $E$-mappings vs. extremal mappings and complex geodesics}\n\nIn this section we will prove important properties of (weak) $E$-mappings. In particular, we will show that they are complex geodesics and unique extremals.\n\\subsection{Weak $E$-mappings are complex geodesics and unique extremals}\nThe results of this subsection are related to weak $E$-mappings of bounded strongly linearly convex domains $D\\su\\CC^n$, $n\\geq 2$.\n\nLet $$G(z,\\zeta):=(z-f(\\zeta))\\bullet\\widetilde{f}(\\zeta),\\ z\\in\\CC^n,\\ \\zeta\\in\\DD_f.$$\n\n\\begin{propp}\\label{1}\nLet $D\\su\\CC^n$, $n\\geq 2$, be a bounded strongly linearly convex domain and let $f:\\DD\\longrightarrow D$ be a weak $E$-mapping. Then there exist an open set $W\\supset\\overline D\\setminus f(\\TT)$ and a holomorphic mapping $F:W\\longrightarrow\\DD$ such that for any $z\\in W$ the number $F(z)$ is a unique solution of the equation $G(z,\\zeta)=0,\\ \\zeta\\in\\DD$. In particular, $F\\circ f=\\id_{\\DD}$.\n\\end{propp}\n\nIn the sequel we will strengthen the above proposition for domains with real analytic boundaries (see Proposition~\\ref{34}).\n\n\\begin{proof}[Proof of Proposition~\\ref{1}]\nSet $A:=\\overline{D}\\setminus f(\\TT)$. Since $D$ is strongly linearly convex, $\\varphi_z$ does not vanish in $\\TT$ for any $z\\in A$, so by a continuity argument the condition (4) of Definition~\\ref{21e} holds for every $z$ in some open set $W\\supset A$. For a fixed $z\\in W$ we have $$G(z,\\zeta)=\\zeta\\rho(\\zeta)\\varphi_z(\\zeta),\\ \\zeta\\in\\TT,$$ so $\\wind G(z,\\cdotp)=1$. Since $G(z,\\cdotp)\\in\\OO(\\DD)$, it has in $\\DD$ exactly one simple root $F(z)$. 
Hence $G(z,F(z))=0$ and $\\frac{\\partial G}{\\partial\\zeta}(z,F(z))\\neq 0$. By the Implicit Function Theorem, $F$ is holomorphic in $W$. The equality $F(f(\\zeta))=\\zeta$ for $\\zeta\\in\\DD$ is clear.\n\\end{proof}\n\nFrom the proposition above we immediately get the following\n\\begin{corr}\\label{5}\nA weak $E$-mapping $f:\\DD\\longrightarrow D$ of a bounded strongly linearly convex domain $D\\su\\CC^n$, $n\\geq 2$, is a complex geodesic. In particular,\n$$c_{D}(f(\\zeta),f(\\xi))=\\wi k_D(f(\\zeta),f(\\xi))\\text{\\,\\ and\\,\\, }\\gamma_D(f(\\zeta);f'(\\zeta))=\\kappa_D(f(\\zeta);f'(\\zeta)),$$ for any $\\zeta,\\xi\\in\\DD$.\n\\end{corr}\n\nUsing left inverses of weak $E$-mappings we may prove the uniqueness of extremals.\n\\begin{propp}\\label{2}\nLet $D\\su\\CC^n$, $n\\geq 2$, be a bounded strongly linearly convex domain and let $f:\\DD\\longrightarrow D$ be a weak $E$-mapping. Then for any $\\xi\\in(0,1)$ the mapping $f$ is a unique $\\wi{k}_D$-extremal for $z=f(0)$, $w=f(\\xi)$ \\emph{(}resp. a unique $\\kappa_D$-extremal for $z=f(0)$, $v=f'(0)$\\emph{)}.\n\\end{propp}\n\\begin{proof}\n\nSuppose that $g$ is a $\\wi{k}_D$-extremal for $z,w$ (resp. a $\\kappa_D$-extremal for $z,v$) such that $g(0)=z$, $g(\\xi)=w$ (resp. $g(0)=z$, $g'(0)=v$). Our aim is to show that $f=g$. Proposition~\\ref{1} provides us with the mapping $F$, which is a left inverse for $f$. By the Schwarz Lemma, $F$ is a left inverse for $g$, as well, that is $F\\circ g=\\text{id}_{\\DD}$. We claim that $\\lim_{\\DD\\ni\\zeta\\to\\zeta_0}g(\\zeta)=f(\\zeta_0)$ for any $\\zeta_0\\in\\TT$ (in particular, we shall show that the limit does exist).\n\nAssume the contrary. Then there are $\\zeta_0\\in\\TT$ and a sequence $\\{\\zeta_m\\}\\subset\\DD$ convergent to $\\zeta_0$ such that the limit $Z:=\\lim_{m\\to\\infty}g(\\zeta_m)\\in\\overline{D}$ exists and is not equal to $f(\\zeta_0)$. We have $G(z,F(z))=0$, so putting $z=g(\\zeta_m)$ we infer that $$0=(g(\\zeta_m)-f(F(g(\\zeta_m))))\\bullet \\widetilde{f}(F(g(\\zeta_m)))=(g(\\zeta_m)-f(\\zeta_m))\\bullet\\widetilde{f}(\\zeta_m).\n$$ Passing with $m$ to the infinity we get $$0=(Z-f(\\zeta_0))\\bullet \\widetilde{f}(\\zeta_0)=\\zeta_0\\rho(\\zeta_0)\\langle Z-f(\\zeta_0),\\nu_D(f(\\zeta_0))\\rangle.$$ This means that $Z-f(\\zeta_0)\\in T^{\\CC}_D(f(\\zeta_0))$. Since $D$ is strongly linearly convex, we deduce that $Z=f(\\zeta_0)$, which is a contradiction.\n\nHence $g$ extends continuously on $\\CDD$ and, by the maximum principle, $g=f$.\n\\end{proof}\n\n\n\\begin{propp}\\label{3}\nLet $D\\su\\CC^n$, $n\\geq 2$, be a bounded strongly linearly convex domain, let $f:\\DD\\longrightarrow D$ be a weak $E$-mapping and let $a$ be an automorphism of $\\DD$. Then $f\\circ a$ is a weak $E$-mapping of $D$.\n\\end{propp}\n\\begin{proof}\nSet $g:=f\\circ a$.\n\nClearly, the conditions (1') and (2') of Definition~\\ref{21} are satisfied by $g$.\n\nTo prove that $g$ satisfies the condition (4) of Definition~\\ref{21e} fix a point $z\\in D$. Let $\\varphi_{z,f}$, $\\varphi_{z,g}$ be the functions appearing in the condition (4) for $f$ and $g$ respectively. Then $\\varphi_{z,g}=\\varphi_{z,f}\\circ a$. Since $a$ maps $\\TT$ to $\\TT$ diffeomorphically, we have $\\wind\\varphi_{z,g}=\\pm\\wind\\varphi_{z,f}=0$.\n\nIt remains to show that the condition (3') of Definition~\\ref{21} is also satisfied by $g$. Note that the function $\\wi a(\\zeta):=\\zeta\/a(\\zeta)$ has a holomorphic branch of the logarithm in the neighborhood of $\\TT$. 
This follows from the fact that $\wind \wi a=0$; however, the existence of the holomorphic branch may also be shown in a quite elementary way. Actually, it would suffice to prove that $\wi a(\TT)\neq\TT$. Expand $a$ as $$a(\zeta)=e^{it}\frac{\zeta-b}{1-\overline b\zeta}$$ with some $t\in\RR$, $b\in\DD$ and observe that $\widetilde a$ does not attain the value $-e^{-it}$. Indeed, if $\zeta\/a(\zeta)=-e^{-it}$ for some $\zeta\in\TT$, then $$\frac{1-\overline b\zeta}{1-b\overline\zeta}=-1,$$ so $2=2\re(b\overline\zeta)\leq 2|b|$, which is impossible.\n\nConcluding, there exists a function $v$ holomorphic in a neighborhood of $\TT$ such that $$\frac{\zeta}{a(\zeta)}=e^{i v(\zeta)}.$$ Note that $v(\TT)\su\RR$. Expanding $v$ in a Laurent series $$v(\zeta)=\sum_{k=-\infty}^{\infty}a_k\zeta^k,\ \zeta\text{ near }\TT,$$ we infer that $a_{-k}=\overline a_k$, $k\in\ZZ$. Therefore, $$v(\zeta)=a_0+\sum_{k=1}^\infty 2\re(a_k\zeta^k)=\re\left(a_0+2\sum_{k=1}^\infty a_k\zeta^k\right),\ \zeta\in\TT.$$ Hence, there is a function $h$ holomorphic in a neighborhood of $\CDD$ such that $v=\im h$. Put $u:=h-iv$. Then $u\in\OO(\TT)$ and $u(\TT)\su\RR$.\n\nLet $\rho$ be as in the condition (3') of Definition~\ref{21} for $f$ and define $$r(\zeta):=\rho(a(\zeta))e^{u(\zeta)},\ \zeta\in\TT.$$ Let us compute\n\begin{eqnarray*}\zeta r(\zeta)\overline{\nu_D(g(\zeta))}&=&\zeta e^{u(\zeta)}\rho(a(\zeta))\overline{\nu_D(f(a(\zeta)))}\\&=&a(\zeta)e^{h(\zeta)}\rho(a(\zeta))\overline{\nu_D(f(a(\zeta)))}\\\n&=&e^{h(\zeta)}\widetilde{f}(a(\zeta)),\quad\zeta\in\TT,\n\end{eqnarray*} where we used the relations $\zeta=a(\zeta)e^{iv(\zeta)}$ and $u=h-iv$. Thus $\zeta\longmapsto\zeta r(\zeta)\overline{\nu_D(g(\zeta))}$ extends holomorphically to a function of class $\OO(\DD)\cap\cC^{1\/2}(\CDD)$.\n\end{proof}\n\n\n\begin{corr}\label{28}\nA weak $E$-mapping $f:\DD\longrightarrow D$ of a bounded strongly linearly convex domain $D\su\CC^n$, $n\geq 2$, is a unique $\wi{k}_D$-extremal for $f(\zeta),f(\xi)$ \emph{(}resp. a unique $\kappa_D$-extremal for $f(\zeta),f'(\zeta)$\emph{)}, where $\zeta,\xi\in\DD$, $\zeta\neq\xi$.\n\end{corr}\n\n\subsection{Generalization of Proposition~\ref{1}}\nThe results obtained in this subsection will play an important role in the sequel.\n\nWe start with \n\begin{propp}\label{4}\nLet $f:\DD\longrightarrow D$ be an $E$-mapping. Then the function $f'\bullet\widetilde{f}$ is a positive constant.\n\end{propp}\n\begin{proof}\nConsider the curve $$\RR\ni t\longmapsto f(e^{it})\in\partial D.$$ Each of its tangent vectors $ie^{it}f'(e^{it})$ belongs to $T_{D}^\mathbb{R}(f(e^{it}))$, i.e. $$\re\langle ie^{it}f'(e^{it}),\nu_D(f(e^{it}))\rangle=0.$$ Thus for $\zeta\in\TT$ $$0=\rho(\zeta)\re\langle i\zeta f'(\zeta),\nu_D(f(\zeta))\rangle=-\im f'(\zeta)\bullet\widetilde{f}(\zeta),$$ so the holomorphic function $f'\bullet\widetilde{f}$ is a real constant $C$.\n\nConsidering the curve $$[0,1+\eps)\ni t\longmapsto f(t)\in\overline D$$ for small $\eps>0$ and noting that $f([0,1))\su D$, $f(1)\in\partial D$, we see that the derivative of $r\circ f$ at the point $t=1$ is non-negative, where $r$ is a defining function of $D$. Hence $$0\leq\re\langle f'(1),\nu_D(f(1))\rangle =\frac{1}{\rho(1)} \re( f'(1)\bullet\widetilde{f}(1))=\n\frac{C}{\rho(1)},$$ i.e. $C\geq 0$. 
For $\\zeta\\in\\TT$\n$$\\frac{f(\\zeta)-f(0)}{\\zeta}\\bullet\\widetilde{f}(\\zeta)=\\rho(\\zeta)\\langle f (\\zeta)-f(0),\\nu_D(f(\\zeta))\\rangle.$$ This function has the winding number equal to $0$. Therefore, the function $$g(\\zeta):=\\frac{f(\\zeta)-f(0)}{\\zeta}\\bullet\\widetilde{f}(\\zeta),$$ which is holomorphic in a neighborhood of $\\CDD$, does not vanish\nin $\\DD$. In particular, $C=g(0)\\neq 0$.\n\\end{proof}\nThe function $\\rho$ is defined up to a constant factor. \\textbf{We choose $\\rho$ so that $ f'\\bullet\\widetilde{f}\\equiv 1$}, i.e. \\begin{equation}\\label{rho}\\rho(\\zeta)^{-1}=\\langle\\zeta f'(\\zeta),\\nu_D(f(\\zeta))\\rangle,\\ \\zeta\\in\\TT.\\end{equation} In that way $\\widetilde{f}$ and $\\rho$ are uniquely determined by $f$.\n\n\\begin{propp}\nAn $E$-mapping $f:\\DD\\longrightarrow D$ is injective in $\\CDD$.\n\\end{propp}\n\n\\begin{proof}The function $f$ has the left-inverse in $\\DD$, so it suffices to check the injectivity on $\\TT$. Suppose that $f(\\zeta_1)=f(\\zeta_2)$ for some $\\zeta_1,\\zeta_2\\in\\TT$, $\\zeta_1\\neq\\zeta_2$, and consider the curves $$\\gamma_j:[0,1]\\ni t\\longmapsto f(t\\zeta_j)\\in\\overline D,\\ j=1,2.$$ Since $$\\re\\langle\\gamma_j'(1),\\nu_D(f(\\zeta_j))\\rangle=\\re\\langle\\zeta_jf'(\\zeta_j),\\nu_D(f(\\zeta_j))\\rangle\n=\\rho(\\zeta_j)^{-1}\\neq 0,$$ the curves $\\gamma_j$ hit $\\pa D$ transversally at their common point $f(\\zeta_1)$. We claim that there exists $C>0$ such that for $t\\in(0,1)$ close to $1$ there is $s_t\\in(0,1)$ satisfying $\\wi k_D(f(t\\zeta_1),f(s_t\\zeta_2))k|z|\\},\\quad k>0,$$ such that $\\gamma_1(t),\\gamma_2(t)\\in A\\cap B$ if $t\\in(0,1)$ is close to $1$. For $z\\in A$ let $k_z>k$ be a positive number satisfying the equality $$|z|=\\frac{-\\re z_1}{k_z}.$$ \n\nNote that for any $a\\in\\gamma_1((0,1))$ sufficiently close to $0$ one may find $b\\in\\gamma_2((0,1))\\cap A\\cap B$ such that $\\re b_1=\\re a_1$. To get a contradiction it suffices to show that $\\wi k_D(a,b)$ is bounded from above by a constant independent on $a$ and $b$. \n\nWe have the following estimate \\begin{multline*}\\wi k_D(a,b)\\leq\\wi k_{\\BB_n-e_1}(a,b)=\\wi k_{\\BB_n}(a+e_1,b+e_1)=\\\\=\\tanh^{-1}\\sqrt{1-\\frac{(1-|a+e_1|^2)(1-|b+e_1|^2)}{|1-\\langle a+e_1,b+e_1 \\rangle|^2}}.\\end{multline*} The last expression is bounded from above if and only if $$\\frac{(1-|a+e_1|^2)(1-|b+e_1|^2)}{|1-\\langle a+e_1,b+e_1\\rangle|^2}$$ is bounded from below by some positive constant. 
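For the reader's convenience let us record the elementary expansions which are used, without further comment, in the first equality of the estimate below: $$1-|a+e_1|^2=-2\\re a_1-|a|^2\\quad\\text{ and }\\quad 1-\\langle a+e_1,b+e_1\\rangle=-(\\langle a, b\\rangle+a_1+\\overline b_1).$$ 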
We estimate $$\\frac{(1-|a+e_1|^2)(1-|b+e_1|^2)}{|1-\\langle a+e_1,b+e_1\\rangle|^2}=\\frac{(2\\re a_1+|a|^2)(2\\re b_1+|b|^2)}{|\\langle a, b\\rangle+a_1+\\overline b_1|^2}=$$$$=\\frac{\\left(2\\re a_1+\\frac{(\\re a_1)^2}{k^2_a}\\right)\\left(2\\re a_1+\\frac{(\\re a_1)^2}{k^2_b}\\right)}{|\\langle a, b\\rangle+2\\re a_1+i\\im a_1-i\\im b_1|^2}\\geq\\frac{(\\re a_1)^2\\left(2+\\frac{\\re a_1}{k^2_a}\\right)\\left(2+\\frac{\\re a_1}{k^2_b}\\right)}{2|\\langle a, b\\rangle+i\\im a_1-i\\im b_1|^2+2|2\\re a_1|^2}$$$$\\geq\\frac{(\\re a_1)^2\\left(2+\\frac{\\re a_1}{k^2_a}\\right)\\left(2+\\frac{\\re a_1}{k^2_b}\\right)}{2(|a||b|+|a|+|b|)^2+8(\\re a_1)^2}=\\frac{(\\re a_1)^2\\left(2+\\frac{\\re a_1}{k^2_a}\\right)\\left(2+\\frac{\\re a_1}{k^2_b}\\right)}{2\\left(\\frac{(-\\re a_1)^2}{k_ak_b}-\\frac{\\re a_1}{k_a}-\\frac{\\re a_1}{k_b}\\right)^2+8(\\re a_1)^2}$$$$=\\frac{\\left(2+\\frac{\\re a_1}{k^2_a}\\right)\\left(2+\\frac{\\re a_1}{k^2_b}\\right)}{2\\left(\\frac{-\\re a_1}{k_ak_b}+\\frac{1}{k_a}+\\frac{1}{k_b}\\right)^2+8}>\\frac{1}{2(1+2\/k)^2+8}.$$ This finishes the proof.\n\\end{proof}\n\n\\medskip\n\nAssume that we are in the setting of Proposition~\\ref{1} and $D$ has real analytic boundary. Our aim is to replace $W$ with a neighborhood of $\\ov D$.\n\n\\begin{remm}\\label{przed34}\nFor $\\zeta_0\\in\\DD_f$ we have $G(f(\\zeta_0),\\zeta_0)=0$ and $\\frac{\\partial G}{\\partial\\zeta}(f(\\zeta_0),\\zeta_0)=-1$. By the Implicit Function Theorem there exist neighborhoods $U_{\\zeta_0},V_{\\zeta_0}$ of $f(\\zeta_0),\\zeta_0$ respectively and a holomorphic function $F_{\\zeta_0}:U_{\\zeta_0}\\longrightarrow V_{\\zeta_0}$ such that for any $z\\in U_{\\zeta_0}$ the point $F_{\\zeta_0}(z)$ is the unique solution of the equation $G(z,\\zeta)=0$, $\\zeta\\in V_{\\zeta_0}$.\n\nIn particular, if $\\zeta_0\\in\\DD$ then $F_{\\zeta_0}=F$ near $f(\\zeta_0)$.\n\\end{remm}\n\n\\begin{propp}\\label{34}\nLet $f:\\DD\\longrightarrow D$ be an $E$-mapping. Then there exist arbitrarily small neighborhoods $U$, $V$ of $\\overline D$, $\\CDD$ respectively such that for any $z\\in U$ the equation $G(z,\\zeta)=0$, $\\zeta\\in V$, has exactly one solution.\n\\end{propp}\n\\begin{proof} In view of Proposition~\\ref{1} and Remark~\\ref{przed34} it suffices to prove that there exist neighborhoods $U$, $V$ of $\\overline D$, $\\CDD$ respectively such that for any $z\\in U$ the equation $G(z,\\cdotp)=0$ has at most one solution $\\zeta\\in V$.\n\nAssume the contrary. Then for any neighborhoods $U$ of $\\overline D$ and $V$ of $\\CDD$ there are $z\\in U$, $\\zeta_1,\\zeta_2\\in V$, $\\zeta_1\\neq\\zeta_2$ such that $G(z,\\zeta_1)=G(z,\\zeta_2)=0$. For $m\\in\\NN$ put $$U_m:=\\{z\\in\\CC^n:\\dist(z,D)<1\/m\\},$$ $$V_m:=\\{\\zeta\\in\\CC:\\dist(\\zeta,\\DD)<1\/m\\}.$$ There exist $z_m\\in U_m$, $\\zeta_{m,1},\\zeta_{m,2}\\in V_m$, $\\zeta_{m,1}\\neq\\zeta_{m,2}$ such that $G(z_m,\\zeta_{m,1})=G(z_m,\\zeta_{m,2})=0$. Passing to a subsequence we may assume that $z_m\\to z_0\\in\\ov D$. Analogously we may assume $\\zeta_{m,1}\\to\\zeta_1\\in \\CDD$ and $\\zeta_{m,2}\\to\\zeta_2\\in\\CDD$. Clearly, $G(z_0,\\zeta_1)=G(z_0,\\zeta_2)=0$. Let us consider a few cases.\n\n1) If $\\zeta_1,\\zeta_2\\in\\TT$, then $G(z_0,\\zeta_j)=0$ is equivalent to $$\\langle z_0-f(\\zeta_j), \\nu_D(f(\\zeta_j))\\rangle=0,\\ j=1,2,$$ consequently $z_0-f(\\zeta_j)\\in T^{\\CC}_D(f(\\zeta_j))$. By the strong linear convexity of $D$ we get $z_0=f(\\zeta_j)$. But $f$ is injective in $\\CDD$, so $\\zeta_1=\\zeta_2=:\\zeta_0$. 
It follows from Remark~\\ref{przed34} that in a sufficiently small neighborhood of $(z_0,\\zeta_0)$ all solutions of the equation $G(z,\\zeta)=0$ are of the form $(z,F_{\\zeta_0}(z))$. Points $(z_m,\\zeta_{m,1})$ and $(z_m,\\zeta_{m,2})$ belong to this neighborhood for large $m$, which gives a contradiction.\n\n2) If $\\zeta_1\\in\\TT$ and $\\zeta_2\\in\\DD$, then analogously as above we deduce that $z_0=f(\\zeta_1)$. Let us take an arbitrary sequence $\\{\\eta_m\\}\\su\\DD$ convergent to $\\zeta_1$. Then $f(\\eta_m) \\in D$ and $f(\\eta_m)\\to z_0$, so the sequence $G(f(\\eta_m),\\cdotp)$ converges to $G(z_0,\\cdotp)$ uniformly on $\\DD$. Since $G(z_0,\\cdotp)\\not\\equiv 0$, $G(z_0,\\zeta_2)=0$ and $\\zeta_2\\in\\DD$, we deduce from Hurwitz Theorem that for large $m$ the functions $G(f(\\eta_m),\\cdotp)$ have roots $\\theta_m\\in\\DD$ such that $\\theta_m\\to\\zeta_2$. Hence $G(f(\\eta_m),\\theta_m)=0$ and from the uniqueness of solutions in $D\\times\\DD$ (Proposition~\\ref{1}) we have $$\\theta_m=F(f(\\eta_m))=\\eta_m.$$ This is a contradiction, because the left side tends to $\\zeta_2$ and the right one to $\\zeta_1$, as $m\\to\\infty$.\n\n3) We are left with the case $\\zeta_1,\\zeta_2\\in\\DD$.\nIf $z_0\\in\\overline{D}\\setminus f(\\TT)$ then $z_0\\in W$. In $W\\times\\DD$ all solutions of the equation $G=0$ are of the form $(z,F(z))$, $z\\in W$. But for large $m$ the points $(z_m,\\zeta_{m,1})$, $(z_m,\\zeta_{m,2})$ belong to $W\\times\\DD$, which is a contradiction with the uniqueness.\n\nIf $z_0\\in f(\\TT)$, then $z_0=f(\\zeta_0)$ for some $\\zeta_0\\in\\TT$. Clearly, $G(f(\\zeta_0),\\zeta_0)=0$, whence $G(z_0,\\zeta_0)=G(z_0,\\zeta_1)=0$ and $\\zeta_0\\in\\TT$, $\\zeta_1\\in \\DD$. This is just the case 2), which has been already considered.\n\\end{proof}\n\n\n\\begin{corr} There are neighborhoods $U$, $V$ of $\\overline D$ and $\\CDD$ respectively with $V\\Subset\\DD_f$, such that the function $F$ extends holomorphically on $U$. 
Moreover, all solutions of the equation $G|_{U\\times V}=0$ are of the form $(z,F(z))$, $z\\in U$.\n\nIn particular, $F\\circ f=\\id_{V}$.\n\\end{corr}\n\n\n\n\n\n\n\n\n\n\n\n\\section{H\\\"older estimates}\\label{22}\n\n\\begin{df}\\label{30} For a given $c>0$ let the family $\\mathcal{D}(c)$ consist of all pairs $(D,z)$, where $D\\su\\CC^n$, $n\\geq 2$, is a bounded pseudoconvex domain with real $\\mathcal C^2$ boundary and $z\\in D$, satisfying\n\\begin{enumerate}\n\\item $\\dist(z,\\partial D)\\geq 1\/c$;\n\\item the diameter of $D$ is not greater than $c$ and $D$ satisfies the interior ball condition with a radius $1\/c$;\n\\item for any $x,y\\in D$ there exist $m\\leq 8 c^2$ and open balls $B_0,\\ldots,B_m\\subset D$ of radius $1\/(2c)$ such that $x\\in B_0$, $y\\in B_m$ and the distance between the centers of the balls $B_j$, $B_{j+1}$ is not greater than $1\/(4c)$ for $j=0,\\ldots,m-1$;\n\\item for any open ball $B\\subset\\mathbb{C}^n$ of radius not greater than $1\/c$, intersecting non-emptily with $\\pa D$, there exists a mapping $\\Phi\\in\\OO(\\overline{D},\\mathbb{C}^n)$ such that\n\\begin{enumerate}\n\\item for any $w\\in\\Phi(B\\cap\\partial D)$ there is a ball of radius $c$ containing $\\Phi(D)$ and tangent to $\\partial\\Phi(D)$ at $w$ (let us call it the ``exterior ball condition'' with a radius $c$);\n\\item $\\Phi$ is biholomorphic in a neighborhood of $\\ov B$ and $\\Phi^{-1}(\\Phi(B))=B$;\n\\item entries of all matrices $\\Phi'$ on $B\\cap\\ov D$ and $(\\Phi^{-1})'$ on $\\Phi(B\\cap\\overline{D})$ are bounded in modulus by $c$;\n\\item $\\dist(\\Phi(z),\\partial\\Phi(D))\\geq 1\/c$;\n\\end{enumerate}\n\\item the normal vector $\\nu_D$ is Lipschitz with a constant $2c$, that is $$|\\nu_D(a)-\\nu_D(b)|\\leq 2c|a-b|,\\ a,b\\in \\partial D;$$\n\\item the $\\eps$-hull of $D$, i.e. a domain $D_{\\eps}:=\\{w\\in\\mathbb C^n:\\dist (w,D)<\\eps\\}$, is strongly pseudoconvex for any $\\eps\\in (0,1\/c).$\n\\end{enumerate}\n\\end{df}\n\nRecall that the {\\it interior ball condition} with a radius $r>0$ means that for any point $a\\in\\pa D$ there is $a'\\in D$ and a ball $B_n(a',r)\\su D$ tangent to $\\pa D$ at $a$. Equivalently $$D=\\bigcup_{a'\\in D'}B_n(a',r)$$ for some set $D'\\su D$.\n\nIt may be shown that (2) and (5) may be expressed in terms of boundedness of the normal curvature, boundedness of a domain and the condition (3). This however lies beyond the scope of this paper and needs some very technical arguments so we omit the proof of this fact. The reasons why we decided to use (2) in such a form is its connection with the condition (3) (this allows us to simplify the proof in some places).\n\n\\begin{rem}\\label{con}\nNote that any convex domain satisfying conditions (1)-...-(4) of Definition~\\ref{30} satisfies conditions (5) and (6), as well.\n\nActually, it follows from (2) that for any $a\\in\\pa D$ there exists a ball $B_n(a',1\/c)\\su D$ tangent to $\\pa D$ at $a$. Then $$\\nu_D(a)=\\frac{a'-a}{|a'-a|}=c(a'-a).$$ Hence $$|\\nu_D(a)-\\nu_D(b)|=c|a'-a-b'+b|=c|a'-b'-(a-b)|\\leq c|a'-b'|+c|a-b|.$$ Since $D$ is convex, we have $|a'-b'|\\leq|a-b|$, which gives (5).\n\nThe condition (6) is also clear --- for any $\\eps>0$ an $\\eps$-hull of a strongly convex domain is strongly convex.\n\\end{rem}\n\n\\begin{rem}\nFor a convex domain $D$ the condition (3) of Definition \\ref{30} amounts to the condition (2).\n\nIndeed, for two points $x,y\\in D$ take two balls of radius $1\/(2c)$ containing them and contained in $D$. 
Then divide the interval between the centers of the balls into $[4c^2]+1$ equal parts and take balls of radius $1\/(2c)$ with centers at the points of the partition.\n\nNote also that if $D$ is strongly convex and satisfies the interior ball condition with a radius $1\/c$ and the exterior ball condition with a radius $c$, one can take $\\Phi:=\\id_{\\CC^n}$.\n\\end{rem}\n\n\n\\begin{rem}\\label{D(c),4}\nFor a strongly pseudoconvex domain $D$ and $c'>0$ and for any $z\\in D$ such that $\\dist(z,\\partial D)>1\/c'$ there exists $c=c(c')>0$ satisfying $(D,z)\\in\\mathcal{D}(c)$.\n\nIndeed, the conditions (1)-...-(3) and (5)-(6) are clear. Only (4) is non-trivial.\n\nThe construction of the mapping $\\Phi$ amounts to the construction of Forn\\ae ss peak functions. Actually, apply directly Proposition 1 from \\cite{For} to any boundary point of $\\partial D$ (obviously $D$ has a Stein neighborhood basis). This gives a covering of $\\partial D$ with a finite number of balls $B_j$, maps $\\Phi_j\\in\\OO(\\overline{D},\\mathbb{C}^n)$ and strongly convex $C^\\infty$-smooth domains $C_j$, $j=1,\\ldots, N$, such that\n\\begin{itemize}\\item $\\Phi_j(D)\\subset C_j$;\n\\item $\\Phi_j(\\ov D)\\subset\\ov C_j$;\n\\item $\\Phi_j(B_j\\setminus\\ov D)\\subset\\mathbb C^n\\setminus\\ov C_j$;\n\\item $\\Phi_j^{-1}(\\Phi_j(B_j))=B_j$;\n\\item $\\Phi_j|_{B_j}: B_j\\longrightarrow \\Phi_j(B_j)$ is biholomorphic.\n\\end{itemize} Therefore, one may choose $c>0$ such that every $C_j$ satisfies the exterior ball condition with $c$, i.e. for any $x\\in \\partial C_j$ there is a ball of radius $c$ containing $C_j$ and tangent to $\\partial C_j$ at $x$, every ball of radius $1\/c$ intersecting non-emptily with $\\pa D$ is contained in some $B_j$ (here one may use a standard argument invoking the Lebesgue number) and the conditions (c), (d) are also satisfied (with $\\Phi:=\\Phi_j$).\n\\end{rem}\n\n\nIn this section we use the words `uniform', `uniformly' if $(D,z)\\in \\mathcal D(c)$. This means that estimates will depend only on $c$ and will be independent of $D$ and $z$ if $(D,z)\\in\\mathcal{D}(c)$ and of $E$-mappings of $D$ mapping $0$ to $z$. Moreover, in what follows we assume that $D$ is a strongly linearly convex domain with real-analytic boundary.\n\n\\begin{prop}\\label{7}\nLet $f:(\\mathbb{D},0)\\longrightarrow(D,z)$ be an $E$-mapping. Then $$\\dist(f(\\zeta),\\partial D)\\leq C(1-|\\zeta|),\\ \\zeta\\in\\CDD$$ with $C>0$ uniform if $(D,z)\\in\\mathcal{D}(c)$.\n\\end{prop}\n\\begin{proof} There exists a uniform $C_1$ such that $$\\text{if }\\dist(w,\\partial D)\\geq 1\/c\\text{ then }k_D(w,z)\\varepsilon$ for some $\\varepsilon>0$ independent of $x$. Thus $$\\frac{\\delta(x)}{\\re x_1}=1+O(|x|)\\text{ as }x\\to 0\\text{ transversally. }$$ Consequently \\begin{equation}\\label{50}-\\re x_1\\leq 2\\dist(x,\\partial\\Phi(D))\\text{ as }x\\to 0\\text{ transversally. }\\end{equation}\n\nWe know that $t\\longmapsto f(t\\zeta_0)$ hits $\\partial D$ transversally. Therefore, $t\\longmapsto h(t\\zeta_0)$ hits $\\partial \\Phi(D)$ transversally, as well. 
Indeed, we have \\begin{multline}\\label{hf}\\left\\langle\\left.\\frac{d}{dt}h(t\\zeta_0)\\right|_{t=1},\\nu_{\\Phi(D)}(h(\\zeta_0))\\right\\rangle=\\left\\langle \\Phi'(0)f'(\\zeta_0)\\zeta_0,\\frac{(\\Phi^{-1})'(0)^*\\nabla r(0)}{|(\\Phi^{-1})'(0)^*\\nabla r(0)|}\\right\\rangle=\\\\=\\frac{\\langle\\zeta_0 f'(\\zeta_0),\\nabla r(0)\\rangle}{|(\\Phi'(0)^{-1})^*\\overline{\\nabla r(0)}|}=\\frac{\\langle\\zeta_0 f'(\\zeta_0),\\nu_D(f(\\zeta_0))|\\nabla r(0)|\\rangle}{|(\\Phi'(0)^{-1})^*\\overline{\\nabla r(0)}|}.\n\\end{multline}\nwhere $r$ is a defining function of $D$. In particular,\n\\begin{multline*} \\re \\left\\langle\\left.\\frac{d}{dt}h(t\\zeta_0)\\right|_{t=1},\\nu_{\\Phi(D)}(h(\\zeta_0))\\right\\rangle=\\re \\frac{\\langle\\zeta_0 f'(\\zeta_0),\\nu_D(f(\\zeta_0))|\\nabla r(0)|\\rangle}{|(\\Phi'(0)^{-1})^*\\overline{\\nabla r(0)}|}=\\\\=\\frac{\\rho(\\zeta_0)^{-1}|\\nabla r(0)|}{|(\\Phi'(0)^{-1})^*\\overline{\\nabla r(0)}|}\\neq 0.\\end{multline*} This proves that $t\\longmapsto h(t\\zeta_0)$ hits $\\partial\\Phi(D)$ transversally.\n\n\nConsequently, we may put $x=h(t\\zeta_0)$ into \\eqref{50} to get \\begin{equation}\\label{hf1}\\frac{-2\\re h_1(t\\zeta_0)}{1-|t\\zeta_0|^2}\\leq\\frac{4\\dist(h(t\\zeta_0),\\partial\\Phi(D))}\n{1-|t\\zeta_0|^2},\\ t\\to 1.\\end{equation}\nBut $\\Phi$ is a biholomorphism near $0$, so \\begin{equation}\\label{nfr}\\frac{4\\dist(h(t\\zeta_0),\\partial\\Phi(D))}{1-|t\\zeta_0|^2}\\leq C_3\\frac{\\dist(f(t\\zeta_0),\\partial D)}{1-|t\\zeta_0|},\\ t\\to 1,\\end{equation} where $C_3$ is a uniform constant depending only on $c$ (thanks to the condition (4)(c) of Definition~\\ref{30}). By Proposition \\ref{7}, the term on the right side of~\\eqref{nfr} does not exceed some uniform constant.\n\nIt follows from \\eqref{hf} that \\begin{multline*}\\rho(\\zeta_0)^{-1}=|\\langle f'(\\zeta_0)\\zeta_0,\\nu_D(f(\\zeta_0))\\rangle|\\leq C_4|\\langle h'(\\zeta_0), \\nu_{\\Phi(D)}(h(\\zeta_0))\\rangle|=\\\\=C_4|h_1'(\\zeta_0)|=\\lim_{t\\to 1}C_4|h_1'(t\\zeta_0)|\\end{multline*} with a uniform $C_4$ (here we use the condition (4)(c) of Definition~\\ref{30} again).\nCombining \\eqref{schh1}, \\eqref{hf1} and \\eqref{nfr} we get the upper estimate for $\\rho(\\zeta_0)^{-1}.$\n\nNow we are proving the lower estimate. Let $r$ be the signed boundary distance to $\\partial D$. For $\\varepsilon=1\/c$ the function $$\\varrho(w):=-\\log(\\varepsilon-r(w))+\\log\\varepsilon,\\ w\\in\nD_\\varepsilon,$$ where $D_\\varepsilon$ is an $\\varepsilon$-hull of $D$, is plurisubharmonic and defining for $D$. Indeed, we have $$-\\log(\\varepsilon-r(w))=-\\log\\dist(w,\\partial D_\\varepsilon),\\ w\\in D_\\varepsilon$$ and $D_\\varepsilon$ is pseudoconvex.\n\nTherefore, a function $$v:=\\varrho\\circ f:\\overline{\\mathbb{D}}\\longrightarrow(-\\infty,0]$$ is subharmonic on $\\DD$. Moreover, since $f$ maps $\\TT$ in $\\partial D$ we infer that $v=0$ on $\\TT$. Moreover, since $|f(\\lambda)-z|0$ such that $$|\\rho(\\zeta_1)-\\rho(\\zeta_2)|\\leq C\\sqrt{|\\zeta_1-\\zeta_2|},\\ \\zeta_1,\\zeta_2\\in\\TT,\\ |\\zeta_1-\\zeta_2|0$. There exists a function $\\psi\\in\\cC^1(\\TT,[0,1])$ such that $\\psi=1$ on $\\TT\\cap B_n(\\zeta_1,2C_1)$ and $\\psi=0$ on $\\TT\\setminus B_n(\\zeta_1,3C_1)$. 
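(One concrete, uniformly chosen cut-off is $\\psi(\\zeta):=\\chi(|\\zeta-\\zeta_1|)$, where $\\chi(s):=1$ for $s\\leq 2C_1$, $\\chi(s):=\\cos^2\\left(\\frac{\\pi(s-2C_1)}{2C_1}\\right)$ for $2C_1\\leq s\\leq 3C_1$ and $\\chi(s):=0$ for $s\\geq 3C_1$; since $\\chi'$ vanishes at $s=2C_1$ and $s=3C_1$, we get $\\psi\\in\\cC^1(\\TT,[0,1])$ and its $\\cC^1$-norm is controlled by a constant depending only on $C_1$.) 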
Then the function $\\varphi:\\TT\\longrightarrow\\CC$ defined by $$\\varphi:=(\\overline{\\nu_{D,1}\\circ f}-1)\\psi+1$$ satisfies\n\\begin{enumerate}\n\\item $\\varphi(\\zeta)=\\overline{\\nu_{D,1}(f(\\zeta))}$, $\\zeta\\in\\TT\\cap B_n(\\zeta_1,2C_1)$;\n\\item $|\\varphi(\\zeta)-1|<1\/2$, $\\zeta\\in\\TT$;\n\\item $\\varphi$ is uniformly $1\/2$-H\\\"older continuous on $\\TT$, i.e. it is $1\/2$-H\\\"older continuous with a uniform constant (remember that $\\psi$ was chosen uniformly).\n\\end{enumerate}\n\nFirst observe that $\\log\\varphi$ is well-defined. Using the properties listed above we deduce that $\\log\\varphi$ and $\\im\\log\\varphi$ are uniformly $1\/2$-H\\\"older continuous on $\\TT$, as well. The function $\\im\\log\\varphi$ can be extended continuously to a function $v:\\CDD\\longrightarrow\\RR$, harmonic in $\\DD$. There is a function $h\\in\\mathcal O(\\DD)$ such that $v=\\im h$ in $\\DD$. Taking $h-\\re h(0)$ instead of $h$, one can assume that $\\re h(0)=0$. By Theorem \\ref{priv} applied to $ih$, we get that the function $h$ extends continuously on $\\CDD$ and $h$ is uniformly $1\/2$-H\\\"older continuous in $\\CDD$. Hence the function $u:=\\re h:\\CDD\\longrightarrow\\RR$ is uniformly $1\/2$-H\\\"older continuous in $\\CDD$ with a uniform constant $C_2$. Furthermore, $u$ is uniformly bounded in $\\CDD$, since $$|u(\\zeta)|=|u(\\zeta)-u(0)|\\leq C_2\\sqrt{|\\zeta|},\\ \\zeta\\in\\CDD.$$\n\nLet $g(\\zeta):=\\wi{f}_1(\\zeta)e^{-h(\\zeta)}$ and $G(\\zeta):=g(\\zeta)\/\\zeta$. Then $g\\in\\mathcal O(\\DD)\\cap\\mathcal C(\\overline{\\DD})$ and $G\\in\\mathcal O(\\DD_*)\\cap\\mathcal C((\\overline{\\DD})_*)$. Note that for $\\zeta\\in\\TT$ $$|g(\\zeta)|=|\\zeta\n\\rho(\\zeta)\\overline{\\nu_{D,1}(f(\\zeta))}e^{-h(\\zeta)}|\\leq\\rho(\\zeta)e^{-u(\\zeta)},$$ which, combined with\nProposition \\ref{9}, the uniform boundedness of $u$ and the maximum principle, gives uniform boundedness of $g$ in $\\CDD$. The function $G$ is uniformly bounded in $\\overline{\\DD}\\cap B_n(\\zeta_1,2C_1)$. Moreover, for $\\zeta\\in\\TT\\cap B_n(\\zeta_1,2C_1)$ \\begin{eqnarray*} G(\\zeta)&=&\\rho(\\zeta)\\overline{\\nu_{D,1}(f(\\zeta))}e^{-u(\\zeta)-i\\im\\log \\varphi(\\zeta)}=\\\\&=&\\rho(\\zeta)\\overline{\\nu_{D,1}(f(\\zeta))}e^{-u(\\zeta)+\\re\\log\\varphi(\\zeta)}e^{-\\log\\varphi(\\zeta)}\n=\\rho(\\zeta)e^{-u(\\zeta)+\\re\\log\\varphi(\\zeta)}\\in\\mathbb{R}.\\end{eqnarray*} By the Reflection Principle one can extend $G$ holomorphically past $\\TT\\cap B_n(\\zeta_1,2C_1)$ to a function (denoted by the same letter) uniformly bounded in $B_n(\\zeta_1,2C_3)$, where the constant $C_3>0$ is uniform. Hence, from the Cauchy formula, $G$ is uniformly Lipschitz continuous in $B_n(\\zeta_1,C_3)$, consequently uniformly $1\/2$-H\\\"older continuous in $B_n(\\zeta_1,C_3)$.\n\nFinally, the functions $G$, $h$, $\\nu_{D,1}\\circ f$ are uniformly $1\/2$-H\\\"older continuous on $\\TT\\cap B_n(\\zeta_1,C_3)$, $|\\nu_{D,1}\\circ f|>1\/2$ on $\\TT\\cap B_n(\\zeta_1,C_3)$, so the function $\\rho=Ge^h\/\\overline{\\nu_{D,1}\\circ f}$ is uniformly $1\/2$-H\\\"older continuous on $\\TT\\cap B_n(\\zeta_1,C_3)$.\n\\end{proof}\n\n\n\\begin{prop}\\label{10b}\nLet $f:(\\DD,0)\\longrightarrow (D,z)$ be an $E$-mapping.\nThen $$|\\wi{f}(\\zeta_1)-\\wi{f}(\\zeta_2)|\\leq C\\sqrt{|\\zeta_1-\\zeta_2|},\\ \\zeta_1,\\zeta_2\\in\\overline{\\DD},$$ where\n$C$ is uniform if $(D,z)\\in\\mathcal{D}(c)$.\n\n\\end{prop}\n\\begin{proof}\nBy Propositions \\ref{8} and \\ref{10a} we have the desired inequality for $\\zeta_1,\\zeta_2\\in\\TT$. 
Theorem \\ref{lit2} finishes the proof.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Openness of $E$-mappings' set}\\label{27}\nWe shall show that if we slightly perturb a domain $D$ equipped with an $E$-mapping, then the perturbed domain also admits an $E$-mapping, close to the given one.\n\n\\subsection{Preliminary results}\n\n\\begin{propp}\\label{11}\nLet $f:\\mathbb{D}\\longrightarrow D$ be an $E$-mapping. Then there exist domains $G,\\wi D,\\wi G\\subset\\CC^n$ and a biholomorphism $\\Phi:\\wi D\\longrightarrow\\wi G$ such that\n\\begin{enumerate}\n\\item $\\wi D,\\wi G$ are neighborhoods of $\\overline D,\\overline G$ respectively;\n\\item $\\Phi(D)=G$;\n\\item $g(\\zeta):=\\Phi(f(\\zeta))=(\\zeta,0,\\ldots,0),\\ \\zeta\\in\\CDD$;\n\\item $\\nu_G(g(\\zeta))=(\\zeta,0,\\ldots,0),\\ \\zeta\\in\\TT$;\n\\item for any $\\zeta\\in\\TT$, a point $g(\\zeta)$ is a point of the strong linear convexity of $G$.\n\\end{enumerate}\n\\end{propp}\n\\begin{proof}\nLet $U,V$ be the sets from Proposition \\ref{34}. We claim that after a linear change of coordinates one can assume that $\\widetilde{f}_1,\\widetilde{f}_2$ do not have common zeroes in $V$.\n\nSince $ f'\\bullet\\widetilde{f}=1$, at least one of the functions $\\wi f_1,\\ldots,\\wi f_n$, say $\\wi f_1$, is not identically equal to $0$. Let $\\lambda_1,\\ldots,\\lambda_m$ be all zeroes of $\\wi f_1$ in $V$. We may find $\\alpha\\in\\CC^n$ such that $$(\\alpha_1\\wi f_1+\\ldots+\\alpha_n\\wi f_n)(\\lambda_j)\\neq 0,\\ j=1,\\ldots,m.$$ Otherwise, for any $\\alpha\\in\\CC^n$ there would exist $j\\in\\{1,\\ldots,m\\}$ such that $\\alpha\\bullet\\wi f(\\lambda_j)=0$, hence $$\\CC^n=\\bigcup_{j=1}^m\\{\\alpha\\in\\CC^n:\\ \\alpha\\bullet\\wi f(\\lambda_j)=0\\}.$$ The sets $\\{\\alpha\\in\\CC^n:\\alpha \\bullet \\wi f(\\lambda_j)=0\\}$, $j=1,\\ldots,m$, are $(n-1)$-dimensional complex hyperplanes, so their finite union cannot be the whole space $\\CC^n$.\n\nOf course, at least one of the numbers $\\alpha_2,\\ldots,\\alpha_n$, say $\\alpha_2$, is non-zero. Let\n$$A:=\\left[\\begin{matrix}\n1 & 0 & 0 & \\cdots & 0\\\\\n\\alpha_1 & \\alpha_2 & \\alpha_3 &\\cdots & \\alpha_n\\\\\n0 & 0 & 1 & \\cdots & 0\\\\\n\\vdots & \\vdots & \\vdots &\\ddots & \\vdots \\\\\n0 & 0 & 0 & \\cdots & 1\n\\end{matrix}\\right],\\quad B:=(A^T)^{-1}.$$ We claim that $B$ is the change of coordinates we are looking for. If $r$ is a defining function of $D$ then $r\\circ B^{-1}$ is a defining function of $B_n(D)$, so $B_n(D)$ is a bounded strongly linearly convex domain with real analytic boundary. Let us check that $Bf$ is an $E$-mapping of $B_n(D)$ with associated mappings \\begin{equation}\\label{56}A\\wi f\\in\\OO(\\CDD)\\text{\\ \\ and\\ \\ }\\rho\\frac{|A\\overline{\\nabla r\\circ f}|}{|\\nabla r\\circ f|}\\in\\CLW(\\TT).\\end{equation} The conditions (1) and (2) of Definition~\\ref{21} are clear. 
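Let us also record that $A$ is invertible (its determinant equals $\\alpha_2\\neq 0$) and that $B^TA=A^{-1}A$ is the identity matrix, so the normalization of $\\wi f$ is preserved: $$(Bf)'\\bullet A\\wi f=(Bf')^TA\\wi f=(f')^TB^TA\\wi f=f'\\bullet\\wi f\\equiv 1.$$ 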
For $\\zeta\\in\\TT$ we have \\begin{equation}\\label{57}\\overline{\\nu_{B_n(D)}(Bf(\\zeta))}=\\frac{\\overline{\\nabla(r\\circ B^{-1})(Bf(\\zeta))}}{|\\nabla(r\\circ B^{-1})(Bf(\\zeta))|}=\\frac{(B^{-1})^T\\overline{\\nabla r(f(\\zeta))}}{|(B^{-1})^T\\overline{\\nabla r(f(\\zeta))}|}=\\frac{A\\overline{\\nabla r(f(\\zeta))}}{|A\\overline{\\nabla r(f(\\zeta))}|},\\end{equation} so\n\\begin{equation}\\label{58}\\zeta\\rho(\\zeta)\\frac{|A\\overline{\\nabla r(f(\\zeta))}|}{|\\nabla r(f(\\zeta))|}\\overline{\\nu_{B_n(D)}(Bf(\\zeta))}=\\zeta\\rho(\\zeta)A\\overline{\\nu_D(f(\\zeta))}=A\\wi f(\\zeta).\\end{equation} Moreover, for $\\zeta\\in\\TT$, $z\\in D$ \\begin{multline*}\\langle Bz-Bf(\\zeta), \\nu_{B_n(D)}(Bf(\\zeta))\\rangle=\\overline{\\nu_{B_n(D)}(Bf(\\zeta))}^T(Bz-Bf(\\zeta))=\\\\=\\frac{\\overline{\\nabla r(f(\\zeta))}^TB^{-1}B_n(z-f(\\zeta))}{|(B^{-1})^T\\overline{\\nabla r(f(\\zeta))}|}=\\frac{|\\nabla r(f(\\zeta))|}{|(B^{-1})^T\\overline{\\nabla r(f(\\zeta))}|}\\overline{\\nu_D(f(\\zeta))}^T(z-f(\\zeta))=\\\\=\\frac{|\\nabla r(f(\\zeta))|}{|(B^{-1})^T\\overline{\\nabla r(f(\\zeta))}|}\\langle z-f(\\zeta), \\nu_D(f(\\zeta))\\rangle.\n\\end{multline*}\nTherefore, $B$ is a desired linear change of coordinates, as claimed.\n\nIf necessary, we shrink the sets $U,V$ associated with $f$ to sets associated with $Bf$. There exist holomorphic mappings $h_1,h_2:V\\longrightarrow\\mathbb{C}$ such that\n$$h_1\\widetilde{f}_1+h_2\\widetilde{f}_2\\equiv 1\\text{ in }V.$$ Generally, it is a well-known fact for functions on pseudoconvex domains, however in this case it may be shown quite elementarily. Indeed, if $\\widetilde{f}_1\\equiv 0$ or $\\widetilde{f}_2\\equiv 0$ then it is obvious. In the opposite case, let $\\widetilde{f}_j=F_jP_j$, $j=1,2$, where $F_j$ are holomorphic, non-zero in $V$ and $P_j$ are polynomials with all (finitely many) zeroes in $V$. Then $P_j$ are relatively prime, so there are polynomials $Q_j$, $j=1,2$, such that $$Q_1P_1+Q_2P_2\\equiv 1.$$ Hence $$\\frac{Q_1}{F_1}\\widetilde{f}_1+\\frac{Q_2}{F_2}\\widetilde{f}_2\\equiv 1\\ \\text{ in }V.$$\n\nConsider the mapping $\\Psi:V\\times\\mathbb{C}^{n-1}\\longrightarrow\\mathbb{C}^n$ given by\n\\begin{equation}\\label{et2}\n\\Psi_1(Z):=f_1(Z_1)-Z_2\\widetilde{f}_2(Z_1)-h_1(Z_1)\n\\sum_{j=3}^{n}Z_j\\widetilde{f}_j(Z_1),\n\\end{equation}\n\\begin{equation}\\label{et3}\n\\Psi_2(Z):=f_2(Z_1)+Z_2\\widetilde{f}_1(Z_1)-h_2(Z_1)\n\\sum_{j=3}^{n}Z_j\\widetilde{f}_j(Z_1),\n\\end{equation}\n\\begin{equation}\\label{et4}\n\\Psi_j(Z):=f_j(Z_1)+Z_j,\\ j=3,\\ldots,n.\n\\end{equation}\n\nWe claim that $\\Psi$ is biholomorphic in $\\Psi^{-1}(U)$. First of all observe that $\\Psi^{-1}(\\{z\\})\\neq\\emptyset$ for any $z\\in U$. Indeed, by Proposition \\ref{34} there exists (exactly one) $Z_1\\in V$ such that $$(z-f(Z_1))\\bullet\\widetilde{f}(Z_1)=0.$$ The numbers $Z_j\\in\\CC$, $j=3,\\ldots,n$ are determined uniquely by the equations $$Z_j=z_j-f_j(Z_1).$$ At least one of the numbers $\\wi f_1(Z_1),\\wi f_2(Z_1)$, say $\\wi f_1(Z_1)$, is non-zero. Let $$Z_2:=\\frac{z_2-f_2(Z_1)+h_2(Z_1)\\sum_{j=3}^{n}Z_j\\widetilde{f}_j(Z_1)}{\\wi f_1(Z_1)}.$$ Then we easily check that the equality $$z_1=f_1(Z_1)-Z_2\\widetilde{f}_2(Z_1)-h_1(Z_1)\n\\sum_{j=3}^{n}Z_j\\widetilde{f}_j(Z_1)$$ is equivalent to $(z-f(Z_1))\\bullet\\widetilde{f}(Z_1)=0$, which is true.\n\nTo finish the proof of biholomorphicity of $\\Psi$ in $\\Psi^{-1}(U)$ it suffices to check that $\\Psi$ is injective in $\\Psi^{-1}(U)$. Let us take $Z,W$ such that $\\Psi(Z)=\\Psi(W)=z\\in U$. 
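Observe first that, directly from \\eqref{et2}, \\eqref{et3}, \\eqref{et4} and from the identity $h_1\\widetilde{f}_1+h_2\\widetilde{f}_2\\equiv 1$, for every $Z\\in V\\times\\CC^{n-1}$ we have $$(\\Psi(Z)-f(Z_1))\\bullet\\widetilde{f}(Z_1)=\\left(1-h_1(Z_1)\\widetilde{f}_1(Z_1)-h_2(Z_1)\\widetilde{f}_2(Z_1)\\right)\\sum_{j=3}^{n}Z_j\\widetilde{f}_j(Z_1)=0.$$ 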
By a direct computation both $\\zeta=Z_1\\in V$ and $\\zeta=W_1\\in V$ solve the equation\n$$(z-f(\\zeta))\\bullet\\widetilde{f}(\\zeta)=0.$$ From Proposition \\ref{34} we infer that it has exactly one solution. Hence $Z_1=W_1$. By \\eqref{et4} we have $Z_j=W_j$ for $j=3,\\ldots,n$. Finally $Z_2=W_2$ follows from\none of the equations \\eqref{et2}, \\eqref{et3}. Let $G:=\\Psi^{-1}(D)$, $\\wi D:=U$, $\\wi G:=\\Psi^{-1}(U)$, $\\Phi:=\\Psi^{-1}$.\n\nNow we are proving that $\\Phi$ has desired properties. We have $$\\Psi_j(\\zeta,0,\\ldots,0)=f_j(\\zeta),\\ j=1,\\ldots,n,$$ so $\\Phi(f(\\zeta))=(\\zeta,0,\\ldots,0)$, $\\zeta\\in\\CDD$. Put $g(\\zeta):=\\Phi(f(\\zeta))$, $\\zeta\\in\\CDD$. Note that the entries of the matrix $\\Psi'(g(\\zeta))$ are $$\\frac{\\partial\\Psi_1}{\\partial Z_1}(g(\\zeta))=f_1'(\\zeta),\\ \\frac{\\partial\\Psi_1}{\\partial Z_2}(g(\\zeta))=-\\widetilde{f}_2(\\zeta),\\ \\frac{\\partial\\Psi_1}{\\partial Z_j}(g(\\zeta))=-h_1(\\zeta)\\widetilde{f}_j(\\zeta),\\ j\\geq 3,$$$$\\frac{\\partial\\Psi_2}{\\partial Z_1}(g(\\zeta))=f_2'(\\zeta),\\ \\frac{\\partial\\Psi_2}{\\partial Z_2}(g(\\zeta))=\\widetilde{f}_1(\\zeta),\\ \\frac{\\partial\\Psi_2}{\\partial Z_j}(g(\\zeta))=-h_2(\\zeta)\\widetilde{f}_j(\\zeta),\\ j\\geq 3,$$$$\\frac{\\partial\\Psi_k}{\\partial Z_1}(g(\\zeta))=f_k'(\\zeta),\\ \\frac{\\partial\\Psi_k}{\\partial Z_2}(g(\\zeta))=0,\\ \\frac{\\partial\\Psi_k}{\\partial Z_j}(g(\\zeta))=\\delta^{k}_{j},\\ j,k\\geq 3.$$ Thus $\\Psi '(g(\\zeta))^T\\wi f(\\zeta)=(1,0,\\ldots,0)$, $\\zeta\\in\\CDD$ (since $f'\\bullet\\wi f=1$). Let us take a defining function $r$ of $D$. Then $r\\circ\\Psi$ is a defining function of $G$. Therefore, \\begin{multline*}\\nu_G(g(\\zeta))=\\frac{\\nabla(r\\circ\\Psi)(g(\\zeta))}{|\\nabla(r\\circ\\Psi)(g(\\zeta))|}=\n\\frac{\\overline{\\Psi'(g(\\zeta))}^T\\nabla r(f(\\zeta))}{|\\overline{\\Psi'(g(\\zeta))}^T\\nabla r(f(\\zeta))|}=\\\\=\\frac{\\overline{\\Psi'(g(\\zeta))}^T\\ov{\\frac{\\wi f(\\zeta)}{\\zeta\\rho(\\zeta)}}|\\nabla r(f(\\zeta))|}{\\left|\\overline{\\Psi'(g(\\zeta))}^T\\ov{\\frac{\\wi f(\\zeta)}{\\zeta\\rho(\\zeta)}}|\\nabla r(f(\\zeta))|\\right|}=g(\\zeta),\\ \\zeta\\in\\TT.\\end{multline*}\n\nIt remains to prove the fifth condition. By Definition \\ref{29}(2) we have to show that \\begin{equation}\\label{sgf}\\sum_{j,k=1}^n\\frac{\\partial^2(r\\circ\\Psi)}{\\partial z_j\\partial\\overline{z}_k}(g(\\zeta))X_{j}\\overline{X}_{k}>\\left|\\sum_{j,k=1}^n\\frac{\\partial^2(r\\circ\\Psi)}{\\partial z_j\\partial z_k}(g(\\zeta))X_{j}X_{k}\\right|\\end{equation} for $\\zeta\\in\\TT$ and $X\\in(\\CC^{n})_*$ with\n$$\\sum_{j=1}^n\\frac{\\partial(r\\circ\\Psi)}{\\partial z_j}(g(\\zeta))X_{j}=0,$$ i.e. $X_1=0$. We have $$\\sum_{j,k=1}^n\\frac{\\partial^2(r\\circ\\Psi)}{\\partial z_j\\partial\\overline{z}_k}(g(\\zeta))X_{j}\\overline{X}_{k}=\\sum_{j,k,s,t=1}^n\\frac{\\partial^2 r}{\\partial z_s\\partial\\overline{z}_t}(f(\\zeta))\\frac{\\partial\\Psi_s}{\\partial z_j}(g(\\zeta))\\overline{\\frac{\\partial\\Psi_t}{\\partial z_k}(g(\\zeta))}X_{j}\\overline{X}_{k}=$$$$=\\sum_{s,t=1}^n\\frac{\\partial^2 r}{\\partial z_s\\partial\\overline{z}_t}(f(\\zeta))Y_{s}\\overline{Y}_{t},$$ where $$Y:=\\Psi'(g(\\zeta))X.$$ Note that $Y\\neq 0$. 
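(Indeed, $\\Psi$ is biholomorphic near $g(\\zeta)$, so the matrix $\\Psi'(g(\\zeta))$ is invertible and $X\\neq 0$ yields $Y\\neq 0$. Let us also explain why the above constraint reduces to $X_1=0$: on $\\TT$ the row vector $\\left(\\frac{\\partial r}{\\partial z_1}(f(\\zeta)),\\ldots,\\frac{\\partial r}{\\partial z_n}(f(\\zeta))\\right)$ equals $\\frac{c(\\zeta)}{\\zeta\\rho(\\zeta)}\\widetilde{f}(\\zeta)^T$ with some $c(\\zeta)>0$, whence $$\\sum_{j=1}^n\\frac{\\partial(r\\circ\\Psi)}{\\partial z_j}(g(\\zeta))X_{j}=\\frac{c(\\zeta)}{\\zeta\\rho(\\zeta)}\\left(\\Psi'(g(\\zeta))^T\\widetilde{f}(\\zeta)\\right)^TX=\\frac{c(\\zeta)}{\\zeta\\rho(\\zeta)}X_1.$$) 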
Additionally $$\\sum_{s=1}^n\\frac{\\partial r}{\\partial z_s}(f(\\zeta))Y_{s}=\\sum_{j,s=1}^n\\frac{\\partial r}{\\partial z_s}(f(\\zeta))\\frac{\\partial\\Psi_s}{\\partial z_j}(g(\\zeta))X_j=\\sum_{j=1}^n\\frac{\\partial(r\\circ\\Psi)}{\\partial z_j}(g(\\zeta))X_{j}=0.$$ Therefore, by the strong linear convexity of $D$ at $f(\\zeta)$ $$\\sum_{s,t=1}^n\\frac{\\partial^2 r}{\\partial z_s\\partial\\overline{z}_t}(f(\\zeta))Y_{s}\\overline{Y}_{t}>\\left|\\sum_{s,t=1}^n\\frac{\\partial^2 r}{\\partial z_s\\partial z_t}(f(\\zeta))Y_{s}Y_{t}\\right|.$$ To finish the proof observe that $$\\left|\\sum_{j,k=1}^n\\frac{\\partial^2(r\\circ\\Psi)}{\\partial z_j\\partial z_k}(g(\\zeta))X_{j}X_{k}\\right|=\\left|\\sum_{j,k,s,t=1}^n\\frac{\\partial^2 r}{\\partial z_s\\partial z_t}(f(\\zeta))\\frac{\\partial\\Psi_s}{\\partial z_j}(g(\\zeta))\\frac{\\partial\\Psi_t}{\\partial z_k}(g(\\zeta))X_{j}X_{k}+\\right.$$$$\\left.+\\sum_{j,k,s=1}^n\\frac{\\partial r}{\\partial z_s}(f(\\zeta))\\frac{\\partial^2\\Psi_s}{\\partial z_j\\partial z_k}(g(\\zeta))X_{j}X_{k}\\right|=$$$$=\\left|\\sum_{s,t=1}^n\\frac{\\partial^2 r}{\\partial z_s\\partial z_t}(f(\\zeta))Y_{s}Y_{t}+\\sum_{j,k=2}^n\\sum_{s=1}^n\\frac{\\partial r}{\\partial z_s}(f(\\zeta))\\frac{\\partial^2\\Psi_s}{\\partial z_j\\partial z_k}(g(\\zeta))X_{j}X_{k}\\right|$$ and $$\\frac{\\partial^2\\Psi_s}{\\partial z_j\\partial z_k}(g(\\zeta))=0,\\ j,k\\geq 2,\\ s\\geq 1,$$ which gives \\eqref{sgf}.\n\\end{proof}\n\n\\begin{remm}\\label{rem:theta}\nLet $D$ be a bounded domain in $\\mathbb C^n$ and let $f:\\DD\\longrightarrow D$ be a (weak) stationary mapping such that $\\partial D$ is real analytic in a neighborhood of $f(\\TT)$. Assume moreover that there are a neighborhood $U$ of $f(\\CDD)$ and a mapping $\\Theta:U\\longrightarrow\\CC^n$ biholomorphic onto its image and the set $D\\cap U$ is connected. Then $\\Theta\\circ f$ is a (weak) stationary mapping of $G:=\\Theta(D\\cap U)$.\n\nIn particular, if $U_1$, $U_2$ are neighborhoods of the closures of domains $D_1$, $D_2$ with real analytic boundaries and $\\Theta:U_1\\longrightarrow U_2$ is a biholomorphism such that $\\Theta(D_1)=D_2$, then $\\Theta$ maps (weak) stationary mappings of $D_1$ onto (weak) stationary mappings of $D_2$.\n\\end{remm}\n\\begin{proof}\nActually, it is clear that two first conditions of the definition of (weak) stationary mappings are preserved by $\\Theta$. To show the third one we proceed similarly as in the equations \\eqref{56}, \\eqref{57}, \\eqref{58}. Let $f:\\DD\\longrightarrow D $ be a (weak) stationary mapping. The candidates for the mappings in condition (3) (resp. 
(3')) of Definition~\\ref{21} for $\\Theta\\circ f$ in the domain $G$ are $$((\\Theta'\\circ f)^{-1})^T\\wi f\\text{\\ \\ and\\ \\ }\\rho\\frac{|((\\Theta'\\circ f)^{-1})^T\\overline{\\nabla r\\circ f}|}{|\\nabla r\\circ f|}.$$ Indeed, for $\\zeta\\in\\TT$ \\begin{multline*}\\overline{\\nu_{G}(\\Theta(f(\\zeta)))}=\n\\frac{\\overline{\\nabla(r\\circ\\Theta^{-1})(\\Theta(f(\\zeta)))}}{|\\nabla(r\\circ\\Theta^{-1})(\\Theta(f(\\zeta)))|}=\\frac{[(\\Theta^{-1})'(\\Theta(f(\\zeta)))]^T\\overline{\\nabla r(f(\\zeta))}}{|[(\\Theta^{-1})'(\\Theta(f(\\zeta)))]^T\\overline{\\nabla r(f(\\zeta))}|}=\\\\\n=\\frac{(\\Theta'(f(\\zeta))^{-1})^T\\overline{\\nabla r(f(\\zeta))}}{|(\\Theta'(f(\\zeta))^{-1})^T\\overline{\\nabla r(f(\\zeta))}|},\n\\end{multline*}\nhence\n\\begin{multline*}\\zeta\\rho(\\zeta)\\frac{|(\\Theta'(f(\\zeta))^{-1})^T\\overline{\\nabla r(f(\\zeta))}|}{|\\nabla r(f(\\zeta))|}\\overline{\\nu_{G}(\\Theta(f(\\zeta)))}=\\\\\n=\\zeta\\rho(\\zeta)(\\Theta'(f(\\zeta))^{-1})^T\\overline{\\nu_{D}(f(\\zeta))}=\n(\\Theta'(f(\\zeta))^{-1})^T\\wi f(\\zeta).\n\\end{multline*}\n\\end{proof}\n\n\n\\subsection{Situation (\\dag)}\\label{dag}\nConsider the following situation, denoted by (\\dag) (with data $D_0$ and $U_0$):\n\\begin{itemize}\n\\item $D_0$ is a bounded domain in $\\CC^n$, $n\\geq 2$;\n\\item $f_0:\\CDD\\ni\\zeta\\longmapsto(\\zeta,0,\\ldots,0)\\in\\ov D_0$, $\\zeta\\in\\CDD$;\n\\item $f_0(\\DD)\\subset D_0$;\n\\item $f_0(\\TT)\\subset\\partial D_0$;\n\\item $\\nu_{D_0}(f_0(\\zeta))=(\\zeta,0,\\ldots,0)$, $\\zeta\\in\\TT$;\n\\item for any $\\zeta\\in\\TT$, a point $f_0(\\zeta)$ is a point of the strong linear convexity of $D_0$;\n\\item $\\partial D_0$ is real analytic in a neighborhood $U_0$ of $f_0(\\TT)$ with a function $r_0$;\n\\item $|\\nabla r_0|=1$ on $f_0(\\TT)$ (in particular, $r_{0z}(f_0(\\zeta))=(\\ov\\zeta\/2,0,\\ldots,0)$, $\\zeta\\in\\TT$).\n\\end{itemize}\n\nSince $r_0$ is real analytic on $U_0\\su\\RR^{2n}$, it extends in a natural way to a holomorphic function in a neighborhood $U_0^\\CC\\su\\mb{C}^{2n}$ of $U_0$. Without loss of generality we may assume that $r_0$ is bounded on $U_0^\\CC$. Set $$X_0=X_0(U_0,U_0^{\\mathbb C}):=\\{r\\in\\mc{O}(U_0^\\CC):\\text{$r(U_0)\\su\\mb{R}$ and $r$ is bounded}\\},$$ which equipped with the sup-norm is a (real) Banach space.\n\n\\begin{remm} Lempert considered the case when $U_0$ is a neighborhood of a boundary of a bounded domain $D_0$ with real analytic boundary. We shall need more general results to prove the `localization property'.\n\\end{remm}\n\n\\subsection{General lemmas}\\label{General lemmas}\nWe keep the notation from Subsection \\eqref{dag} and assume Situation (\\dag).\n\nLet us introduce some additional objects we shall be dealing with and let us prove more general lemmas (its generality will be useful in the next section).\n\nConsider the Sobolev space $W^{2,2}(\\TT)=W^{2,2}(\\TT,\\CC^m)$ of functions $f:\\TT\\longrightarrow\\CC^m$, whose first two derivatives (in the sense of distribution) are in $L^2(\\TT)$. The $W^{2,2}$-norm is denoted by $\\|\\cdot\\|_W$. For the basic properties of $W^{2,2}(\\TT)$ see Appendix.\n\nPut $$B:=\\{f\\in W^{2,2}(\\TT,\\CC^n):f\\text{ extends holomorphically on $\\mb{D}$ and $f(0)=0$}\\},$$$$B_0:=\\{f\\in B:f(\\TT)\\su U_0\\},\\quad B^*:=\\{\\overline{f}:f\\in B\\},$$$$Q:=\\{q\\in W^{2,2}(\\TT,\\CC):q(\\TT)\\su\\RR\\},\\quad Q_0:=\\{q\\in Q:q(1)=0\\}.$$\n\nIt is clear that $B$, $B^*$, $Q$ and $Q_0$ equipped with the norm $\\|\\cdot\\|_W$ are (real) Banach spaces. 
Note that $B_0$ is an open neighborhood of $f_0$. In what follows, we identify $f\\in B$ with its unique holomorphic extension on $\\mb{D}$.\n\nLet us define the projection $$\\pi:W^{2,2}(\\TT,\\CC^n)\\ni f=\\sum_{k=-\\infty}^{\\infty}a_k\\zeta^{k}\\longmapsto\\sum_{k=-\\infty}^{-1}a_k\\zeta^{k}\\in{B^*}.$$ Note that $f\\in W^{2,2}(\\TT,\\CC^n)$ extends holomorphically on $\\mb{D}$ if and only if $\\pi(f)=0$ (and the extension is $\\mathcal C^{1\/2}$ on $\\TT$). Actually, it suffices to observe that\n$g(\\zeta):=\\sum_{k=-\\infty}^{-1}a_k\\zeta^{k}$, $\\zeta\\in\\TT$, extends holomorphically on $\\DD$ if and only if $a_k=0$ for $k<0$. This follows immediately from the fact that the mapping $\\TT\\ni\\zeta\\longmapsto g(\\ov\\zeta)\\in\\CC^n$ extends holomorphically on $\\DD$.\n\nConsider the mapping $\\Xi:X_0\\times\\mb{C}^n\\times B_0\\times\nQ_0\\times\\mb{R}\\longrightarrow Q\\times{B^*}\\times\\mb{C}^n$ defined by\n$$\\Xi(r,v,f,q,\\lambda):=(r\\circ f,\\pi(\\zeta(1+q)(r_z\\circ f)),f'(0)-\\lambda v),$$ where $\\zeta$ is treated as the identity function on $\\TT$.\n\n\nWe have the following\n\n\\begin{lemm}\\label{cruciallemma} There exist a neighborhood $V_0$ of $(r_0,f_0'(0))$ in $X_0\\times\\mb{C}^n$ and a real analytic mapping $\\Upsilon:V_0\\longrightarrow B_0\\times Q_0\\times\\mb{R}$ such that for any $(r,v)\\in V_0$ we have $\\Xi(r,v,\\Upsilon(r,v))=0$.\n\\end{lemm}\n\\bigskip\nLet $\\wi\\Xi:X_0\\times\\mb{C}^n\\times B_0\\times Q_0\\times(0,1)\\longrightarrow Q\\times{B^*}\\times\\mb{C}^n$ be defined as $$\\wi\\Xi(r,w,f,q,\\xi):=(r\\circ f,\\pi(\\zeta(1+q)(r_z\\circ f)),f(\\xi)-w).$$\n\n\nAnalogously we have\n\\begin{lemm}\\label{cruciallemma1} Let $\\xi_0\\in(0,1)$. Then there exist a neighborhood $W_0$ of $(r_0,f_0(\\xi_0))$ in $X_0\\times D_0$ and a real analytic mapping $\\wi\\Upsilon:W_0\\longrightarrow B_0\\times Q_0\\times(0,1)$ such that for any $(r,w)\\in W_0$ we have $\\wi\\Xi(r,w,\\wi\\Upsilon(r,w))=0$.\n\\end{lemm}\n\n\n\n\\begin{proof}[Proof of Lemmas \\ref{cruciallemma} and \\ref{cruciallemma1}]\n\n\nWe will prove the first lemma. Then we will see that a proof of the second one reduces to that proof.\n\nWe claim that $\\Xi$ is real analytic. The only problem is to show that the mapping $$T: X_0\\times B_0\\ni(r,f)\\longmapsto r\\circ f\\in Q$$ is real analytic (the real analyticity of the mapping $X_0\\times B_0\\ni(r,f)\\longmapsto r_z\\circ f\\in W^{2,2}(\\TT,\\CC^n)$ follows from this claim).\n\nFix $r\\in X_0$, $f\\in B_0$ and take $\\eps>0$ so that a $2n$-dimensional polydisc $P_{2n}(f(\\zeta),\\eps)$ is contained in $U_0^\\CC$ for any $\\zeta\\in\\TT$. Then any function $\\wi r\\in X_0$ is holomorphic in $U_0^\\CC$, so it may be expanded as a holomorphic series convergent in $P_{2n}(f(\\zeta),\\eps)$. Losing no generality we may assume that $n$-dimensional polydiscs $P_{n}(f(\\zeta),\\eps)$, $\\zeta\\in\\TT$, satisfy $P_{n}(f(\\zeta),\\eps)\\su U_0$. This gives an expansion of the function $\\wi r$ at any point $f(\\zeta)$, $\\zeta\\in\\TT$, into a series $$\\sum_{\\alpha\\in\\NN_0^{2n}}\\frac{1}{\\alpha!}\\frac{\\pa^{|\\alpha|}\\wi r}{\\pa x^\\alpha}(f(\\zeta))x^\\alpha$$ convergent to $\\wi r(f(\\zeta)+x)$, provided that $x=(x_1,\\ldots,x_{2n})\\in P_n(0,\\eps)$ (where $\\NN_0:=\\NN\\cup\\{0\\}$ and $|\\alpha|:=\\alpha_1+\\ldots+\\alpha_{2n}$). 
Hence \\begin{equation}\\label{69}T(r+\\varrho,f+h)=\\sum_{\\alpha\\in\\NN_0^{2n}}\\frac{1}{\\alpha!}\\left(\\frac{\\pa^{|\\alpha|}r}{\\pa x^\\alpha}\\circ f\\right)h^\\alpha+\\sum_{\\alpha\\in\\NN_0^{2n}}\\frac{1}{\\alpha!}\\left(\\frac{\\pa^{|\\alpha|}\\varrho}{\\pa x^\\alpha}\\circ f\\right)h^\\alpha\\end{equation} pointwise for $\\varrho\\in X_0$ and $h\\in W^{2,2}(\\TT,\\CC^n)$ with $\\|h\\|_{\\sup}<\\eps$.\n\nPut $P:=\\bigcup_{\\zeta\\in \\TT} P_{2n}(f(\\zeta),\\eps)$ and for $\\wi r\\in X_0$ put $||\\wi r||_P:=\\sup_P|\\wi r|$. Let $\\wi r$ be equal to $r$ or to $\\varrho$, where $\\varrho$ lies in a neighborhood of $0$ in $X_0$. The Cauchy inequalities give\n\\begin{equation}\\label{series}\\left|\\frac{\\pa^{|\\alpha|}\\wi r}{\\pa x^\\alpha}(f(\\zeta))\\right|\\leq\\frac{\\alpha!\\|\\wi r\\|_{P}}{\\eps^{|\\alpha|}},\\quad\\zeta\\in\\TT.\\end{equation}\nTherefore, $$\\left|\\left|\\frac{\\pa^{|\\alpha|}\\wi r}{\\pa x^\\alpha}\\circ f\\right|\\right|_W\\leq C_1\\frac{\\alpha!\\|\\wi r\\|_{P}}{\\eps^{|\\alpha|}}$$ for some $C_1>0$.\n\nThere is $C_2>0$ such that $$\\|gh^\\alpha\\|_W\\leq C_2^{|\\alpha|+1}\\|g\\|_W\\|h_1\\|^{\\alpha_1}_W\\cdotp\\ldots\\cdotp\\|h_{2n}\\|^{\\alpha_{2n}}_W$$ for $g\\in W^{2,2}(\\TT,\\CC)$, $h\\in W^{2,2}(\\TT,\\CC^n)$, $\\alpha\\in\\NN_0^{2n}$ (see Appendix for a proof of this fact). Using the above inequalities we infer that $$\\sum_{\\alpha\\in\\NN_0^{2n}}\\left|\\left|\\frac{1}{\\alpha!}\\left(\\frac{\\pa^{|\\alpha|}\\wi r}{\\pa x^\\alpha}\\circ f\\right)h^\\alpha\\right|\\right|_W$$ is convergent if $h$ is small enough in the norm $\\|\\cdot\\|_W$. Therefore, the series~\\eqref{69} is absolutely convergent in the norm $\\|\\cdot\\|_W$, whence $T$ is real analytic.\n\n\nTo show the existence of $V_0$ and $\\Upsilon$ we will make use of the Implicit Function Theorem. More precisely, we shall show that the partial derivative $$\\Xi_{(f,q,\\lambda)}(r_0,f_0'(0),f_0,0,1):B\\times Q_0\\times\\mb{R}\\longrightarrow Q\\times{B^*}\\times\\mb{C}^n$$ is an isomorphism.\nObserve that for any $(\\widetilde{f},\\widetilde{q},\\widetilde{\\lambda})\\in B\\times Q_0\\times\\mb{R}$ the following equality holds\n\\begin{multline*}\\Xi_{(f,q,\\lambda)}(r_0,f_0'(0),f_0,0,1)(\\widetilde{f},\\widetilde{q},\\widetilde{\\lambda})=\\left.\\frac{d}{dt}\n\\Xi(r_0,f_0'(0),f_0+t\\widetilde{f},t\\widetilde{q},1+t\\widetilde{\\lambda})\\right|_{t=0}=\\\\\n=((r_{0z}\\circ f_0)\\widetilde{f}+(r_{0\\overline{z}}\\circ f_0)\\overline{\\widetilde{f}},\\pi(\\zeta\\widetilde{q}r_{0z}\\circ f_0+\\zeta(r_{0zz} \\circ\nf_0)\\widetilde{f}+\\zeta(r_{0z\\overline{z}}\\circ f_0)\\overline{\\widetilde{f}}),\\widetilde{f}'(0)-\\widetilde{\\lambda}f_0'(0)),\n\\end{multline*}\nwhere we treat ${r_0}_z,{r_0}_{\\overline{z}}$ as row vectors, $\\widetilde{f},\\overline{\\widetilde{f}}$ as column vectors and $r_{0zz}=\\left[\\frac{\\partial^2r_0}{\\partial z_j\\partial z_k}\\right]_{j,k=1}^n$, $r_{0z\\overline{z}}=\\left[\\frac{\\partial^2r_0}{\\partial z_j\\partial\\overline z_k}\\right]_{j,k=1}^n$ as $n\\times n$ matrices.\n\nBy the Bounded Inverse Theorem it suffices to show that $\\Xi_{(f,q,\\lambda)}(r_0,f_0'(0),f_0,0,1)$ is bijective, i.e. 
for $(\\eta,\\varphi,v)\\in Q\\times B^*\\times\\mb{C}^n$ there exists exactly one $(\\widetilde{f},\\widetilde{q},\\widetilde{\\lambda})\\in B\\times Q_0\\times\\mb{R}$ satisfying\n\\begin{equation}\n(r_{0z}\\circ f_0)\\widetilde{f}+(r_{0\\overline{z}}\\circ f_0)\\overline{\\widetilde{f}}=\\eta,\n\\label{al1}\n\\end{equation}\n\\begin{equation}\n\\pi(\\zeta\\widetilde{q}r_{0z}\\circ f_0+\\zeta (r_{0zz}\\circ f_0)\\widetilde{f}+\\zeta(r_{0z\\overline{z}}\\circ f_0)\\overline{\\widetilde{f}})=\\varphi,\n\\label{al2}\n\\end{equation}\n\\begin{equation}\n\\widetilde{f}'(0)-\\widetilde{\\lambda} f_0'(0)=v.\n\\label{al3}\n\\end{equation}\nFirst we show that $\\wi\\lambda$ and $\\wi f_1$ are uniquely determined. Observe that, in view of assumptions, (\\ref{al1}) is just $$\\frac{1}{2}\\overline{\\zeta}\\widetilde{f}_1+\\frac{1}{2}\\zeta\\overline{\\widetilde{f}_1}=\\eta$$ or equivalently\n\\begin{equation}\n\\re(\\widetilde{f}_1\/\\zeta)=\\eta\\text{ (on }\\TT).\n\\label{al4}\n\\end{equation}\nNote that the equation (\\ref{al4}) uniquely determines $\\widetilde{f}_1\/\\zeta\\in W^{2,2}(\\TT,\\CC)\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ up to an imaginary additive constant, which may be computed using (\\ref{al3}). Actually, $\\eta=\\re G$ on $\\TT$ for some function $G\\in W^{2,2}(\\TT,\\CC)\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$. To see this, let us expand $\\eta(\\zeta)=\\sum_{k=-\\infty}^{\\infty}a_k\\zeta^{k}$, $\\zeta\\in\\TT$. From the equality $\\eta(\\zeta)=\\ov{\\eta(\\zeta)}$, $\\zeta\\in\\TT$, we get \\begin{equation}\\label{65}\\sum_{k=-\\infty}^{\\infty}a_k\\zeta^{k}=\\sum_{k=-\\infty}^{\\infty}\\ov a_k\\zeta^{-k}=\\sum_{k=-\\infty}^{\\infty}\\ov a_{-k}\\zeta^{k},\\ \\zeta\\in\\TT,\\end{equation} so $a_{-k}=\\ov a_k$, $k\\in\\ZZ$. Hence $$\\eta(\\zeta)=a_0+\\sum_{k=1}^\\infty 2\\re(a_k\\zeta^k)=\\re\\left(a_0+2\\sum_{k=1}^\\infty a_k\\zeta^k\\right),\\ \\zeta\\in\\TT.$$ Set $$G(\\zeta):=a_0+2\\sum_{k=1}^\\infty a_k\\zeta^k,\\ \\zeta\\in\\DD.$$ This series is convergent for $\\zeta\\in\\DD$, so $G\\in\\OO(\\DD)$. Further, the function $G$ extends continuously on $\\CDD$ (to the function denoted by the same letter) and the extension lies in $W^{2,2}(\\TT,\\CC)$. Clearly, $\\eta=\\re G$ on $\\TT$.\n\nWe are searching $C\\in\\RR$ such that the functions $\\widetilde{f}_1:=\\zeta(G+iC)$ and $\\theta:=\\im(\\widetilde{f}_1\/\\zeta)$ satisfy $$\\eta(0)+i\\theta(0)=\\widetilde{f}_1'(0)$$ and\n$$\\eta(0)+i\\theta(0)-\\widetilde{\\lambda}\\re{f_{01}'(0)}-i\\widetilde{\\lambda}\\im{{f_{01}'(0)}}=\\re{v_1}+i\\im{v_1}.$$ But $$\\eta(0)-\\widetilde{\\lambda}\\re{f_{01}'(0)}=\\re{v_1},$$ which yields $\\widetilde{\\lambda}$ and then $\\theta(0)$, consequently the number $C$.\nHaving $\\widetilde{\\lambda}$ and once again using (\\ref{al3}), we find uniquely determined $\\widetilde{f}_2'(0),\\ldots,\\widetilde{f}_n'(0)$.\n\nTherefore, the equations $\\eqref{al1}$ and $\\eqref{al3}$ are satisfied by uniquely determined $\\wi f_1$, $\\wi\\lambda$ and $\\widetilde{f}_2'(0),\\ldots,\\widetilde{f}_n'(0)$.\n\nConsider (\\ref{al2}), which is the system of $n$ equations with unknown $\\widetilde{q},\\widetilde{f}_2,\\ldots,\\widetilde{f}_n$. 
Observe that $\\widetilde{q}$ appears only in the first of the equations and the remaining $n-1$ equations mean exactly that the mapping\n\\begin{equation}\n\\zeta(r_{0\\widehat{z}\\widehat{z}}\\circ f_0)\n\\widehat{\\widetilde{f}}+\\zeta(r_{0\\widehat{z}\\widehat{\\overline{z}}}\\circ f_0)\\widehat{\\overline{\\widetilde{f}}}-\\psi\n\\label{al5}\n\\end{equation}\nextends holomorphically on $\\mb{D}$, where $\\widehat{a}:=(a_{2},\\ldots,a_{n})$ and $\\psi\\in W^{2,2}(\\TT,\\mb{C}^{n-1})$ may be obtained from $\\varphi$ and $\\widetilde{f}_1$. Indeed, to see this, write (\\ref{al2}) in the form $$\\pi(F_{1}+\\zeta F_{2}+\\zeta F_{3})=(\\phi_1,\\ldots,\\phi_n),$$ where $$F_1:=(\\wi q,0,\\ldots,0),$$$$F_2:=(A_{j})_{j=1}^n,\\ A_{j}:=\\sum\\limits_{k=1}^n(r_{0z_jz_k}\\circ f_0)\\widetilde{f}_k,$$$$F_3=(B_{j})_{j=1}^n,\\ B_{j}:=\\sum\\limits_{k=1}^n(r_{0z_j\\ov z_k}\\circ f_0)\\overline{\\widetilde{f}_k}.$$ It follows that $$\\widetilde{q}+\\zeta A_1+\\zeta B_1-\\phi_1$$ and $$\\zeta A_j+\\zeta B_j-\\phi_j,\\ j=2,\\ldots,n,$$ extend holomorphically on $\\mb{D}$ and $$\\psi:=\\left(\\phi_j-\\zeta(r_{0z_jz_1}\\circ f_0)\\widetilde{f}_1-\\zeta(r_{0z_j\\ov z_1}\\circ f_0)\\overline{\\widetilde{f}_1}\\right)_{j=2}^n.$$\nPut $$g(\\zeta):=\\widehat{\\widetilde{f}}(\\zeta)\/\\zeta,\\quad\\alpha(\\zeta):=\\zeta^2r_{0\\widehat{z}\\widehat{z}}(f_0(\\zeta)),\n\\quad\\beta(\\zeta):=r_{0\\widehat{z}\\widehat{\\overline{z}}}(f_0(\\zeta)).$$\n\nObserve that $\\alpha(\\zeta)$, $\\beta(\\zeta)$ are the $(n-1)\\times(n-1)$ matrices depending real analytically on $\\zeta$ and $g(\\zeta)$ is a column vector in $\\mb{C}^{n-1}$. This allows us to reduce \\eqref{al5} to the following problem: we have to find a unique $g\\in W^{2,2}(\\TT,\\mb{C}^{n-1})\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ such that \\begin{equation}\n\\alpha g+\\beta\\overline{g}-\\psi\\text{ extends holomorphically on $\\mb{D}$ and } g(0)={\\widehat{\\widetilde{f}'}}(0).\n\\label{al6}\n\\end{equation}\nThe fact that every $f_0(\\zeta)$ is a point of strong linear convexity of the domain $D_0$ may be written as\n\\begin{equation}\n|X^T\\alpha(\\zeta)X|0$ independent on $\\zeta$ and $X$. Thus $\\|\\gamma(\\zeta)\\|\\leq 1-\\wi\\eps$ by Proposition \\ref{59}.\n\nWe have to prove that there is a unique solution $h\\in W^{2,2}(\\TT,\\CC^{n-1})\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ of (\\ref{al9}) such that $h(0)=a$ with a given $a\\in\\CC^{n-1}$.\n\nDefine the operator $$P:W^{2,2}(\\TT,\\mb{C}^{n-1})\\ni\\sum_{k=-\\infty}^{\\infty}a_k\\zeta^{k}\\longmapsto\\overline{\\sum_{k=-\\infty}^{-1}a_k\\zeta^{k}}\\in W^{2,2}(\\TT,\\mb{C}^{n-1}),$$ where $a_k\\in\\CC^{n-1}$, $k\\in\\ZZ$.\n\nWe will show that a mapping $h\\in\nW^{2,2}(\\TT,\\mb{C}^{n-1})\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ satisfies (\\ref{al9}) and $h(0)=a$ if and only if it is a fixed point of the mapping $$K:W^{2,2}(\\TT,\\mb{C}^{n-1})\\ni h\\longmapsto P(H^{-1}\\psi-\\gamma h)+a\\in W^{2,2}(\\TT,\\mb{C}^{n-1}).$$\n\nIndeed, take $h\\in\nW^{2,2}(\\TT,\\mb{C}^{n-1})\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ and suppose that $h(0)=a$ and $\\gamma h+\\overline{h}-H^{-1}\\psi$ extends holomorphically on $\\mb{D}$. Then $$h=a+\\sum_{k=1}^{\\infty}a_k\\zeta^{k},\\quad\\overline{h}=\\overline{a}+\\sum_{k=1}^{\\infty}\\overline a_k\\zeta^{-k}=\\sum_{k=-\\infty}^{-1}\\overline a_{-k}\\zeta^{k}+\\overline{a},$$ $$P(h)=0,\\quad P(\\overline{h})=\\sum_{k=1}^{\\infty}a_k\\zeta^{k}=h-a$$ and $$P(\\gamma h+\\overline{h}-H^{-1}\\psi)=0,$$ which implies $$P(H^{-1}\\psi-\\gamma h)=h-a$$ and finally $K(h)=h$. 
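(Let us record an elementary observation which will be used in the estimates \\eqref{al10}, \\eqref{al11}, \\eqref{al12} below: for $x=\\sum_{k=-\\infty}^{\\infty}a_k\\zeta^k\\in W^{2,2}(\\TT,\\CC^{n-1})$ we have $$\\|P(x)\\|_L=\\Big\\|\\sum_{k=-\\infty}^{-1}a_k\\zeta^{k}\\Big\\|_L\\leq\\|x\\|_L,$$ since conjugation preserves the $L^2$-norm and $\\sum_{k\\leq-1}a_k\\zeta^k$ is an orthogonal part of $x$; the analogous computation for the coefficients of the differentiated series gives $\\|P(x)'\\|_L\\leq\\|x'\\|_L$ and $\\|P(x)''\\|_L\\leq\\|x''\\|_L$.) 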
Conversely, suppose that $K(h)=h$. Then $h-a=P(H^{-1}\\psi-\\gamma h)$ belongs to the image of $P$, so $h$ has only non-negative Fourier frequencies, say $h=a_0+\\sum_{k=1}^{\\infty}a_k\\zeta^{k}$. Hence $$P(H^{-1}\\psi-\\gamma h)=h-a=\\sum_{k=1}^{\\infty}a_k\\zeta^{k}+a_0-a,\\quad P(h)=0$$ and\n$$P(\\overline{h})=\\sum_{k=1}^{\\infty}a_k\\zeta^{k}=h-a_0,$$ from which it follows that $$P(\\gamma h+\\overline{h}-H^{-1}\\psi)=P(\\overline{h})-P(H^{-1}\\psi-\\gamma h)=a-a_0$$ and $$P(\\gamma h+\\overline{h}-H^{-1}\\psi)=0\\text{ iff }a=a_0.$$ Observe that $h(0)=K(h)(0)=P(H^{-1}\\psi-\\gamma h)(0)+a=a$, i.e. $a_0=a$, so $h$ indeed satisfies (\\ref{al9}) and $h(0)=a$.\n\nWe shall make use of the Banach Fixed Point Theorem. To do this, consider $W^{2,2}(\\TT,\\CC^{n-1})$ equipped with the following norm $$\\|h\\|_{\\varepsilon}:=\\|h\\|_L+\\varepsilon\\|h'\\|_L+\n\\varepsilon^2\\|h''\\|_L,$$ where $\\eps>0$ and $\\|\\cdot\\|_L$ is the $L^2$-norm (for each fixed $\\eps>0$ this norm is equivalent to $\\|\\cdot\\|_W$, so we obtain a Banach space). We will prove that $K$ is a contraction with respect to the norm $\\|\\cdot\\|_{\\varepsilon}$ for sufficiently small $\\eps$. Indeed, there is $\\wi\\eps>0$ such that for any $h_1,h_2\\in W^{2,2}(\\TT,\\CC^{n-1})$\n\\begin{equation}\n\\|K(h_1)-K(h_2)\\|_L=\\|P(\\gamma(h_2-h_1))\\|_L\\leq\\|\\gamma(h_2-h_1)\\|_L\\leq (1-\\wi\\eps)\\|h_2-h_1\\|_L.\n\\label{al10}\n\\end{equation}\nMoreover,\n\\begin{multline}\n\\|K(h_1)'-K(h_2)'\\|_L= \\|P(\\gamma h_2)'-P(\\gamma h_1)'\\|_L\\leq\\\\\n\\leq\\|(\\gamma h_2)'-(\\gamma h_1)'\\|_L= \\|\\gamma '(h_2-h_1)+\\gamma(h_2'-h_1')\\|_L.\n\\label{al11}\n\\end{multline} Furthermore,\n\\begin{equation}\n\\|K(h_1)''-K(h_2)''\\|_L\\leq\\|\\gamma ''(h_2-h_1)\\|_L+2\\|\\gamma '(h_2'-h_1')\\|_L+\\|\\gamma\n(h_1''-h_2'')\\|_L.\\label{al12}\n\\end{equation}\nUsing the finiteness of $\\|\\gamma '\\|$, $\\|\\gamma ''\\|$ and putting (\\ref{al10}), (\\ref{al11}), (\\ref{al12}) together we see that there exists $\\varepsilon>0$ such that $K$ is a contraction w.r.t. the norm $\\|\\cdot\\|_{\\varepsilon}$.\n\nWe have found $\\widetilde{f}$ and $\\widetilde{\\lambda}$ satisfying (\\ref{al1}), (\\ref{al3}) and the last $n-1$ equations from (\\ref{al2}). 
\n\nIt remains to show that there exists a unique $\\widetilde{q}\\in Q_0$ such that $\\widetilde{q}+\\zeta A_1+\\zeta B_1-\\varphi_1$ extends holomorphically on $\\mb{D}$.\n\nComparing the coefficients as in \\eqref{65}, we see that if $$\\pi(\\zeta A_1+\\zeta B_1-\\varphi_1)=\\sum_{k=-\\infty}^{-1}a_k\\zeta^{k}$$\nthen $\\widetilde{q}$ has to be taken as $$-\\sum_{k=-\\infty}^{-1}a_k\\zeta^{k}-\\sum_{k=0}^{\\infty}b_k\\zeta^{k}$$\nwith $b_k:=\\overline a_{-k}$ for $k\\geq 1$ and $b_0\\in\\RR$ uniquely determined by $\\widetilde{q}(1)=0$.\\\\\n\nLet us show that the proof of the second Lemma follows from the proof of the first one.\nSince $\\wi\\Xi$ is real analytic it suffices to prove that the derivative $$\\wi\\Xi_{(f,q,\\xi)}(r_0,f_0(\\xi_0),f_0,0,\\xi_0):B\\times Q_0\\times\\RR\\longrightarrow Q\\times{B^*}\\times\\mb{C}^n$$ is invertible.\nFor $(\\widetilde{f},\\widetilde{q},\\widetilde{\\xi})\\in B\\times Q_0\\times\\RR$ we get\n\\begin{multline*}\n\\wi\\Xi_{(f,q,\\xi)}(r_0,f_0(\\xi_0),f_0,0,\\xi_0)(\\widetilde{f},\\widetilde{q},\\widetilde{\\xi})=\\left.\\frac{d}{dt}\n\\wi\\Xi(r_0,f_0(\\xi_0),f_0+t\\widetilde{f},t\\widetilde{q},\\xi_0+t\\widetilde{\\xi})\\right|_{t=0}=\\\\\n=((r_{0z}\\circ f_0)\\widetilde{f}+(r_{0\\overline{z}}\\circ f_0)\\overline{\\widetilde{f}},\n\\pi(\\zeta\\widetilde{q}r_{0z}\\circ f_0+\\zeta(r_{0zz}\\circ f_0)\\widetilde{f}+\\zeta(r_{0z\\overline{z}}\\circ f_0)\\overline{\\widetilde{f}}),\\widetilde{f}(\\xi_0)+\\wi\\xi f_0'(\\xi_0)).\n\\end{multline*}\nWe have to show that for $(\\eta,\\varphi,w)\\in Q\\times B^*\\times\\mb{C}^n$ there exists exactly one $(\\widetilde{f},\\widetilde{q},\\widetilde{\\xi})\\in B\\times Q_0\\times\\RR$ satisfying\n\\begin{equation}\n(r_{0z}\\circ f_0)\\widetilde{f}+(r_{0\\overline{z}}\\circ f_0)\\overline{\\widetilde{f}}=\\eta,\n\\label{1al1}\n\\end{equation}\n\\begin{equation}\n\\pi(\\zeta\\widetilde{q}r_{0z}\\circ f_0+\\zeta (r_{0zz}\\circ f_0)\\widetilde{f}+\\zeta(r_{0z\\overline{z}}\\circ f_0)\\overline{\\widetilde{f}})=\\varphi,\n\\label{1al2}\n\\end{equation}\n\\begin{equation}\n\\wi f(\\xi_0)+\\wi\\xi f_0'(\\xi_0)=w.\n\\label{1al3}\n\\end{equation}\nThe equation (\\ref{1al1}) turns out to be\n\\begin{equation}\n\\re(\\widetilde{f}_1\/\\zeta)=\\eta\\text{ (on }\\TT).\n\\label{1al4}\n\\end{equation}\nThe equation above uniquely determines $\\widetilde{f}_1\/\\zeta\\in W^{2,2}(\\TT,\\CC)\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ up to an imaginary additive constant, which may be computed using (\\ref{1al3}). Indeed, there exists $G\\in W^{2,2}(\\TT,\\CC)\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ such that $\\eta=\\re G$ on $\\TT$. We are searching $C\\in\\RR$ such that the functions $\\widetilde{f}_1:=\\zeta(G+iC)$ and $\\theta:=\\im(\\widetilde{f}_1\/\\zeta)$ satisfy $$\\xi_0\\eta(\\xi_0)+i\\xi_0\\theta(\\xi_0)=\\widetilde{f}_1(\\xi_0)$$ and $$\\xi_0(\\eta(\\xi_0)+i\\theta(\\xi_0))+\\widetilde{\\xi}\\re{f_{01}'(\\xi_0)}+i\\widetilde{\\xi}\\im{{f_{01}'(\\xi_0)}}=\n\\re{w_1}+i\\im{w_1}.$$ But $$\\xi_0\\eta(\\xi_0)+\\widetilde{\\xi}\\re{f_{01}'(\\xi_0)}=\\re{w_1},$$ which yields $\\widetilde{\\xi}$ and then $\\theta(\\xi_0)$, consequently the number $C$. 
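Explicitly, since $f_0(\\zeta)=(\\zeta,0,\\ldots,0)$ gives $f_{01}'\\equiv 1$, these quantities read $$\\widetilde{\\xi}=\\re w_1-\\xi_0\\eta(\\xi_0),\\qquad\\theta(\\xi_0)=\\frac{\\im w_1}{\\xi_0},\\qquad C=\\frac{\\im w_1}{\\xi_0}-\\im G(\\xi_0).$$ 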
Having $\\widetilde{\\xi}$ and once again using (\\ref{1al3}), we find uniquely determined $\\widetilde{f}_2(\\xi_0),\\ldots,\\widetilde{f}_n(\\xi_0)$.\n\nTherefore, the equations $\\eqref{1al1}$ and $\\eqref{1al3}$ are satisfied by uniquely determined $\\wi f_1$, $\\wi\\xi$ and $\\widetilde{f}_2(\\xi_0),\\ldots,\\widetilde{f}_n(\\xi_0)$.\n\nIn the remaining part of the proof we change the second condition of \\eqref{al6} to $$g(\\xi_0)={\\widehat{\\widetilde{f}}}(\\xi_0)\/\\xi_0$$ and we have to prove that there is a unique solution $h\\in W^{2,2}(\\TT,\\CC^{n-1})\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ of (\\ref{al9}) such that $h(\\xi_0)=a$ with a given $a\\in\\CC^{n-1}$. Let $\\tau$ be an automorphism of $\\DD$ (so it extends holomorphically near $\\CDD$), which maps $0$ to $\\xi_0$, i.e. $$\\tau(\\xi):=\\frac{\\xi_0-\\xi}{1-\\ov\\xi_0\\xi},\\ \\xi\\in\\DD.$$ Let the maps $P,K$ be as before. Then $h\\in W^{2,2}(\\TT,\\mb{C}^{n-1})\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ satisfies\n(\\ref{al9}) and $h(\\xi_0)=a$ if and only if $h\\circ\\tau\\in W^{2,2}(\\TT,\\mb{C}^{n-1})\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ satisfies (\\ref{al9}) and $(h\\circ\\tau)(0)=a$. We already know that there is exactly one $\\wi h\\in W^{2,2}(\\TT,\\mb{C}^{n-1})\\cap\\OO(\\DD)\\cap\\cC(\\CDD)$ satisfying (\\ref{al9}) and $\\wi h(0)=a$. Setting $h:=\\wi h\\circ\\tau^{-1}$, we get the claim.\n\\end{proof}\n\n\\subsection{Topology in the class of domains with real analytic boundaries}\\label{topol}\n\nWe introduce a concept of a domain being close to some other domain. Let $D_0\\su\\mb{C}^n$ be a bounded domain with real analytic boundary. Then there exist a neighborhood $U_0$ of $\\partial D_0$ and a real analytic defining function $r_0:U_0\\longrightarrow\\mb{R}$ such that $\\nabla r_0$ does not vanish in $U_0$ and $$D_0\\cap U_0=\\{z\\in U_0:r_0(z)<0\\}.$$\n\n\\begin{dff}\nWe say that domains $D$ \\textit{tend to} $D_0$ $($or are \\textit{close to} $D_0${$)$} if one can choose their defining functions $r\\in X_0$ such that $r$ tend to $r_0$ in $X_0$.\n\\end{dff}\n\n\\begin{remm} If $r\\in X_0$ is near to $r_0$ with respect to the topology in $X_0$, then $\\{z\\in U_0:r(z)=0\\}$ is a compact real analytic hypersurface which bounds a bounded domain. We denote it by $D^{r}$.\n\nMoreover, if $D^{r_0}$ is strongly linearly convex then a domain $D^r$ is also strongly linearly convex provided that $r$ is near $r_0$.\n\\end{remm}\n\n\n\n\n\n\n\n\\subsection{Statement of the main result of this section}\n\n\\begin{remm}\\label{f} Assume that $D^r$ is a strongly linearly convex domain bounded by a real analytic hypersurface $\\{z\\in U_0:r(z)=0\\}$. Let $\\xi\\in(0,1)$ and $w\\in(\\CC^n)_*$.\n\nThen a function $f\\in B_0$ satisfies the conditions $$f\\text{ is a weak stationary mapping of }D^r,\\ f(0)=0,\\ f(\\xi)=w$$ if and only if there exists $q\\in Q_0$ such that $q>-1$ and $\\wi\\Xi(r,w,f,q,\\xi)=0$.\n\nActually, from $\\wi\\Xi(r,w,f,q,\\xi)=0$ we deduce immediately that $r\\circ f=0$ on $\\TT$, $f(\\xi)=w$ and $\\pi(\\zeta(1+q)(r_z\\circ f))=0$. From the first equality we get $f(\\TT)\\subset \\partial D^{r}$. From the last one we deduce that the condition (3') of Definition~\\ref{21} is satisfied (with $\\rho:=(1+q)|r_z\\circ f|$). Since $D^{r}$ is strongly linearly convex, $\\ov{D^r}$ is polynomially convex (use the fact that projections of $\\CC$-convex domains are $\\CC$-convex, as well, and the fact that $D^r$ is smooth). 
In particular, $$f(\\CDD)=f(\\widehat{\\TT})\\subset\\widehat{f(\\TT)}\\subset\\widehat{\\ov{D^r}}=\\ov{D^r},$$ where $\\wh S:=\\{z\\in\\CC^m:|P(z)|\\leq\\sup_S|P|\\text{ for any polynomial }P\\in\\CC[z_1,\\ldots,z_m]\\}$ is the polynomial hull of a set $S\\su\\CC^m$. \n\nNote that this implies $f(\\DD)\\su D^r$ --- this follows from the fact that $\\pa D^r$ does not contain non-constant analytic discs (as $D^r$ is strongly pseudoconvex).\n\nThe opposite implication is clear.\n\n\\bigskip\n\nIn a similar way we show that for any $v\\in(\\CC^n)_*$ and $\\lambda>0$, a function $f\\in B_0$ satisfies the conditions $$f\\text{ is a weak stationary mapping of }D^r,\\ f(0)=0,\\ f'(0)=\\lambda v$$ if and only if there exists $q\\in Q_0$ such that $q>-1$ and $\\Xi(r,v,f,q,\\lambda)=0$.\n\n\\end{remm}\n\n\n\\begin{propp}\\label{13} Let $D_0\\su\\CC^n$, $n\\geq 2$, be a strongly linearly convex domain with real analytic boundary and let $f_0:\\DD\\longrightarrow D_0$ be an $E$-mapping.\n\n$(1)$ Let $\\xi_0\\in(0,1)$. Then there exist a neighborhood $W_0$ of $(r_0,f_0(\\xi_0))$ in $X_0\\times D_0$ and real analytic mappings $$\\Lambda:W_0\\longrightarrow\\mc{C}^{1\/2}(\\overline{\\mb{D}}),\\ \\Omega:W_0\\longrightarrow(0,1)$$ such that $$\\Lambda(r_0,f_0(\\xi_0))=f_0,\\ \\Omega(r_0,f_0(\\xi_0))=\\xi_0$$ and for any $(r,w)\\in W_0$ the mapping\n$f:=\\Lambda(r,w)$ is an $E$-mapping of $D^{r}$ satisfying $$f(0)=f_0(0)\\text{ and }f(\\Omega(r,w))=w.$$\n\n$(2)$ There exist a neighborhood $V_0$ of $(r_0,f_0'(0))$ in $X_0\\times\\mb{C}^n$\nand a real analytic mapping $$\\Gamma:V_0\\longrightarrow\\mc{C}^{1\/2}(\\overline{\\mb{D}})$$ such that $$\\Gamma(r_0,f_0'(0))=f_0$$ and for any $(r,v)\\in V_0$ the mapping $f:=\\Gamma(r,v)$ is an $E$-mapping of $D^{r}$ satisfying $$f(0)=f_0(0)\\text{ and }f'(0)=\\lambda v\\text{ for some }\\lambda>0.$$\n\\end{propp}\n\n\\begin{proof}\n\nObserve that Proposition \\ref{11} provides us with a mapping $g_0=\\Phi\\circ f_0$ and a domain $G_0:=\\Phi(D_0)$ giving a data for situation (\\dag) (here $\\partial D_0$ is contained in $U_0$). Clearly, $\\rho_0:=r_0\\circ\\Phi^{-1}$ is a defining function of $G_0$.\n\nUsing Lemmas \\ref{cruciallemma}, \\ref{cruciallemma1} we get neighborhoods $V_0$, $W_0$ of $(\\rho_0, g_0'(0))$, $(\\rho_0,g_0(\\xi_0))$ respectively and real analytic mappings $\\Upsilon$, $\\wi\\Upsilon$ such that $ \\Xi(\\rho,v,\\Upsilon(\\rho,v))=0$ on $V_0$ and $ \\wi\\Xi(\\rho,w,\\wi\\Upsilon(\\rho,w))=0$ on $W_0$. Define $$\\wh\\Lambda:=\\pi_B\\circ\\wi\\Upsilon,\\quad\\Omega:=\\pi_\\RR\\circ\\wi\\Upsilon,\\quad\\wh\\Gamma:=\\pi_B\\circ\\Upsilon,$$ where $$\\pi_B:B\\times Q_0\\times\\mb{R}\\longrightarrow B,\\quad\\pi_\\RR:B\\times Q_0\\times\\mb{R}\\longrightarrow\\RR,\\ $$ are the projections.\n\nIf $\\rho$ is sufficiently close to $\\rho_0$, then the hypersurface $\\{\\rho=0\\}$ bounds a strongly linearly convex domain. Moreover, then $\\wh\\Lambda(\\rho,w)$ and $\\wh\\Gamma(\\rho,v)$ are extremal mappings in $G^{\\rho}$ (see Remark~\\ref{f}).\n\nComposing $\\wh\\Lambda(\\rho,w)$ and $\\wh\\Gamma(\\rho,v)$ with $\\Phi^{-1}$ and making use of Remark \\ref{rem:theta} we get weak stationary mappings in $D^r$, where $r:=\\rho\\circ\\Phi$. To show that they are $E$-mappings we proceed as follows. 
If $D^r$ is sufficiently close to $D_0$ (this depends on a distance between $\\rho$ and $\\rho_0$), the domain $D^r$ is strongly linearly convex, so by the results of Section \\ref{55} $$\\Lambda(r,w):=\\Phi^{-1}\\circ\\wh\\Lambda(\\rho,w)\\text{\\ and\\ }\\Gamma(r,v):=\\Phi^{-1}\\circ\\wh\\Gamma(\\rho,v)$$ are stationary mappings. Moreover, they are close to $f_0$ provided that $r$ is sufficiently close to $r_0$. Therefore, their winding numbers are equal. Thus $f$ satisfies condition (4) of Definition~\\ref{21e}, i.e. $f$ is an $E$-mapping.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Localization property}\n\n\\begin{prop}\\label{localization} Let $D\\su\\mathbb C^n$, $n\\geq 2$, be a domain. Assume that $a\\in\\partial D$ is such that $\\partial D$ is real analytic and strongly convex in a neighborhood of $a$. Then for any sufficiently small neighborhood $V_0$ of $a$ there is a weak stationary mapping of $D\\cap V_0$ such that $f(\\mathbb T)\\su\\partial D$.\n\nIn particular, $f$ is a weak stationary mapping of $D$.\n\\end{prop}\n\n\\begin{proof} Let $r$ be a real analytic defining function in a neighborhood of $a$. The problem we are dealing with has a local character, so replacing $r$ with $r\\circ\\Psi$, where $\\Psi$ is a local biholomorphism near $a$, we may assume that $a=(0,\\ldots,0,1)$ and a defining function of $D$ near $a$ is $r(z)=-1+|z|^2+h(z-a)$, where $h$ is real analytic in a neighborhood of $0$ and $h(z)=O(|z|^3)$ as $z\\to 0$ (cf. \\cite{Rud}, p. 321).\n\nFollowing \\cite{Lem2}, let us consider the mappings\n$$A_t(z):=\\left((1-t^2)^{1\/2}\\frac{z'}{1+tz_n},\\frac{z_n+t}{1+tz_n}\\right),\\quad z=(z',z_n)\\in\\CC^{n-1}\\times\\DD,\\,\\,t\\in(0,1),$$ which restricted to $\\BB_n$ are automorphisms. Let $$r_t(z):=\\begin{cases}\\frac{|1+tz_n|^2}{1-t^2}r(A_t(z)),&t\\in(0,1),\\\\-1+|z|^2,&t=1.\\end{cases}$$ It is clear that $f_{(1)}(\\zeta)=(\\zeta,0,\\ldots,0)$, $\\zeta\\in\\DD$ is a stationary mapping of $\\mathbb B_n$. We want to have the situation (\\dag) which will allow us to use Lemma \\ref{cruciallemma} (or Lemma \\ref{cruciallemma1}). Note that $r_t$ does not converge to $r_1$ as $t\\to 1$. However, $r_t\\to r_1$ in $X_0(U_0,U_0^{\\mathbb C})$, where $U_0$ is a neighborhood of $f_{(1)}(\\TT)$ contained in $\\{z\\in\\mathbb C^n:\\re z_n>-1\/2\\}$ and $U_0^{\\mathbb C}$ is sufficiently small (remember that $h(z)=O(|z|^3)$).\n\nTherefore, making use of Lemma \\ref{cruciallemma} for $t$ sufficiently close to $1$ we obtain stationary mappings $f_{(t)}$ in $D_t:=\\{z\\in \\mathbb C^n: r_t(z)<0,\\ \\re z_n>-1\/2\\}$ such that $f_{(t)}\\to f_{(1)}$ in the $W^{2,2}$-norm (so also in the sup-norm). Actually, it follows from Lemma~\\ref{cruciallemma} that one may take $f_{(t)}:=\\pi_B\\circ\\Upsilon(r_t,f_{(1)}'(0))$ (keeping the notation from this lemma). The argument used in Remark~\\ref{f} gives that $f_{(t)}$ satisfies conditions (1'), (2') and (3') of Definition~\\ref{21}. Since the non-constant function $r\\circ A_t\\circ f_{(t)}$ is subharmonic on $\\DD$, continuous on $\\CDD$ and $r\\circ A_t\\circ f_{(t)}=0$ on $\\TT$, we see from the maximum principle that $f_{(t)}$ maps $\\DD$ in $D_t$. Therefore, $f_{(t)}$ are weak stationary mappings for $t$ close to $1$.\n\nIn particular, $$f_{(t)}(\\DD)\\subset 2\\mathbb B_n \\cap \\{z\\in\\mathbb C^n:\\re z_n>-1\/2\\}$$ provided that $t$ is close to $1$. 
The mappings $A_t$ have the following important property $$A_t(2\\mathbb B_n\\cap\\{z\\in\\mathbb C^n:\\re z_n>-1\/2\\})\\to\\{a\\}$$ as $t\\to 1$ in the sense of the Hausdorff distance.\n\nTherefore, we find from Remark \\ref{rem:theta} that $g_{(t)}:=A_t\\circ f_{(t)}$ is a stationary mapping of $D$. Since $g_{(t)}$ maps $\\DD$ onto arbitrarily small neighborhood of $a$ provided that $t$ is sufficiently close to $1$, we immediately get the assertion.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proofs of Theorems \\ref{lem-car} and \\ref{main}}\n\nWe start this section with the following\n\\begin{lem}\\label{lemat} For any different $z,w\\in D$ $($resp. for any $z\\in D$, $v\\in(\\CC^n)_*${$)$} there exists an $E$-mapping $f:\\DD\\longrightarrow D$ such that $f(0)=z$, $f(\\xi)=w$ for some $\\xi\\in(0,1)$ $($resp. $f(0)=z$, $f'(0)=\\lambda v$ for some $\\lambda>0${$)$}.\n\\end{lem}\n\n\\begin{proof}\nFix different $z,w\\in D$ (resp. $z\\in D$, $v\\in(\\CC^{n})_*$).\n\nFirst, consider the case when $D$ is bounded strongly convex with real analytic boundary. Without loss of generality one may assume that $0\\in D\\Subset\\BB_n$. We need some properties of the Minkowski functionals.\n\nLet $\\mu_G$ be a Minkowski functional of a domain $G\\subset\\CC^n$ containing the origin, i.e. $$\\mu_G(x):=\\inf\\left\\{s>0:\\frac{x}{s}\\in G\\right\\},\\ x\\in\\CC^n.$$ Assume that $G$ is bounded strongly convex with real analytic boundary. We shall show that\n\\begin{itemize}\n\\item $\\mu_G-1$ is a real analytic outside $0$, defining function of $G$;\n\\item $\\mu^2_G-1$ is a real analytic outside $0$, strongly convex outside $0$, defining function of $G$.\n\\end{itemize}\nClearly, $G=\\{x\\in\\RR^{2n}:\\mu_G(x)<1\\}$. Setting $$q(x,s):=r\\left(\\frac{x}{s}\\right),\\ (x,s)\\in U_0\\times U_1,$$ where $r$ is a real analytic defining function of $G$ (defined near $\\pa G$) and $U_0\\su\\RR^{2n}$, $U_1\\su\\RR$ are neighborhoods of $\\pa G$ and $1$ respectively, we have $$\\frac{\\partial q}{\\partial s}(x,s)=-\\frac{1}{s^2}\\left\\langle\\nabla r\\left(\\frac{x}{s}\\right),x\\right\\rangle_{\\RR}\\neq 0$$ for $(x,s)$ such that $x\\in\\partial G$ and $s=\\mu_G(x)=1$ (since $0\\in G$, the vector $-x$ hooked at the point $x$ is inward $G$, so it is not orthogonal to the normal vector at $x$). By the Implicit Function Theorem for the equation $q=0$, the function $\\mu_G$ is real analytic in a neighborhood $V_0$ of $\\partial G$. To see that $\\mu_G$ is real analytic outside $0$, fix $x_0\\in(\\RR^{2n})_*$. Then the set $$W_0:=\\left\\{x\\in\\RR^{2n}:\\frac{x}{\\mu_G(x_0)}\\in V_0\\right\\}$$ is open and contains $x_0$. Since $$\\mu_G(x)=\\mu_G(x_0)\\mu_G\\left(\\frac{x}{\\mu_G(x_0)}\\right),\\ x\\in W_0,$$ the function $\\mu_G$ is real analytic in $W_0$. Therefore, we can take $d\/ds$ on both sides of $\\mu_G(sx)=s\\mu_G(x),\\ x\\neq 0,\\ s>0$ to obtain $$\\langle\\nabla\\mu_G(x),x\\rangle_{\\RR}=\\mu_G(x),\\ x\\neq 0,$$ so $\\nabla\\mu_G\\neq 0$ in $(\\RR^{2n})_*$.\n\nFurthermore, $\\nabla\\mu^2_G=2\\mu_G\\nabla\\mu_G$, so $\\mu^2_G-1$ is also a defining function of $G$.\nTo show that $u:=\\mu^2_G$ is strongly convex outside $0$ let us prove that $$X^T\\mathcal{H}_aX>0,\\quad a\\in\\pa G,\\ X\\in(\\RR^{2n})_*,$$ where $\\mathcal{H}_x:=\\mathcal{H}u(x)$ for $x\\in(\\RR^{2n})_*$. 
Taking $\\pa\/\\pa x_j$ on both sides of $$u(sx)=s^2u(x),\\ x,s\\neq 0,$$ we get \\begin{equation}\\label{62}\\frac{\\pa u}{\\pa x_j}(sx)=s\\frac{\\pa u}{\\pa x_j}(x)\\end{equation} and further taking $d\/ds$ $$\\sum_{k=1}^{2n}\\frac{\\pa^2 u}{\\pa x_j\\pa x_k}(sx)x_k=\\frac{\\pa u}{\\pa x_j}(x).$$ In particular, $$x^T\\mathcal{H}_xy=\\sum_{j,k=1}^{2n}\\frac{\\pa^2 u}{\\pa x_k\\pa x_j}(x)x_ky_j=\\langle\\nabla u(x),y\\rangle_{\\RR},\\ x\\in(\\RR^{2n})_*,\\ y\\in\\RR^{2n}.$$ Let $a\\in\\pa G$. Since $\\langle\\nabla\\mu_G(a),a\\rangle_{\\RR}=\\mu_G(a)=1$, we have $a\\notin T^\\RR_G(a)$. Any $X\\in(\\RR^{2n})_*$ can be represented as $\\alpha a+\\beta Y$, where $Y\\in T^\\RR_G(a)$, $\\alpha,\\beta\\in\\RR$, $(\\alpha,\\beta)\\neq(0,0)$. Then \\begin{eqnarray*}X^T\\mathcal{H}_aX&=&\\alpha^2a^T\\mathcal{H}_aa+2\\alpha\\beta a^T\\mathcal{H}_aY+\\beta^2Y^T\\mathcal{H}_aY=\\\\&=&\\alpha^2\\langle\\nabla u(a),a\\rangle_{\\RR} +2\\alpha\\beta\\langle\\nabla u(a),Y\\rangle_{\\RR} +\\beta^2Y^T\\mathcal{H}_aY= \\\\&=&\\alpha^22\\mu_G(a)\\langle\\nabla\\mu_G(a),a\\rangle_{\\RR} +\\beta^2Y^T\\mathcal{H}_aY=\n2\\alpha^2+\\beta^2Y^T\\mathcal{H}_aY.\\end{eqnarray*} Since $G$ is strongly convex, the Hessian of any defining function is strictly positive on the tangent space, i.e. $Y^T\\mathcal{H}_aY>0$ if $Y\\in(T^\\RR_G(a))_*$. Hence $X^T\\mathcal{H}_aX\\geq 0$. Note that it cannot be $X^T\\mathcal{H}_aX=0$, since then $\\alpha=0$, consequently $\\beta\\neq 0$ and $Y^T\\mathcal{H}_aY=0$. On the other side $Y=X\/\\beta\\neq 0$ --- a contradiction.\n\nTaking $\\pa\/\\pa x_k$ on both sides of \\eqref{62} we obtain $$\\frac{\\pa^2 u}{\\pa x_j\\pa x_k}(sx)=\\frac{\\pa^2 u}{\\pa x_j\\pa x_k}(x),\\ x,s\\neq 0$$ and for $a,X\\in(\\RR^{2n})_*$ $$X^T\\mathcal{H}_aX=X^T\\mathcal{H}_{a\/\\mu_G(a)}X>0.$$\n\nLet us consider the sets $$D_t:=\\{x\\in\\CC^n:t\\mu^2_D(x)+(1-t)\\mu^2_{\\BB_n}(x)<1\\},\\ t\\in[0,1].$$ The functions $t\\mu^2_D+(1-t)\\mu^2_{\\BB_n}$ are real analytic in $(\\CC^n)_*$ and strongly convex in $(\\CC^n)_*$, so $D_t$ are strongly convex domains with real analytic boundaries satisfying $$D=D_1\\Subset D_{t_2}\\Subset D_{t_1}\\Subset D_0=\\BB_n\\text{\\ if \\ }00$ such that $\\delta\\BB_n\\Subset D$. Further, $\\nabla\\mu_{D_t}^2\\neq 0$ in $(\\RR^{2n})_*$. Set $$M:=\\sup\\left\\{\\frac{\\mathcal{H}\\mu_{D_t}^2(x;X)}{|\\nabla\\mu_{D_t}^2(y)|}:\nt\\in[0,1],\\ x,y\\in 2\\ov{\\BB}_n\\setminus\\delta\\BB_n,\\ X\\in\\RR^{2n},\\ |X|=1\\right\\}.$$ It is a positive number since the functions $\\mu_{D_t}^2$ are strongly convex in $(\\RR^{2n})_*$ and the `sup' of the continuous, positive function is taken over a compact set. Let $$r:=\\min\\left\\{\\frac{1}{2M},\\frac{\\dist(\\pa D,\\delta\\BB_n)}{2}\\right\\}.$$ For fixed $t\\in[0,1]$ and $a\\in\\pa D_t$ put $a':=a-r\\nu_{D_t}(a)$. In particular, $\\ov{B_n(a',r)}\\su 2\\ov{\\BB}_n\\setminus\\delta\\BB_n$. Let us define $$h(x):=\\mu^2_{D_t}(x)-\\frac{|\\nabla\\mu^2_{D_t}(a)|}{2|a-a'|}(|x-a'|^2-r^2),\\ x\\in 2\\ov{\\BB}_n\\setminus\\delta\\BB_n.$$ We have $h(a)=1$ and $$\\nabla h(x)=\\nabla\\mu^2_{D_t}(x)-\\frac{|\\nabla\\mu^2_{D_t}(a)|}{|a-a'|}(x-a').$$ For $x=a$, dividing the right side by $|\\nabla\\mu^2_{D_t}(a)|$, we get a difference of the same normal vectors $\\nu_{D_t}(a)$, so $\\nabla h(a)=0$. 
Moreover, for $|X|=1$ $$\\mathcal{H}h(x;X)=\\mathcal{H}\\mu^2_{D_t}(x;X)-\\frac{|\\nabla\\mu^2_{D_t}(a)|}{r}\\leq M|\\nabla\\mu^2_{D_t}(a)|-2M|\\nabla\\mu^2_{D_t}(a)|<0.$$ It follows that $h\\leq 1$ in any convex set $S$ such that $a\\in S\\su 2\\ov{\\BB}_n\\setminus\\delta\\BB_n$. Indeed, assume the contrary. Then there is $y\\in S$ such that $h(y)>1$. Let us join $a$ and $y$ with an interval $$g:[0,1]\\ni t\\longmapsto h(ta+(1-t)y)\\in S.$$ Since $a$ is a strong local maximum of $h$, the function $g$ has a local minimum at some point $t_0\\in(0,1)$. Hence $$0\\leq g''(t_0)=\\mathcal{H}h(t_0a+(1-t_0)y;a-y),$$ which is impossible.\n\nSetting $S:=\\ov{B_n(a',r)}$, we get $$\\mu^2_{D_t}(x)\\leq 1+\\frac{|\\nabla\\mu^2_{D_t}(a)|}{2|a-a'|}(|x-a'|^2-r^2)<1$$ for $x\\in B_n(a',r)$, i.e. $x\\in D_t$.\n\nThe proof of the exterior ball condition is similar. Set $$m:=\\inf\\left\\{\\frac{\\mathcal{H}\\mu_{D_t}^2(x;X)}{|\\nabla\\mu_{D_t}^2(y)|}:\nt\\in[0,1],\\ x,y\\in(\\ov{\\BB}_n)_*,\\ X\\in\\RR^{2n},\\ |X|=1\\right\\}.$$ Note that the $m>0$. Actually, the homogeneity of $\\mu_{D_t}$ implies $\\mathcal{H}\\mu_{D_t}^2(sx;X)=\\mathcal{H}\\mu_{D_t}^2(x;X)$ and $\\nabla\\mu_{D_t}^2(sx)=s\\nabla\\mu_{D_t}^2(x)$ for $x\\neq 0$, $X\\in \\RR^{2n}$, $s>0$. Therefore, there are positive constants $C_1,C_2$ such that $C_1\\leq\\mathcal{H}\\mu_{D_t}^2(x;X)$ for $x\\neq 0$, $X\\in \\RR^{2n}$, $|X|=1$ and $|\\nabla\\mu_{D_t}^2(y)|\\leq C_2$ for $y\\in\\ov\\BB_n$. In particular, $m\\geq C_1\/C_2$.\n\nLet $R:=2\/m$. For fixed $t\\in[0,1]$ and $a\\in\\pa D_t$ put $a'':=a-R\\nu_{D_t}(a)$. Let us define $$\\wi h(x):=\\mu^2_{D_t}(x)-\\frac{|\\nabla\\mu^2_{D_t}(a)|}{2|a-a''|}(|x-a''|^2-R^2),\\ x\\in\\ov{\\BB}_n.$$ We have $\\wi h(a)=1$ and $$\\nabla\\wi h(x)=\\nabla\\mu^2_{D_t}(x)-\\frac{|\\nabla\\mu^2_{D_t}(a)|}{|a-a''|}(x-a''),$$ so $\\nabla\\wi h(a)=0$. Moreover, for $x\\in(\\ov{\\BB}_n)_*$ and $|X|=1$ $$\\mathcal{H}\\wi h(x;X)=\\mathcal{H}\\mu^2_{D_t}(x;X)-\\frac{|\\nabla\\mu^2_{D_t}(a)|}{R}\\geq m|\\nabla\\mu^2_{D_t}(a)|-m\/2|\\nabla\\mu^2_{D_t}(a)|>0.$$ Therefore, $a$ is a strong local minimum of $\\wi h$.\n\nNow using the properties listed above we may deduce that $\\wi h\\geq 1$ in $\\ov\\BB_n$. We proceed similarly as before: seeking a contradiction suppose that there is $y\\in\\ov\\BB_n$ such that $\\wi h(y)<1$. Moving $y$ a little (if necessary) we may assume that $0$ does not lie on the interval joining $a$ and $y$. Then the mapping $\\wi g(t):=\\wi h(ta+ (1-t)y)$ attains its local maximum at some point $t_0\\in(0,1)$. The second derivative of $\\wi g$ at $t_0$ is non-positive, which gives a contradiction with a positivity of the Hessian of the function $\\wi h$. \n\nHence, we get $$\\frac{|\\nabla\\mu^2_{D_t}(a)|}{2|a-a''|}(|x-a''|^2-R^2)\\leq\\mu^2_{D_t}(x)-1<0,$$ for $x\\in D_t$, so $D_t \\subset B_n(a'',R)$.\n\nLet $T$ be the set of all $t\\in[0,1]$ such that there is an $E$-mapping $f_{t}:\\DD\\longrightarrow D_{t}$ with $f_{t}(0)=z$, $f_{t}(\\xi_{t})=w$ for some $\\xi_{t}\\in(0,1)$ (resp. $f_{t}(0)=z$, $f_{t}'(0)=\\lambda_{t}v$ for some $\\lambda_{t}>0$). We claim that $T=[0,1]$. To prove it we will use the open-close argument.\n\nClearly, $T\\neq\\emptyset$, as $0\\in T$. Moreover, $T$ is open in $[0,1]$. Indeed, let $t_{0}\\in T$. It follows from Proposition \\ref{13} that there is a neighborhood $T_{0}$ of $t_{0}$ such that there are $E$-mappings $f_{t}:\\DD\\longrightarrow D_{t}$ and $\\xi_{t}\\in(0,1)$ such that $f_{t}(0)=z$, $f_{t}(\\xi_{t})=w$ for all $t\\in T_{0}$ (resp. 
$\\lambda_{t}>0$ such that $f_{t}(0)=z$, $f_{t}'(0)=\\lambda_{t} v$ for all $t\\in T_{0}$).\n\nTo prove that $T$ is closed, choose a sequence $\\{t_{m}\\}\\su T$ convergent to some $t\\in[0,1]$. We want to show that $t\\in T$. Since $f_{t_m}$ are $E$-mappings, they are complex geodesics. Therefore, making use of the inclusions $D\\subset D_{t_m}\\subset\\mathbb B_n$ we find that there is a compact set $K\\su(0,1)$ (resp. a compact set $\\widetilde K\\subset(0,\\infty)$) such that $\\{\\xi_{t_m}\\}\\subset K$ (resp. $\\{\\lambda_{t_m}\\}\\subset\\widetilde K$). By Propositions \\ref{8} and \\ref{10b} the functions $f_{t_{m}}$ and $\\widetilde f_{t_{m}}$ are equicontinuous in $\\mathcal{C}^{1\/2}(\\overline{\\DD})$ and by Propositions \\ref{9} and \\ref{10a} the functions $\\rho_{t_{m}}$ are uniformly bounded from both sides by positive numbers and equicontinuous in $\\mathcal{C}^{1\/2}(\\TT)$. From the Arzela-Ascoli Theorem there are a subsequence $\\{s_{m}\\}\\subset\\{t_{m}\\}$ and mappings $f,\\wi f\\in\\OO(\\DD)\\cap\\mathcal C^{1\/2}(\\overline{\\mathbb D})$, $\\rho\\in\\cC^{1\/2}(\\TT)$ such that $f_{s_{m}}\\to f$, $\\widetilde{f}_{s_{m}}\\to\\wi f$ uniformly on $\\overline{\\DD}$, $\\rho_{s_{m}}\\to\\rho$ uniformly on $\\TT$ and $\\xi_{s_m}\\to\\xi\\in (0,1)$ (resp. $\\lambda_{s_m}\\to\\lambda>0$).\n\nClearly, $f(\\CDD)\\su\\overline{D}_{t}$, $f(\\TT)\\su\\partial D_{t}$ and $\\rho>0$. By the strong pseudoconvexity of $D_t$ we get $f(\\DD)\\su D_t$.\n\nThe conditions (3') and (4) of Definitions~\\ref{21} and \\ref{21e} follow from the uniform convergence of suitable functions. Therefore, $f$ is a weak $E$-mapping of $D_{t}$, consequently an $E$-mapping of $D_t$, satisfying $f(0)=z$, $f(\\xi)=w$ (resp. $f(0)=z$, $f'(0)=\\lambda v$).\n\nLet us go back to the general situation that is when a domain $D$ is bounded strongly linearly convex with real analytic boundary. Take a of point $\\eta\\in\\partial{D}$ such that $\\max_{\\zeta\\in\\partial{D}}|z-\\zeta|=|z-\\eta|$. Then $\\eta$ is a point of the strong convexity of $D$. Indeed, by the Implicit Function Theorem one can assume that in a neighborhood of $\\eta$ the defining functions of $D$ and $B:=B_n(z,|z-\\eta|)$ are of the form $r(x):=\\wi r(\\wi x)-x_{2n}$ and $q(x):=\\wi q(\\wi x)-x_{2n}$ respectively, where $x=(\\wi x,x_{2n})\\in\\RR^{2n}$ is sufficiently close to $\\eta$. From the inclusion $D\\su B$ it it follows that $r-q\\geq 0$ near $\\eta$ and $(r-q)(\\eta)=0$. Thus the Hessian $\\mathcal{H}(r-q)(\\eta)$ is weakly positive in $\\CC^n$. Since $\\mathcal{H}q(\\eta)$ is strictly positive on $T_B^\\RR(\\eta)_*=T_D^\\RR(\\eta)_*$, we find that $\\mathcal{H}r(\\eta)$ is strictly positive on $T_D^\\RR(\\eta)_*$, as well.\n\nBy a continuity argument, there is a convex neighborhood $V_0$ of $\\eta$ such that all points from $\\pa D\\cap V_0$ are points of the strong convexity of $D$. It follows from Proposition \\ref{localization} (after shrinking $V_0$ if necessary) that there is a weak stationary mapping $g:\\DD\\longrightarrow D\\cap V_0$ such that $g(\\TT)\\subset\\partial D$. In particular, $g$ is a weak stationary mapping of $D$. Since $D\\cap V_0$ is convex, the condition with the winding number is satisfied on $D\\cap V_0$ (and then on the whole $D$). Consequently $g$ is an $E$-mapping of $D$.\n\nIf $z=g(0)$, $w=g(\\xi)$ for some $\\xi\\in\\DD$ (resp. $z=g(0)$, $v=g'(0)$) then there is nothing to prove. 
In the other case let us take curves $\\alpha:[0,1]\\longrightarrow D$, $\\beta:[0,1]\\longrightarrow D$ joining $g(0)$ and $z$, $g(\\xi)$ and $w$ (resp. $g(0)$ and $z$, $g'(0)$ and $v$). We may assume that the images of $\\alpha$ and $\\beta$ are disjoint. Let $T$ be the set of all $t\\in[0,1]$ such that there is an $E$-mapping $g_{t}:\\DD\\longrightarrow D$ such that $g_{t}(0)=\\alpha(t)$, $g_{t}(\\xi_{t})=\\beta(t)$ for some $\\xi_{t}\\in(0,1)$ (resp. $g_{t}(0)=\\alpha(t)$, $g_{t}'(0)=\\lambda_{t}\\beta(t)$ for some $\\lambda_{t}>0$). Again $T\\neq\\emptyset$ since $0\\in T$. Using the results of Section \\ref{22} similarly as before (but for one domain), we see that $T$ is closed.\n\nSince $\\wi k_D$ is symmetric, it follows from Proposition \\ref{13}(1) that the set $T$ is open in $[0,1]$ (first we move along $\\alpha$, then by the symmetry we move along $\\beta$). Therefore, $g_1$ is the $E$-mapping for $z,w$.\n\nIn the case of $\\kappa_{D}$ we change a point and then we change a direction. To be more precise, consider the set $S$ of all $s\\in[0,1]$ such that there is an $E$-mapping $h_{s}:\\DD\\longrightarrow D$ such that $h_{s}(0)=\\alpha(s)$. Then $0\\in S$, by Proposition \\ref{13}(1) the set $S$ is open in $[0,1]$ and by results of Section~\\ref{22} again, it is closed. Hence $S=[0,1]$. Now we may join $h'_{1}(0)$ and $v$ with a curve $\\gamma:[0,1]\\longrightarrow \\mathbb C^n$. Let us define $R$ as the set of all $r\\in[0,1]$ such that there is an $E$-mapping $\\wi h_{r}:\\DD\\longrightarrow D$ such that $\\wi h_{r}(0)=h_1(0)$, $\\wi h'_{r}(0)=\\sigma_{r}\\gamma(1-r)$ for some $\\sigma_r>0$. Then $1\\in R$, by Proposition \\ref{13}(2) the set $R$ is open in $[0,1]$ and, by Section \\ref{22}, it is closed. Hence $R=[0,1]$, so $\\wi h_{0}$ is the $E$-mapping for $z,v$.\n\\end{proof}\n\nNow we are in position that allows us to prove the main results of the Lempert's paper.\n\n\\begin{proof}[Proof of Theorem \\ref{lem-car} $($real analytic case$)$] It follows from Lemma \\ref{lemat} that for any different points $z,w\\in D$ (resp. $z\\in D$, $v\\in(\\CC^n)_*$) one may find an $E$-mapping passing through them (resp. $f(0)=z$, $f'(0)=v$). On the other hand, it follows from Proposition \\ref{1} that $E$-mappings have left inverses, so they are complex geodesics.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{main} $($real analytic case$)$] This is a direct consequence of Lemma \\ref{lemat} and Corollary \\ref{28}.\n\\end{proof}\n\n\\bigskip\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{center}{\\sc $\\cC^2$-smooth case}\\end{center}\n\\bigskip\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{lem}\\label{un} Let $D\\su\\mathbb C^n$, $n\\geq 2$, be a bounded strongly pseudoconvex domain with $\\mathcal C^2$-smooth boundary. Take $z\\in D$ and let $r$ be a defining function of $D$ such that \n\\begin{itemize}\\item $r\\in \\mathcal C^2(\\mathbb C^n);$\n\\item $D=\\{x\\in \\mathbb C^n:r(x)<0\\}$;\n\\item $\\mathbb C^n\\setminus D=\\{x\\in \\mathbb C^n:r(x)>0\\}$;\n\\item $|\\nabla r|=1$ on $\\partial D;$\n\\item $\\sum_{j,k=1}^n\\frac{\\partial^2 r}{\\partial z_j\\partial\\overline z_k}(a)X_{j}\\overline{X}_{k}\\geq C|X|^2$ for any $a\\in \\partial D$ and $X\\in \\mathbb C^n$ with some constant $C>0$.\n\\end{itemize}\n\nSuppose that there is a sequence $\\{r_m\\}$ of $\\mathcal C^2$-smooth real-valued functions such that $D^{\\alpha}r_n$ converges to $D^{\\alpha}r$ locally uniformly for any $\\alpha\\in \\mathbb N_0^{2n}$ such that $|\\alpha|:=|\\alpha_1| +\\ldots+|\\alpha_n|\\leq 2$. 
Let $D_m$ be a connected component of the set $\\{x\\in\\mathbb C^n:r_m(x)<0\\}$, containing the point $z$.\n\nThen there is $c>0$ such that $(D_m,z)$ and $(D,z)$ belong to $\\mathcal D(c)$, $m>>1.$\n\\end{lem}\n\n\n\\begin{proof} Losing no generality assume that $D\\Subset\\mathbb B_n.$\nNote that the conditions (1), (5), (6) of Definition \\ref{30} are clearly satisfied. To find $c$ satisfying ($2$), we take $s>0$ such that $\\mathcal H r (x;X)< s |X|^2$ for $x\\in\\ov\\BB_n$ and $X\\in(\\mathbb R^{2n})_*$. Then $\\HH r_m (x;X)<2s|X|^2$ for $x\\in\\ov\\BB_n$, $X\\in(\\mathbb R^{2n})_*$ and $m>>1$. Let $U_0\\subset\\mathbb B_n$ be an open neighborhood of $\\pa D$ such that $|\\nabla r|$ is on $U_0$ between $3\/4$ and $5\/4$. Note that $\\partial D_m\\subset U_0$ and $|\\nabla r_m|\\in (1\/2, 3\/2)$ on $U_0$ for $m>>1$.\n\nFix $m$ and $a\\in \\partial D_m$ and put $b:=a-R\\nu_{D_m}(a)$, where a small number $R>0$ will be specified later. There is $t>0$ such that $\\nabla r_m(a)=2t(a-b)$. Note that $t$ may be arbitrarily large provided that $R$ was small enough. We take $t:=2s$ and $R:=|\\nabla r_m(a)|\/t$. Then we have $\\mathcal H r_m(x;X)<2t |X|^2$ for $x\\in\\ov\\BB_n$, $X\\in(\\mathbb R^{2n})_*$ and $m>>1$. Then a function $$h(x):=r_m(x)-t(|x-b|^2-R^2),\\ x\\in \\mathbb C^n,$$ attains at $a$ its global maximum on $\\ov\\BB_n$ ($a$ is a strong local maximum and the Hessian of $h$ is negative on the convex set $\\ov\\BB_n$, cf. the proof of Lemma \\ref{lemat}).\nThus $h\\leq 0$ on $\\mathbb B_n$. From this we immediately get (2).\n\nNote that it follows from (2) that $D_m=\\{x\\in\\mathbb C^n:r_m(x)<0\\}$ for $m$ big enough (i.e. $\\{x\\in \\mathbb C^n:\\ r_m(x)<0\\}$ is connected).\n\nMoreover, the condition (2) implies the condition (3) as follows. We infer from Remark~\\ref{D(c),4} that there is $c'>0$ such that $D$ satisfies (3) with $c'$. Let $m_0$ be such that the Hausdorff distance between $\\partial D$ and $\\partial D_m$ is smaller than $1\/c'$ for $m\\geq m_0$. There is $c''$ such that $D_{m_0}$ satisfies (3) with $c''$. Losing no generality we may assume that $c''c'$ such that every $D_m$ satisfies (4) with $c$ for $m$ big enough. To do it let us cover $\\partial D$ with a finite number of balls $B_j$, $j=1,\\ldots,N$, from condition (4) and let $B'_j$ be a ball contained relatively in $B_j$ such that $\\{B_j\\}$ covers $\\partial D$, as well. Let $\\Phi_j$ be mappings corresponding to $B_j$. Let $\\eps$ be such that any ball of radius $\\eps$ intersecting $\\partial D$ non-emptily is relatively contained in $B_j'$ for some $j$. Observe that any ball $B$ of radius $\\eps\/2$ intersecting non-emptily $\\partial D_m$ is contained in a ball of radius $\\eps$ intersecting non-emptily $\\partial D$; hence it is contained in $B_j'$ for some $j$. Then the pair $B$, $\\Phi_j$ satisfies the conditions (4) (b), (c) and (d). Therefore, it suffices to check that there is $c>2\/\\eps$ such that each pair $B_j'$, $\\Phi_j$ satisfies the condition (4) for $D_m$ with $c$ ($m>>1$). This is possible since $\\Phi_j(D_m)\\subset\\Phi_j(D)$, $D^\\alpha\\Phi_j(\\pa D_m\\cap B_j)$ converges to $D^\\alpha\\Phi_j(\\pa D\\cap B_j)$ for $|\\alpha|\\leq 2$ and for any $w\\in\\Phi(\\pa D\\cap B_j)$ there is a ball of radius $2\/\\eps$ containing $\\Phi_j(D)$ and tangent to $\\partial\\Phi_j(D)$ at $w$. To be precise, we proceed as follows. \n\nLet $a,b\\in\\CC^n$ and let $x\\in\\pa B_n(a,\\wi c)$, where $\\wi c>c'$. 
Then a ball $B_n(2a-x,2\\wi c)$ contains $B_n(a,\\wi c)$ and is tangent to $B_n(a,\\wi c)$ at $x$. There is a number $\\eta=\\eta(\\delta,\\wi c)>0$, independent of $a,b,x$, such that the diameter of the set $B_n(b,\\wi c)\\setminus B_n(2a-x,2\\wi c)$ is smaller than $\\delta>0$, whenever $|a-b|<\\eta$ (this is a simple consequence of the triangle inequality).\n\nLet $\\wi s>0$ be such that $\\mathcal H(r\\circ\\Phi_j^{-1})(x;X)\\geq 2\\wi s|X|^2$ for $x\\in U_j$, $j=1,\\ldots,N$, where $U_j$ is an open neighborhood of $\\Phi_j(\\partial D\\cap B_j)$. Then, for $m$ big enough, $\\mathcal H(r_m\\circ \\Phi_j^{-1})(x;X)\\geq\\wi s|X|^2$ for $x\\in U_j$ and $\\Phi_j(\\partial D_m\\cap B_j')\\subset U_j$, $j=1,\\ldots,N$. Repeating for the function $$x\\longmapsto(r_m\\circ\\Phi_j^{-1})(x)-\\wi t(|x-\\wi b|^2-\\wi R^2)$$ the argument used in the interior ball condition with suitably chosen $\\wi t$ and a uniform $\\wi R>c$, we find that there is a uniform $\\wi\\eps>0$ such that for any $j,m$ and $w\\in\\Phi_j(\\partial D_m\\cap B_j')$ there is a ball $B$ of radius $\\wi R$, tangent to $\\Phi_j(\\partial D_m\\cap B_j')$ at $w$, such that $\\Phi_j(\\partial D_m\\cap B_j')\\cap B_n(w,\\wi\\eps)\\subset B$. Let $a_{j,m}(w)$ denote its center.\n\nOn the other hand, for any $w\\in \\Phi_j(\\partial D_m\\cap B_j')$ there is $t>0$ such that $w'=w+t\\nu(w)\\in \\Phi_j(\\partial D\\cap B_j)$, where $\\nu(w)$ is a normal vector to $\\Phi_j(\\partial D_m\\cap B_j')$ at $w$. Let $a_j(w')$ be the center of a ball of radius $\\wi R$ tangent to $\\Phi_j(\\partial D\\cap B_j)$ at $w'$. It follows that $|a_{j,m}(w)-a_j(w')|<\\eta(\\wi\\eps\/2,\\wi R)$ provided that $m$ is big enough.\n\nJoining the facts presented above, we finish the proof of the exterior ball condition (with a radius dependent only on $\\wi\\eps$ and $\\wi R$).\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Theorems \\ref{lem-car} and \\ref{main} \\emph{(}$\\mathcal C^2$-smooth case$)$]\nLosing no generality assume that $0\\in D\\Subset\\BB_n$.\n\nIt follows from the Weierstrass Theorem that there is a sequence $\\{P_k\\}$ of real polynomials on $\\CC^n\\simeq\\mathbb R^{2n}$ such that $$D^{\\alpha}P_{k}\\to D^{\\alpha}r \\text{ uniformly on }\\ov\\BB_n,$$ where $\\alpha=(\\alpha_1,\\ldots, \\alpha_{2n})\\in \\mathbb N_0^{2n}$ is such that $|\\alpha|=\\alpha_1+\\ldots +\\alpha_{2n}\\leq 2$. Consider the open set $$\\wi D_{k,\\eps}:=\\{x\\in \\mathbb C^n:P_{k}(x)+\\eps<0\\}.$$ Let $\\eps_{m}$ be a sequence of positive numbers converging to $0$ such that $3\\eps_{m+1}<\\eps_m.$\n\nFor any $m\\in \\mathbb N$ there is $k_{m}\\in\\NN$ such that $\\sup_{\\ov\\BB_n}|P_{k_{m}}-r|<\\eps_{m}$. Putting $r_{m}:=P_{k_{m}}+2\\eps_{m}$, we get $r+\\eps_{m}<r_{m}<r+3\\eps_{m}$ on $\\ov\\BB_n$; in particular, the connected component $D_m$ of $\\{r_m<0\\}$ containing $z$ (as in Lemma~\\ref{un}) satisfies $z,w\\in D_m\\su D$ (resp. $z\\in D_m\\su D$) for $m>>1$. Therefore, for any $m>>1$ one may find an $E$-mapping $f_m$ of $D_m$ for $z,w$ (resp. for $z,v$). Since $(D_m,z)\\in \\mathcal D(c)$ for some uniform $c>0$ ($m>>1$) (Lemma~\\ref{un}), we find that $f_m$, $\\wi f_m$ and $\\rho_m$ satisfy the uniform estimates from Section~\\ref{22}. Thus, passing to a subsequence we may assume that $\\{f_m\\}$ converges uniformly on $\\CDD$ to a mapping $f\\in\\OO(\\DD)\\cap\\cC^{1\/2}(\\CDD)$ passing through $z,w$ (resp.
such that $f(0)=z$, $f'(0)=\\lambda v$, $\\lambda>0$), $\\{\\wi f_m\\}$ converges uniformly on $\\CDD$ to a mapping $\\wi f\\in\\OO(\\DD)\\cap\\mathcal C^{1\/2}(\\overline{\\mathbb D})$ and $\\{\\rho_m\\}$ is convergent uniformly on $\\TT$ to a positive function $\\rho\\in\\cC^{1\/2}(\\TT)$ (in particular, $f'\\bullet\\wi f=1$ on $\\DD$, so $\\wi f$ has no zeroes in $\\CDD$). We already know that this implies that $f$ is a weak $E$-mapping of $D$.\n\nTo get $\\cC^{k-1-\\eps}$-smoothness of the extremal $f$ and its associated mappings for $k\\geq 3$, it suffices to repeat the proof of Proposition~5 of \\cite{Lem2}. This is just the Webster Lemma (we have proved it in the real analytic case --- see Proposition~\\ref{6}). Namely, let $$\\psi:\\partial D\\ni z\\longmapsto(z,T_{D}^\\mathbb{C}(z))\\in \\mathbb C^n\\times(\\mathbb P^{n-1})_*,$$ where $\\mathbb P^{n-1}$ is the $(n-1)$-dimensional complex projective space. Let $\\pi:(\\CC^n)_*\\longrightarrow\\mathbb P^{n-1}$ be the canonical projection. \n\nBy \\cite{Web}, $\\psi(\\partial D)$ is a totally real manifold of $\\mathcal C^{k-1}$ class. Observe that the mapping $(f,\\pi\\circ \\wi f):\\CDD\\longrightarrow\\CC^n\\times\\mathbb P^{n-1}$ is $1\/2$-H\\\"older continuous, is holomorphic on $\\mathbb D$ and maps $\\mathbb T$ into $\\psi(\\partial D)$. Therefore, it is $\\mathcal C^{k-1-\\eps}$-smooth for any $\\eps>0$, whence $f$ is $\\mathcal C^{k-1-\\eps}$-smooth. Since $\\nu_D\\circ f$ is of class $\\mathcal C^{k-1-\\eps}$, it suffices to proceed as in the proof of Proposition~\\ref{6}.\n\\end{proof}\n\n\n\n\\section{Appendix}\\label{Appendix}\n\\subsection{Totally real submanifolds}\nLet $M\\subset\\CC^m$ be a totally real local $\\CLW$ submanifold of the real dimension $m$. Fix a point $z\\in M$. There are neighborhoods $U_0\\su\\RR^m$, $V_0\\su\\CC^m$ of $0$ and $z$ and a $\\CLW$ diffeomorphism $\\widetilde{\\Phi}:U_0\\longrightarrow M\\cap V_0$ such that $\\widetilde{\\Phi}(0)=z$. The mapping $\\widetilde{\\Phi}$ can be extended in a natural way to a mapping $\\Phi$ holomorphic in a neighborhood of $0$ in $\\CC^m$. Note that this extension will be biholomorphic in a neighborhood of $0$. Actually, we have $$\\frac{\\partial\\Phi_j}{\\partial z_k}(0)=\\frac{\\partial\\Phi_j}{\\partial\nx_k}(0)=\\frac{\\partial\\widetilde{\\Phi}_j}{\\partial x_k}(0),\\ j,k=1,\\ldots,m,$$ where $x_k=\\re z_k$. Suppose that the complex derivative $\\Phi'(0)$ is not an isomorphism. Then there is $X\\in(\\CC^m)_*$ such that $\\Phi'(0)X=0$, so \\begin{multline*}0=\\sum_{k=1}^m\\frac{\\partial\\Phi}{\\partial z_k}(0)X_k=\\sum_{k=1}^m\\frac{\\partial\\wi\\Phi}{\\partial x_k}(0)(\\re X_k+i\\im X_k)=\\\\=\\underbrace{\\sum_{k=1}^m\\frac{\\partial\\wi\\Phi}{\\partial x_k}(0)\\re X_k}_{=:A}+i\\underbrace{\\sum_{k=1}^m\\frac{\\partial\\wi\\Phi}{\\partial x_k}(0)\\im X_k}_{=:B}.\\end{multline*}\nThe vectors $$\\frac{\\partial\\wi\\Phi}{\\partial x_k}(0),\\ k=1,\\ldots,m$$ form a basis of $T^{\\RR}_M(z)$, so $A,B\\in T^{\\RR}_M(z)$, consequently $A,B\\in iT^{\\RR}_M(z)$. Since $M$ is totally real, i.e. $T^{\\RR}_M(z)\\cap iT^{\\RR}_M(z)=\\{0\\}$, we have $A=B=0$. 
By a property of the basis we get $\\re X_k=\\im X_k=0$, $k=1,\\ldots,m$ --- a contradiction.\n\nTherefore, $\\Phi$ in a neighborhood of $0$ is a biholomorphism of two open subsets of $\\CC^m$, which maps a neighborhood of $0$ in $\\RR^m$ to a neighborhood of $z$ in $M$.\n\n\n\\begin{lemm}[Reflection Principle]\\label{reflection}\nLet $M\\subset\\CC^m$ be a totally real local $\\CLW$ submanifold of the real\ndimension $m$. Let $V_0\\subset\\CC$ be a neighborhood of $\\zeta_0\\in\\TT$ and let $g:\\overline{\\DD}\\cap V_0\\longrightarrow\\CC^m$ be a continuous mapping. Suppose that $g\\in\\OO(\\DD\\cap V_0)$ and $g(\\TT\\cap V_0)\\subset M$. Then $g$ can be extended holomorphically past $\\TT\\cap V_0$.\n\\end{lemm}\n\\begin{proof}\nIn virtue of the identity principle it is sufficient to extend $g$ locally\npast an arbitrary point $\\zeta_0\\in\\TT\\cap V_0$. For a point $g(\\zeta_0)\\in M$ take $\\Phi$ as above. Let $V_1\\subset V_0$ be a neighborhood of $\\zeta_0$ such that $g(\\CDD\\cap V_1)$ is contained in the image\nof $\\Phi$. The mapping $\\Phi^{-1}\\circ g$ is holomorphic in $\\DD\\cap V_1$ and has\nreal values on $\\TT\\cap V_1$. By the ordinary Reflection Principle we can\nextend this mapping holomorphically past $\\TT\\cap V_1$. Denote this extension by\n$h$. Then $\\Phi\\circ h$ is an extension of $g$ in a neighborhood of $\\zeta_0$.\n\\end{proof}\n\n\n\n\n\\subsection{Schwarz Lemma for the unit ball}\n\\begin{lemm}[Schwarz Lemma]\\label{schw}\nLet $f\\in\\OO(\\DD,B_n(a,R))$ and $r:=|f(0)-a|$. Then $$|f'(0)|\\leq \\sqrt{R^2-r^2}.$$\n\\end{lemm}\n\n\n\\subsection{Some estimates of holomorphic functions of $\\cC^{\\alpha}$-class}\n\nLet us recall some theorems about functions holomorphic in $\\DD$ and continuous in $\\CDD$. Concrete values of constants $M,K$ are possible to calculate, seeing on the proofs. In fact, it is only important that they do not depend on functions.\n\\begin{tww}[Hardy, Littlewood, \\cite{Gol}, Theorem 3, p. 411]\\label{lit1}\nLet $f\\in\\OO(\\DD)\\cap\\cC(\\CDD)$. Then for $\\alpha\\in(0,1]$ the following conditions are equivalent\n\\begin{eqnarray}\\label{47}\\exists M>0:\\ |f(e^{i\\theta})-f(e^{i\\theta'})|\\leq M|\\theta-\\theta'|^{\\alpha},\\ \\theta,\\theta'\\in\\RR;\\\\\n\\label{45}\\exists K>0:\\ |f'(\\zeta)|\\leq K(1-|\\zeta|)^{\\alpha-1},\\ \\zeta\\in\\DD.\n\\end{eqnarray}\nMoreover, if there is given $M$ satisfying \\eqref{47} then $K$ can be chosen as $$2^{\\frac{1-3\\alpha}{2}}\\pi^\\alpha M\\int_0^\\infty\\frac{t^\\alpha}{1+t^2}dt$$ and if there is given $K$ satisfying \\eqref{45} then $M$ can be chosen as $(2\/\\alpha+1)K$.\n\\end{tww}\n\\begin{tww}[Hardy, Littlewood, \\cite{Gol}, Theorem 4, p. 413]\\label{lit2}\nLet $f\\in\\OO(\\DD)\\cap\\cC(\\CDD)$ be such that $$|f(e^{i\\theta})-f(e^{i\\theta'})|\\leq M|\\theta-\\theta'|^{\\alpha},\\ \\theta,\\theta'\\in\\RR,$$ for some $\\alpha\\in(0,1]$ and $M>0$. Then $$|f(\\zeta)-f(\\zeta')|\\leq K|\\zeta-\\zeta'|^{\\alpha},\\\n\\zeta,\\zeta'\\in\\CDD,$$ where $$K:=\\max\\left\\{2^{1-2\\alpha}\\pi^\\alpha M,2^{\\frac{3-5\\alpha}{2}}\\pi^\\alpha\\alpha^{-1} M\\int_0^\\infty\\frac{t^\\alpha}{1+t^2}dt\\right\\}.$$\n\\end{tww}\n\\begin{tww}[Privalov, \\cite{Gol}, Theorem 5, p. 414]\\label{priv}\nLet $f\\in\\OO(\\DD)$ be such that $\\re f$ extends continuously on $\\CDD$ and $$|\\re f(e^{i\\theta})-\\re f(e^{i\\theta'})|\\leq M|\\theta-\\theta'|^\\alpha,\\ \\theta,\\theta'\\in\\RR,$$ for some $\\alpha\\in(0,1)$ and $M>0$. 
Then $f$ extends continuously on $\\CDD$ and $$|f(\\zeta)-f(\\zeta')|\\leq K|\\zeta-\\zeta'|^\\alpha,\\ \\zeta,\\zeta'\\in\\CDD,$$ where $$K:=\\max\\left\\{2^{1-2\\alpha}\\pi^\\alpha,2^{\\frac{3-5\\alpha}{2}}\\pi^\\alpha\\alpha^{-1}\\int_0^\\infty\\frac{t^\\alpha}{1+t^2}dt\\right\\}\\left(\\frac{2}{\\alpha}+1\\right)2^{\\frac{3-3\\alpha}{2}}\\pi^{\\alpha}M\\int_0^\\infty\\frac{t^\\alpha}{1+t^2}dt.$$\n\\end{tww}\n\\subsection{Sobolev space}\nThe Sobolev space $W^{2,2}(\\TT)=W^{2,2}(\\TT,\\CC^m)$ is a space of functions $f:\\TT\\longrightarrow\\CC^m$, whose first two derivatives (in the sense of distribution) are in $L^2(\\TT)$ (here we use a standard identification of functions on the unit circle and functions on the interval $[0,2\\pi]$). Then $f$ is $\\mathcal C^1$-smooth.\n\nIt is a complex Hilbert space with the following scalar product\n$$\\langle f,g\\rangle_W:=\\langle f,g\\rangle_{L}+\\langle f',g'\\rangle_{L}+\\langle f'',g''\\rangle_{L},$$\nwhere $$\\langle\\wi f,\\wi g\\rangle_{L}:=\\frac{1}{2\\pi}\\int_0^{2\\pi}\\langle\\wi f(e^{it}),\\wi g(e^{it})\\rangle dt.$$ Let $\\|\\cdot\\|_L$, $\\|\\cdot\\|_W$ denote the norms induced by $\\langle\\cdotp,-\\rangle_L$ and $\\langle\\cdotp,-\\rangle_W$. The following characterization simply follows from Parseval's identity $$W^{2,2}(\\TT)=\\left\\{f\\in L^2(\\TT):\\sum_{k=-\\infty}^{\\infty}(1+k^2+k^4)|a_k|^2<\\infty\\right\\},$$ where $a_k\\in\\CC^m$ are the $m$-dimensional Fourier coefficients of $f$, i.e. $$f(\\zeta)=\\sum_{k=-\\infty}^{\\infty}a_k\\zeta^k,\\ \\zeta\\in\\TT.$$ More precisely, Parseval's identity gives $$\\|f\\|_W=\\sqrt{\\sum_{k=-\\infty}^{\\infty}(1+k^2+k^4)|a_k|^2},\\ f\\in W^{2,2}(\\TT).$$ Note that $W^{2,2}(\\TT)\\su\\mc{C}^{1\/2}(\\TT)\\su\\mc{C}(\\TT)$ and both inclusions are continuous (in particular, both inclusions are real analytic). Note also that\n \\begin{equation}\\label{67}\\|f\\|_{\\sup}\\leq\\sum_{k=-\\infty}^{\\infty}|a_k|\\leq\\sqrt{\\sum_{k=-\\infty}^{\\infty}\\frac{1}{1+k^2}\\sum_{k=-\\infty}^{\\infty}(1+k^2)|a_k|^2}\\leq\\frac{\\pi}{\\sqrt 3}\\|f\\|_W.\\end{equation}\\\\\n\nNow we want to show that there exists $C>0$ such that $$\\|h^\\alpha\\|_W\\leq C^{|\\alpha|}\\|h_1\\|^{\\alpha_1}_W\\cdotp\\ldots\\cdotp\\|h_{2n}\\|^{\\alpha_{2n}}_W,\\quad h\\in W^{2,2}(\\TT,\\CC^n),\\,\\alpha\\in\\NN_0^{2n}.$$ Thanks to the induction it suffices to prove that there is $\\wi C>0$ satisfying $$\\|h_1h_2\\|_W\\leq\\wi C\\|h_1\\|_W\\|h_2\\|_W,\\quad h_1,h_2\\in W^{2,2}(\\TT,\\CC).$$ Using \\eqref{67}, we estimate $$\\|h_1h_2\\|^2_W=\\|h_1h_2\\|^2_L+\\|h_1'h_2+h_1h_2'\\|^2_L+\\|h_1''h_2+2h_1'h_2'+h_1h_2''\\|^2_L\\leq$$$$\\leq C_1\\|h_1h_2\\|_{\\sup}^2+(\\|h_1'h_2\\|_L+\\|h_1h_2'\\|_L)^2+(\\|h_1''h_2\\|_L+\\|2h_1'h_2'\\|_L+\\|h_1h_2''\\|_L)^2\\leq$$\\begin{multline*}\\leq C_1\\|h_1\\|_{\\sup}^2\\|h_2\\|_{\\sup}^2+(C_2\\|h_1'\\|_L\\|h_2\\|_{\\sup}+C_2\\|h_1\\|_{\\sup}\\|h_2'\\|_L)^2+\\\\+(C_2\\|h_1''\\|_L\\|h_2\\|_{\\sup}+C_2\\|2h_1'h_2'\\|_{\\sup}+C_2\\|h_1\\|_{\\sup}\\|h_2''\\|_L)^2\\leq\\end{multline*}\\begin{multline*}\\leq C_3\\|h_1\\|_W^2\\|h_2\\|_W^2+(C_4\\|h_1\\|_W\\|h_2\\|_W+C_4\\|h_1\\|_W\\|h_2\\|_W)^2+\\\\+(C_4\\|h_1\\|_W\\|h_2\\|_W+2C_2\\|h_1'\\|_{\\sup}\\|h_2'\\|_{\\sup}+C_4\\|h_1\\|_W\\|h_2\\|_W)^2\\leq\\end{multline*}$$\\leq C_5\\|h_1\\|_W^2\\|h_2\\|_W^2+(2C_4\\|h_1\\|_W\\|h_2\\|_W+2C_2\\|h_1'\\|_{\\sup}\\|h_2'\\|_{\\sup})^2$$ with constants $C_1,\\ldots,C_5$. 
Expanding $h_j(\\zeta)=\\sum_{k=-\\infty}^{\\infty}a^{(j)}_k\\zeta^{k}$, $\\zeta\\in\\TT$, $j=1,2$, we obtain $$\\|h_j'\\|_{\\sup}\\leq\\sum_{k=-\\infty}^{\\infty}|k||a^{(j)}_k|\\leq\\sqrt{\\sum_{k\\in\\ZZ_*}\\frac{1}{k^2}\\sum_{k\\in\\ZZ_*}k^4|a^{(j)}_k|^2}\\leq\\frac{\\pi}{\\sqrt 3}\\|h_j\\|_W$$ and finally $\\|h_1h_2\\|^2_W\\leq C_6\\|h_1\\|_W^2\\|h_2\\|_W^2$ for some constant $C_6$.\n\\subsection{Matrices}\n\\begin{propp}[Lempert, \\cite{Lem2}, Th\\'eor\\`eme $B$]\\label{12}\nLet $A:\\TT\\longrightarrow\\CC^{n\\times n}$ be a matrix-valued real analytic mapping such\nthat $A(\\zeta)$ is self-adjoint and strictly positive for any $\\zeta\\in\\TT$. Then there exists $H\\in\\OO(\\CDD,\\CC^{(n-1)\\times(n-1)})$ such that $\\det H\\neq 0$ on $\\CDD$ and $HH^*=A$ on $\\TT$.\n\\end{propp}\nIn \\cite{Lem2}, the mapping $H$ was claimed to be real analytic in a neighborhood of $\\CDD$ and holomorphic in $\\DD$, but it is equivalent to $H\\in\\OO(\\CDD)$. Indeed, since $\\ov\\pa H$ is real analytic near $\\CDD$ and $\\ov\\pa H=0$ in $\\DD$, the identity principle for real analytic functions implies $\\ov\\pa H=0$ in a neighborhood of $\\CDD$.\n\\begin{propp}[\\cite{Tad}, Lemma $2.1$]\\label{59}\nLet $A$ be a complex symmetric $n\\times n$ matrix. Then $$\\|A\\|=\\sup\\{|z^TAz|:z\\in\\CC^n,\\,|z|=1\\}.$$\n\\end{propp}\n\\bigskip\n\\textsc{Acknowledgements.} We would like to thank Sylwester Zaj\\k ac for helpful discussions. We are also grateful to our friends for the participation in preparing some parts of the work.\n\\medskip\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction \\label{sec:introduction}}\nBrillouin scattering (BS) refers to the nonlinear interaction between optical and mechanical fields inside a material. BS has been widely exploited in optical fibers to implement a wide range of devices, including optical amplifiers, ultra-narrow linewidth lasers, radio-frequency (RF) signal generators, and distributed sensors \\cite{garmire2017perspectives}.\n\nBrillouin scattering was for long thought to be mediated by electrostrictive forces only. Thus, its spectrum was considered to be governed by material properties \\cite{wiederhecker_controlling_2009}. In 2006, microstructuration of optical fibers enabled shaping the BS spectrum \\cite{dainese2006stimulated}, opening a new path for geometric control of this effect \\cite{beugnot2007guided}. In 2012, a new theory \\cite{peter_t_rakich_giant_2012} predicted that Brillouin interactions could be greatly magnified by strong radiation\npressure on the boundaries of suspended silicon waveguides with nanometric-scale core sizes \\cite{qiu_stimulated_2013,wolff_stimulated_2015}. The simultaneous confinement of optical and mechanical modes is challenging in silicon-on-insulator (SOI) waveguides due to a strong phonon leakage towards the silica cladding \\cite{eggleton2019brillouin_LL,wiederhecker_brillouin_2019,safavi-naeini_controlling_2019}. However, this limitation can be circumvented by isolating the silicon waveguide core by complete or partial removal of the silica cladding \\cite{shin_tailorable_2013,laer_net_2015,peter_t_rakich_giant_2012}. Suspended or quasi-suspended structures such as silicon membrane rib waveguides \\cite{kittlaus_large_2016} and fully suspended silicon nanowires \\cite{laer_net_2015} have demonstrated large Brillouin gain. 
These results have generated great scientific interest due to their potential for laser sources \\cite{otterstrom_silicon_2018}, microwave signal generation \\cite{li_microwave_2013} and processing \\cite{liu_chip-based_2018}, sensing applications \\cite{chow_distributed_2018, lai_earth_2020} and non-reciprocal optical devices \\cite{kittlaus_non-reciprocal_2018}.\nIn particular, pedestal waveguides \\cite{van_laer_interaction_2015} yield an experimental Brillouin gain of 3000 W$^{-1}$m$^{-1}$.\nHowever, the need for narrow-width pedestals to optimize the Brillouin gain complicates the fabrication process and may compromise the mechanical stability of the structures. On the other hand, a lower experimental Brillouin gain (1000 W$^{-1}$m$^{-1}$) was obtained for silicon membrane rib waveguides due to the very different confinement of optical and mechanical modes \\cite{kittlaus_large_2016}. Still, this comparatively modest Brillouin gain was compensated by achieving ultra-low optical propagation loss, allowing the demonstration of a lasing effect \\cite{otterstrom_silicon_2018}. The use of photonic crystals with simultaneous photonic and phononic bandgaps \\cite{zhang2017design} (also referred to as phoxonic crystals) has been proposed to maximize the Brillouin gain in silicon membrane waveguides, achieving calculated values up to 8000 W$^{-1}$m$^{-1}$. Yet, the narrow bandwidth and high optical propagation loss, typically linked to bandgap confinement \\cite{baba_slow_2008}, may compromise the performance of these phoxonic crystals.\n\nSubwavelength grating silicon waveguides, with periods shorter than half of the wavelength of the guided light, exploit index-contrast confinement to yield low optical loss and wideband operation \\cite{halir_waveguide_2015,cheben2018subwavelength}. Interestingly, near-infrared photons and GHz phonons in nanoscale Si waveguides have comparable wavelengths (near 1 \\textmu m) \\cite{safavi-naeini_controlling_2019}. Thus, the same periodic structuration could operate in the subwavelength regime for both photons and phonons. In addition, forward Brillouin scattering (FBS), used to demonstrate Brillouin gain in Si, relies on longitudinally propagating photons and transversally propagating phonons \\cite{eggleton2019brillouin_LL,wiederhecker_brillouin_2019, safavi-naeini_controlling_2019}. Hence, engineering the longitudinal and transversal subwavelength geometries would allow independent control of photonic and phononic modes. Brillouin optimization in silicon membranes has been proposed based on index-contrast confinement of photons (longitudinal subwavelength grating) and bandgap confinement of phonons (transversal phononic crystal) \\cite{schmidt2019suspended}, achieving a calculated gain of 1750 W$^{-1}$m$^{-1}$. More recently, the combination of subwavelength index-contrast and subwavelength softening has been proposed to optimize Brillouin gain in suspended Si waveguides, achieving a calculated value of 3000 W$^{-1}$m$^{-1}$, for a minimum feature size of 50 nm \\cite{zhang_subwavelength_2020}. Still, these two approaches require several etch steps of the silicon core, complicating the device's fabrication. In this work, we propose a novel subwavelength-structured Si membrane, illustrated in Fig. \\ref{fig:structure}, requiring only one etch step of silicon.
We develop an optimization method to design the waveguide geometry, combining multi-physics optical and mechanical simulations with a genetic algorithm (GA) capable of handling a large number of parameters \\cite{hakansson_generating_2019}. The optimized geometry yields a calculated Brillouin gain of 3300 W$^{-1}$m$^{-1}$, with a minimum feature size of 50 nm, compatible with electron-beam lithography.\n\n\\section{Design and Results \\label{sec:Results}}\nThe proposed optomechanical waveguide geometry, depicted in Fig. \\ref{fig:structure}, comprises a suspended central strip of width $W_g=400$ nm that is anchored to the lateral silicon slabs by a lattice of arms with a longitudinal period ($z$-direction) of $\\Lambda=300$ nm. This period is shorter than half of the optical wavelength, ensuring optical operation in the subwavelength regime. The anchoring arms are symmetric with respect to the waveguide center. We split the arms into five different sections with widths and lengths of $W_i$ ($x$-direction) and $L_i$ ($z$-direction), respectively. The index $i=1$ refers to the section adjacent to the waveguide core, while the index $i=5$ refers to the outermost section (see Fig. \\ref{fig:structure}, inset). The fifth section has a fixed width of $W_5=500$ nm and length of $L_5=50$ nm to ensure proper guidance and localization of the optical mode. The widths and lengths of sections 1 to 4 are optimized using the genetic algorithm. The whole waveguide has a fixed silicon thickness of $t=220$ nm, allowing fabrication in a single-etch step.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=\\columnwidth]{fig1_structure_single.png}\n \\caption{Proposed optomechanical waveguide. In the inset, the different sections of the anchoring arms are numbered from $1$ to $5$. The width of the waveguide core ($W_g=400$ nm), the period ($\\Lambda=300$ nm), and the dimensions of the outermost section ($L_5=50$ nm, $W_5=500$ nm) remain fixed throughout the optimization process. The thickness of the silicon slab is $t=220$ nm.}\n \\label{fig:structure}\n\\end{figure}\n\nWe focus on FBS, where only near-cut-off acoustic modes are involved. In the absence of optical absorption, which is the case of silicon at near-infrared wavelengths, the optical and mechanical mode equations describing FBS decouple and can be solved separately \\cite{safavi-naeini_controlling_2019}. We use here COMSOL Multiphysics software for the optomechanical simulations. For the calculation of optical and mechanical modes in the optimization process, we reduce the 3D structure to an equivalent 2D geometry. The effective index method \\cite{chen_foundations_2005} is considered for the computation of the transverse-electric (TE) polarized optical modes while the in-plane mechanical modes are calculated assuming the plane stress approximation \\cite{auld_acoustic_1973}. 
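\n\nFor clarity, the design space handled by the optimizer can be summarized as follows. The sketch below is purely illustrative (it is not part of the simulation code); the 50~nm lower bound corresponds to the minimum feature size targeted in this work, whereas the upper bound of the free parameters is an assumption introduced only for this example.\n\\begin{verbatim}\n# Illustrative summary of the design space (not simulation code).\n# Fixed dimensions are those quoted above; all values in nanometers.\nimport random\n\nFIXED = {'W_g': 400, 'Lambda': 300, 'W_5': 500, 'L_5': 50, 't': 220}\n\nFREE = ['W_1', 'W_2', 'W_3', 'W_4', 'L_1', 'L_2', 'L_3', 'L_4']\nMIN_FEATURE = 50                                # e-beam compatible\nBOUNDS = {p: (MIN_FEATURE, 350) for p in FREE}  # upper bound assumed\n\ndef random_individual():\n    # one candidate geometry = one value per free parameter\n    return [random.uniform(*BOUNDS[p]) for p in FREE]\n\\end{verbatim}\nAn individual in the genetic optimization described below is precisely such a vector of eight values.\n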
We compute the Brillouin gain, $G_\\mathrm{B}$, as \\cite{wiederhecker_brillouin_2019}\n\\begin{equation}\n G_\\mathrm{B}(\\Omega_\\mathrm{m}) = Q_\\mathrm{m} \\, \\frac{2 \\omega_\\mathrm{p}}{m_\\mathrm{eff} \\, \\Omega_\\mathrm{m}^2} \\, \\left| \\int f_\\mathrm{MB} \\,\\mathrm{d}\\ell + \\int f_\\mathrm{PE} \\,\\mathrm{d} A \\right|^2 ,\n\\label{eq:GB}\n\\end{equation} \nwhere $\\omega_\\textrm{p}$ is the frequency of the optical pump, $\\Omega_\\mathrm{m}$ is the mechanical frequency, $Q_\\mathrm{m}$ is the mechanical quality factor, $m_\\mathrm{eff} = \\int\\rho\\,|\\mathbf{u}_\\mathrm{m}|^2\/\\max|\\mathbf{u}_\\mathrm{m}|^2 \\,\\mathrm{d}A$ is the effective linear mass density of the mechanical mode with displacement profile $\\mathbf{u}_\\mathrm{m}$, and $f_\\mathrm{MB}$ and $f_\\mathrm{PE}$ are the linear and surface overlap of optical force density and deformation representing the moving boundaries effect (MB) and the photoelastic effect (PE), respectively,\n\\begin{align}\n & f_\\mathrm{MB} = \\frac{\\mathbf{u}_\\mathrm{m}^*\\cdot\\mathbf{n} \\, \\left(\\delta\\varepsilon_\\mathrm{MB} \\, \\mathbf{E}^*_\\mathrm{p,t}\\cdot \\mathbf{E}_\\mathrm{s,t} - \\delta\\varepsilon_\\mathrm{MB}^{-1} \\, \\mathbf{D}_\\mathrm{p,n}^*\\cdot\\mathbf{D}_\\mathrm{s,n}\\right)}{\\max|\\mathbf{u}_\\mathrm{m}| \\, P_\\mathrm{p} \\, P_\\mathrm{s}} \\nonumber \\\\\n & \\mathrm{and} \\quad f_\\mathrm{PE} = \\frac{\\mathbf{E}^*_\\mathrm{p}\\cdot \\delta\\varepsilon_\\mathrm{PE}^* \\cdot \\mathbf{E}_\\mathrm{s}}{\\max|\\mathbf{u}_m| \\, P_\\mathrm{p} \\, P_\\mathrm{s}} ,\n\\label{eq:MB_PE}\n\\end{align}\nwhere the permittivity differences due to the moving boundaries effects are given by $\\delta\\varepsilon_\\mathrm{MB} = \\varepsilon_1 - \\varepsilon_2$ and $\\delta\\varepsilon_\\mathrm{MB}^{-1} = 1\/\\varepsilon_1 - 1\/\\varepsilon_2$, with $\\varepsilon_i=\\varepsilon_0 n_i^2$ being the permittivities of the silicon ($i=1$) and air ($i=2$). The photoelastic tensor perturbation in the material permittivity is $\\delta\\varepsilon_\\mathrm{PE} = -\\varepsilon_0 \\, n^4 \\, \\mathbf{p}:\\mathbf{S}$, with $n$ being the material refractive index, $\\mathbf{p}$ the photoelastic tensor, and $\\mathbf{S}$ the mechanical stress tensor induced by the mechanical mode. The term $\\mathbf{u}_\\mathrm{m}\\cdot\\mathbf{n}$ is the normal component of the mechanical displacement and $\\mathbf{E}_{j,\\mathrm{t}}$ and $\\mathbf{D}_{j,\\mathrm{n}}$ are the tangential electric field and normal dielectric displacement for the pump ($j=\\mathrm{p}$) and the scattered field ($j=\\mathrm{s}$). The denominator represents the power normalization given by $P_j = [2 \\Re(\\int [\\mathbf{E}_j\\times\\mathbf{H}_j^*] \\cdot \\mathbf{z} \\, \\mathrm{d}A)]^{1\/2}$.\n\nThe symmetry directions $[100]$, $[010]$, and $[001]$ of the crystalline silicon are set to coincide with the $x$, $y$, and $z$ simulation axis, respectively. With this orientation, the photoelastic tensor \\cite{qiu_stimulated_2013,rakich_tailoring_2010} is $[p_{11},p_{12},p_{44}]=[-0.094,0.017,-0.051]$. The refractive index of silicon is $n=3.45$ and its density $\\rho=2329$ kg m$^{-3}$ while the corresponding values for the air are $n=1$ and $\\rho=1.293$ kg m$^{-3}$. 
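\n\nAs an illustration of how Eq.~(\\ref{eq:GB}) is evaluated once the mode solver has provided the modal quantities, consider the following sketch. It is a schematic example rather than the post-processing routine used in this work, and all numerical inputs are placeholders; the frequencies are taken as angular quantities, consistently with $\\omega_\\mathrm{p}$ in Eq.~(\\ref{eq:GB}).\n\\begin{verbatim}\n# Schematic evaluation of Eq. (1) from solver outputs (placeholders only).\nimport numpy as np\n\ndef brillouin_gain(Q_m, omega_p, Omega_m, m_eff, overlap_MB, overlap_PE):\n    # Q_m        : mechanical quality factor (dimensionless)\n    # omega_p    : optical pump angular frequency [rad/s]\n    # Omega_m    : mechanical angular frequency [rad/s]\n    # m_eff      : effective linear mass density of the mechanical mode [kg/m]\n    # overlap_MB : complex line integral of f_MB (moving boundaries)\n    # overlap_PE : complex surface integral of f_PE (photoelasticity)\n    return (Q_m * 2.0 * omega_p / (m_eff * Omega_m**2)\n            * abs(overlap_MB + overlap_PE)**2)\n\n# Illustrative call; every input below is a placeholder value.\nG_B = brillouin_gain(Q_m=4.0e3,\n                     omega_p=2*np.pi*193e12,   # ~1550 nm pump\n                     Omega_m=2*np.pi*14e9,     # GHz-range phonon\n                     m_eff=1e-13,\n                     overlap_MB=1e-13,\n                     overlap_PE=-4e-14)\n\\end{verbatim}\nIn the actual workflow these inputs are extracted from the finite-element mode solutions, the two overlaps being evaluated on the waveguide boundary and over its cross-section, respectively.\n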
\n\nThe quality factor of the mechanical mode, $Q_\\mathrm{m}$, is related to the full width at half maximum (FWHM) of the gain spectrum, $\\gamma_\\mathrm{m}$, through $Q_\\mathrm{m}=\\Omega_\\mathrm{m}\/\\gamma_\\mathrm{m}$ and it is limited by different loss mechanisms, \n\\begin{equation}\n \\frac{1}{Q_\\mathrm{m}} = \\frac{1}{Q_\\mathrm{TE}} + \\frac{1}{Q_\\mathrm{L}} + \\frac{1}{Q_\\mathrm{air}}.\n\\label{eq:Q}\n\\end{equation}\nHere, we consider the thermoelastic loss ($Q_\\mathrm{TE}$), the mechanical leakage towards the silica under-cladding ($Q_\\mathrm{L}$), and the viscous loss from surrounding air ($Q_\\mathrm{air}$). The thermoelastic loss yields mechanical quality factors of $Q_\\mathrm{TE}\\sim6\\cdot10^5$ \\cite{comsol_2018} for silicon nanostructures while the leakage loss is mainly governed by the geometries of the waveguide and the arms anchoring it to the lateral silicon slab. These two effects are directly considered in the mechanical-mode simulations performed in COMSOL Multiphysics. The viscous loss induced by the surrounding air is considered here by imposing a limiting value to the mechanical quality factor of $Q_\\mathrm{m}=4\\cdot10^3$, which is the highest expected value at atmospheric pressure and room temperature for phonon frequency in the order of GHz \\cite{ghaffari_quantum_2013}.\n\nBased on the resulting optomechanical coupling calculations, a genetic algorithm \\cite{xin-she_yang_chapter_2021} is used to maximize the FBS gain. Starting with randomly generated combinations of parameters $W_i$ and $L_i$ (individuals), optomechanical simulations are carried out and the individuals are ranked according to their Brillouin gain. Recombination is used to produce a successor set of individuals, the next generation. The best-performing individuals directly become part of the next generation (elitism). A large number of individuals of the new generation is obtained by combining the parameter of pairs of individuals from the current generation (crossover). Finally, the remaining individuals of the new generation are produced by randomly modifying the parameters of single individuals of the current generation (mutation). This process continues until the convergence criterion has been reached.\n\nIn our particular optimization problem, an individual is a possible geometry, represented by a set of 8 parameters (width and length of each of the arm sections). Each generation is composed of 50 individuals and the successive generations are obtained applying a rate of elitism and crossover of 6\\% and 80\\%, respectively, with the remaining elements obtained through mutation. The convergence criterion was defined in terms of the difference between the best and the average performance, $G_\\mathrm{B} - \\langle G_\\mathrm{B}\\rangle <$ 10 W$^{-1}$m$^{-1}$, over 10 generations. For this work, we have used a standard computer with the following specifications: a 64-bit operating system with an x64-based processor Intel\\textsuperscript{\\tiny\\textregistered} Core\\textsuperscript{\\tiny\\texttrademark} i7-4790 (4 total cores, 8 total threads, base-frequency of 3.60 GHz), and an installed RAM of 8.00 GB. Under these conditions, the optimization process was completed in 12h 35 min, comprising 1500 optomechanical simulations of 30 seconds each.\n\nThe method we propose here relies on a defined geometry whose parameters are allowed to vary within a specific range of values. Hence, the optimized structure will depend strongly on our initial guess.\n\nIn Fig. 
\\ref{fig:convergence}, we present the optimization process. Figures \\ref{fig:convergence}a and \\ref{fig:convergence}b show the Brillouin gain and mechanical frequency, respectively, as a function of the generation number. As a result of the evolution of the geometry, we observe an increase in the gain and a variation in the mechanical frequency. This result should be expected as the Brillouin shift in FBS is particularly sensitive to the waveguide dimensions. The optimum performance is achieved after 10 generations while 30 generations are required for convergence. The optimized geometry, whose dimensions are listed in Table \\ref{tab:geom}, is characterized by a Brillouin gain of $G_\\mathrm{B}=3350$ W$^{-1}$m$^{-1}$ for a mechanical mode with frequency of $\\Omega_\\mathrm{m}=14.357$ GHz and mechanical quality factor of\n$Q_\\mathrm{m}\\approx3.2\\cdot10^3$. The optical mode has a mode effective index of 2.36 and wavelength in vacuum of $\\lambda=1556.5$ nm ($\\omega_\\mathrm{p}=2\\pi\\cdot192.6$ THz in (\\ref{eq:GB})).\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=\\columnwidth]{fig2_convergence_double.png}\n \\caption{Optimization process. a) Best (in blue) and average (in orange) Brillouin gain as a function of the number of generations during genetic optimization. b) Evolution of the mechanical frequency as a function of the number of generations. During the optimization process, all possible mechanical losses are considered, including thermoelastic loss, mechanical leakage, and viscous loss due to air (operation in air ambient at room temperature).}\n \\label{fig:convergence}\n\\end{figure}\n\nIn terms of geometry, the first and fourth sections, with considerably larger widths, generate reflections that help localize the mechanical mode in the waveguide core. The frequency of the mechanical mode is governed by the interplay between the waveguide width and the length of the partial cavity formed by the fourth section on each side.\n\n\\begin{table}[htb]\n \\centering\n \\caption{Dimensions for the GA-optimized geometry when operating in air ambient at room temperature. In the table above, S$_i$ stands for section $i$ in Fig. \\ref{fig:structure}.} \n \\label{tab:geom}\n \\begin{tabular}{@{}ccccc}\n \\toprule\n & S1 & S2 & S3 & S4 \\\\\n \\midrule\n Width & 170 nm & 320 nm & 330 nm & 100 nm \\\\\n Length & 130 nm & 60 nm & 60 nm & 190 nm \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\nFull 3D simulations are realized to verify the performance of the optimized geometry. This structure provides a Brillouin gain of $G_\\mathrm{B}=3310$ W$^{-1}$m$^{-1}$ for a mechanical mode with a frequency of $\\Omega_\\mathrm{m}=14.579$ GHz. The optical mode has a mode effective index of 2.23 and wavelength in vacuum of $\\lambda=1557.2$ nm ($\\omega_\\mathrm{p}=2\\pi\\cdot 192.52$ THz in (\\ref{eq:GB})). Figure \\ref{fig:modes} shows the calculated field distribution for the mechanical and optical modes in the optimized geometry.\n\n\\begin{figure}[htbp] \n \\centering\n \\includegraphics[width=\\columnwidth]{fig3_modes_double.png}\n \\caption{Optical and mechanical modes of the optimized geometry operating in air ambient and room temperature (table \\ref{tab:geom}): a) Approximated 2D structure. The upper structure corresponds to the normalized mechanical displacement at 14.357 GHz and the lower figure to the $x$-component of the electric field at 1556.5~nm (mode effective index 2.36). b) Full 3D device. 
On the bottom left, $x$-component of the electric field at 1557.2 nm (mode effective index 2.23), and on the top right, normalized mechanical displacement at 14.579 GHz.}\n\\label{fig:modes}\n\\end{figure}\n\nThese results show a good agreement between the approximated 2D geometry used for the optimization and the full 3D structure. The small discrepancies in the optical mode index and mechanical frequency are due to the influence of the thickness. \n\nFinally, we study the fabrication tolerance of the proposed structure using again 3D simulations. We consider under- and over-etching errors that we model by a variation of all the waveguide lengths and widths by a factor $\\Delta$, measured in nm (Fig. \\ref{fig:fab_tolerance}a). Figure \\ref{fig:fab_tolerance}c shows the variation of the Brillouin gain (in blue) and mechanical frequency (in orange) as a function of $\\Delta$. The Brillouin gain remains above 2000 W$^{-1}$m$^{-1}$ for geometry variations of $\\pm 10$\\,nm. It should be noted that for the over-etch case ($\\Delta<0$ in Fig. \\ref{fig:fab_tolerance}c), the Brillouin gain is larger than the optimized case due to the larger optomechanical coupling resulting from a better overlap of the mechanical mode with the optical field. However, these smaller structures are incompatible with the target minimum feature size of 50 nm that was chosen to guarantee fabrication reliability. The mechanical frequency varies less than 2\\% (Fig.\\ref{fig:fab_tolerance}c, in orange) and the mechanical profile is not modified significantly.\n\nWe also study the effect of stitching errors, modeled by a deviation $\\zeta$ (in nm) of the arm axis at both sides of the waveguide core, hence breaking the symmetry of the structure (Fig. \\ref{fig:fab_tolerance}b). Figure \\ref{fig:fab_tolerance}d shows the variation of the Brillouin gain (in blue) and mechanical frequency (in orange) as a function of $\\zeta$. A non-perfectly symmetric structure is slightly detrimental to the Brillouin gain but does not affect the mechanical frequency or profile. Interestingly, both parameters (Brillouin gain and mechanical frequency) remain constant over a large range of stitching errors.\n\nLastly, we examine the effect of random fabrication errors affecting each section independently (Table \\ref{tab:geom_random}). We consider deviations of 5 to 20 nm, both in positive (enlargement) or negative (shrinking) directions. Our geometry exhibits a robust performance despite these errors with Brillouin gains above 2000 W$^{-1}$m$^{-1}$ (Fig. \\ref{fig:fab_tolerance}e, blue) and mechanical frequencies between 14 and 15 GHz (Fig. \\ref{fig:fab_tolerance}e, orange). It should be noted that the period remains constant, $\\Lambda = 300$ nm since it is controlled with high precision ($\\pm 2$ nm) in terms of fabrication.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{fig4_fab_double.png}\n \\caption{Fabrication tolerance of the optimized geometry. a) and b) Variation of the geometry due to fabrication errors. The solid black line corresponds to optimized geometry, dotted (solid) blue depicts a positive deviation from the nominal design, and dotted orange refers to a negative deviation from the expected design. c) and d) Evolution of the Brillouin gain (in blue, left axis) and the mechanical frequency (in orange, right axis) for different values of under- and over-etching (c), different values of stitching errors (d), and different structures with randomized geometrical parameters (e). 
In e), N stands for the nominal design obtained after the optimization problem and $i$ for the different geometries listed in Table \\ref{tab:geom_random}.}\n \\label{fig:fab_tolerance}\n\\end{figure}\n\n\\begin{table}\n \\centering\n \\caption{Dimensions for the different geometries used for studying the effect of randomization of the design parameters. In the table, S$_i$ stands for section $i$ in Fig. \\ref{fig:structure}, N stands for the nominal design as obtained from the optimization (Table 1), and $i$ stands for the different geometries in Fig. \\ref{fig:fab_tolerance}e. In all cases, the period, $\\Lambda = 300$ nm, remains constant.}\n \\label{tab:geom_random}\n \\begin{tabular}{@{}cccccccc}\n \\toprule\n Geometry & & S1 & S2 & S3 & S4 & S5 & $W_g$ \\\\\n \\midrule\n \\multirow{2}*{N} & Width & 170 nm & 320 nm & 330 nm & 100 nm & 500 nm & 400 nm \\\\\n & Length & 130 nm & 60 nm & 60 nm & 190 nm & 50 nm & \\\\\n \\midrule\n \\multirow{2}*{1} & Width & 165 nm & 305 nm & 345 nm & 90 nm & 510 nm & 405 nm \\\\\n & Length & 130 nm & 45 nm & 65 nm & 180 nm & 60 nm & \\\\\n \\midrule\n \\multirow{2}*{2} & Width & 165 nm & 320 nm & 340 nm & 115 nm & 495 nm & 400 nm \\\\\n & Length & 110 nm & 45 nm & 55 nm & 170 nm & 35 nm & \\\\\n \\midrule\n \\multirow{2}*{3} & Width & 155 nm & 340 nm & 340 nm & 100 nm & 485 nm & 405 nm \\\\\n & Length & 150 nm & 40 nm & 70 nm & 200 nm & 55 nm & \\\\\n \\midrule\n \\multirow{2}*{4} & Width & 185 nm & 300 nm & 325 nm & 95 nm & 480 nm & 385 nm \\\\\n & Length & 140 nm & 65 nm & 75 nm & 185 nm & 60 nm & \\\\\n \\midrule\n \\multirow{2}*{5} & Width & 160 nm & 320 nm & 330 nm & 95 nm & 510 nm & 390 nm \\\\\n & Length & 140 nm & 65 nm & 55 nm & 210 nm & 60 nm & \\\\\n \\midrule\n \\multirow{2}*{6} & Width & 185 nm & 340 nm & 315 nm & 120 nm & 520 nm & 420 nm \\\\\n & Length & 135 nm & 40 nm & 50 nm & 190 nm & 35 nm & \\\\\n \\midrule\n \\multirow{2}*{7} & Width & 185 nm & 340 nm & 340 nm & 110 nm & 480 nm & 410 nm \\\\\n & Length & 140 nm & 55 nm & 65 nm & 175 nm & 40 nm & \\\\\n \\midrule\n \\multirow{2}*{8} & Width & 150 nm & 300 nm & 345 nm & 110 nm & 510 nm & 395 nm \\\\\n & Length & 120 nm & 80 nm & 40 nm & 175 nm & 65 nm & \\\\\n \\midrule\n \\multirow{2}*{9} & Width & 170 nm & 340 nm & 325 nm & 105 nm & 520 nm & 410 nm \\\\\n & Length & 120 nm & 70 nm & 50 nm & 190 nm & 70 nm & \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\section{Conclusions}\nIn summary, we have proposed a new approach to optimizing Brillouin gain in silicon membrane waveguides. We exploit genetic optimization to maximize Brillouin gain in subwavelength-structured Si waveguides, requiring only one etch step. Genetic algorithm is a well-known optimization technique capable of handling design spaces of moderate dimension \\cite{xin-she_yang_chapter_2021}. It has the main advantage over gradient-based algorithms in its capability to search the design space in many directions simultaneously. On the other hand, the genetic algorithms cannot guarantee a global optimum solution, being the final result strongly dependent on the initial population. Based on this strategy, a calculated Brillouin gain up to 3310 W$^{-1}$m$^{-1}$ is achieved for air environment. This result compares favorably to previously reported subwavelength-based Brillouin waveguides requiring several etching steps \\cite{schmidt2019suspended,zhang_subwavelength_2020}, with calculated Brillouin gain of 1750 W$^{-1}$m$^{-1}$ and 3000 W$^{-1}$m$^{-1}$. 
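\n\nTo make the genetic-optimization loop used above concrete, the following Python sketch reproduces its structure: 50 individuals per generation, 6\% elitism and 80\% crossover, with the remaining individuals produced by mutation, acting on the eight section widths and lengths. It is only an illustrative sketch: the fitness function evaluate_gain, which in our case is the full optomechanical simulation in COMSOL Multiphysics, is a hypothetical placeholder, and the parameter bounds are left unspecified.
\begin{verbatim}
import random

N_IND, ELITE_RATE, CROSS_RATE = 50, 0.06, 0.80   # values used in this work
N_PARAMS = 8                     # widths and lengths of the four arm sections

def evaluate_gain(params):
    # placeholder for the COMSOL optomechanical simulation (returns G_B in 1/(W m))
    raise NotImplementedError

def next_generation(ranked, bounds):
    # ranked: current individuals sorted by decreasing Brillouin gain
    n_elite = max(1, round(ELITE_RATE * N_IND))
    n_cross = round(CROSS_RATE * N_IND)
    new = [list(ind) for ind in ranked[:n_elite]]          # elitism
    while len(new) < n_elite + n_cross:                    # crossover of pairs
        a, b = random.sample(ranked[:N_IND // 2], 2)
        cut = random.randrange(1, N_PARAMS)
        new.append(list(a[:cut]) + list(b[cut:]))
    while len(new) < N_IND:                                # mutation of single individuals
        parent = list(random.choice(ranked))
        j = random.randrange(N_PARAMS)
        parent[j] = random.uniform(*bounds[j])
        new.append(parent)
    return new
\end{verbatim}
The convergence test used in this work, a gap between the best and the average gain below 10 W$^{-1}$m$^{-1}$ sustained over 10 generations, would be evaluated on the gains returned by the simulation after each generation.\n\n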
Our results show the potential of optimization for obtaining novel designs with improved performance in the context of Brillouin scattering. Moreover, they show the reliability of computationally efficient optimizations based on approximated 2D simulations.\n\n\n\\section*{Declaration of Competing Interest}\nThe authors declare that they have no known competing financial\ninterests or personal relationships that could have appeared to influence\nthe work reported in this paper.\n\n\\section*{Author Statement}\nPaula Nu\u00f1o Ruano, Jianhao Zhang, and Carlos Alonso Ramos proposed the concept. Paula Nu\u00f1o Ruano, Jianhao Zhang, and Daniele Melati developed the simulation framework. Paula Nu\u00f1o Ruano, Jianhao Zhang, Daniele Melati, David Gonz\u00e1lez Andrade, and Carlos Alonso Ramos optimized and analyzed the results. All authors contributed to the manuscript.\n\n\\section*{Data Availability Statement}\nThe data supporting this study's findings are available from the corresponding author upon reasonable request.\n\n\\section*{Acknowledgements}\nThe authors want to thank the Agence Nationale de la Recherche for supporting this work through BRIGHT ANR-18-CE24-0023-01 and MIRSPEC ANR-17-CE09-0041. P.N.R. acknowledges the support of Erasmus Mundus Grant: Erasmus+ Erasmus Mundus Europhotonics Master program (599098-EPP-1-2018-1-FR-EPPKA1-JMD-MOB) of the European Union. This project has received funding from the European Union's Horizon Europe research and innovation program under the Marie Sklodowska-Curie grant agreement N\u00ba 101062518.\n\n\n\\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction} The idea of spatial\ncoupling emerged in the coding context from the study of Low-Density\nParity-Check Convolutional (LDPCC) codes. LDPCC codes were introduced\nby Felstr{\\\"{o}}m and Zigangirov \\cite{FeZ99}. We refer the reader\nto \\cite{EnZ99, ELZ99, LTZ01, TSSFC04} as well as to the introduction\nin \\cite{KRU10} which contains an extensive review. It has long\nbeen known that LDPCC codes outperform their block coding counterparts\n\\cite{SLCZ04, LSZC10, LSZC05}. Subsequent work isolated\nand identified the key system structure which is responsible for\nthis improvement.\n\nIn particular, it was conjectured in \\cite{KRU10} that spatially\ncoupled systems exhibit BP threshold behavior corresponding to the MAP threshold behavior of\n uncoupled component system. This phenomenon was termed ``threshold saturation\" \nand a\nrigorous proof of the threshold saturation phenomenon over the BEC\nand regular LDPC ensembles was given. The proof was generalized\nto all binary-input memoryless output-symmetric (BMS) channels in\n\\cite{KRU12}. From these results it follows that universal\ncapacity-achieving codes for BMS channels can be constructed by\nspatially coupling regular LDPC codes. Spatial coupling has also\nbeen successfully applied to the CDMA multiple-access channel\n\\cite{ScT11,TTK11}, to compressed sensing \\cite{KP10,KMSSZ11,DMM,DJM11},\nto the Slepian-Wolf coding problem \\cite{YPN11}, to models in\nstatistical physics \\cite{HMU11a,HMU11b}, and to many other problems in\ncommunications, statistical physics and computer science, see\n\\cite{KRU12} for a review.\n\nThe purpose of this paper is two-fold. 
First, we establish the existence of wave-like solutions to\nspatially coupled graphical models which, in the large size limit,\ncan be characterized by a one-dimensional real-valued state.\nThis is applied to give a rigorous\nproof of the threshold saturation phenomenon for all such\nmodels. This includes spatial\ncoupling of irregular LDPC codes over the BEC, but it also addresses\nother cases like hard-decision decoding for transmission over general\nchannels, and the CDMA multiple-access problem \\cite{ScT11,TTK11}\nand compressed sensing \\cite{DJM11}.\nAs mentioned above, transmission over the BEC using spatially-coupled\nregular LDPC codes was already solved in \\cite{KRU10}, but our\ncurrent set-up is more general. Whereas the proof in \\cite{KRU10}\ndepends on specific features of the BEC, here we derive a graphical\ncharacterization of the threshold saturation phenomena in terms of\nEXIT-like functions that define the spatial system. This broadens\nthe range of potential applications considerably.\n\nConsider the example of coding over the BEC. In\nthe traditional irregular LDPC EXIT chart setup the condition for successful decoding reduces\nto the two EXIT charts not crossing. We will show that the EXIT\ncondition for good performance of the spatially-coupled system is\nsignificantly relaxed and reduces to a condition on the area bounded\nbetween the component EXIT functions.\n\nThe criteria is best demonstrated by a simple example. Consider transmission\nover the BEC using the $(3, 6)$ ensemble.\nFigure~\\ref{fig:positivegapbec36} shows the corresponding EXIT\ncharts for $\\epsilon=0.45$ and $\\epsilon=0.53$. Note that both these\nchannel parameters are larger than the BP threshold which is\n$\\epsilon^{\\text{\\small BP}} \\simeq 0.4294$.\n\\begin{figure}[htp]\n{\n\\centering\n\\input{ps\/positivegapbec36}\n}\n\\caption{\\label{fig:positivegapbec36} Both pictures show the EXIT\ncurves for the $(3, 6)$ ensemble and transmission over the BEC.\nLeft: $\\epsilon=0.45$. In this case $A=0.03125>0$, i.e., the white\narea is larger than the dark gray area. Right: $\\epsilon=0.53$.\nIn this case $A=-0.0253749<0$, i.e., the white area is smaller than\nthe dark gray area. } \n\\end{figure}\nIf we consider the signed area bounded by the two\nEXIT charts and integrate from $0$ to $u$ then on the left hand side, with $\\epsilon=0.45,$ this\narea is positive for all $u \\in [0, 1]$. This property guarantees\nthat the decoder for the spatially coupled system succeeds for this\ncase. On the right-hand side with $\\epsilon=0.53$, however, the area becomes\nnegative at some point (the total area in white is smaller than the\ntotal area in dark gray) and by our condition this implies that the\ndecoder for the spatially coupled system does not succeed. The\nthreshold of the spatially coupled system is that channel\nparameter such that the area in white and the area in dark gray are\nexactly equal. \n\nThis simple graphical condition is the essence of our result and\napplies regardless whether we look at coding systems or other\ngraphical models. Given any system characterized by two EXIT functions,\nwe can plot these two functions and consider the signed area area bound between them,\nsay for the first coordinate ranging from $0$ to a point $u$. As long as this area is positive for all\n$u \\in (0, 1]$ the iterative process succeeds, i.e., it converges to $0.$ Indeed, we will even\nbe able to make predictions on the speed of the process based on\nthe ``excess\" area we have. 
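\n\nAs a quick numerical illustration of this criterion, the short Python sketch below evaluates the running signed area between the two EXIT curves of the $(3, 6)$ ensemble over the BEC, integrating the gap between them from $0$ up to $u$ on a fine grid and checking that it stays positive. It uses the unscaled EXIT functions $\\epsilon u^{2}$ and $1-(1-v)^{5}$; the values of $A$ quoted in the caption of Figure~\\ref{fig:positivegapbec36} refer to the rescaled picture introduced in Section~\\ref{sec:main}, so only the sign of the area should be compared here, not its magnitude.
\begin{verbatim}
import numpy as np

def hf(u, eps):            # variable-node EXIT curve of the (3,6) ensemble: eps*lambda(u)
    return eps * u**2

def hg_inv(u):             # inverse of the check-node EXIT curve hg(v) = 1-(1-v)^5
    return 1.0 - (1.0 - u)**0.2

def running_area(eps, n=100001):
    u = np.linspace(0.0, 1.0, n)
    gap = hg_inv(u) - hf(u, eps)
    return u, np.cumsum(gap) * (u[1] - u[0])   # signed area from 0 up to each u

for eps in (0.45, 0.53):
    u, area = running_area(eps)
    # positive for all u in (0,1] at eps=0.45 (coupled decoder succeeds),
    # dips below zero at eps=0.53 (coupled decoder fails)
    print(eps, bool(area[1:].min() > 0.0))
\end{verbatim}
The same running integral will reappear in Section~\\ref{sec:main} as the potential function evaluated along one of the EXIT curves.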
\n\nA few conclusions can immediately be drawn from such a picture.\nFirst, if the threshold of the uncoupled system is determined by\nthe so-called stability condition, i.e., the behavior of the EXIT\ncharts for $u$ around $0$ then spatial coupling does not increase\nthe threshold. Indeed, if we increase the parameter beyond what is\nallowed according to the stability condition, the area will become\nnegative around $0$. Second, if the curves only have\none non-trivial crossing (besides the one at $0$ and at the right\nend point) then the threshold is given by a balance of the two\nenclosed areas.\n\nFor ``nice\" EXIT charts (e.g., continuous, and with a finite number of\ncrossings) the above picture contains all that is needed. But since\nwe want to develop the theory for the general case, some care is\nneeded when defining all relevant quantities. When reading the\ntechnical parts below, it is probably a good idea to keep the above\nsimple picture in mind. For readers familiar with the so-called\nMaxwell conjecture, it is worth pointing out that the above picture\nshows that this conjecture is generically correct for coupled\nsystems. To show that it is also correct for uncoupled systems one\nneeds to show in addition that under MAP decoding the coupled and\nthe uncoupled system behave identically. This can often be accomplished\nby using the so-called interpolation method. For e.g., regular\nensembles with no odd check degrees this second step was shown to\nbe correct in \\cite{GMU12}.\n\nLet us point out a few differences to the set-up in \\cite{KRU10}.\nFirst, rather than analyzing directly the spatially-discrete system,\nkey results are established in the limit of continuum spatial\ncomponents. We will see that for such systems the solution for the\ncoupled system is characterized in terms of traveling waves, especially fixed points. The\nspatially discrete version is then recovered as a sampling of the continuum\nsystem. The existence of traveling wave solutions and their\nrelationship to the EXIT charts of the underlying component systems\nis the essential technical content of the analysis and does not depend\non information theoretic aspects of the coding case.\n\nThe second purpose of this paper is to show that herein-developed\none-dimensional theory can model many higher-dimensional or even\ninfinite-dimensional systems to enable accurate prediction of their\nperformance. This is very much in the spirit of the use of EXIT\ncharts and Gaussian approximations for the the design of iterative\nsystems. \nUsing this interpretation,\nwe apply our method to channel coding over general channels. Even\nthough the method is no longer rigorous in these cases, we show\nthat our graphical characterization gives very good predictions on\nthe system performance and can therefore provide a convenient design\ntool.\n\nRecently several alternative approaches to the analysis of\nspatially-coupled systems have been developed independently by\nvarious authors \\cite{TTK11b, DJM11, YJNP12a, YJNP12b}. These\napproaches share some important aspects with our work but there are\nalso some important differences. Let us quickly discuss this. \n\nIn \\cite{DJM11} a proof was given that spatially coupled measurement\nmatrices, together with a suitable iterative decoding algorithm,\nthe so-called {\\em approximate message-passing} (AMP) algorithm,\nallows to achieve the information theoretic limits of compressive sensing\nin a wide array of settings. 
The key technical idea is to show that\nthe iterative system is characterized in the limit of large block\nsizes by a one-dimensional parameter (which in this case represents\nper-coordinate mean square error) and which can be tracked faithfully by\n{\\em state evolution equations}. To ease the analysis the authors\nconsider continuum state evolution equations. The discrete state\nevolution is then obtained by sampling the continuous state evolution\nequations. The most important ingredient of the proof is a construction\nof an appropriate free energy or potential function for the system such\nthat the fixed points of the state evolution are the stationary\npoints. It is then shown that if the under-sampling ratio is greater\nthan the information dimension, then the solution of the state evolution\nbecomes arbitrarily small. Suppose this did not happen. Then\nby perturbing slightly the non-trivial fixed point (the solution\nis ``moved'' inside) one can show that the potential strictly\ndecreases. However, since the fixed point is a stationary point of\nthe potential function, we get a contradiction. \n\nIn \\cite{YJNP12a, YJNP12b} the two main ingredients are also the\ncharacterization of the iterative system by a one-dimensional (or\nfinite-dimensional) parameter and the construction of a suitable\npotential function whose stationary points are the fixed points of\ndensity evolution. A significant innovation introduced in\n\\cite{YJNP12a, YJNP12b} is that it is shown how to {\\em systematically}\nconstruct such a potential function in a very general setting. This\nmakes it possible to apply the analysis to a wide array of settings\nand provides a systematic framework for the proof. In addition,\nthis framework not only allows one to attack the scalar case but can\nalso be carried over to vector-valued states.\n\nOur starting point is the set of EXIT functions, a familiar tool\nin the setting of iterative systems.\nWe also use a type of potential function for the underlying component system.\nUnlike the works mentioned above we retain the symmetry of the iterative system rather than\ncollapsing one of the equations.\nIn addition, the form of the potential function used is such that each step of the iteration \nminimizes the potential function for the variable being updated.\nThe potential function can be lifted to the spatially coupled system but that is not the approach we take in this paper.\nWe consider the spatially coupled system of infinite extent and analyze solutions.\nSpatial fixed point solutions that interpolate between fixed points of the component system are of particular importance.\nWe show that the evaluation of the potential function of the component system at points determined by the spatial fixed point can be expressed in terms of an integral across space of the\nfixed point. Surprisingly, the underlying potential can be expressed using the spatial fixed point solution in a way that uses the portion of the solution local to the evaluation points.\nThis basic result yields structural information on the fixed point solution.\nThis result is used as a foundation to characterize and construct wave-like solutions for \nspatially coupled systems.\nPerhaps one of the strong points of the current paper is\nthat it gives a fairly detailed and complete picture of the system\nbehavior. 
I.e., we not only characterize the threshold(s) but\nwe also are able to characterize {\\em how} the system converges to\nthe various FPs (these are the wave solutions) and {\\em how fast}\nit does so.\n\nThe outline of the paper is as follows. In Section~\\ref{sec:main}\nwe consider an abstract system, characterized by two EXIT-like\nfunctions. In terms of these functions we state a graphical criterion\nfor the occurrence of threshold saturation. In\nSection~\\ref{sec:applications} we then apply the method to several\none-dimensional systems. We will see that in each case the analysis\nis accomplished in just a few paragraphs by applying the general\nframework to the specific setting. In Section~\\ref{sec:gaussapprox}\nwe develop a framework that can be used to analyze higher-dimensional\nsystems in a manner analogous to the way the Gaussian approximation\nis used together with EXIT charts in iterative system design. We\nalso show by means of several examples that this approach typically\ngives accurate predictions. In Section~\\ref{sec:proof} we give a\nproof of the main results. Many of the supporting lemmas and bounds\nare relegated to appendices.\n\n\\section{Threshold Saturation in One-Dimensional Systems}\\label{sec:main}\nIn\nthis section we develop and state the main ingredients which we\nwill later use to analyze various spatially coupled systems. Although\nin most cases we are ultimately interested in ``spatially discrete'' and\n``finite-length'' coupled systems, i.e., systems where we have a\nfinite number of ``components'' which are spatially coupled along\na line, it turns out that the theory is more elegant and simpler\nto derive if we start with spatially continuous and unterminated systems, i.e., stretching from\n$-\\infty$ to $\\infty$. Once a suitably defined continuous system\nis understood, one can make contact with the actual system at hand\nby spatially discretizing it and by imposing specific boundary\nconditions.\n\nThroughout this section we use the example of the spatially-coupled\n$(\\dl, \\dr)$-regular LDPC ensemble. \\bexample[$(\\dl, \\dr, w, L)$\nEnsemble]\\label{def:ensemble} The $(\\dl, \\dr, w, L)$ random ensemble\nis defined as follows, see \\cite{KRU10}. In the ensuing paragraphs\nwe use $[a, a+b]\\Delta$, for integers $a$ and $b$, $b\\geq0$, and\nthe real non-negative number $\\Delta$, to denote the set of points\n$a\\Delta, (a+1)\\Delta, \\dots, (a+b)\\Delta$.\n\nWe assume that the variable nodes are located at positions $[0,\nL] \\Delta$, where $L \\in \\naturals$ and $\\Delta >0$. At each position\nthere are $M$ variable nodes, $M \\in \\naturals$. Conceptually we\nthink of the check nodes as located at all positions $[- \\infty,\n\\infty] \\Delta$. Only some of these positions are used and contain check nodes\nthat are actually connected to\nvariable nodes. At each position there are $\\frac{\\dl}{\\dr}\nM$ check nodes. 
It remains to describe how the connections are\nchosen.\n\nWe assume that each of the $\\dl$ neighbors of a variable node at\nposition $i \\Delta$ is uniformly and independently chosen from the\nrange $[i-w, \\dots, i+w] \\Delta$, where $w$ is a ``smoothing''\nparameter.\\footnote{Full independence is not possible while satisfying the degree constraints.\nThis does not affect the analysis since we only need the independence to hold asymptotically in large block size over finite neighborhoods in the graph.}\n In the same way, we assume that each of the $\\dr$\nconnections of a check node at position $i$ is independently chosen\nfrom the range $[i-w, \\dots, i+w] \\Delta$. Note that this deviates\nfrom the definition in \\cite{KRU10} where the ranges were $[i,\n\\dots, i+w-1] \\Delta$ and $[i-w+1, \\dots, i] \\Delta$ respectively.\nIn our current setting the symmetry of the current definition\nsimplifies the presentation. The present definition is equivalent\nto the previous one with $w$ replaced by $2w+1.$\n\nThis ensemble is spatially discrete. As we mentioned earlier, it\nis somewhat simpler to start with a system which is spatially\ncontinuous. We will discuss later on in detail how to connect these\ntwo points of view. Just to get started -- how might one go from a\nspatially discrete system as the $(\\dl, \\dr, w, L)$ ensemble to a\nspatially continuous system? Assume that we let $\\Delta$ tend to\n$0$ while $L$ and $w$ tend to infinity so that $L \\Delta$ tends to $\\infty$ and\n$W=w\\Delta$ is held constant. In\nthis case we can imagine that in the limit there is a component\ncode at each location $x \\in (-\\infty, +\\infty)$ in space and that\na component at position $x$ ``interacts'' with all components in a\nparticular ``neighborhood'' of $x$ of width $2W.$ {\\hfill $\\ensuremath{\\Box}$}\n\\eexample\n\nConsider a system on $(-\\infty, +\\infty)$ (the spatial component)\nwhose ``state'' at each point (in space) is described by a scalar\n(more precisely an element of $[0, 1]$). This means, the state of\nthe system at time $t$, $t \\in \\naturals$, is described by a function\n$\\ff^t$, where $\\ff^t(x) \\in [0, 1]$, $x \\in (-\\infty, \\infty)$.\n\n\\bexample[Coding for the BEC]\nConsider transmission over a binary erasure channel (BEC) using the\n$(\\dl, \\dr, w, L)$ ensemble described in Definition~\\ref{def:ensemble}.\nThen the ``state'' of each component code at a particular point in\ntime is the fraction of erasure messages that are\nemitted by variable nodes at this iteration. Hence the state of\neach component is indeed an element of $[0, 1]$. 
{\\hfill\n$\\ensuremath{\\Box}$} \\eexample\n\n\\bdefinition\nWe denote the space of non-decreasing functions $[0,1] \\rightarrow [0,1]$ by $\\exitfns.$\nA function $h\\in\\exitfns$ has right limits $h(x+)$ for $x\\in (0,1]$ and left limits $h(x-)$ for $x\\in[0,1).$\nTo simplify some notation we define $h(0-)=0$ and $h(1+)=1.$\nThe function $h$ is continuous at $x$ if $h(x-) = h(x+).$ \n\nSimilarly, let $\\sptfns$ denote the space\nof non-decreasing functions on $(\\minfty, \\pinfty).$ \nWe denote $\\lim_{x\\rightarrow -\\infty} f(x)$ as $f(\\minfty)$\n$\\lim_{x\\rightarrow +\\infty} f(x)$ as \n$f(\\pinfty).$ \nWe call a function $f \\in \\sptfns$ {\\em$(a,b)$-interpolating} if\n$f(\\minfty) = a$\nand\n$f(\\pinfty) = b.$\nWe will generally use the term ``interpolating\" with the understanding that $b>a.$\nThe canonical case will be $(0,1)$-interpolating functions and we will also use the term\n``$(0,1)$-interpolating spatial fixed point'' to refer to a pair of $(0,1)$-interpolating functions.\nWe may also refer to a pair of functions $f,g$ interpolating over $[a,b]\\times[c,d]$ to mean\nthat $f$ is $(b,d)$-interpolating and $g$ is $(a,c)$-interpolating. \n\nIn general we work with discontinuous functions. Because of this we\noccasionally need to distinguish between functions in $\\exitfns$ or in $\\sptfns$\nthat differ only on a set of measure $0.$ \nWe say $h_1 \\equiv h_2$ if $h_1$ and $h_2$ differ on a set of measure $0.$\nThese functions are equivalent in the $L_1$ sense.\nWe still enforce monotonicity so equivalent functions can differ only\nat points of discontinuity.\n \\edefinition\n\nWe think of $\\hf$ and $\\hg$ as EXIT-like functions describing the\nevolution of the underlying component system under an iterative\noperation. Usually, we will have $(0,0)$ and $(1,1)$ as key fixed points.\n\nWe say that a sequence $h_i \\rightarrow h$ in $\\exitfns$ if $h_i(u)\n\\rightarrow h(u)$ for all points of continuity of $h.$ We use a\nsimilar definition of convergence in $\\sptfns.$ In general only the\nequivalence class of the limit is determined. I.e., if the limit $h$ is\ndiscontinuous then it is not uniquely determined.\n\nAny function $h \\in \\exitfns$ has a unique equivalence class of inverse functions in $\\exitfns$. 
\nFor $h \\in \\exitfns$ we will use $h^{-1}$ to denote any member of the equivalence class.\nFormally, we can set $h^{-1}(v)$ to any value $u$ such that\n$v \\in [ h(u-), h(u+) ].$ \nNote that $h^{-1}(v-)$ and $h^{-1}(v+)$ are uniquely determined\nfor each $v\\in [0,1].$ \nThus, we see that the function $h^{-1}$ is uniquely determined at all of its points of continuity and\nit is not uniquely determined at points of discontinuity.\nSimilarly, any function $f \\in \\sptfns$ has\na well defined monotonically non-decreasing inverse equivalence class\nand we use $f^{-1}:[0,1]\\rightarrow [-\\infty,\\infty]$ to denote any member.\nFor notational completeness we define $f^{-1}(0-)=-\\infty$ and $f^{-1}(1+)=+\\infty.$\n\nWe assume that the dynamics of the underlying component system is\ndescribed by iterative updates according to the two functions $\\hf,\\hg \\in \\exitfns.$\nIn deference to standard nomenclature in coding, we refer to these\niterative updates as the {\\em density evolution} (DE) equations.\nIf we assume that $\\xf$ and $\\xg$ are scalars describing the component\nsystem state then these update equations are given by\n\\begin{equation}\\label{eqn:DE}\n\\begin{split}\n{\\xg}^{t} & = \\hg (\\xf^t), \\\\\n{\\xf}^{t+1} & = \\hf (\\xg^{t})\\,.\n\\end{split}\n\\end{equation}\n\n\\bexample[DE for the BEC]\nConsider a $(\\dl, \\dr)$-regular ensemble.\nLet $\\lambda(u)=u^{\\dl-1}$ and $\\rho(v)=v^{\\dr-1}$. Let $\\xf^t$ be\nthe fraction of erasure messages emitted at variable nodes at time $t$\nand let $\\xg^t$ be the fraction of erasure messages emitted at\ncheck nodes at iteration $t$.\\footnote{Conventionally, in iterative coding these quantities are denoted by $x$ and $y$.\nBut since we soon will introduce a continuous spatial dimension, which naturally is denoted by $x$, we prefer\nto stick with this new notation to minimize confusion.}\nLet $\\epsilon$ be the\nchannel parameter. Then we have\n\\begin{equation}\\label{eqn:DEBEC}\n\\begin{split}\n\\xg^{t} & = 1-\\rho(1-\\xf^t), \\\\\n\\xf^{t+1} & = \\epsilon \\lambda(\\xg^{t})\\,.\n\\end{split}\n\\end{equation}\nIn words, we have the correspondences $\\hg(\\xf)=1-\\rho(1-\\xf)$, and\n$\\hf(\\xg)=\\epsilon \\lambda(\\xg)$. As written, the function $\\hf(\\xg)$\nis not continuous at $\\xg=1.$ More explicitly, $\\hf(1) = \\epsilon <1$, whereas\nwe defined the right limit at $1$ to be generically equal to $1$.\nWe will see shortly how to deal with this. {\\hfill $\\ensuremath{\\Box}$}\n\\eexample\n\nLet us now discuss DE for the spatial continuum version. Consider a spatially-coupled\nsystem with the following update equations:\n\\begin{equation}\\label{eqn:gfrecursion}\n\\begin{split}\n\\fg^{t}(x) & = \\hg ((\\ff^t \\otimes \\smthker) (x)), \\\\\n\\ff^{t+1}(x) & = \\hf ( (\\fg^{t} \\otimes \\smthker) (x) )\\,.\n\\end{split}\n\\end{equation}\nHere, $\\otimes$ denotes the standard convolution operator on $\\reals$\nand $\\smthker$ is an {\\em averaging kernel.}\n\\begin{definition}[Averaging Kernel]\nAn averaging kernel $\\smthker$ is a non-negative even function,\n$\\smthker(x)=\\smthker(-x)$, of bounded variation that integrates to $1,$\ni.e., $\\int \\smthker(x) \\text{d} x =1.$\nWe call $\\smthker$ {\\em regular} if there exists $W \\in (0,\\pinfty]$ such that\n$\\smthker(x) = 0$ for $x \\not\\in [-W,W]$ and \n$\\smthker(x) > 0$ for $x \\in (-W,W).$ Note that we do not require $W$ to be finite,\nwe may have $W=\\infty.$\n\\end{definition}\n\nWe will generally assume a regular averaging kernel. 
This assumption can largely be dropped when $\\hf$ and $\\hg$ are continuous.\n\n\\bexample[Continuous Version of DE for the BEC]\nIf we specialize the maps to the case of transmission over the BEC we get\nthe update equations:\n\\begin{equation}\\label{eqn:gfrecursionBECcont}\n\\begin{split}\n\\fg^{t}(x) & = 1-\\rho(1-(\\ff^t \\otimes \\smthker) (x)), \\\\\n\\ff^{t+1}(x) & = \\epsilon \\lambda( (\\fg^{t} \\otimes \\smthker) (x) )\\,.\n\\end{split}\n\\end{equation}\n{\\hfill $\\ensuremath{\\Box}$}\n\\eexample\nFor compactness we will often use the notation\n$\\fS$ to denote $\\ff \\otimes \\smthker.$\n\nIn the usual manner of EXIT chart analysis, it is convenient to\nconsider simultaneously the plots of $\\hf$ and the reflected plot\nof $\\hg.$ More precisely, in the unit square $[0,1]^2,$ we consider\nthe monotonic curves\\footnote{If $\\hf$ or $\\hg$ is discontinuous then the curve interpolates\nthe jump with a line segment.} $(\\xg, \\hf(\\xg))$ and $(\\hg(\\xf), \\xf)$ for $\\xf, \\xg \\in [0,1].$ \nDensity evolution (DE)\nof the underlying (uncoupled) iterative system can then be viewed as a path drawn out by\nmoving alternately between these two curves (see Fig. \\ref{fig:exitbec36}).\nThis path has the characteristic ``staircase'' shape. \nWe will sometimes refer to the system being defined on $[0,1]\\times[0,1]$ with this picture in mind.\nThe fixed\npoints of DE of the uncoupled system correspond to the\npoints where these two curves meet or cross. Assuming continuity of $\\hf$ and $\\hg,$ they are the\npoints $(\\xg, \\xf)$ such that $(\\xg, \\hf(\\xg)) = (\\hg(\\xf), \\xf).$\n\nTo help with analysis in the discontinuous case we introduce the following notation.\nFor any $h \\in \\exitfns$ we write\n\\[\nu \\veq h(v)\n\\]\nto mean $u \\in [h(v-),h(v+)].$\n\\begin{definition}[Crossing Points]\nGiven $(\\hf,\\hg)\\in\\exitfns^2$ and we say that $(\\xg, \\xf)$ is a crossing point\nif \n\\[ \n\\xg \\veq \\hg(\\xf),\\text{ and } \\xf \\veq \\hf(\\xg)\\,.\n\\]\nThe following are three equivalent characterizations of crossing points.\n\\begin{itemize}\n\\item $u \\veq \\hfinv(v)$ and $v \\veq \\hginv(u),$\n\\item $u \\veq \\hg(v)$ and $u \\veq \\hfinv(v),$\n\\item $v \\veq \\hf(u)$ and $v \\veq \\hginv(u).$\n\\end{itemize}\n\nThe set of all crossing points will be denoted $\\cross(\\hf,\\hg).$ \nIt is easy to see that $\\cross(\\hf,\\hg)$ is closed as a subset of $[0,1]^2.$\nBy definition of $\\exitfns$,\nwe have $(0,0) \\in \\cross(\\hf,\\hf)$ and $(1,1) \\in \\cross(\\hf,\\hg).$ \nWe term $(0,0)$ and $(1,1)$\nthe {\\em trivial} crossing points and denote the non-trivial crossing points by\n\\[\n\\intcross (\\hf,\\hg) = \\cross(\\hf,\\hg)\\backslash \\{(0,0),(1,1)\\}.\n\\]\n\\end{definition}\n\nIf $(u,v)\\in\\cross(\\hf,\\hf)$ and $\\hf$ and $\\hg$ are continuous at\n$u$ and $v$ respectively then $(u,v)$ is a fixed point of density\nevolution. 
In general, if $(u,v)\\in\\cross(\\hf,\\hg)$ then $(u,v)$\nis a fixed point of density evolution for a pair of EXIT functions\nequivalent to the pair $(\\hf,\\hg).$ \n\n\\begin{lemma}\\label{lem:crossorder}\nFor any $\\hf,\\hg \\in \\exitfns$ the set $\\cross(\\hf,\\hg)$ is component-wise ordered,\ni.e., given $(u_1,v_1),(u_2,v_2) \\in \\cross(\\hf,\\hg)$ we have \n$(u_2-u_1)(v_2-v_1)\\ge 0.$\n\\end{lemma}\n\\begin{IEEEproof}\nLet $(u_1,v_1),(u_2,v_2) \\in \\cross(\\hf,\\hg).$\nIf $u_1 < u_2$ then, since $v_1 \\veq \\hf(u_1)$, $v_2 \\veq \\hf(u_2)$, and $\\hf$ is non-decreasing,\nwe have $v_1 \\le \\hf(u_1+) \\le \\hf(u_2-) \\le v_2.$\nThe case $u_1 > u_2$ is handled in the same way with the roles of the two points exchanged,\nand the claim follows.\n\\end{IEEEproof}\n\n\\begin{lemma}\nLet $\\hf^i,\\hg^i \\in \\exitfns$ be sequences with $\\hf^i \\rightarrow \\hf$ and $\\hg^i \\rightarrow \\hg.$\nThen, for any $\\delta>0,$ we have\n$\\cross(\\hf^i,\\hg^i) \\subset \\neigh{\\cross(\\hf,\\hg)}{\\delta}$ for\nall $i$ sufficiently large.\n\\end{lemma}\n\\begin{IEEEproof}\nAssume $(u^i,v^i)\\in\\cross(\\hf^i,\\hg^i)$ converges in $i$ to a limit point $(u,v).$ \nSince $\\hf^i(u) \\rightarrow \\hf(u)$ at points of continuity of $\\hf$\nit is easy to see that\n\\[\n\\liminf_{i\\rightarrow \\infty} \\hf^i(u^i-)\\ge \\hf(u-)\n\\]\nand\n\\[\n\\limsup_{i\\rightarrow \\infty} \\hf^i(u^i+)\\le \\hf(u+)\n\\]\nand it follows that $v \\veq \\hf(u).$\nSimilarly, $u \\veq\\hg(v).$\nHence, $(u,v) \\in \\cross(\\hf,\\hg).$\n \nSince $[0,1]^2 \\backslash \\neigh{\\cross(\\hf,\\hg)}{\\delta}$ is compact\nand the same argument applies to subsequences,\nthe Lemma follows.\n\\end{IEEEproof}\n\n\\begin{lemma}\nConsider initialization of system \\eqref{eqn:DE} with an arbitrary choice of $u^0.$\nThen the sequence $(u^1,v^1),(u^2,v^2),\\ldots$ is monotonic (either non-increasing or non-decreasing)\nin both coordinates.\n\\end{lemma}\n\\begin{IEEEproof} \nIf $v^{t+1} = \\hf(u^t) \\ge v^t$ then $u^{t+1}=\\hg(v^{t+1}) \\ge \\hg(v^t) = u^t.$\nAnd if $u^{t+1} \\ge u^t$ then $v^{t+2}=\\hf(u^{t+1}) \\ge \\hf(u^t) = v^{t+1}.$\n\\end{IEEEproof}\nIt follows that the sequence $(u^t,v^t)$ converges and the limit point is clearly a crossing point of $(\\hf,\\hg).$\nThus, the limiting behavior of the scalar component system is governed by crossing points.\nIn the spatially coupled system the behavior often involves a pair\nof crossing points. 
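\n\nTo illustrate the last two statements on a concrete example, the following small Python sketch iterates the component DE recursion \\eqref{eqn:DE} for the $(3, 6)$-regular BEC example starting from $u^0=1$: the iterates decrease monotonically in both coordinates and converge to a crossing point, namely to $(0,0)$ when $\\epsilon$ is below the BP threshold and to the non-trivial stable fixed point when it is above.
\begin{verbatim}
def hg(v):                      # check-node side: 1 - rho(1 - v) for the (3,6) ensemble
    return 1.0 - (1.0 - v)**5

def hf(u, eps):                 # variable-node side: eps * lambda(u)
    return eps * u**2

def run_de(eps, u0=1.0, iters=500):
    u, trace = u0, []
    for _ in range(iters):
        v = hf(u, eps)          # v^{t+1} = hf(u^t)
        u = hg(v)               # u^{t+1} = hg(v^{t+1})
        trace.append((u, v))
    return trace

for eps in (0.40, 0.50):        # below / above the BP threshold (about 0.4294)
    trace = run_de(eps)
    monotone = all(a[0] >= b[0] and a[1] >= b[1] for a, b in zip(trace, trace[1:]))
    print(eps, monotone, trace[-1])   # limit is (0,0) below threshold, non-trivial above
\end{verbatim}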
\n\nSuppose $(u_1,v_1) < (u_2,v_2)$ are fixed points of the component DE.\nIf $\\ff^0(x)\\in[v_1,v_2]$ for all $x$\nthen $\\ff^t(x)\\in[v_1,v_2]$ and\n$\\fg^t(x)\\in[u_1,u_2]$\nfor all $x$ and $t.$\nThus, in this situation the system is effectively confined to $[u_1,u_2]\\times [v_1,v_2].$\nThis circumstance occurs frequently but we can easily transform this into our canonical form.\nWe can introduce new coordinates $\\tilde{u},\\tilde{v}$ characterized by the inverse map\n\\begin{align*}\nu & = a \\tilde{u} + b \\\\\nv & = c \\tilde{v} + d\\,.\n\\end{align*}\nBy choosing $(b,d) = (u_1,v_1)$ and $(a,c) = (u_2-u_1,v_2-v_1)$\nwe map $(u,v) =(u_1,v_1)$ to $(\\tilde{u},\\tilde{v}) =(0,0)$ and $(u,v) =(u_2,v_2)$ to $(\\tilde{u},\\tilde{v}) =(1,1).$\nSimilarly, by choosing $(b,d) = (u_2,v_2)$ and $(a,c) = (u_1-u_2,v_1-v_2)$\nwe map $(u,v) = (u_2,v_2)$ to $(\\tilde{u},\\tilde{v}) = (0,0)$ and $(u,v)=(u_1,v_1)$ to $(\\tilde{u},\\tilde{v}) =(1,1).$\n(This shows the symmetry that allows us to occasionally exchange $(0,0)$ and $(1,1)$\nin the analysis.)\nBy rescaling the update functions appropriately we can thereby redefine the system on $[0,1]\\times[0,1].$\nIn particular we can define ${\\thf}(\\tilde{u}) = \\frac{1}{c}(\\hf( a \\tilde{u} + b) - d)$\nand ${\\thg}(\\tilde{v}) = \\frac{1}{a}(\\hg( c \\tilde{v} + d) - b).$\n\n\\bexample[EXIT Chart Analysis for the BEC] Figure~\\ref{fig:exitbec36}\nshows the EXIT chart analysis for the $(3, 6)$-regular ensemble\nwhen transmission takes place over the BEC. The left picture shows\nthe situation when the channel parameter is below the BP threshold.\nIn this case we only have the trivial FP at $(0, 0)$. According to our definition we\nhave $(1,1)$ as a crossing point, but it is not a fixed point because $\\hf(1)=\\epsilon<1.$ The right\npicture shows a situation when we transmit above\nthe BP threshold. We now see two further crossings of the EXIT curves\nand so $\\cross(\\hf,\\hg)$ is non-trivial.\n\\begin{figure}[htp]\n{\n\\centering\n\\input{ps\/exitbec36}\n}\n\\caption{\\label{fig:exitbec36} Left: The figure shows the EXIT\nfunctions $\\hf(\\xg)=\\epsilon \\lambda(\\xg)$ and $\\hg(\\xf)=1-\\rho(1-\\xf)$ for the $(3, 6)$-regular\nensemble and $\\epsilon=0.35$. Note that the horizontal axis is $\\xg$\nand the vertical axis is $\\xf$ so that we effectively plot the inverse\nof the function $1-\\rho(1-\\xf)$. Since $0.35=\\epsilon < \\epsilon^{\\text{BP}}\n\\approx 0.4294$, the two curves do not cross. The dashed ``staircase''\nshaped curve indicates how DE proceeds. Right: In this figure the\nchannel parameter is $\\epsilon=0.5 > \\epsilon^{\\text{BP}}$. Hence,\nthe two EXIT curves cross. In fact, they cross exactly twice (besides\nthe trivial FP at $(0, 0)$); the first point corresponds to an\nunstable FP of DE, whereas the second one is a stable FP. }\n\\end{figure}\n\nIn this case we can renormalize the system according to our prescription as follows.\nConsider the DE\nequations stated in (\\ref{eqn:DEBEC}). If $(\\xg^*, \\xf^*)$ is the\nlargest (in both components) FP of the corresponding\nDE and we set $\\hf(\\xg)=\\epsilon\\lambda(\\xg \\xg^*)\/\\xf^*$\nand $\\hg(\\xf)=(1-\\rho(1 - \\xf \\xf^*))\/\\xg^*$ then system \\eqref{eqn:DE}\nis again equivalent to \\eqref{eqn:DEBEC} on the restricted domain but, in addition, the component\nfunctions are continuous at $0$ and $1$, and $(0,0)$ and $(1,1)$ are the relevant fixed points. 
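\n\nA minimal numerical sketch of this renormalization, again for the $(3, 6)$ BEC example with $\\epsilon=0.5$, is given below: it locates the largest fixed point $(\\xg^*, \\xf^*)$ by iterating DE downward from $\\xf=1$ and then forms the rescaled component functions, which by construction take the value $1$ at $1$, so that $(1,1)$ becomes a trivial crossing point of the rescaled pair.
\begin{verbatim}
EPS = 0.5                                # any erasure probability above the BP threshold

def hf(u):  return EPS * u**2            # eps * lambda(u) for the (3,6) ensemble
def hg(v):  return 1.0 - (1.0 - v)**5    # 1 - rho(1 - v)

f = 1.0                                  # iterate DE downward from f = 1
for _ in range(10000):
    f = hf(hg(f))
f_star, g_star = f, hg(f)                # largest fixed point (g*, f*)

def hf_scaled(u):  return hf(u * g_star) / f_star    # rescaled component functions
def hg_scaled(v):  return hg(v * f_star) / g_star    # on [0,1] x [0,1]

print(g_star, f_star)
print(hf_scaled(1.0), hg_scaled(1.0))    # both equal 1, so (1,1) is a trivial crossing point
\end{verbatim}
\n\n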
This rescaling is indicated\nin the right picture of Figure~\\ref{fig:exitbec36} through the\ndashed gray box. Since the standard (unscaled) EXIT chart picture\nis very familiar in the coding context, we will continue to plot\nthe unscaled picture. But we will always indicate the scaled version\nby drawing a gray box as in the right picture of\nFigure~\\ref{fig:exitbec36}. This hopefully will not cause any\nconfusion. There is perhaps only one point of caution. The behavior\nof the coupled system depends on certain areas in this EXIT chart.\nThese areas are defined in the scaled version and are different by\na factor $\\xg^* \\xf^*$ in the unscaled version.\n{\\hfill $\\ensuremath{\\Box}$} \\eexample\n\n\n\nSo far we have considered the uncoupled system and seen that its\nbehavior can be characterized in terms of fixed points, or more generally crossing points. \nThe behavior of the\nspatially coupled system can also be characterized by its FPs. For the\nspatially coupled system a FP is not a pair of scalars, but a pair\nof functions $(\\tmplF(x), \\tmplG(x))$ such that if we set $\\ff^t(x)=\\tmplF(x)$\nand $\\fg^t(x)=\\tmplG(x)$, $t \\geq 0$, then these functions fulfill\n(\\ref{eqn:gfrecursion}). One set of FPs arises as the constant functions\ncorresponding to the fixed points of the underlying component system.\nThe crucial phenomenon in spatial coupling is the emergence of interpolating spatial\nfixed points, i.e., non-constant monotonic fixed point solutions.\nFor the coupled system it is\nfruitful not only to look at interpolating FPs but also at slightly more general objects,\nnamely interpolating {\\em waves}. Here a wave is like a FP, except that it\n{\\em shifts}. I.e., for $(\\tmplF(x), \\tmplG(x))$ fixed and for some real\nvalue $\\ashift$, if we set $\\ff^t(x)=\\tmplF(x-\\ashift t)$ and $\\fg^t(x)=\\tmplG(x-\\ashift\nt)$, $t \\geq 0$, then these functions fulfill (\\ref{eqn:gfrecursion}).\nWe will see that the behavior of coupled systems is governed\nby the (non)existence of such waves and this\n(non)existence has a simple graphical characterization in terms of\nthe component-wise EXIT functions and their associated FPs. This\nis the main technical result of this paper. In fact, the {\\em\ndirection} of travel of the wave\ndepends in a simple way on the EXIT functions and the area bounded\nby them. \nThe extremal values of spatial wave solutions are generally crossing points of the\nunderlying component system. One aspect of the analysis involves\ndetermining the crossing points that can appear as such an extremal value.\nThe solution is formulated in terms of the following definition.\n\n\\begin{definition}[Component Potential Functions]\nFor any pair $(\\hf,\\hg) \\in \\exitfns^2$ and point $(u,v) \\in [0,1]^2,$ we define\n\\[\n\\altPhi(\\hf,\\hg;u,v) = \\int_0^u \\hginv(u')\\text{d}u' + \\int_0^v \\hfinv(v')\\text{d}v' \\, - uv\\,.\n\\]\n\\end{definition}\n\n{\\em Discussion:} The functional $\\altPhi$ serves as a potential function\nfor the scalar system. \nAssuming continuity of $\\hfinv$ at $v$ and continuity of $\\hginv$ at $u$ we have\n$\\nabla \\altPhi(\\hf,\\hg; u,v) = (\\hginv(u)-v,\\hfinv(v)-u).$\nThus, under some regularity conditions\na crossing point $(\\xg,\\xf)$ is a stationary point of $\\altPhi$,\ni.e., $ \\nabla \\altPhi(\\hf,\\hg; u,v) =0$\nfor $(u,v)\\in \\cross(\\hf,\\hg).$\n\nNote that in the definition of $\\altPhi$ we have used\n$(0,0)$ as an originating point. 
We can choose the origin arbitrarily.\nWe have\n\\begin{align}\\begin{split}\\label{eqn:potdiff}\n&\\altPhi(\\hf,\\hg;u,v) - \\altPhi(\\hf,\\hg;u_1,v_1)\\\\\n= &\n\\int_{u_1}^u \\hginv(u')\\text{d}u' + \\int_{v_1}^v \\hfinv(v')\\text{d}v' \\, - uv + u_1 v_1\n\\\\= &\n\\int_{u_1}^u (\\hginv(u')-v_1)\\text{d}u' + \\int_{v_1}^v (\\hfinv(v')-u_1)\\text{d}v' \\,\n\\\\&\\qquad - (u-u_1)(v-v_1)\n\\end{split}\n\\end{align}\nand we see that we can place the origin at $(u_1,v_1)$ while preserving differences.\n\nA straightforward calculation, noting that\n\\(\nu\\hf(u) = \\int_0^u \\hf(u')\\, du' + \\int_0^{\\hf(u)} \\hfinv(v)\\, dv\n\\)\nand\n\\(\nv\\hg(v) = \\int_0^v \\hg(y)\\, dy + \\int_0^{\\hg(v)} \\hginv(x)\\, dx\\,,\n\\)\nshows that for all $(\\hf,\\hg)$ we have\n\\begin{align}\n\\begin{split}\\label{eqn:altPhiPhi}\n&\\altPhi(\\hf,\\hg;\\hg(v),\\hf(u))\\\\\n=&\nuv - (u-\\hg(v))(v-\\hf(u)) \\\\\n& - \\int_0^u \\hf(u')\\text{d}u' - \\int_0^v \\hg(v')\\text{d}v'\\, .\n\\end{split}\n\\end{align}\nA similar potential function form, along the lines of \\eqref{eqn:altPhiPhi}, is\n\\begin{align*}\n\\Phi(\\hf,\\hg;u,v) = uv - \\int_0^u \\hf(u')\\text{d}u' - \\int_0^v \\hg(v')\\text{d}v'\\, .\n\\end{align*}\nThis functional is also stationary on the FPs of the component density evolution and is equal\nto $\\altPhi(\\hf,\\hg;\\cdot,\\cdot)$ on the points $(u,\\hf(u))$ and $(\\hg(v),v).$\nThis form underlies the work in \\cite{TTK11b, DJM11, YJNP12a, YJNP12b}.\nWe prefer $\\altPhi$ because of various properties developed below. The two forms are\nrelated through Legendre transforms, e.g., $\\int_0^u \\hf(u')\\text{d}u'$ is the Legendre transform\nof $\\int_0^v \\hfinv(v')\\text{d}v'.$ We see that on the graph of \n$\\hf$ or $\\hg,$ i.e., on either the set with\n$v=\\hf(u)$ or $u=\\hg(v),$ the two functionals are equivalent up to reparameterization.\nIn terms of $\\altPhi$ we have\n\\begin{align}\n\\label{eqn:altPhiuhfu}\n\\altPhi(\\hf,\\hg;u,\\hf(u)) &= \\int_0^u (\\hginv(u')-\\hf(u'))\\,du'\\\\\n\\altPhi(\\hf,\\hg;\\hg(v),v) &= \\int_0^v (\\hfinv(v')-\\hg(v'))\\,dv' \\label{eqn:altPhivhgv}\n\\end{align}\n\n\n\\begin{lemma}\\label{lem:monotonic}\nThe function $\\altPhi(\\hf,\\hg;u,v)$ is convex in $u$ for fixed $v$ and convex in $v$ for fixed $u.$\nIn addition, for all $(u,v) \\in[0,1]^2$ we have\n\\begin{align*}\n\\altPhi(\\hf,\\hg;u,v) &\\ge \\altPhi(\\hf,\\hg;u,\\hf(u)) \\\\\n\\altPhi(\\hf,\\hg;u,v) &\\ge \\altPhi(\\hf,\\hg;\\hg(v),v) \n\\end{align*}\nwith equality holding in the first case if and only if $v\\veq\\hf(u)$\nand\n in the second case if and only if $u\\veq\\hg(v).$\n\\end{lemma}\n\\begin{IEEEproof}\nIt is easy to check that $\\altPhi(\\hf,\\hg;u,v)$ is Lipschitz (hence absolutely) continuous \nand we have almost everywhere\n\\begin{align}\n\\begin{split}\\label{eqn:altPhiderivatives}\n\\frac{\\partial}{\\partial u} \\altPhi(\\hf,\\hg;u,v) &= \\hginv(u) - v, \\\\\n\\frac{\\partial}{\\partial v} \\altPhi(\\hf,\\hg;u,v) &= \\hfinv(v) - u \\,.\n\\end{split}\n\\end{align}\nThe lemma now follows immediately from the monotonicity (non-decreasing) of $\\hginv$ and $\\hfinv.$\n\\end{IEEEproof}\nWe have immediately the following two results.\n\\begin{corollary}\\label{cor:monotonic}\nIf $(u^0,v^0) \\in [0,1]^2$ and we define $(u^t,v^t)$ for $t \\ge 1$ via\n\\eqref{eqn:DE} then $\\altPhi(\\hf,\\hg;u^t,v^t)$ is a non-increasing sequence in $t.$\n\\end{corollary}\n\\begin{lemma}\\label{lem:miniscross}\nWe have $(u,v) \\in\\cross(\\hf,\\hg)$ if and only if\n$\\altPhi(\\hf,\\hg;u',v)$ is minimized at 
$u'=u$ and\n$\\altPhi(\\hf,\\hg;u,v')$ is minimized at $v'=v.$\n\\end{lemma}\n\nWe will be most interested in the value of $\\altPhi$\nat crossing points $(u,v) = \\cross(\\hf,\\hg).$\n\n\nOne of the key results on the existence of wave solutions, and especially spatial fixed points,\nis that the crossing points associated to the extremal values of the solution are extreme\n(minimizing) values of the the $\\altPhi$ over the range spanned by the solution. The following definition characterizes this.\n\n\\begin{definition}[Strictly Positive Gap Condition]\nWe say that the pair of functions $(\\hf,\\hg)$ satisfies the {\\em\nstrictly positive gap condition} if $\\cross(\\hf,\\hg)$ is {\\em\nnon-trivial} and if \n\\[\n(u,v) \\in \\intcross(\\hf,\\hg) \\Rightarrow \\altPhi(\\hf,\\hg;u,v) > \\max\\{0,A(\\hf,\\hg)\\}\n\\] \nwhere we define $A(\\hf,\\hg) =\\altPhi(\\hf,\\hg;1,1).$\nWe say that the pair of functions $(\\hf,\\hg)$ satisfies the {\\em\npositive gap condition} (no longer strict) if $\\cross(\\hf,\\hg)$ is {\\em\nnon-trivial} and \n\\[\n(u,v) \\in \\intcross(\\hf,\\hg) \\Rightarrow \\altPhi(\\hf,\\hg;u,v) \\ge \\max\\{0,A(\\hf,\\hg)\\}\\,.\n\\] \n {\\hfill $\\ensuremath{\\Box}$}\n\\end{definition}\n\n{\\em Discussion:} The (strictly) positive gap condition is related to the existence of interpolating spatial fixed point solutions.\nIn particular, we will see that systems possessing $(0,1)$-interpolating fixed point solutions must satisfy the positive gap condition\nand have $A(\\hf,\\hg)=0.$ Systems satisfying the strictly positive gap condition with $A(\\hf,\\hg)=0$ will be proven to\npossess $(0,1)$-interpolating spatial fixed point solutions. The cases where $A(\\hf,\\hg) \\neq 0$ correspond to $(0,1)$-interpolating traveling wave solutions.\n\n\\blemma[Trivial Behavior]\\label{lem:trivialbehavior}\nIf $\\cross(\\hf,\\hg)$ is trivial then the system behavior is simplified\nand under DE, i.e., under $\\eqref{eqn:gfrecursion}$, the only spatial fixed points are with\n$\\ff^t$ and $\\fg^t$ set to either the constant $0$ or the constant\n$1,$ one of which is stable and one of which is unstable.\nThe system converges for all initial values, other than the unstable spatial fixed point itself,\nto the stable spatial fixed point.\n\\elemma\n\nNow that we have covered the ``trivial'' cases, let us consider\nthe system behavior when $\\cross(\\hf,\\hg)$ is non-trivial. As we\nwill see, it is qualitatively different. \nThe value $A(\\hf,\\hg)=\\altPhi(\\hf,\\hg;1,1)$ plays an important role\nin the behavior of the system. This is why we \nintroduced a special notation. The strictly positive gap condition\nimplies that the value of $\\altPhi(\\hf,\\hg;\\xg,\\xf)$ for $(\\xg,\\xf)\\in \\intcross(\\hf,\\hg)$\nis strictly larger than\nthe values $0$ and $A(\\hf,\\hg)$ found at the two trivial fixed points. We will\nsee that this condition is related to the existence of wave-like\nsolutions that interpolate between the two trivial fixed\npoints.\n\n\\bexample[Positive Gap Condition for the BEC]\nFigure~\\ref{fig:positivegapbec36} illustrates the (strictly) positive\ngap condition for the $(3, 6)$-regular ensemble when transmission\ntakes place over the BEC. The left picture shows the situation when\nthe channel parameter is between the BP and the MAP threshold of\nthe underlying ensemble. The right picture shows the situation when\nthe channel parameter is above the MAP threshold of the underlying\nensemble. 
In both cases $\\cross(\\hf,\\hg)$ contains one non-trivial\nFP $(\\xg, \\xf)$ and for this FP $\\altPhi(\\hf,\\hg;\\xg,\\xf) > \\max\\{0,A\\}$, i.e.,\nboth cases fulfill the strictly positive gap condition.\n\nIn the first case $A>0$, whereas in the second case $A<0$. We\nwill see in Theorem~\\ref{thm:mainexist} below that this\nchange in the sign of $A$ leads to a reversal of direction of a wave-like\nsolution to the system and hence to fundamentally different asymptotic behavior.\nBoth pictures show the unscaled curve and the lightly shaded box\nshows what the picture would look like if we rescaled it so that\nthe largest FP appears at $(1, 1)$.\n\nIt is not hard to see that the strictly positive gap\ncondition is necessarily satisfied for any $\\hf,\\hg$ for which $\\cross(\\hf,\\hg)$\nhas a single non-trivial fixed point, and for which $(0,0)$ and\n$(1,1)$ are stable fixed points under the DE equations \\eqref{eqn:DE}.\n{\\hfill $\\ensuremath{\\Box}$}\n\\eexample\n\nWe are now ready to state the main result concerning the existence of interpolating wave solutions.\n\n\\begin{theorem}[Existence of Continuum Spatial Waves]\\label{thm:mainexist}\nAssume that $\\smthker$ is a regular averaging kernel.\nLet $(\\hf,\\hg)$ be a pair of functions in $\\exitfns$ satisfying the strictly positive\ngap condition.\n\nThen there exist $(0,1)$-interpolating functions $\\tmplF,\\tmplG \\in\\sptfns$ and a\nreal-valued constant $\\ashift,$ satisfying $\\sgn(\\ashift) = \\sgn(A(\\hf,\\hg))$\nand $|\\ashift| \\ge |A(\\hf,\\hg)|\/\\|\\smthker\\|_\\infty$, such that setting\n$f^t(x) = \\tmplF(x - \\ashift t)$ and $g^t(x) = \\tmplG(x - \\ashift t)$ for\n$t=0,1,\\ldots$ solves \\eqref{eqn:gfrecursion}\\,.\n\\end{theorem}\n\nWe remark that we can relax the regularity condition on $\\smthker$ if\n$\\hf$ and $\\hg$ are continuous; cf. Lemma \\ref{lem:pathology}.\n\n\\bexample[Spatial wave for the BEC]\nFigure~\\ref{fig:spatialfpbec36} shows the spatial waves whose existence\nis guaranteed by Theorem~\\ref{thm:mainexist} for the $(3, 6)$\nensemble and transmission over the BEC. The top picture corresponds\nto the cases $\\epsilon=0.45$ and the bottom picture to the case\n$\\epsilon=0.53$. In both cases we used the smoothing kernel\n$\\omega(x)=\\frac12 \\mathbbm{1}_{\\{|x|\\leq 1\\}}$. As predicted, in\nthe first case the curve moves to the right by a value of\n$0.142 \\geq |A|\/\\|\\smthker\\|_\\infty = 0.03125 \\times 2 =0.06245$\nand in the second case the curve moves to the left by an amount\nof $0.101 \\geq |A|\/\\|\\smthker\\|_\\infty = 0.0253740 \\times 2 = 0.0507498$.\n\\begin{figure}[htp]\n{\n\\centering\n\\input{ps\/fpbec36}\n}\n\\caption{\\label{fig:spatialfpbec36}\nFPs whose existence is guaranteed by Theorem~\\ref{thm:mainexist}\nfor the $(3, 6)$ ensemble and transmission over the BEC. The top\npicture corresponds to the cases $\\epsilon=0.45$ and the bottom\npicture to the case $\\epsilon=0.53$. In both cases we used the\nsmoothing kernel $\\omega(x)=\\frac 12 \\mathbbm{1}_{\\{|x|\\leq 1\\}}$. The dashed curve\nis the result of applying one step of DE to the solid curve. As\npredicted, in the top picture the curve moves to the right (the\ncorresponding gap $A$ in Figure~\\ref{fig:positivegapbec36} is\npositive) whereas in bottom picture the curve moves to the left\n(the corresponding gap $A$ is negative). The shifts are $0.142$ and\n$-0.102$, respectively. 
} \\end{figure} {\\hfill $\\ensuremath{\\Box}$}\n\\eexample\n\nOne consequence of Theorem \\ref{thm:mainexist} is that the existence of an\n$(0,1)$-interpolating fixed point implies $A(\\hf,\\hg) = 0.$ This is true even without\nregularity assumptions.\n\n\\begin{theorem}[Continuum Fixed Point Positivity]\\label{thm:FPAzero}\nLet $\\smthker$ be an averaging kernel (not necessarily regular) and\nassume that there exists a $(0,1)$-interpolating fixed point solution to \\eqref{eqn:gfrecursion}.\nThen $(\\hf,\\hg)$ satisfies the positive gap condition and $A(\\hf,\\hg) = 0.$\n\\end{theorem}\nA more general version of this result appears as Lemma \\ref{lem:FPAzero}.\n\nTheorem \\ref{thm:mainexist} is our most fundamental result concerning the spatially coupled system.\nOne limitation of the result arises in cases with infinitely many crossing points. In such a case it\ncan be difficult to extract asymptotic behavior since there may exist many wave-like solutions\nand the strictly positive gap condition may not hold globally.\nFor such cases we develop the following altered analysis.\n\nLet $\\hf$ and $\\hg$ be given and define\n\\[\nm(\\hf,\\hg)=\\min_{(u,v) \\in [0,1]^2} \\altPhi(\\hf,\\hg;u,v)\n\\]\nDefine\n\\[\n\\cross_m(\\hf,\\hg) = \\{(u,v)\\in\\cross(\\hf,\\hg) : \\altPhi(\\hf,\\hg;u,v) = m\\}\\,.\n\\]\nSince $\\altPhi(\\hf,\\hg;\\cdot,\\cdot)$ is continuous it follows that\n$\\cross_m(\\hf,\\hg)$ is closed. Since $\\cross(\\hf,\\hg)$ is component-wise linearly\nordered we can define\n\\begin{align*}\n(u',v') & = \\min \\cross_m(\\hf,\\hg)\\\\\n\\intertext{and}\n(u'',v'')& = \\max \\cross_m(\\hf,\\hg)\\\\\n\\end{align*}\nwhere $\\min$ and $\\max$ are component-wise.\n\n\\begin{theorem}[General Continuum Convergence]\\label{thm:globalconv}\nLet $(\\hf,\\hg)$ be given as above, let $\\smthker$ be regular,\nand assume $\\ff^0 \\in \\sptfns$ is given\nwith $\\ff^0(\\minfty)\\le v''$ and \n$\\ff^0(\\pinfty)\\ge v'.$\nThen in system \\ref{eqn:gfrecursion} we have for all $x\\in \\reals$\n\\begin{align*}\n\\liminf_{t\\rightarrow \\infty} f^t(x) &\\ge v' \\quad \\liminf_{t\\rightarrow \\infty} g^t(x) \\ge u' \\\\\n\\limsup_{t\\rightarrow \\infty} f^t(x) &\\le v'' \\quad \\limsup_{t\\rightarrow \\infty} g^t(x) \\le u'' \\,.\n\\end{align*}\n\\end{theorem}\nThe proof may be found in appendix \\ref{app:E}.\n\nNote, in particular, that if $\\altPhi$ is uniquely minimized at some crossing point $(u,v),$ then\nthis point is a fixed point of the component system and if the spatial system is initialized (either $f$ or $g$) with this point (the appropriate coordinate)\n in the closed range spanned by the initial condition, then the coupled system\nglobally converges to the constant function associated to this fixed point.\n\nIn many applications the problems are spatially discrete and finite length.\nThe analysis can be applied to these cases with suitable adjustments. As a first step we state a\nresult analogous to Theorem \\ref{thm:mainexist}\nfor a spatially discrete system. The DE equations\nfor the spatially discrete version can be written as in\n\\eqref{eqn:gfrecursion} with the following modifications:\nthe variable $x$ is discrete, the averaging kernel is a discrete sequence, and\nthe convolution\noperation is convolution of discrete sequences. The analysis views the\nspatially discrete problem as a sampled version of the continuum\nversion. 
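\n\nBefore formalizing the sampling, it is instructive to simply run the spatially discrete recursion. The Python sketch below does this for the $(3, 6)$ BEC example with the uniform discrete kernel of half-width $W$ and a $(0,1)$-interpolating initial condition. The treatment of the chain ends (positions outside the simulated window are frozen at $0$ on the left and at $1$ on the right) is a simplification chosen for this sketch; with it, the decoded region grows steadily for $\\epsilon=0.45$ (where $A>0$) and recedes towards the left end for $\\epsilon=0.53$ (where $A<0$), in line with the wave picture developed above.
\begin{verbatim}
import numpy as np

EPS, W, N, T = 0.45, 3, 401, 150       # erasure prob., kernel half-width, chain length, iterations

def hf(u):  return EPS * u**2          # (3,6) ensemble over the BEC
def hg(v):  return 1.0 - (1.0 - v)**5

kernel = np.ones(2 * W + 1) / (2 * W + 1)

def smooth(f):
    # discrete convolution with the chain embedded in a frozen environment:
    # positions left of the window are held at 0, positions right of it at 1
    padded = np.concatenate([np.zeros(W), f, np.ones(W)])
    return np.convolve(padded, kernel, mode='valid')

f = np.where(np.arange(N) < N // 2, 0.0, 1.0)   # (0,1)-interpolating initial condition
for t in range(1, T + 1):
    g = hg(smooth(f))
    f = hf(smooth(g))
    if t % 50 == 0:
        # size of the decoded region: it grows for EPS = 0.45 (A > 0) and
        # shrinks back towards the left end for EPS = 0.53 (A < 0)
        print(t, int((f < 0.5).sum()))
\end{verbatim}
\n\n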
In the limit of infinitely fine sampling the discrete\nversion converges to the continuum version.\n\n\\subsection{Discrete Spatial Sampling}\n\nLet $x_i = i \\Delta$ and let $\\discsmthker$ be a non-negative\nfunction over $\\integers$ that is even, $\\discsmthker_i =\n\\discsmthker_{-i},$ and sums to $1,$ $\\sum_i \\discsmthker_i = 1.$\nIt is convenient to interpret $\\discsmthker$ as a discretization of\n$\\smthker,$ i.e.,\n\\begin{equation}\\label{eqn:kerdiscretetosmth}\n\\discsmthker_i = \\int_{(i-\\frac{1}{2})\\Delta}^{(i+\\frac{1}{2})\\Delta} \\smthker(z)\\text{d}z.\n\\end{equation}\nThis relationship then makes it clear that the discrete ``width'' of\nspatial averaging is inversely proportional to $\\Delta.$\nA good example is the smoothing kernel\n$\\smthker(x) = \\frac{1}{2} \\mathbbm{1}_{\\{|x|\\le 1\\}}.$\nIf we set $\\Delta = \\frac{2}{2W+1}$ then $\\discsmthker_i =\n\\frac{1}{2W+1} \\mathbbm{1}_{\\{|i|\\le W\\}}.$\nGiven a real-valued function $\\fg$ defined on $\\Delta\\integers$ we will call \nthe function $\\tfg \\in \\sptfns,$ defined as \n$\\tfg(x)=\\fg(x_i)$ for $x\\in[x_i-\\Delta\/2,x_i+\\Delta\/2),$\nthe {\\em piecewise constant extension} of $\\fg.$ \nNote that by this definition, we have\n\\begin{align*}\n\\tfg^{\\smthker}(x_i)&=\\int_{-\\infty}^\\infty \\smthker(x_i-y)\\tfg(y)dy\n\\\\&=\n\\sum_{j=-\\infty}^\\infty\n\\int_{x_j-\\Delta\/2}^{x_j+\\Delta\/2} \\smthker(x_i-y)\\tfg(y)dy\n\\\\\n&=\n\\sum_{j=-\\infty}^\\infty \\discsmthker_{i-j}\\fg(x_j)\\,\n\\\\&=\n\\fg^{\\discsmthker}(x_i)\\,.\n\\end{align*}\n\nWith this framework in mind, we can write the spatially discrete DE equations as follows.\n\\begin{equation}\\label{eqn:discretegfrecursion}\n\\begin{split}\n\\fg^{t}(x_i) & = \\hg ((\\ff^t \\otimes \\discsmthker) (x_i)) \\\\\n\\ff^{t+1}(x_i) & = \\hf ( (\\fg^{t} \\otimes \\discsmthker) (x_i) )\\,.\n\\end{split}\n\\end{equation}\n\n\\bexample[Spatially Discrete DE for the BEC]\n\\begin{equation}\\label{eqn:gfrecursionBEC}\n\\begin{split}\n\\fg^{t}(x_i) & = 1-\\rho((\\ff^t \\otimes \\discsmthker) (x_i)), \\\\\n\\ff^{t+1}(x_i) & = \\epsilon \\lambda( (\\fg^{t} \\otimes \\discsmthker) (x_i) )\\,.\n\\end{split}\n\\end{equation}\n{\\hfill $\\ensuremath{\\Box}$}\n\\eexample\n\nAn elementary but critical result relating the spatially continuous case to the\ndiscrete case is the following.\n\\begin{lemma}\\label{lem:disccontbnd}\nLet $\\tmplF \\in \\sptfns$ and let $f$ be a real-valued function defined on $\\Delta \\integers.$\nThen, if for all $i$ we have\n\\(\nf(x_i) \\le \\tmplF(x_i)\n\\)\nthen \n\\(\nf^{\\discsmthker}(x_i) \\le \\tmplF^{\\smthker}(x_i+\\half\\Delta)\n\\)\nand if\n\\(\nf(x_i) \\ge \\tmplF(x_i)\n\\)\nthen \n\\(\nf^{\\discsmthker}(x_i) \\ge \\tmplF^{\\smthker}(x_i-\\half\\Delta)\n\\)\n\\end{lemma}\n\\begin{IEEEproof}\nAssume\n$\\ff(x_i) \\le \\tmplF(x_i)$ (for all $i$).\nConsider the piecewise constant extension $\\tilde{\\ff}.$ \nIt follows that $\\tilde{\\ff}(x) \\le \\tmplF(x+\\half\\Delta)$ for all $x$\nand so $\\ff^{\\discsmthker}(x_i) = \\tilde{\\ff}^{\\smthker}(x_i) \\le \\tmplF^{\\smthker}(x_i+\\half\\Delta)$\nfor each $i.$\n\nThe opposite inequality is handled similarly.\n\\end{IEEEproof}\n\nApplying the above to system \\eqref{eqn:discretegfrecursion} we obtain the following.\n\\begin{theorem}[Continuum-Discrete Bounds] \\label{thm:mainquantize}\nAssume that $\\discsmthker$ is a discrete sequence\nrelated to a regular smoothing kernel $\\smthker$\nas indicated in \\eqref{eqn:kerdiscretetosmth}.\nLet $\\ff^t_c,\\fg^t_c \\in \\sptfns,\\, 
t=0,1,2,\\ldots$ denote spatially continuous functions determined\naccording to \\eqref{eqn:gfrecursion} and let\n$\\ff^t,\\fg^t$ denote spatially discrete functions determined\naccording to \\eqref{eqn:discretegfrecursion}.\nThen, if\n$\\ff^0(x_i) \\le \\ff^0_c (x_i)$ (for all $i$) then $\\ff^t(x_i) \\le \\ff^t_c(x_i + t\\Delta )$ and\n$\\fg^t(x_i) \\le \\fg_c^t(x_i + (t+\\half) \\Delta)$ for all $t.$\nSimilarly, if\n$\\ff^0(x_i) \\ge \\ff^0_c (x_i)$ (for all $i$) then $\\ff^t(x_i) \\ge \\ff^t_c(x_i - t\\Delta )$ and\n$\\fg^t(x_i) \\ge \\fg_c^t(x_i - (t+\\half) \\Delta)$ for all $t.$\n\\end{theorem}\n\\begin{IEEEproof}\nAssume\n$\\ff^0(x_i) \\le \\ff_c^0(x_i)$ (for all $i$).\nBy Lemma \\ref{lem:disccontbnd} $\\ftdS{0}(x_i) \\le \\ff_c^{0,\\smthker}(x_i+\\half\\Delta)$ for each $i.$\nBy monotonicity of $\\hg$ we have\n\\[\n\\fg^0(x_i) = \\hg(\\ftdS{0}(x_i))\\le \\hg(\\ff_c^{0,\\smthker}(x_i+\\half\\Delta)) = \\fg_c^0(x_i+\\half\\Delta).\n\\]\nBy the same argument we obtain\n\\(\n\\gtdS{0}(x_i) \\le \\fg_c^{0,\\smthker}(x_i+\\Delta)\\,\n\\)\nand hence\n\\(\n\\ff^1(x_i)\\le \\ff_c^1(x_i+\\Delta).\n\\)\nThe general result now follows by induction.\n\nThe reverse inequality can be handled similarly.\n\\end{IEEEproof}\n\nThis result is convenient when there exist wave-like solutions.\nFor example, if $\\ff_c^t(x) = \\tmplF(x-\\ashift t)$ with $\\ashift>0$ and\n$\\tmplF$ is a $(0,1)$-interpolating function, and\n$\\ff^0(x_i) \\le \\ff_c^0(x_i),$ then we have\n$\\ff^t(x_i) \\le \\tmplF(x_i-(\\ashift-\\Delta) t).$\nThus, if $\\ashift > \\Delta$ then we obtain asymptotic convergence for the\nspatially discrete case.\nHence we have the following result.\n\n\\begin{theorem}[Discrete Spatial Convergence]\\label{thm:mainqqqqq}\nAssume that $\\smthker$ is a regular averaging kernel.\nLet $(\\hf,\\hg)$ be a pair of functions in $\\exitfns$ satisfying the strictly positive\ngap condition.\nAssume $\\Delta < |A(\\hf,\\hg)|\/\\|\\smthker\\|_\\infty$\nand initialize system \\eqref{eqn:discretegfrecursion} with\nany $(0,1)$-interpolating $\\ff^0 \\in \\sptfns.$\nIf $A(\\hf,\\hg)>0$ then $\\ff^t(x_i) \\rightarrow 0$ and\nif $A(\\hf,\\hg)<0$ then $\\ff^t(x_i) \\rightarrow 1$ \nfor all $x_i.$\n\\end{theorem}\n\n\nThis result gives order ${\\Delta}$ convergence of the spatially discrete system to the continuum one (under positive gap assumptions).\nMuch faster convergence is observed in many situations. \nIn \\cite{HMU11b} a particular example is presented with a compelling heuristic argument for exponential convergence.\nIn general the rate of convergence appears to depend on the regularity of $\\hf$ and $\\hg$ and\n$\\smthker.$\nA $(0,1)$-interpolating spatial fixed point does not sample $\\hf$ and $\\hg$ at every value, so one cannot\nconclude that $A(\\hf,\\hg) = 0$ and, indeed, this generally does not hold. One can construct fixed point examples where\n$|A(\\hf,\\hg)|$ is of order $\\Delta.$\nAs a general result we have the following.\n\\begin{theorem}\\label{thm:discreteFPDelta}\nAssume $\\hf$ and $\\hg$ have a $(0,1)$-interpolating fixed point\nfor the spatially discrete system. Then,\n\\[\n|A(\\hf,\\hg) | \\le 2{\\Delta}\\|\\smthker\\|_\\infty\n\\]\n\\end{theorem}\nAs indicated, regularity assumptions on $\\hf,\\hg$ can lead to stronger results. \nIn this direction we have the following.\n\\begin{theorem}[$C^2$ Discrete Fixed Point Bound]\\label{thm:discreteFPsum}\nAssume $\\hf$ and $\\hg$ are $C^2$ and there exists a $(0,1)$-interpolating\nspatial fixed point for the spatially discrete system. 
Then\n\\[\n|A(\\hf,\\hg) | \\le \n\\frac{1}{2} (\\|\\hf''\\|_\\infty+\\|\\hg''\\|_\\infty)\\|\\smthker\\|_\\infty^2{\\Delta^2}\n\\]\n\\end{theorem}\nProofs for the above are presented in appendix \\ref{app:Aa}.\n\nFor discrete systems where gap conditions may be difficult to verify we may require more general\nresults.\nEspecially challenging are cases with an infinite number of crossing points\nclustering near the extremal ones. For such generic situations we have\nthe following spatially discrete version of Theorem \\ref{thm:globalconv}.\n\\begin{theorem}[General Discrete Convergence]\\label{thm:discreteglobalconv}\nLet $(\\hf,\\hg)$ be given as in Theorem \\ref{thm:globalconv}, let $\\smthker$ be regular,\nand assume $\\ff^0 \\in \\sptfns$ is given\nwith $\\ff^0(\\minfty)\\le v''$ and \n$\\ff^0(\\pinfty)\\ge v'.$\nThen, for any $\\epsilon>0,$ in system \\ref{eqn:discretegfrecursion} \nwith $\\Delta$ sufficiently small we have for all $x\\in \\reals$\n\\begin{align*}\n\\liminf_{t\\rightarrow \\infty} f^t(x) &\\ge v'-\\epsilon \\quad \\liminf_{t\\rightarrow \\infty} g^t(x) \\ge u'-\\epsilon \\\\\n\\limsup_{t\\rightarrow \\infty} f^t(x) &\\le v''+\\epsilon \\quad \\limsup_{t\\rightarrow \\infty} g^t(x) \\le u''+\\epsilon \\,.\n\\end{align*}\n\\end{theorem}\nThe proof may be found in appendix \\ref{app:E}.\n\n\nFinite length systems can be modeled by introducing spatial dependence\ninto the definition of $\\hf$ and\/or $\\hg.$ For example, in the LDPC-BEC case termination corresponds\nto setting $\\hf = 0$ outside some finite region.\nThe case that most closely follows the unterminated analysis\nis one-sided termination, e.g., setting $\\hf=0$ for $x<0.$\nWhen $A(\\hf,\\hg)>0$ and the strictly positive gap condition holds we can apply Theorem \\ref{thm:mainexist}\nto conclude that the infinite length unterminated system has a wave-like solution\nthat converges point-wise to $0.$\nSuch a solution can often be used to bound from above the\nsolutions for terminated cases to show that\ntheir solutions also tend to $0.$\nAlternatively, we can apply Theorem \\ref{thm:globalconv} to conclude that even if we remove the termination\nafter initialization the system will converge to $0.$ \n\n\\subsection{One-sided Termination}\nSince setting $\\hf=0$ over some region reduces $f$ relative to the\nunterminated case, it is more difficult to obtain lower bounds for\nthe terminated case. It turns out for one-sided termination,\nhowever, that an analogy can be drawn between the spatial variation\nin $\\hf$ and a global perturbation in $\\hf$ that is spatially invariant\nand which then allows application of Theorem \\ref{thm:mainexist}.\n\nLet us formally define the one-sided termination version of\n\\eqref{eqn:gfrecursion} to be the system that follows\n\\eqref{eqn:gfrecursion}\nexcept that when $x<0$ we set $f^t(x)=0$ for all $t.$\nThis is equivalent to redefining $\\hf = \\hf(x;u)$ to have spatial dependence so that\nwhen $x<0$ we have $\\hf(x;u)=0$ and for $x \\ge 0$ we have $\\hf(x;u) =\\hf(u)$ as before.\n\nSince this system is not translation invariant,\nit does not admit interpolating traveling wave-like solutions. It does, however,\nadmit interpolating spatial fixed points.\n\nLet us define\n\\[\n\\unitstep_a (x) = \\begin{cases}\n0 & x <0 \\\\\na & x = 0 \\\\\n1 & x > 0\n\\end{cases}\n\\]\nIn some cases the value of $a$ is immaterial and we may drop the subscript from the \nnotation. 
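\n\nBefore stating the fixed point results for terminated systems, the following minimal numerical sketch may help build intuition. It iterates the one-sided terminated version of \\eqref{eqn:discretegfrecursion} for the $(3,6)$ BEC recursion \\eqref{eqn:gfrecursionBEC} with the uniform kernel $\\discsmthker_i = \\frac{1}{2W+1}\\mathbbm{1}_{\\{|i|\\le W\\}}$; the values of $\\epsilon$ and $W$, the chain length, and the way the boundary is modeled are illustrative choices and are not taken from the text.\n\\begin{verbatim}\nimport numpy as np\n\n# One-sided terminated, spatially discrete DE for the (3,6)-regular\n# ensemble on the BEC; eps, W, N, T are illustrative choices.\neps, W, N, T = 0.45, 2, 200, 2000\nw_hat = np.ones(2*W + 1) / (2*W + 1)    # uniform discrete kernel\n\ndef smooth(u):\n    # discrete convolution with w_hat, extending u by its edge values\n    padded = np.concatenate([np.full(W, u[0]), u, np.full(W, u[-1])])\n    return np.convolve(padded, w_hat, mode='valid')\n\nf = np.ones(N)     # worst-case initialization\nf[:W] = 0.0        # model the termination: f = 0 to the left of the chain\nfor t in range(T):\n    g = 1.0 - (1.0 - smooth(f))**5    # hg(u) = 1 - rho(1-u), rho(x) = x^5\n    f = eps * smooth(g)**2            # hf(v) = eps*lambda(v), lambda(x) = x^2\n    f[:W] = 0.0                       # re-impose the termination\nprint(f.max())  # should decay toward 0: eps is below the coupled threshold 0.48814\n\\end{verbatim}\nWith these (illustrative) parameters the termination launches a decoding wave that sweeps the whole chain, illustrating the kind of behavior addressed by the terminated convergence results below.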
\n\n\\begin{theorem}[Continuum Terminated Fixed Point]\\label{thm:terminatedexist}\nAssume $\\smthker$ is regular.\nLet $(\\hf,\\hg)\\in \\exitfns^2$ and assume that $\\hg$ is continuous at $0$ and that\n$\\altPhi(\\hf,\\hg;u,v)$ is uniquely minimized at $(1,1)$\n(hence $A(\\hf,\\hg) < 0$ but we do not assume that the strictly positive gap condition holds).\nThen there exists $(0,1)$-interpolating $\\tmplF,\\tmplG \\in \\sptfns$\nthat form a fixed point of the one-sided termination of\n\\eqref{eqn:gfrecursion}.\n\\end{theorem}\n\\begin{IEEEproof}\nDefine ${\\hf}(z;u) = \\hf(u)\\wedge \\unitstep(u-z)$ (where $a \\wedge b = \\min \\{a,b\\}$) and choose $z$ so that\n$A(\\hf(z;\\cdot),\\hg) =0.$\nIf $(u,v) \\in \\intcross(\\hf(z;\\cdot),\\hg)$ then, since $\\hg$ is continuous at $0,$ we have\n$u \\ge z.$ \nIf $v < \\hf(z+)$ then we have\n$\\altPhi(\\hf(z;\\cdot),\\hg;u,v) = \\int_0^{z}\\hginv(u') du' > 0$\nand if $v \\ge \\hf(z+)$ then we have\n$\\altPhi(\\hf(z;\\cdot),\\hg;u,v) = \\altPhi(\\hf,\\hg;u,v) - A(\\hf,\\hg) > 0.$\nIt follows that $(\\hf(z;\\cdot),\\hg)$ satisfies the strictly positive gap condition.\nBy Theorem \\ref{thm:mainexist} there exists $\\tmplF,\\tmplG \\in \\Psi_{[-\\infty,\\infty]}$\nthat form a $(0,1)$-interpolating spatial fixed point ($\\ashift=0$) for\n\\eqref{eqn:gfrecursion} with $\\hf(z;\\cdot)$ replacing $\\hf.$\nIt is easy to see that there is some finite maximal $y$ such that\n$\\tmplF(x)=0$ for $x < y.$ Translate $\\tmplF$ and $\\tmplG$ so that $y=0.$\nIt follows that the resulting $\\tmplF,\\tmplG$ pair is a fixed point of the one-sided termination version of\n\\eqref{eqn:gfrecursion}.\n\\end{IEEEproof}\nIt is interesting to note in the above construction that the fixed point solution\nhas $\\tmplG^{\\smthker}(0)=z$ and $\\tmplF(0+) = \\hf(z+).$ Hence the value of \nthe discontinuity at the boundary of the termination is determined by the condition\n$A=0.$\nIn the case where $\\hg$ is not continuous at $0,$ i.e., $\\hg(0+)>0$ we can construct a fixed point solution\nas above with $\\tmplG(\\minfty) =\\hg(0+).$\n\nFor the case $A(\\hf,\\hg) \\ge 0$ we have the following.\n\\begin{theorem}[Continuum Terminated Convergence]\\label{thm:terminatedzero}\nAssume $\\smthker$ is regular.\nLet $(\\hf,\\hg) \\in \\exitfns^2$ and assume that\n$\\altPhi(\\hf,\\hg;u,v) > 0$ for $(u,v) \\neq (0,0).$\nThen $\\ff^t \\rightarrow 0$ for the\none-sided termination of\n\\eqref{eqn:gfrecursion}\nfor any choice of $\\ff^0.$\nIf $\\hf(x) > 0$ and $\\hg(x)>0$ on $(0,1]$ then\n$\\ff^t \\rightarrow 0$ also when $\\altPhi(\\hf,\\hg;) \\ge 0.$\n\\end{theorem}\nThe proof is presented in Appendix \\ref{app:E}.\n\nWe can, of course, also terminate the spatially\ndiscrete versions of the system.\nThus, consider the one sided termination of\n\\eqref{eqn:discretegfrecursion}\nin which the equations are modified so that\nwe set $f^t(x_i)=0$ if $x_i < 0,$\nwhich is equivalent to redefining $\\hf$ to have spatial dependence so that\n$\\hf = 0$ if $x_i < 0.$\nWe assume that $\\discsmthker$ is related to $\\smthker$ (for a continuum version)\nas indicated in \\eqref{eqn:kerdiscretetosmth}.\nFor this case we have the following quantitative result.\n\\begin{theorem}[Discrete Fixed Point Positive Gap]\\label{thm:discreteterminatedexist}\nAssume $\\smthker$ is regular.\nLet $(\\hf,\\hg)$ satisfy the strictly positive gap condition and assume that\n$A(\\hf,\\hg) < -\\Delta\\|\\smthker\\|_\\infty.$\nThen there exists $(0,1)$ interpolating $\\tmplF,\\tmplG \\in \\sptfns$\nthat form a spatial fixed point 
of the one-sided termination of\n\\eqref{eqn:discretegfrecursion}.\n\\end{theorem}\n\\begin{IEEEproof}\nDefine $\\hf(z;u) = \\hf(u) \\wedge \\unitstep_1(u-z)$\n with $z>0$ chosen sufficiently small\nso that $A(\\hf(z;\\cdot),\\hg) \\le -\\Delta\\|\\smthker\\|_\\infty.$\nBy Theorem \\ref{thm:mainexist}\nthere exists $\\tmplF,\\tmplG \\in \\Psi_{[-\\infty,\\infty]}$\nthat form a spatial wave solution for\n\\eqref{eqn:gfrecursion} with $\\hf(z;\\cdot)$ replacing $\\hf$\nand $\\ashift \\le -\\Delta.$\nBy Theorem \\ref{thm:mainquantize} we see that by setting\n$\\ff^0(x_i) = {\\tmplF}(x_i)$ in\n\\eqref{eqn:discretegfrecursion} (the non-terminated case)\nwe have $\\ff^1(x_i) \\ge {\\tmplF}(x_i).$\nBy translation, we can assume that ${\\tmplF}(x_i)=0$ for $x_i<0.$\nNow, the inequality $\\ff^1(x_i) \\ge {\\tmplF}(x_i)$ also holds in the one sided terminated case\nsince the values of $\\ff^1(x_i)$ are unchanged for $x_i\\ge 0.$\nThus, in the one-sided termination case the sequence $\\ff^t$ is monotonically non-decreasing\nfor each $x_i$ and must therefore have a limit $\\ff^{\\infty}.$\nIf $\\hf$ and $\\hg$ are continuous then the pair $\\ff^{\\infty},\\fg^{\\infty}$ then constitute a fixed point of the one-sided termination case.\nIf $\\hf$ and $\\hg$ are not continuous then it is possible that the pair $\\ff^{\\infty},\\fg^{\\infty}$ does not constitute a fixed point and that initializing with\n$\\ff^{\\infty}$ we obtain another non-decreasing sequence.\nIn general we can use transfinite recursion together with monotonicity in $x$\nto conclude the existence of\n a fixed point at least as large point-wise as $ (\\ff^{\\infty},\\fg^{\\infty}).$ \n\\end{IEEEproof}\n\nThe previous result gives quantitative information on the discrete approximation but it requires the strictly positive gap assumption.\nThe following result, whose proof is in Appendix \\ref{app:E},\nremoves that requirement at the cost of the quantitative bound.\n\n\\begin{theorem}[Discrete Fixed Point General]\\label{thm:discreteterminatedexistB}\nAssume $\\smthker$ is regular.\nAssume that\n$\\altPhi(\\hf,\\hg;\\cdot,\\cdot)$ is uniquely minimized at $(1,1)$\nwith $A(\\hf,\\hg) < 0.$ Then\nfor all $\\Delta$ sufficiently small\nthere exists $\\tmplF,\\tmplG \\in \\sptfns$\nthat form a spatial fixed point of the one-sided termination of\n\\eqref{eqn:discretegfrecursion} with $\\lim_{\\Delta\\rightarrow 0}\\tmplF(\\pinfty) =1.$\n\\end{theorem}\n\n\nFor the case $A(\\hf,\\hg) \\ge 0$ we have the following quantitative result.\n\\begin{theorem}[Discrete Terminated Convergence]\\label{thm:discreteterminatedzero}\nAssume that $\\smthker$ is regular.\nLet $(\\hf,\\hg)$ satisfy the strictly positive gap condition and assume that\n$A(\\hf,\\hg) > \\Delta\\|\\smthker\\|_\\infty.$\nThen $\\ff^t \\rightarrow 0$ for the\none-sided termination of\n\\eqref{eqn:discretegfrecursion}\nfor any choice of $\\ff^0.$\n\\end{theorem}\n\\begin{IEEEproof}\nWe consider the initialization\n\\[\n\\ff^0(x_i)=\\unitstep_1(x_i)\\,\n\\] \nand show that $\\ff^t \\rightarrow 0.$\nDefine\n\\[\n\\hf(\\eta;u) =\\hf(u) \\vee \\unitstep_1(u-(1-\\eta))\n\\]\nwhere we assume $\\eta>0$ sufficiently small so that\n$A(\\hf(\\eta;\\cdot),\\hg) > \\Delta\\|\\smthker\\|_\\infty.$\n\nBy Theorem \\ref{thm:mainquantize} there exists $(0,1)$-interpolating $\\tmplF$ and\n$\\tmplG$ and $\\ashift>\\Delta$ such that, even for the unterminated case,\n$\\ff^0(x_i) \\le\\tmplF(x_i)$ implies $\\ff^t(x_i) \\le\\tmplF(x_i-(\\ashift-\\Delta)t)$\nfor all $t.$ The same clearly holds also in the 
terminated case.\nClearly, $\\tmplF(x)=1$ for all $x \\ge z$ for some finite $z,$ and by translation\nwe can take $z$ to be $0$ yielding the desired result.\n\\end{IEEEproof}\n\n\n\\subsection{Two-sided Termination}\nThe two-sided termination of system \\eqref{eqn:gfrecursion}\nis defined by setting $\\ff^t(x) = 0$ for\nall $x$ outside some finite region, say $[0,Z]$ for all $t.$\nThis can be understood as a spatial dependence of $\\hf =\\hf(x;u)$\nwhere $\\hf(x;u) =0$ for $x \\not\\in [0,Z]$ and $\\hf(x;u) =\\hf(u)$ as before otherwise.\nThis system can be bounded from above by the one-sided termination case.\nThus, Theorem \\ref{thm:terminatedzero}\nand Theorem \\ref{thm:discreteterminatedzero}\napply equally well to the two-sided terminated case.\nTheorem \\ref{thm:discreteterminatedexist} on the other hand does not\nimmediately generalize, but a similar statement holds.\n\n\\begin{theorem}[Two Sided Continuum Fixed Point]\\label{thm:twoterminatedexist}\nAssume that $\\smthker$ is regular.\nLet $(\\hf,\\hg)$ satisfy the strictly positive gap condition and let\n$A(\\hf,\\hg) < 0.$\nThen, for any $\\epsilon > 0,$ and for all $Z$ sufficiently large,\nthere exists $\\ff,\\fg$\nthat form a fixed point of the two-sided termination of\n\\eqref{eqn:gfrecursion} such that\n$\\ff$ and $\\fg$ are symmetric about $\\frac{Z}{2},$ monotonically non-decreasing on $(-\\infty,\\frac{Z}{2}]$ and have left and right limits at least\n$1-\\epsilon$ at $\\frac{Z}{2}.$\n\\end{theorem}\nThe proof is presented in appendix \\ref{app:C}.\n\nWe have also the following spatially discrete version of the above,\nwhose proof is also in appendix \\ref{app:C}.\nIn the discrete case the termination is taken to hold for $x_i < 0$ and\n$x_i > Z = L\\Delta$ where $L$ is an integer.\nSymmetry in the spatial dimension then takes the form\n$\\ff(x_i) = \\ff(x_{L-i}).$ \n\\begin{theorem}[Two Sided Discrete Fixed Point with Gap]\\label{thm:discretetwoterminatedexist}\nAssume that $\\smthker$ is regular.\nLet $(\\hf,\\hg)$ satisfy the strictly positive gap condition and assume that\n$A(\\hf,\\hg) < -\\Delta\\|\\smthker\\|_\\infty.$\nThen, for any $\\epsilon > 0,$ and for all $Z$ sufficiently large,\nthere exists $\\tmplF,\\tmplG$\nthat form a fixed point of the two-sided termination of\n\\eqref{eqn:discretegfrecursion}\nsuch that\n$\\tmplF$ and $\\tmplG$ are spatially symmetric, monotonically non-decreasing on $(-\\infty,\\half Z]$ and satisfy $\\max_i \\{\\tmplF(x_i)\\} \\ge 1-\\epsilon$\nand $\\max \\{\\tmplG(x_i)\\} \\ge 1-\\epsilon.$ \n\\end{theorem}\n\n\nWe have also the following qualitative version that relaxes the strictly positive gap condition\nand\nwhose proof is in appendix \\ref{app:E}.\n\\begin{theorem}[Two Sided Discrete Fixed Point]\\label{thm:discretetwoterminatedexistGB}\nAssume that $\\smthker$ is regular.\nLet $(\\hf,\\hg)$ be given such that $\\altPhi(\\hf,\\hg;\\cdot,\\cdot)$ is\nuniquely minimized at $(1,1)$ and therefore \n$A(\\hf,\\hg) < 0.$\nThen, for any $\\epsilon > 0,$ and for all $Z=L\\Delta$ sufficiently large and $\\Delta$ sufficiently small,\nthere exists $\\tmplF,\\tmplG$\nthat form a fixed point of the two-sided termination of\n\\eqref{eqn:discretegfrecursion}\nsuch that\n$\\tmplF$ and $\\tmplG$ are spatially symmetric, monotonically non-decreasing on $(-\\infty,\\half Z)$ and satisfy $\\max_i \\{\\tmplF(x_i)\\} \\ge 1-\\epsilon$\nand $\\max_i \\{\\tmplG(x_i)\\} \\ge 1-\\epsilon.$ \n\\end{theorem}\n\n\n\n\n\n\\section{Examples of 1-D Systems}\\label{sec:applications}\n\n\\subsection{Binary 
Erasure Channel}\nLet us start by re-deriving a proof that for transmission over the\nBEC regular spatially-coupled ensembles achieve the MAP threshold\nof the underlying ensemble. By keeping the rate fixed and by\nincreasing the degrees it then follows that one can achieve capacity\nthis way. This was first shown in \\cite{KRU10}. Given the current\nframework, this can be accomplished in a few lines. Before we prove this\nlet us see a few more examples.\n\\begin{figure}[htp]\n{\n\\centering\n\\input{ps\/capacitybec}\n}\n\\caption{\\label{fig:capacitybec}\nEXIT charts for the $(4, 8)$-regular (left) and the $(5, 10)$-regular (right)\ndegree distributions and transmission over the BEC. The respective coupled thresholds\nare\n$\\epsilon^{\\BPsmall}_{\\text{\\tiny coupled}}(4, 8)=0.497741$, and\n$\\epsilon^{\\BPsmall}_{\\text{\\tiny coupled}}(5, 10)=0.499486$.\n}\n\\end{figure}\nWe have already seen the corresponding EXIT charts for the $(3,\n6)$-regular case in Figure~\\ref{fig:exitbec36}. Figure~\\ref{fig:capacitybec}\nshows two more examples, namely the $(4, 8)$-regular as well as the\n$(5, 10)$-regular case. Numerically, the thresholds are\n$\\epsilon^{\\BPsmall}_{\\text{\\tiny coupled}}(3, 6)=0.48814$,\n$\\epsilon^{\\BPsmall}_{\\text{\\tiny coupled}}(4, 8)=0.497741$, and\n$\\epsilon^{\\BPsmall}_{\\text{\\tiny coupled}}(5, 10)=0.499486$. As\nwe see these thresholds quickly approach the Shannon limit of\none-half.\n\nConsider now a degree distribution pair $(\\lambda, \\rho)$. The BP\nthreshold of the uncoupled system is determined by the maximum\nchannel parameter $\\epsilon$ so that $\\epsilon \\lambda(x) \\leq\n1-\\rho^{-1}(1-x)$ for all $x \\in (0, 1]$. Therefore, dividing both\nsides by $\\lambda(x)$ we get for each $x \\in (0, 1]$ an upper bound\non the BP threshold. In other words, the BP threshold of the uncoupled\nensemble can be characterized as\n\\begin{align*}\n\\epsilon^{\\BPsmall}_{\\text{\\tiny uncoupled}} =\n\\inf_{x\\in(0,1]}\\frac{1-\\rho^{-1}(1-x)}{\\lambda(x)}\\,.\n\\end{align*}\nThe limiting spatially coupled threshold (when $L$ and $w$ tend to infinity)\ncan be characterized in a similar way. In this case the determining quantity is\nthe area enclosed by the curves. Therefore,\n\\begin{align*}\n\\epsilon^{\\BPsmall}_{\\text{\\tiny coupled}} =\n\\inf_{x\\in(0,1]}\\frac{\\int_0^x 1-\\rho^{-1}(1-u)\\,\\text{d}u}{\\int_0^x \\lambda(u)\\text{d}u}\\,.\n\\end{align*}\nIn the case where the BP threshold equals $\\frac{1}{\\lambda'(0)\\rho'(1)},$\ni.e., when the threshold equals the stability threshold, then the\nspatially coupled threshold equals the BP threshold.\n\nIn the regular case and in many other cases\n\\begin{align*}\n\\epsilon^{\\BPsmall}_{\\text{\\tiny coupled}} =\n\\frac{\\int_0^{x^*} 1-\\rho^{-1}(1-u)\\,\\text{d}u}{\\int_0^{x^*} \\lambda(u)\\text{d}u}\\,\n\\end{align*}\nwhere $x^*$ corresponds to the forward BP fixed point with channel\nparameter $\\epsilon^{\\small}_{\\text{\\tiny coupled}}.$ In this case\none can check that the threshold is exactly equal to the area\nthreshold. 
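\n\nAs a sanity check, both characterizations can be evaluated numerically. The following minimal sketch (assuming $\\lambda(x)=x^2$ and $\\rho(x)=x^5$ for the $(3,6)$-regular ensemble; the grid resolution is an arbitrary choice) reproduces $\\epsilon^{\\BPsmall}_{\\text{\\tiny uncoupled}} \\approx 0.4294$ and $\\epsilon^{\\BPsmall}_{\\text{\\tiny coupled}} \\approx 0.4881$.\n\\begin{verbatim}\nimport numpy as np\n\n# lambda(x) = x^2, rho(x) = x^5, so rho^{-1}(1-x) = (1-x)^(1/5)\nx = np.linspace(1e-6, 1.0, 200000)\nlam = x**2\nh = 1.0 - (1.0 - x)**0.2            # 1 - rho^{-1}(1 - x)\neps_uncoupled = np.min(h / lam)\n\n# cumulative trapezoidal integrals for the area-based characterization\ndx = np.diff(x)\nH = np.concatenate([[0.0], np.cumsum(0.5*(h[1:] + h[:-1])*dx)])\nLam = np.concatenate([[0.0], np.cumsum(0.5*(lam[1:] + lam[:-1])*dx)])\neps_coupled = np.min(H[1:] / Lam[1:])\n\nprint(eps_uncoupled, eps_coupled)   # approx. 0.4294 and 0.4881\n\\end{verbatim}\nThe same computation, with the exponents adjusted, should reproduce the $(4, 8)$ and $(5, 10)$ thresholds quoted above.\n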
Further, we already know that the area threshold is an\nupper bound on the MAP threshold of the underlying ensemble and we\nknow that the MAP threshold of the underlying system is equal to\nthe MAP threshold of the coupled system when $L$ tends to infinity.\nWe therefore conclude that for all such underlying ensembles where\nthe area threshold satisfies the strictly positive gap condition, the area\nthreshold equals the MAP threshold.\n\nOur current framework can also be adapted to more complicated\ncases. The following example is from \\cite[Fig. 4.15]{Mea06}. Consider the\ndegree distribution $(\\lambda(x)=\\frac{3 x+3 x^2+14 x^{50}}{20},\n\\rho(x)=x^{15})$. The left picture in Figure~\\ref{fig:complicated}\nshows the BP EXIT curve of the whole code.\n\\begin{figure}[htp]\n\\centering\n\\input{ps\/complicated}\n\\caption{\\label{fig:complicated} BP EXIT curves for the ensemble\n$(\\lambda(x)=\\frac{3 x+3 x^2+14 x^{50}}{20}, \\rho(x)=x^{15})$ and\ntransmission over the BEC. Left: Determination of the BP threshold.\nRight: Determination of MAP behavior as conjectured by the Maxwell construction.}\n\\end{figure}\nAs one can see, the BP threshold of the uncoupled ensemble in this\ncase is $\\epsilon^{\\BPsmall}_{\\text{\\tiny uncoup.}} = 0.3531$ and\nthe BP EXIT curve has a single jump.\n\nThe right picture shows the MAP EXIT curve according to the Maxwell\nconstruction, see \\cite[Section 3.20]{RiU08}. According to this\nconstruction, the MAP EXIT curve has two jumps, namely at\n$\\epsilon=0.403174$, the conjectured MAP threshold, and at\n$\\epsilon=0.4855$. These two thresholds are determined by local\nbalances of areas. This is in particular easy to see for the threshold\nat $\\epsilon=0.4855$, where the two areas are quite large.\n\nLet us now show that for the coupled ensemble the Maxwell conjecture\nis indeed correct, i.e., we show that the asymptotic (in the coupling\nlength $L$) BP EXIT curve for the spatially-coupled ensemble indeed\nlooks as shown in the right-hand side of Figure~\\ref{fig:complicated}.\nTo show that the Maxwell conjecture is also correct for the uncoupled\nsystem requires a second step which we do not address here. This\nsecond step consists in showing that the MAP behavior of the uncoupled\nand coupled system is identical and is typical accomplished by using\nthe so called ``interpolation'' technique.\n\n\nThe left picture in Figure~\\ref{fig:complicatedgap} shows the\nindividual EXIT curves according to our framework for $\\epsilon=0.4855$.\nFor this channel parameter the two EXIT curves cross four times,\nnamely for $u=0$, $u=0.824784$, $u=0.967733$, and $u=0.999952$.\n\\begin{figure}[htp]\n{\n\\centering\n\\input{ps\/complicatedgap}\n}\n\\caption{\\label{fig:complicatedgap}\nConfirmation of the Maxwell conjecture using the one-dimensional framework of\nspatial coupling for the ensemble $(\\lambda(x)=\\frac{3 x+3 x^2+14 x^{50}}{20}, \\rho(x)=x^{15})$\nand transmission over the BEC.\nThe two inlets show in a magnified way the behavior of the curves\ninside the two gray boxes.\n}\n\\end{figure}\nNote that for this channel parameter the curves do not fulfill the\npositive gap condition since initially the curve $\\epsilon \\lambda(\\xg)$\nis above the curve $1-\\rho(1-\\xf)$. Nevertheless we can use our\nformalism. Let us explain the idea informally. Let us first check\nthe behavior of the system for $\\epsilon=0.4855$. 
Let us shift both\ncurves and renormalize them in such a way that the first (from the left)\nnon-trivial FP is mapped to zero and the last FP (on the right) is\nmapped to one. Then these curves {\\em do} fulfill our conditions\nand our theory applies. This shows that once the channel parameter\nhas reached slightly below $0.4855$, the EXIT function drops as\nindicated in the right-hand side of Figure~\\ref{fig:complicated}.\n\nNow that we know what the curve looks like above $\\epsilon=0.4855$\nwe can look at the remaining part. The right picture in\nFigure~\\ref{fig:complicatedgap} shows the individual EXIT curves\naccording to our framework for $\\epsilon=0.4032$. Again, we can\nredefine our curves above this parameter and reparametrize and then\nthey do fulfill the positive gap condition. So this marks the second\nthreshold. The inlets show the curve magnified by factors of 1.5 and 15, respectively.\nFrom this we see that the curves are quite well matched, so the areas\nare not so easy to see.\n\n\n\\subsection{Hard-Decision Decoding}\nLow-dimensional descriptions appear naturally when we investigate\nthe performance of quantized decoders. Perhaps the simplest case\nis the Gallager decoder A, \\cite{Gal63} (see \\cite{RiU01} for an\nin-depth discussion). All messages in this case are from $\\{\\pm\n1\\}$. The initial message sent out by the variable nodes is the\nreceived message. At a check node, the outgoing message is the\nproduct of the incoming messages. At variable nodes, the outgoing\nmessage is the received message unless all incoming messages agree,\nin which case we forward this incoming message.\n\nLet $x^{(\\ell)}$, $\\ell \\in \\naturals$, be the state of the decoder,\nnamely the fraction of ``$-1$''-messages sent out by the variable\nnodes in iteration $\\ell$. We have $x^{(0)} = \\epsilon$, and for\n$\\ell \\geq 1$, the DE equations read\n\\begin{align*}\ny^{(\\ell)} & = \\frac{1-\\rho(1-2 x^{(\\ell-1)})}{2}, \\\\\nx^{(\\ell)} & = \\epsilon (1- \\lambda(1-y^{(\\ell)}))+ (1-\\epsilon) \\lambda(y^{(\\ell)}).\n\\end{align*}\nSince the state of this system is a scalar, our theory can be applied\ndirectly. Unfortunately, as discussed in \\cite{BRU04}, for most\n(good) degree-distributions the threshold under the Gallager A\nalgorithm is determined by the behavior either at the very beginning\nof the decoding process or at the very end. In neither of those\ncases does spatial coupling improve the threshold.\n\nIn more detail, consider Figure~\\ref{fig:exitgalA}.\n\\begin{figure}[htp]\n{\n\\centering\n\\input{ps\/exitgalA}\n}\n\\caption{\\label{fig:exitgalA}\nLeft: EXIT charts for the $(4, 8)$-regular degree distribution\nunder the Gallager algorithm A with $\\epsilon^{\\GalAsmall}_{\\text{\\tiny uncoup}} = 0.0476$. The curves\ndo not cross. The threshold is determined by the stability condition. Right:\nEXIT charts for the $(3, 6)$-regular degree distribution\nunder the Gallager algorithm A with $\\epsilon^{\\GalAsmall}_{\\text{\\tiny uncoup}} = 0.0395$. The\nthreshold is determined by the behavior at the start of the algorithm.\n}\n\\end{figure}\nThe left picture shows the two EXIT functions for the $(4, 8)$-regular\nensemble under the Gallager algorithm A and\n$\\epsilon^{\\GalAsmall}_{\\text{\\tiny uncoup}} = 0.0476$. 
As one can\nsee from this picture, this is the threshold for the uncoupled case.\nThis threshold is determined by the stability condition, i.e., the\nbehavior of the decoder towards the end of the decoding process.\nIn other words, the functions $\\hg(\\xf)$ and the inverse of $\\hf(\\xg)$\nhave the same derivative at $0$. If we increase the channel parameter\nthen the resulting EXIT curves no longer fulfill the positive gap\ncondition (since they cross already at $0$). This implies that the\nthreshold of the spatially coupled ensemble is the same as for the\nuncoupled one.\n\nThe right picture in Figure~\\ref{fig:exitgalA} shows the two EXIT\nfunctions for the $(3, 6)$-regular ensemble under the Gallager\nalgorithm A and $\\epsilon = 0.0395$, the threshold for the uncoupled\ncase. In this case the threshold is determined by the behavior at\nthe beginning of the decoding process. As one can see from the\npicture, there are two non-zero FPs. The ``smaller'' one is unstable\nand the ``larger'' one is stable. If the initial state of the system\nis below the small FP then the decoder converges to $0$, i.e., it\nsucceeds. But if it starts above the small FP, then the decoder\nconverges to the large and stable non-zero FP, i.e., it fails. As\none can see from the picture, already for the channel parameter\nwhich corresponds to the threshold of the uncoupled ensemble these two EXIT\ncurves do not fulfill the positive gap condition -- the total area\nenclosed by the two curves is negative. And if we increase the\nchannel parameter, the area becomes even more negative. Hence,\nalso in this case spatial coupling does not help.\n\nLet us therefore consider the Gallager algorithm B, \\cite{Gal63,RiU01}.\nAs for the Gallager algorithm A, all messages are from the set\n$\\{\\pm 1\\}$. The initial message and the message-passing rule at\nthe check nodes are identical. But at variable nodes we have a\nparameter $b$, an integer. If at least $b$ of the incoming messages\nagree, then we send this value, otherwise we send the received\nvalue. This threshold $b$ can be a function of time. Initially the\ninternal messages are quite unreliable. Therefore, $b$ should be\nchosen large in this stage (if we choose $b$ to be the degree of\nthe node minus one we recover the Gallager algorithm A). But as\ntime goes on, the internal messages become more and more reliable\nand a simple majority of the internal messages will be appropriate.\nThe DE equations for this case are\n\\begin{align*}\ny^{(\\ell)} & = \\frac{1-\\rho(1-2 x^{(\\ell-1)})}{2}, \\\\\nx^{(\\ell)} = & (1-\\epsilon) \\sum_{k=b}^{\\dl-1}\n\\binom{\\dl-1}{k} (y^{(\\ell)})^k (1-y^{(\\ell)})^{\\dl-1-k}\\\\\n& + \\epsilon \\sum_{k=\\dl-1-b}^{\\dl-1}\n\\binom{\\dl-1}{k} (y^{(\\ell)})^k (1-y^{(\\ell)})^{\\dl-1-k}.\n\\end{align*}\nAssume at first that we keep $b$ constant over time. Consider\nthe $(4, 10)$-regular ensemble and choose $b=3$.\nThe left picture in Figure~\\ref{fig:exitgalB} shows this example for\n$\\epsilon^{\\GalBsmall}_{\\text{\\tiny uncoup}} = 0.02454$. As we can see, this is the largest\nchannel parameter for which the two curves do not cross, i.e., this is\nthe threshold for the uncoupled case.\nThe right picture in Figure~\\ref{fig:exitgalB} shows the same example\nbut for $\\epsilon^{\\GalBsmall}_{\\tiny \\text{coup}} = 0.0333$. 
For\nthis channel parameter the strictly positive gap condition is fulfilled and\nthe two areas are exactly in balance, i.e., this is the threshold\nfor the coupled ensemble.\n\\begin{figure}[htp]\n{\n\\centering\n\\input{ps\/exitgalB}\n}\n\\caption{\\label{fig:exitgalB}\nLeft: EXIT charts for the\n$(4, 10)$-regular ensemble and the Gallager algorithm B with $b=3$\nand $\\epsilon^{\\GalBsmall}_{\\text{\\tiny uncoup}} = 0.02454$. The curves do not cross.\nRight: The same example but with\n$\\epsilon^{\\GalBsmall}_{\\tiny \\text{coup}} = 0.0333$. For\nthis channel parameter the positive gap condition is fulfilled and\nthe two areas are in balance. In both cases, the inlets show a\nmagnified version of the gray box.}\n\\end{figure}\nWe see that the increase in the threshold is substantial for this\ncase.\n\n\nWe can do even better if we allow $b$ to vary as a function of the state of the system. The optimum\nchoice of $b$ as a function of the state $x$ was already determined\nby Gallager and we have\n\\[\nb(\\epsilon, x) = \\Big\\lceil \\Bigl(\\frac{\\log \\frac{1-\\epsilon}{\\epsilon}}{\\log \\frac{1-x}{x}} + (\\dr-1) \\Bigr)\/2 \\Big\\rceil.\n\\]\nAssume that at any point we pick the optimum $b$ value. For the\nEXIT charts this corresponds to looking at the minimum of the EXIT\nchart at the variable node over all admissible values of $b$. If\nwe apply this to the $(4, 10)$-regular ensemble then we get a\nthreshold of $\\epsilon^{\\GalBsmall, \\text{\\tiny opt}}_{\\tiny\n\\text{coup}}(4, 10) = 0.04085$, another marked improvement. As a\nsecond example, consider the $(6, 12)$-regular ensemble. For this\nensemble no fixed-$b$ decoding strategy improves the threshold under\nspatial coupling compared to the uncoupled case. But if we admit\nan optimization over $b$ then we get a substantially improved\nthreshold, namely the threshold is now $\\epsilon^{\\GalBsmall,\\text{\\tiny\nopt}}_{\\tiny \\text{coup}}(6, 12) = 0.0555$. For comparison,\n$\\epsilon^{\\GalBsmall}_{\\text{\\tiny uncoup}}(6, 12) = 0.0341$.\n\n{\\em Discussion:} The optimum strategy assumes that at the\ndecoder we know at each iteration (and at each position if we consider\nspatially coupled ensembles) the current state of the system. Whether\nor not this is realistic depends somewhat on the circumstances. For\nvery large codes the evolution of the state is well predicted by\nDE and can hence be determined once and for all. For smaller systems\nthe evolution shows more variation. 
One option is to measure, e.g.,\nthe number of unsatisfied check nodes given the current decisions and\nto estimate from this the state.\n\n\n\n\n\n\\subsection{CDMA Demodulation}\n\nSpatial coupling has been considered for CDMA demodulation in\n \\cite{ScT11} and \\cite{TTK11}.\nWe will follow \\cite{ScT11}.\n\nThe basic (uncoded) CDMA transmission model is\n\\[\ny=\\sum_{k=1}^K d_k \\bold{a}_k + \\sigma \\bold{n}\n\\]\nwhere there are $K = \\alpha N$ users each transmitting a single bit $d_k = \\pm 1$ \nusing a random spreading sequence $\\bold{a}_k$ of unit energy and length $N$,\nand $\\bold{n}$ is a vector of independent $N(0,1)$ random variables\n(for details see \\cite{ScT11}).\n\nIn \\cite{Tan2002} statistical mechanical methods were used to analyze randomly spread synchronous CDMA detectors over the additive white Gaussian noise channel.\nThe non-rigorous replica method predicted the optimal (asymptotic in system size) performance of various detectors.\nIn this setting the solution states that\nthe symbol-wise marginal-posterior-mode detector in the large $K$ and $N$ limit has posterior probabilities with signal to interference ratio $(1\/x)$ satisfying the equation\n\\begin{equation}\\label{eqn:cdmaFP}\nx= \n\\sigma^2 + \\alpha \\expectation \\Biggl(1-\\tanh\\Bigl(\\frac{1}{x}+\\sqrt{\\frac{1}{x}} \\xi\\Bigr) \\Biggr)^2 \n\\end{equation}\nwhere the expectation is over $\\xi \\sim N(0,1).$\nHere $x$ represents the variance of the posterior equivalent Gaussian channel $d_k + \\sqrt{x} n.$\n\nFor $\\alpha < \\alpha_{\\text{crit}} \\simeq 1.49$ (numerically determined) this\nequation has a single solution\n(including the case $x=0$ for $\\sigma^2=0$).\nFor $\\alpha \\ge \\alpha_{\\text{crit}}$ it is observed\nthat the equation has one, two, or three solutions depending \non $\\sigma^2.$\n\nIn \\cite{ScT11}, a message passing scheme was developed such that the associated density evolution gives rise to\n\\eqref{eqn:cdmaFP} as a fixed point equation.\nThe scheme requires a modification of the transmission setup.\nTo describe the scheme, first consider repeating each bit $M$ times and scaling power accordingly.\n\\[\ny=\\sum_{k=1}^K \\frac{1}{M} \\sum_{m=1}^M d_{k,m} \\boldmath{a}_k + \\sigma \\boldmath{n}\n\\]\nNow, take $l=1,2,...,L$ instances of this system and permute the indices on a per-user basis to get\n\\[\ny_l=\\sum_{k=1}^K \\frac{1}{\\sqrt{M}} \\sum_{m=1}^M d_{k,\\pi_k(m,l)} \\boldmath{a}_k + \\sigma \\boldmath{n}\n\\]\nwhere $\\pi_k$ is a (randomizing) permutation on $[M]\\times [L].$ Note the change in scaling with respect to $M$ due to\nnon-coherent addition of the bit values (take $L\\gg M$). Belief propagation is applied to this setup and the analysis\nleads to the density evolution equations. \n\nDefine \n$g:[0,\\infty] \\rightarrow [0,\\infty]$ by\n\\begin{align*}\ng(x) & = \\expectation{(1-\\tanh{(x+\\sqrt{x}\\xi))}^2} \n\\end{align*}\nwhere $\\xi \\sim N(0,1).$\nNow, further define\n\\begin{align*}\n\\hf(u)&=\\alpha g(u)+\\sigma^2\\\\\n\\hg(v)&=1\/v\n\\end{align*}\nwhere, here, $u,v \\in [0,+\\infty].$\nThe fixed point equation \\eqref{eqn:cdmaFP} can now be written\n\\[\nx = \\hf(\\hg(x))\\,.\n\\]\nThe function $\\hf$\ncorresponds to updating the LLRs of the bits taking into account the repetition of the bits and the function\n$\\hg$ corresponds to a soft cancellation step. 
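\n\nFor concreteness, the uncoupled recursion $x \\leftarrow \\hf(\\hg(x))$ can be iterated numerically; in the minimal sketch below the expectation defining $g$ is evaluated by Gauss--Hermite quadrature, and the values of $\\alpha$ and $\\sigma^2$ are illustrative choices rather than values taken from the text.\n\\begin{verbatim}\nimport numpy as np\n\n# E[(1 - tanh(u + sqrt(u)*xi))^2], xi ~ N(0,1), via Gauss-Hermite\nnodes, weights = np.polynomial.hermite.hermgauss(60)\n\ndef g(u):\n    xi = np.sqrt(2.0) * nodes\n    vals = (1.0 - np.tanh(u + np.sqrt(u) * xi))**2\n    return np.sum(weights * vals) / np.sqrt(np.pi)\n\nalpha, sigma2 = 1.6, 0.02      # illustrative values\nx = 1e12                       # initialization x = 'infinity'\nfor _ in range(200):\n    x = sigma2 + alpha * g(1.0 / x)    # x <- hf(hg(x))\nprint(x)\n\\end{verbatim}\nStarted from $x=\\infty$, the iteration decreases monotonically to the solution of \\eqref{eqn:cdmaFP} of largest magnitude, matching the convergence behavior of the DE discussed next.\n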
In each case the resulting message LLR values are (symmetric) \nGaussian distributed and the density evolution update corresponds to the input-output map of the effective\nvariances of the equivalent AWGN channel. The iterations can be initialized with $x = \\infty.$\n\nThe DE corresponding to the message passing decoder will converge to the solution of\n\\eqref{eqn:cdmaFP} having the largest magnitude.\nHence for $\\alpha \\ge \\alpha_{\\text{crit}}$ the BP decoder will not generally achieve optimal performance.\n\nIn \\cite{ScT11} the authors further introduce spatial coupling. The basic construction uses a chain of\ninstances of the above system and couples them by exchanging bits between neighboring instances.\n\nThe spatially coupled version of \\eqref{eqn:cdmaFP} (corresponding to local uniform coupling of width $W$) \nappearing in \\cite{ScT11} reads\n\\begin{align*}\nx_i^t & = \\sigma^2 + \\frac{\\alpha}{2W+1}\\sum_{j=-W}^W g\\Bigl( \\frac{1}{2W+1} \\sum_{l=-W}^W \\frac{1}{x_{i+j+l}^{t-1}}\\Bigr)\\, \\\\\n&= \\frac{1}{2W+1}\\sum_{j=-W}^W \\Biggl(\\sigma^2 +\\alpha g\\Bigl( \\frac{1}{2W+1} \\sum_{l=-W}^W \\frac{1}{x_{i+j+l}^{t-1}}\\Bigr)\\Biggr)\\,.\n\\end{align*}\nTermination is accomplished by setting bits outside some finite region to be known.\n\nWe are now in the regime where our results may be applied.\nThe solutions to \\eqref{eqn:cdmaFP} correspond to crossing points of $\\hf,\\hg.$\nIf $(u,v) \\in \\cross(\\hf,\\hg)$ then the corresponding solution to \\eqref{eqn:cdmaFP}\nis $x = v = 1\/u.$\nLet $x_1$ be the smallest solution and, when there are multiple solutions, let $x_2$ be the largest.\nThe system is initialized with $x=u=\\infty$ and terminated with $x=0.$\nFor a definition of $\\altPhi$ consistent with our canonical form we can take $(u,v)=(x_1,1\/x_1)$ as the origin and invert the sign of $v.$\nThus, we see that for $W$ large enough the spatially coupled\nsystem will converge to the solution $x_2$ (or better near the termination) if \n\\[\n\\int_{x_1}^{x_2} \\Bigl( \\hf(z) - \\frac{1}{z} \\Bigr) \\, dz < 0\\,.\n\\]\n\nThe case $\\sigma^2=0$ ($x_2 = \\infty$) is special.\nIn \\cite{ScT11} it is claimed that $x_i^t \\rightarrow 0$ in this case.\nThis is now an easy consequence and a special case of our results since $g(x)$ approaches $0$ exponentially for large $x.$\n\n\n\\subsection{Compressed Sensing}\n\nIn a typical variant of compressed sensing one observes a ``sparse'' vector $x$\nthrough an underdetermined linear system as\n\\[\ny = Ax +n\\,,\n\\]\nwhere $n$ is an additive noise vector.\nThe matrix $A$ is $m \\times n$, typically with \n$m \\ll n$, where $\\delta= m\/n$ is termed the undersampling ratio.\nThe vector $x$ is constrained to be sparse, or, alternatively, to have\nentries distributed according to a distribution $p_X$ with small R\\'{e}nyi information dimension\n\\cite{VWcs}. \nIn the setup we consider here the entries of $A$ are sampled as independent zero-mean Gaussian random variables.\nLetting $V$ denote the $m \\times n$ all-1 matrix, the\n variances of the entries of $A$ are component-wise given by $\\frac{1}{m} V$ so that columns of \n$A$ have (roughly) unit norm.\nThe problem is to estimate $x$ from knowledge of $y$ and $A.$\nHere we also assume knowledge of $p_X.$\nThe problem can be scaled up by letting $n$ and $m$ tend to infinity while keeping\n$\\delta$ fixed. 
\nAsymptotic performance is characterized in terms of the limit.\n\nOne can associate a bipartite graph to $A$ in which one set of nodes corresponds to \nthe columns (and the entries of $x$) and the other set of nodes corresponds to the rows.\nThe graphical representation suggests the use of message passing algorithms for\nthis problem and they have indeed been proposed and studied \\cite{DMM} (and references therein).\nIn \\cite{DMM} a reduced complexity variation, AMP (Approximate Message Passing), is developed in which there are\nonly $n$ or $m$ distinct messages, depending on the direction.\nAn additional term, the so-called Onsager reaction term, is brought into the algorithm to compensate\nfor the feedback inherent in AMP (due to the violation of the extrinsic information principle and the denseness of the graph).\nIn \\cite{DMM} an analysis of AMP is given that leads in the large system limit\nto an iterative function system called\nstate evolution, which is analogous to density evolution.\nThe large system limit analysis is quite different from the usual density evolution analysis in that,\nrather than relying on sparseness and tree-like limits, the state evolution analysis relies on the central limit theorem and the fact that contributions from single edges are asymptotically negligible.\nIn the large system limit, messages (or their errors) in the AMP algorithm are normally distributed \n(this is the important consequence of including the Onsager reaction term)\nand state evolution captures the variance (SNR) associated to the messages.\nFor our current setup, an $m\\times n$ sensing matrix with independent $\\frac{1}{\\sqrt{m}}N(0,1)$ Gaussian entries \nand known $p_X,$\nthe state evolution equations take the form\n\\cite{DJM11}\n\\begin{align*}\n\\phi_{t+1} = \\sigma^2+\\frac{1}{\\delta}\\text{mmse}(\\phi_t^{-1})\n\\end{align*}\nwhere $\\phi$ is the estimation error variance. In this expression\n\\[\n\\text{mmse}(s) =\n\\expectation (X-\\expectation(X\\mid Y))^2\n\\]\nis the minimum mean square error of an estimator of $X$ given $Y$\nwhere $X$ is distributed as $p_X$ and $Y = \\sqrt{s}X+Z$ where $Z$ is $N(0,1)$\nand independent of $X.$\nThe main properties of $\\text{mmse}$ that are relevant here are\n\\begin{align*}\n\\limsup_{s\\rightarrow \\infty} s\\,\\text{mmse}(s) & = \\bar{D}_{p_X} \\\\\n\\intertext{and}\n\\limsup_{s\\rightarrow \\infty} \\frac{1}{\\log s}\\int_0^s \\text{mmse}(\\gamma)\\, d\\gamma &= \\bar{d}_{p_X}\n\\end{align*}\nwhere $\\bar{D}_{p_X}$ is termed the mmse dimension \\cite{VWcs} \nand $\\bar{d}_{p_X}$ is the upper information dimension \\cite{VWcs} of\n$p_X$ which can be defined by\n\\[\n\\bar{d}_{p_X}\n=\\limsup_{\\ell \\rightarrow \\infty} \\frac{H(\\lfloor \\ell X \\rfloor)}{\\log \\ell}\n\\]\nwhere, here, $H$ denotes the Shannon entropy.\n\nSpatial coupling can be introduced by imposing additional structure on $A.$\nLet us first consider a collection of parallel systems.\nThus, let $\\tilde{A}$ be a doubly infinite array of $m\\times n$ matrices in \nwhich one diagonal $\\tilde{A}_{i,i}$ is non-zero with each $\\tilde{A}_{i,i}$ \ni.i.d. 
Gaussian samples with entry-wise variance matrix $\\frac{1}{m}V.$\nThe variance matrix associated to matrix $\\tilde{A}$ is \n$\\tilde{V}$ with $\\tilde{V}_{i,i}= \\frac{1}{m}V$ and $\\tilde{V}_{i,j}=0$ for $i\\neq j.$\nSpatial coupling is achieved by setting $V_{i,j} = w_{i-j} \\frac{1}{m} V.$\nTermination can be effected by providing additional measurements for variables associated\nto the termination.\nSpatially coupled constructions of this type and resulting performance improvements were first presented\nin \\cite{KMSSZ11}. \nThe analytical results on information theoretic optimal results that we reproduce here were presented in\n\\cite{DJM11}.\n\nThe spatially coupled system can be understood within our framework as having the following exit functions.\n\\begin{align*}\n\\hf(u) &= \\sigma^2+\\frac{1}{\\delta} \\text{mmse}(u)\\\\\n\\hg (v) &= 1\/v \n\\end{align*}\nNote in this case that rather than $[0,1]^2$ the system is defined on $[0,+\\infty]^2.$\nThere is a crossing point $(u_1,v_1)$ where $u_1$ is minimal and $v_1$ is maximal.\nIt is easy to see that we have the bound $u_1 \\ge \\frac{\\delta}{ \\expectation(X^2)}$\nsince $\\hf$ is decreasing in $u$ and $\\hf(0) = \\frac{1}{\\delta} \\expectation(X^2).$\nTo apply our analysis we can use this point as our origin in defining $\\altPhi$\nand we can invert the sign of $v$ to recover our canonical ordering.\n\nWe can now easily recover the main results in \\cite{DJM11}.\nConsider first the noiseless case $\\sigma^2 = 0.$ The FP of interest in the\ncomponent system above occurs at $(\\infty,0).$\nIf $\\bar{d}_{p_X} < \\delta$ then we have\n\\[\n\\altPhi(\\hf,\\hg;\\infty,0)\n=\n\\int_{x_1}^{\\infty} (\\frac{1}{\\delta} \\text{mmse}(x)-\\frac{1}{x}) dx\n=\n-\\infty\\,.\n\\]\nThus we get convergence to the fixed point at $(\\infty,0).$ \n(Some simple adjustment of our arguments are needed to handle the unbounded case.)\n\nConsider now $\\sigma^2 > 0$ and let $(u^*(\\sigma^2),v^*(\\sigma^2))$ denote the crossing point\nwith maximal $u$ and minimal $v.$\nIn this case we have\n\\begin{align*}\n&\\altPhi(\\hf,\\hg;u^*(\\sigma^2),v^*(\\sigma^2))\n\\\\= &\n\\int_{u_1}^{u^*} (\\sigma^2 + \\frac{1}{\\delta} \\text{mmse}(u)-\\frac{1}{u}) du\n\\\\ = &\n\\sigma^2 (u^*-u_1) +\\frac{1}{\\delta}\\int_{u_1}^{u^*} \\text{mmse}(u) dx -\\log(u^*\/u_1)\\,.\n\\end{align*}\nBy choosing $\\sigma^2$ small enough we can have $u^*(\\sigma^2)$ as large\nas desired.\nAssume $\\bar{d}_{p_X}<\\delta.$ Then,\nassuming $\\sigma^2$ small enough we have $\\altPhi(\\hf,\\hg;u',v')>\\altPhi(\\hf,\\hg;u^*,v^*)$ for\nany crossing point $(u',v')$ with $u' \\le z.$ It then follows that the crossing point that minimizes $\\altPhi(\\hf,\\hg)$\nhas $u$ value larger than $z.$\n\nAssume now that $\\bar{D}_{p_X}<\\delta$ then for all $\\sigma^2$ small enough we have\n$(u^*(\\sigma^2),v^*(\\sigma^2))$ is minimizing $\\altPhi(\\hf,\\hg).$\nFurthermore it follows that $u^*(\\sigma^2) > C \\sigma^{-2}$ for some constant $C.$\n\n\\section{Higher-Dimensional Systems and the Gaussian Approximation}\\label{sec:gaussapprox}\nWe have discussed in the previous section several scenarios where\nthe state of the system is one dimensional and the developed theory\ncan be applied directly and gives precise predictions on the threshold\nof coupled systems. But we can considerably expand the field of\napplications if we are content with {\\em approximations}. For\nuncoupled systems a good example is the use of EXIT functions. 
EXIT\nfunctions are equivalent to DE for the case of the BEC, where the\nstate is indeed one dimensional. For transmission over general BMS\nchannels they are no longer exact but they are very useful engineering\ntools which give accurate predictions and valuable insight into\nthe behavior of the system.\n\nThe idea of EXIT functions is to replace the unknown message densities\nappearing in DE by Gaussian densities. If one assumes that the\ndensities are symmetric (all densities appearing in DE are symmetric)\nthen each Gaussian density has only a single degree of freedom and\nwe are back to a one-dimensional system. Clearly, the same approach\ncan be applied to coupled systems. Let us now discuss several\nconcrete examples. We start with transmission over general BMS\nchannels.\n\n\\subsection{Coding and Transmission over General Channels}\nAs we have just discussed, for transmission over general BMS channels\nit is natural to use EXIT charts as a one-dimensional approximation\nof the DE process \\cite{teB99a,teB99b,teB00,teB01}. This strategy\nhas been used successfully in a wide array of settings to approximately\npredict the performance of the BP decoder. As we have seen, whereas\nfor the BP decoder the criterion of success is that the two EXIT\ncurves do not overlap, for the performance of spatially coupled\nsystems the criterion is the positive gap condition and the area condition.\n\nWe demonstrate the basic technique by considering the simple setting\nof point-to-point transmission using irregular LDPC ensembles. It\nis understood that the same ideas can be applied to any of the many\nother scenarios where EXIT charts have been used to predict the\nperformance of the BP decoder of uncoupled systems.\n\nIn the sequel, let $\\psi(m)$ denote the function which gives the\nentropy of a symmetric Gaussian of mean $m$ (and therefore standard\nvariation $\\sigma=\\sqrt{2\/m}$). Although there is no elementary\nexpression for this function, there are a variety of efficient\nnumerical methods to determine its value, see \\cite{RiU08}.\n\nDefine the two functions\n\\begin{align*}\n\\hg(\\xf) & = 1-\\sum_{i} \\rho_i \\psi\\bigl((i-1)\\psi^{-1}(1-\\xf)\\big), \\\\\n\\hf(\\xg) & = \\sum_{i} \\lambda_i \\psi\\bigl((i-1)\\psi^{-1}(\\xg)+\\psi^{-1}(c) \\big).\n\\end{align*}\nNote that $\\hg(\\xf)$ describes the entropy at the output of a check\nnode assuming that the input entropy is equal to $\\xf$ and $\\hf(\\xg)$\ndescribes the entropy at the output of a variable node assuming\nthat the input entropy is equal to $\\xg$ and that the entropy of the\nchannel is $c$. Both of these functions are computed under the\nassumption that all incoming densities are symmetric Gaussians (with\nthe corresponding entropy). In addition, for the computation of the\nfunction $\\hg(\\xf)$ we have used the so-called ``dual'' approximation,\nsee \\cite[p. 236]{RiU08}.\n\n\nFig.~\\ref{fig:positivegapbawgnc36} plots the EXIT charts for the\n$(3, 6)$-regular ensemble and transmission over the BAWGNC. The\nplot on the left shows the determination of the BP threshold for\nthe uncoupled system according to the EXIT chart paradigm. The\nthreshold is determined by the largest channel parameter so that\nthe two curves do not cross. This parameter is equal to $\\ent^{\\BPsmall,\n\\EXITsmall}=0.42915$. 
Note that according to DE the BP threshold\nis equal to $\\ent^{\\BPsmall} = 0.4293$, see \\cite[Table 4.115\n]{RiU08}, a good match.\n\nThe plot on the right shows the determination of the BP threshold\nfor the coupled ensemble according to the positive gap condition.\nSince for this case we only have a single nontrivial FP, this\nthreshold is given by the maximum channel entropy so that the gap\nfor the largest FP is equal to $0$. This means that for this\nchannel parameter the ``white'' and the ``dark gray'' areas are\nequally large. This parameter is equal to $\\ent^{\\BPsmall,\n\\EXITsmall}_{\\text{\\tiny coupled}}=0.4758$. Note that according\nto DE, the BP threshold of the coupled system is equal to\n$\\ent^{\\BPsmall}_{\\text{\\tiny coupled}} = 0.4794$, see \\cite[Table\nII]{KRU12}, again a good match.\n\\begin{figure}[htp]\n{\n\\centering\n\\input{ps\/positivegapbawgnc36}\n}\n\\caption{\\label{fig:positivegapbawgnc36} Left: Determination of the\nBP threshold according to the EXIT chart paradigm for the $(3,\n6)$-regular ensemble and transmission over the BAWGNC. The two\ncurves are shown for $\\ent^{\\BPsmall, \\EXITsmall}=0.42915$. As one\ncan see from this picture, the two curves touch but do not cross.\nRight: Determination of the BP threshold for the coupled ensemble\naccording to the EXIT chart paradigm and the positive gap condition.\nThe two curves are shown for $\\ent^{\\BPsmall, \\EXITsmall}_{\\text{\\tiny\ncoupled}}=0.4758$. For this parameter the ``white'' and the ``dark\ngray'' areas are in balance. } \\end{figure}\n\n\\subsection{Min-Sum Decoder}\nAs a second application let us consider the min-sum decoder. The\nmessage-passing rule at the variable nodes is identical to the one\nused for the BP decoder. But at a check node the rule differs --\nfor the min-sum decoder the sign of the output is the product of\nthe signs of the incoming messages (just like for the BP decoder)\nbut the absolute value of the outgoing message is the minimum of\nthe absolute values of the incoming messages. \n\nFor, e.g., the $(3, 6)$-regular ensemble DE predicts a min-sum\ndecoding threshold on the BAWGNC of $\\ent^{\\MinSumsmall}_{\\text{\\tiny\nuncoup}}=0.381787$, \\cite{Chu00}. For the coupled case this threshold\njumps to $\\ent^{\\MinSumsmall}_{\\text{\\tiny\ncoupled}}=0.429$.\\footnote{Strictly speaking it is not known that\nmin-sum {\\em has} a threshold, i.e., that there exists a channel\nparameter so that for all better channels the decoder converges\nwith high probability in the large system limit and that for all\nworse channels it does not. Nevertheless, one can numerically compute\n``thresholds'' and check empirically that indeed they behave in the\nexpected way. }\n\nIn order to derive a one-dimensional representation of DE, we\nrestrict the class of densities to symmetric Gaussians. Of course,\nthis introduces some error. Contrary to BP decoding, the messages\nappearing in min-sum decoding are not in general symmetric (and\nneither are they Gaussian).\n\nThe DE rule at variable nodes is identical to the one used when we\nmodeled the BP decoder. The DE rule for the check nodes is more\ndifficult to model but it is easy to compute numerically. \n\nRather than plotting EXIT charts using entropy, we use the error\nprobability as our basic parameter. There are two reasons for this choice.\nFirst, our one-dimensional theory does not depend on the choice of\nparameters and so it is instructive to see an example which uses a\nparameter other than entropy. 
Second, the min-sum decoder is\ninherently invariant to a scaling, whereas entropy is quite sensitive\nto such a scaling. Error probability on the other hand is also\ninvariant to scaling.\n\nFigure~\\ref{fig:minsum36} shows the predictions we get by applying our\none-dimensional model.\n\\begin{figure}[htp]\n{\n\\centering\n\\input{ps\/minsum36}\n}\n\\caption{\\label{fig:minsum36}\nLeft: Determination of the MinSum threshold according to the EXIT\nchart paradigm for the $(3, 6)$-regular ensemble and transmission\nover the BAWGNC. The two curves are shown for $\\ent^{\\BPsmall,\n\\EXITsmall}=0.401$. As one can see from this picture, the two curves\ntouch but do not cross. Right: Determination of the MinSum threshold\nfor the coupled ensemble according to the EXIT chart paradigm and\nthe positive gap condition. The two curves are shown for\n$\\ent^{\\MinSumsmall, \\EXITsmall}_{\\text{\\tiny coupled}}=0.436$. For\nthis parameter the ``white'' and the ``dark gray'' area are in\nbalance. } \\end{figure} \nThe predicted thresholds are $\\ent^{\\MinSumsmall, \\EXITsmall}_{\\text{\\tiny\nuncoup}}= 0.401$, $\\ent^{\\MinSumsmall, \\EXITsmall}_{\\text{\\tiny\ncoupled}}=0.436$. These predictions are less accurate than the\nequivalent predictions for the BP decoder. Most likely this is due\nto the lack of symmetry of the min-sum decoder. But the predictions\nstill show the right qualitative behavior.\n\n\n\n\n\n\n\n\n\n\n\\section{Analysis and Proofs}\\label{sec:proof}\n\nIn the analysis we allow discontinuous update (EXIT) functions.\nThis is not merely for generality\nbut also for modeling of termination and to allow discontinuous perturbations.\nWe will require some notation for dealing with this.\n\nGiven a monotonically non-decreasing function $\\ff$ we write\n\\[\nv \\veq \\ff(u)\n\\]\nto mean $v \\in [\\ff(u-),\\ff(u+)].$\nGiven $\\fg\\in\\sptfns,$ continuous $\\ff \\in \\sptfns,$ and $h\\in\\exitfns,$ we write\n\\[\n\\fg \\veq h\\circ\\ff\n\\]\nto mean $\\fg(x)\\in [h(\\ff(x)-),h(\\ff(x)+)],$ i.e.,\n$\\fg(x)\\veq h(\\ff(x)),$\nfor all $x\\in \\reals.$ We write\n\\[\n\\fg = h\\circ\\ff\n\\]\nto mean $\\fg(x) = h(\\ff(x))$ for all $x.$\nIn some contexts we may have equality holding\nup to a set of $x$ of measure $0.$\nTo distinguish this we write\n\\[\n\\fg \\equiv h\\circ\\ff\n\\]\nto mean $\\fg(x) = h(\\ff(x))$ for all $x$\nup to a set of measure $0.$\nNote that modifying $\\fg$ on a set of measure $0$ has no impact on $\\gS.$\nIn general we use $\\equiv$ to indicate equality up to sets of measure $0.$\n\nGiven a real number $\\ashift$ we use the notation\n$\\gSa$ to denote the reverse shift of $\\gS$ by $\\ashift,$ i.e.,\n\\[\n\\gSa(x) = \\gS(x+\\ashift) \\,.\n\\]\nUltimately we are interested in interpolating functions such that\n\\(\n\\fg = \\hg\\circ\\fS\\,,\n\\)\nand\n\\(\n\\ff = \\hf\\circ\\gSa\\,,\n\\)\nsince this represents a wave-like solution to system \\ref{eqn:gfrecursion}.\nThe mathematical arguments, however, sometimes only give rise to functions\n{\\em consistent} with the equations, i.e., such that\n\\(\n\\fg \\veq \\hg\\circ\\fS\\,,\n\\)\nand\n\\(\n\\ff \\veq \\hf\\circ\\gSa\\,.\n\\)\nMuch of the analysis works with this weaker condition and then strengthens it to obtain proper solutions.\n\n\n\\subsection{Spatial Fixed Points and Waves}\n\n\nEven when one exists, it is typically difficult to analytically determine an interpolating spatial fixed point solution $(\\ff,\\fg) \\in \\sptfns^2$\nfor a given pair $(\\hg,\\hf) \\in \\exitfns.$\nThe reverse direction, however, is 
relatively easy.\nIn particular, given a putative $(0,1)$-interpolating spatial fixed point $(\\ff,\\fg)$\nthe corresponding $(\\hf,\\hg)$ is essentially determined by the requirement that\n$\\fg(x) = \\hg(\\fS(x))$ and $f(x) = \\hf(\\gS(x)).$\nSome degeneracy is possible if, for example, $\\fS$ is constant over some interval on which $\\fg$ varies. \nEven in this degenerate case, however, the equivalence class of $\\hg$ is uniquely determined.\nThus, given $(0,1)$-interpolating $\\ff$ and $\\fg$ where $\\ff$ is continuous, we define\n$h_{[\\fg,\\ff]}$ to be any element of the uniquely determined equivalence class such that,\n\\[\n\\fg \\veq h_{[\\fg,\\ff]}\\circ\\ff\n\\]\ni.e., for each $x,$\n$\\fg(x) \\in [h_{[\\fg,\\ff]}(\\ff(x)-), h_{[\\fg,\\ff]}(\\ff(x)+)].$\n(A simple argument shows that the equivalence class is indeed uniquely determined.)\nIn general, if $f$ and $g$ are not $(0,1)$-interpolating, then we still consider \n$h_{[\\fg,\\ff]}$ to be defined on $[\\ff(\\minfty),\\ff(\\pinfty)]$ and the inverse to be define on\n$[\\fg(\\minfty),\\fg(\\pinfty)].$ By definition, $f$ and $g$ are\n$(\\ff(\\minfty),\\ff(\\pinfty))$-interpolating and \n$(\\fg(\\minfty),\\fg(\\pinfty))$-interpolating respectively.\n\nIf $(\\ff,\\fg)$ is a $(0,1)$-interpolating spatial fixed point solution to \\eqref{eqn:gfrecursion}\nthen we have $h_{[\\ff,\\gS]} \\equiv \\hf$ and $h_{[\\fg,\\fS]} \\equiv \\hg.$\nIn the reverse direction, $h_{[\\ff,\\gS]} \\equiv \\hf$ and $h_{[\\fg,\\fS]} \\equiv \\hg$ implies,\nand, (assuming $\\ff$ and $\\fg$ are $(0,1)$-interpolating) is in fact equivalent to,\n\\begin{align*}\n\\fg \\veq \\hg\\circ \\fS,\\quad\n\\ff \\veq \\hf\\circ \\gS \n\\end{align*}\nbut does not in general imply the stronger condition\n\\begin{align*}\n\\fg \\equiv \\hg\\circ \\fS,\\quad\n\\ff \\equiv \\hf\\circ \\gS \\,.\n\\end{align*}\nIf $\\hf$ and $\\hg$ are continuous then equivalence, and in fact equality, is implied.\nIn general, given the above equivalence we can achieve equality by replacing $\\fg$ with $\\hg \\circ \\fS$\nand $\\ff$ with $\\hf \\circ \\gS,$ since $\\fS$ and $\\gS$ are thereby unchanged.\n\n\\subsubsection{Sensitivity with Irregular Smoothing}\\label{sec:pathology}\n\nIn this section we illustrate by example some of the subtlety that\ncan arise with non-regular smoothing kernels. 
We also show how\nnon-uniqueness of fixed point solutions can occur when the positive\ngap condition is satisfied but the strictly positive gap condition is not satisfied.\n\nThe following example shows that changing $\hf$ or $\hg$ on a set of measure $0$\ncan, for some choices of $\smthker,$ have a dramatic effect on the solution\nto \eqref{eqn:gfrecursion}.\nAssume an averaging kernel $\smthker$ that is positive everywhere on $\reals$ except on\n$[-2,2],$ where it equals $0.$\nConsider\n\[\n\hf(u) = \unitstep_a(u-\frac{1}{2})\n\]\nand\n\[\n\hg(u) = \unitstep_b(u-\frac{1}{2})\n\]\nwhere $a$ and $b$ are specified below.\nLet $\ff(x) = \unitstep(x);$ then we have\n$\fS(x)<\frac{1}{2}$ for $x \in (-\infty,-2),$\n$\fS(x)=\frac{1}{2}$ for $x \in [-2,2],$ and\n$\fS(x)>\frac{1}{2}$ for $x \in (2,\infty)\,.$\nConsider initializing system \eqref{eqn:gfrecursion} with $\ff^0(x)=\unitstep(x).$ \nIf $a=b=\frac{1}{2}$ then the solution is the fixed point\n\[\n\ff^t(x)=\fg^t(x) =\frac{1}{2}(\unitstep_1(x+2)+\unitstep_0(x-2))\,.\n\]\nIf $a=b=1$ then the solution is \n\begin{align*}\n\ff^t(x)&= \unitstep_1(x+4t)\n\\\n\fg^t(x)&=\unitstep_1(x+4t+2)\,,\n\end{align*}\nand $\ff^t(x) \rightarrow 1.$\nIf $a=b=0$ then the solution is \n\begin{align*}\n\ff^t(x)&= \unitstep_0(x-4t)\n\\\n\fg^t(x)&=\unitstep_0(x-4t-2)\,,\n\end{align*}\nand $\ff^t(x) \rightarrow 0.$\nIf $a=0$ and $b=1$ then the solution is \n\begin{align*}\n\ff^t(x)&= \unitstep_0(x)\n\\\n\fg^t(x)&=\unitstep_1(x-2)\,,\n\end{align*}\nanother fixed point.\n\nTo give a more general example, if we define $\ff$ and $\fg$ as any functions in $\sptfns$ that\nequal $0$ on $(\minfty,-1)$ and $1$ on $(1,\pinfty)$\nthen we have \n$\fg \veq \hg\circ\fS$ \nand\n$\ff \veq \hf\circ\gS.$ \nIt follows in all such cases that \n$h_{[\ff,\gS]} \equiv \hf$ and\n$h_{[\fg,\fS]} \equiv \hg.$\nThis example shows that it is possible to have many solutions that are ``consistent'' with\nthe equation \eqref{eqn:gfrecursion}, in that\n$\fg \veq \hg\circ\fS$ \nand\n$\ff \veq \hf\circ\gS.$ \n\subsubsection{Non-Unique Solutions}\label{sec:nonunique}\nNow assume $\smthker = \frac{1}{2}\indicator_{|x|<1}.$\nLet $\tilde{f}$ and $\tilde{g}$ be any functions in $\sptfns$ that\nequal $0$ on $(\minfty,-1)$ and $1$ on $(1,\pinfty)$ and take values in\n$(0,1)$ on $(-1,1).$\nNow consider\n\begin{align*}\n\ff_a(x) &= \frac{1}{2} \bigl(\tilde{f}(x+a)+\tilde{f}(x-a)\bigr) \\\n\fg_a(x) &= \frac{1}{2} \bigl(\tilde{g}(x+a)+\tilde{g}(x-a)\bigr)\,. \n\end{align*}\nFor all $a>3$ we see that\n$(h_{[\ff_a,\gS_a]},h_{[\fg_a,\fS_a]})$\ndoes not depend on $a$ and the given functions form a family\nof spatial fixed points for the system.\nThis gives an example where system \eqref{eqn:DE} exhibits multiple\nspatial fixed point solutions.\nNote that $(h_{[\ff_a,\gS_a]},h_{[\fg_a,\fS_a]})$ does not satisfy the strictly positive\ngap condition since $\altPhi(h_{[\ff_a,\gS_a]},h_{[\fg_a,\fS_a]};\frac{1}{2},\frac{1}{2})=0.$\n\n\subsection{Spatial Fixed Point Integration}\n\nConsider a $(0,1)$-interpolating spatial fixed point $(\ff,\fg).$\nThen, at $v=\fS(x_1)$ the integral $\int_0^v \hg$\ncan be expressed as\n\[\n\int_0^v \hg(z) \text{d}z=\n\int_{-\infty}^{x_1} \fg(x) \bigl(\frac{d}{dx} \fS(x )\bigr) \text{d}x\n=\n\int_{-\infty}^{x_1} \fg(x) \text{d}\fS(x)\,.\n\]\nSimilarly, at $u=\gS(x_2)$ we have\n\[\n\int_0^u \hf(z)\text{d}z = 
\\int_{-\\infty}^{x_2} \\ff(x) \\text{d}\\gS(x)\\,.\n\\]\nThe product rule of calculus gives, under mild regularity assumptions, $\\fg(x) \\text{d}\\ff(x) + \\ff(x) \\text{d}\\fg(x) = \\text{d}(\\fg(x)\\ff(x))$ and, were it not for the\nspatial smoothing, this would solve directly the sum of the above two integrals in terms of the product\n$\\fg(x)\\ff(x).$ By properly handling the spatial smoothing we can accomplish something similar, and the result is\npresented in Lemma \\ref{lem:twofint}.\nWe obtain a succinct formula for the evaluation of\n$\\altPhi(\\hf,\\hg;\\fg(x_1),\\ff(x_2))$ that is {\\em local}\nin its dependence on $\\ff$ and $\\fg.$ \nThis formula captures a valuable information concerning\nthe $(0,1)$-interpolating spatial fixed point solution and its relation to $\\altPhi.$ In particular it relates local properties of fixed point solutions\nto corresponding values of $\\altPhi.$\n\nFor $(\\ff, \\fg) \\in \\sptfns^2$ and $\\smthker$ an even averaging kernel,\n\\begin{align}\\label{eqn:definexi}\n\\PhiSI (\\smthker;f,g;x_1,x_2) & :=\n(\\fS(x_1) - f(x_2+))\n(\\gS(x_2) - g(x_1+))\\nonumber\n\\\\ &\\quad\n+\n\\altPhiSI(\\smthker;f,g;x_1,x_2)\n\\end{align}\nwhere\n\\begin{align*}\n\\altPhiSI(\\smthker;& f,g;x_1,x_2) = \\\\\n& \\int_{0}^\\infty \\smthker(x)\n\\Bigl(\n\\int_{(x_2,x_1+x]} (g(x_1+)-g(y-x))df(y)\n\\Bigr) dx\n\\\\ & +\n\\int_{0}^\\infty \\smthker(x)\n\\Bigl(\n\\int_{(x_1,x_2+x]} (f(x_2+)-f(y-x))dg(y)\n\\Bigr) dx\n\\\\ & =\n\\int_{-\\infty}^\\infty \\smthker(x)\n\\Bigl(\n\\int_{(x_2,x_1+x]} (g(x_1+)-g(y-x))df(y)\n\\Bigr) dx\n\\\\ & =\n\\int_{-\\infty}^\\infty \\smthker(x)\n\\Bigl(\n\\int_{(x_1,x_2+x]} (f(x_2+)-f(y-x))dg(y)\n\\Bigr) dx\n\\end{align*}\nwhere the integrals are Lebesgue-Stieltjes integrals.\nIf $x'0$ the following inequalities hold\n\\begin{align*}\n\\altPhiSI(\\smthker,\\ff,\\fg;x,x)\n& \\le\n\\Delta_L f(x)\\Delta_L g(x)+ e_L \n\\end{align*}\n\\begin{align*}\n|\\fS(x)-f(x)|\n \\le\n\\Delta_L f(x) + e_L \n\\end{align*}\n\nFor $(0,1)$-interpolating $\\ff,\\fg \\in \\sptfns$ we have\n\\begin{align*}\n\\altPhi(h_{[\\ff,\\gS]},h_{[\\fg,\\fS]};\\fg(x),\\ff(x))\n& \\le\n\\Delta_L f(x)\\Delta_L g(x)+ e_L \n\\\\\n\\altPhi(h_{[\\ff,\\gS]},h_{[\\fg,\\fS]};\\gS(x),\\ff(x)) & \\le\n2\\Delta_L g(x) + 2e_L\n\\\\\n\\altPhi(h_{[\\ff,\\gS]},h_{[\\fg,\\fS]};\\fg(x),\\fS(x)) & \\le\n2\\Delta_L f(x) + 2e_L\n\\\\\n\\altPhi(h_{[\\ff,\\gS]},h_{[\\fg,\\fS]};\\gS(x),\\fS(x)) & \\le\n2\\Delta_L f(x) + 2\\Delta_L g(x) + 3e_L\n\\end{align*}\n\\end{lemma}\nThe Lemma is proved in appendix \\ref{app:A}. \n\n\\subsubsection{Invariance of Fixed Point Potential}\n\n\\begin{lemma}\\label{lem:FPequal}\nAssume $(\\hf,\\hg) \\in \\exitfns^2$ and $\\smthker$ an averaging kernel.\nLet $(\\ff,\\fg) \\in \\sptfns^2$ be a consistent travelling wave solution (not necessarily interpolating) to\nthe system \\eqref{eqn:gfrecursion} with shift $\\ashift,$ i.e.,\n\\begin{align*}\n\\ff &\\veq \\hf \\circ \\fg^{\\smthker,\\ashift} \\\\\n\\fg &\\veq \\hg \\circ \\ff^{\\smthker} \\,.\n\\end{align*}\n Then\n\\begin{itemize}\n\\item[A.] $(\\ff(\\minfty),\\fg(\\minfty)) \\in \\cross(\\hf,\\hg).$\n\\item[B.] $(\\ff(\\pinfty),\\fg(\\pinfty)) \\in \\cross(\\hf,\\hg).$\n\\item[C.] 
If $\\ashift = 0$ then \n\\[\n\\altPhi(\\hf,\\hg;\\ff(\\minfty),\\fg(\\minfty)) =\n\\altPhi(\\hf,\\hg;\\ff(\\pinfty),\\fg(\\pinfty))\\,.\n\\]\n\\end{itemize}\n\\end{lemma}\n\\begin{IEEEproof}\nBy definition we have $\\ff(x) \\veq \\hf(\\fg^{\\smthker,\\ashift}(x))$ for each $x \\in \\reals.$\nTaking limits we have $\\ff(\\minfty) \\veq \\hf(\\fg^{\\smthker,\\ashift}(\\minfty)).$\nSince $\\fg^{\\smthker,\\ashift}(\\minfty)=\\fg^{\\smthker}(\\minfty)=\\fg^{}(\\minfty)$ we have\n$\\ff(\\minfty) \\veq \\hf (\\fg(\\minfty)).$ Parts A and B now follow easily.\n\nIf $(\\ff(\\minfty),\\fg(\\minfty)) = \\ff(\\pinfty),\\fg(\\pinfty))$ then part C is immediate,\nso assume $(\\ff(\\minfty),\\fg(\\minfty)) < \\ff(\\pinfty),\\fg(\\pinfty)).$\nWe now apply \\eqref{eqn:potdiff} and Lemma \\ref{lem:twofint} to write\n\\begin{align*}\n&\\altPhi(\\hf,\\hg;\\ff(x_2+),\\fg(x_1+)) -\n\\altPhi(\\hf,\\hg;\\ff(\\minfty),\\fg(\\minfty)) \\\\\n=& \\int_{\\fg(\\minfty)}^{\\fg(x_1+)} \\hg^{-1} (u)\\text{d}u\n+ \\int_{\\ff(\\minfty)}^{\\ff(x_2+)} \\hf^{-1} (v)\\text{d}v \\\\\n&-\\ff(x_2+)\\fg(x_1+) + \\ff(\\minfty)\\fg(\\minfty)\\\\\n=& \\altPhiSI(\\smthker;f,g;x_1,x_2) \n\\end{align*}\nLetting $x_1$ and $x_2$ tend to $+\\infty$ the result follows from\nLemma \\ref{lem:transitionPhiBounds}.\n\\end{IEEEproof}\n\n\\subsubsection{Transition length.}\n\nIn this section our aim is to show that fixed point solutions arising from systems satisfying\nthe strictly positive gap condition have bounded transition regions.\nWe show that the transition of solutions from one value to another\nis confined to a region whose width can be bound from above using properties of $\\altPhi$\n\n\\begin{lemma}\\label{lem:transitionBounds}\nLet $\\ff,\\fg$ be $(0,1)$-interpolating functions in $\\sptfns.$\nLet $\\hf \\equiv h_{[\\ff,\\gS]}$ and $\\hg \\equiv h_{[\\fg,\\fS]}.$\nLet $0 < a < b < 1$ and let $x_a,x_b$ satisfy $a=\\gS(x_a)$\nand $b=\\gS(x_b).$\nDefine\n\\begin{align*}\n\\delta &= \\inf \\{ \\altPhi(\\hf,\\hg;\\gS(x),\\ff(x)) : x\\in [x_a,x_b]\\}\n\\\\\n& = \\inf \\{ \\altPhi(\\hf,\\hg;u,\\hf(u)) : u\\in [a,b]\\}\n\\end{align*}\nthen\n\\[\n\\Bigl(\\frac{1}{2}\\delta-e_L\\Bigr)\\lfloor \\frac{x_b-x_a}{2L} \\rfloor \\le\n1\n\\]\nand\n\\begin{align*}\n\\Bigl( \\frac{1}{2}{\\delta- e_L}\\Bigr) \n\\lfloor\n\\frac{x_b-x_a-2L}{2L}\n\\rfloor \n\\le b-a\\,.\n\\end{align*}\n\\end{lemma}\n\\begin{IEEEproof}\nFor any $x \\in [x_a,x_b]$ we have\n\\(\n\\Delta_L g(x) \\ge \\frac{1}{2}{\\delta- e_L}\\,\n\\)\nby Lemma \\ref{lem:transitionPhiBounds}.\nIn the interval $[x_a,x_b]$ we can find\n\\(\n\\lfloor\n\\frac{x_b-x_a}{2L}\n\\rfloor\n\\)\nnon-overlapping intervals of length $2L.$\nFrom this we obtain\n\\[\n\\lfloor\n\\frac{x_b-x_a}{2L}\n\\rfloor \\Bigl( \\frac{1}{2}{\\delta- e_L}\\Bigr) \\le g(x_b-)-g(x_a+) \\le 1\\,.\n\\]\nA similar argument considering $x_a+L$ and $x_b-L$ gives\n\\begin{align*}\n\\lfloor\n&\\frac{x_b-x_a-2L}{2L}\n\\rfloor \\Bigl( \\frac{1}{2}{\\delta- e_L}\\Bigr) \n\\\\ \\le &\ng((x_b-L)-)-g((x_a+L)+)\n\\\\ \\le & b-a\\,.\n\\end{align*}\n\\end{IEEEproof}\n\n\\subsubsection{Discrete Spatial Integration}\n\nPerhaps somewhat surprisingly, a version of Lemma \\ref{lem:twofint} that\napplies to spatially discrete systems also holds.\nIf $\\ff,\\fg$ are spatially discrete functions and $\\tff,\\tfg$ are their\npiecewise constant extensions, then\nLemma \\ref{lem:twofint} can be applied to\nthese extensions. 
If we then restrict $x_1$ and $x_2$ to points in\n$\\Delta \\integers,$ then $\\altPhiSI$ can be written as discrete sums.\n\nLet $\\discsmthker$ be related to $\\smthker$ as in \\eqref{eqn:kerdiscretetosmth}\nand let $x_1,x_2 \\in \\Delta \\integers,$ denoted\n$x_{i_1},x_{i_2}.$\nThen \n\\begin{align*}\n&\\altPhiSI(\\smthker;\\tff,\\tfg;x_{i_1},x_{i_2}) \\\\&=\n\\frac{1}{2}\\sum_{j=-W}^W \\discsmthker_j\\sum_{i\\in (i_1-j,i_2]} (2\\dv{f}_{i_2}-\\dv{f}_{i}-\\dv{f}_{i-1}) (\\dv{g}_{i+j} - \\dv{g}_{i+j-1} ) \n\\\\&=\\frac{1}{2}\\sum_{j=-W}^W \\discsmthker_j\\sum_{i\\in (i_2-j,i_1]} (2\\dv{g}_{i_1}-\\dv{g}_{i}-\\dv{g}_{i-1}) (\\dv{f}_{i+j} - \\dv{f}_{i+j-1} ) \n\\end{align*}\n\nLemma \\ref{lem:twofint} continues to hold and a proof using entirely discrete\nsummation can be found in appendix \\ref{app:Aa}.\n\n{\\em Discussion:} The proof of Lemma \\ref{lem:twofint} as well as the spatially discrete version found in appendix \\ref{app:Aa} are entirely algebraic in character. Consequently, they apply to spatially coupled systems generally and not only those with a one dimensional state. In a follow-up paper we apply the result to the arbitrary binary memoryless symmetric channel case to obtain a new proof that spatially coupled regular ensembles achieve capacity universally on such channels.\n\n\n\n\n\\subsection{Bounds on Translation Rates}\n\n\\begin{lemma}\\label{lem:shiftlowerbound}\nLet $f,g \\in \\sptfns$ be $(0,1)$-interpolating and let $\\smthker$ be an averaging kernel.\nThen \n\\[\nA(h_{[\\ff,\\gSa]},h_{[\\fg,\\fS]}) \\le |\\ashift|\\|\\smthker\\|_\\infty\\,.\n\\]\n\\end{lemma}\n\\begin{IEEEproof}\nWe have $A(h_{[\\ff,\\gS]},h_{[\\fg,\\fS]})=0$ and hence\n\\begin{align*}\nA(h_{[\\ff,\\gSa]},h_{[\\fg,\\fS]})\n&=\nA(h_{[\\ff,\\gSa]},h_{[\\fg,\\fS]})\n-\nA(h_{[\\ff,\\gS]},h_{[\\fg,\\fS]})\n \\\\\n&= \\int_0^1 h_{[\\ff,\\gSa]}(u) - h_{[\\ff,\\gS]}(u) \\,\\text{d}u \\\\\n&= \\int_{\\minfty}^{\\pinfty} (\\ff(x-\\ashift) - \\ff(x)) \\gS_x(x) dx \\,.\n\\end{align*}\nSince $|\\gS_x(x)| \\le \\| \\smthker \\|_{\\infty}$ we obtain\n$|A(h_{[\\ff,\\gSa]},h_{[\\fg,\\fS]})|\\le |\\ashift|\\|\\smthker\\|_\\infty.$\n\\end{IEEEproof}\n\nIn general this estimate can be weak. In Section \\ref{sec:pathology} we gave an example\nof a system with $A(\\hf,\\hg) =0$ and irregular $\\smthker$ that can exhibit both left\nand right moving waves by changing the value $\\hf$ and $\\hg$ at a point of discontinuity.\nFurther, given $(0,1)$-interpolating $\\ff,\\fg$ and positive $\\smthker$ the system\n $(h_{[\\ff,\\fg^{\\smthker,a+\\ashift}]},h_{[\\fg,\\ff^{\\smthker,-a}]})$ (with real parameter $a$)\nhas a \ntraveling solution with shift $\\ashift$ and yet\n$A(h_{[\\ff,\\fg^{\\smthker,a+\\ashift}]},h_{[\\fg,\\ff^{\\smthker,-a}]})$\ncan be made arbitrarily close to $0$ by choosing $a$ with large enough magnitude.\n\nNow we consider upper bounds on $|\\ashift|.$\nLet\n\\[\n\\intsmthker(x) =\\int_{-\\infty}^x \\smthker(y) \\text{d}y.\n\\]\nIf $\\smthker$ has compact support then\nthe width of the support is an upper bound. 
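This translation behavior is easy to observe numerically. The sketch\nbelow (in Python) uses a toy discretization in the form suggested by the\nconsistency relations above, $\fg^t = \hg\circ(\ff^t)^{\smthker}$ and\n$\ff^{t+1} = \hf\circ(\fg^t)^{\smthker}$; the update functions and the\nkernel are our own choices and the exact coupled system is the one\ndefined earlier in the paper, so this is only a qualitative illustration.\n\begin{verbatim}\nimport numpy as np\n\nN, W = 800, 5\nw = np.ones(2*W + 1) / (2*W + 1)           # uniform averaging kernel\n\ndef smooth(f):\n    # convolution with w, boundary values held constant\n    fpad = np.concatenate([np.full(W, f[0]), f, np.full(W, f[-1])])\n    return np.convolve(fpad, w, mode='valid')\n\nhf = lambda u: u**3                        # hypothetical monotone updates giving a\nhg = lambda u: 1.0 - (1.0 - u)**2          # bistable scalar recursion u -> hf(hg(u))\n\nf = (np.arange(N) > N // 2).astype(float)  # (0,1)-interpolating initial profile\nfor t in range(81):\n    if t % 20 == 0:\n        # crude front position: first index where the profile exceeds 1/2\n        print(t, int(np.argmax(f > 0.5)))\n    g = hg(smooth(f))\n    f = hf(smooth(g))\n\end{verbatim}\nPrinting the front position every few iterations makes the per-iteration\nshift visible; since one iteration involves two convolutions with a\nkernel of half-width $W$, this shift can never exceed the total kernel\nspread, in line with the support-based bound just mentioned.\n\n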
Consider a $\\smthker$ that is strictly positive\non $\\reals.$ Let $\\hf(x)=\\hg(x)=\\unitstep(x-(1-\\epsilon))$ for small positive $\\epsilon.$\nA traveling wave solution for this system is $\\ff^t(x)=\\unitstep(x-t\\ashift)$\nand $\\fg^t(x)=\\unitstep(x-t\\ashift-\\ashift\/2)$\nwhere $\\ashift$ is given by\n$\\intsmthker(-\\ashift\/2) = (1-\\epsilon).$\nThis example motivates the following bound.\n\n\\begin{lemma}\\label{lem:shiftupperbound}\nLet $\\ff,\\fg\\in\\sptfns$ be $(0,1)$-interpolating and let\n$\\hf\\equiv h_{[\\ff,\\gSa]},$ and $\\hg\\equiv h_{[\\fg,\\fS]}$\nGiven $(u,v)$ with $v < \\hf(u-)$ and $u<\\hg(v-)$ we have the bound\n\\[\n\\ashift \\le \\Omega^{-1}\\Bigl(\\frac{v}{\\hf(u-)}+\\Bigr)\n+\n\\Omega^{-1} \\Bigl(\\frac{u}{\\hg(v-)}+\\Bigr)\\,.\n\\]\nand given $(u,v)$ with $v > \\hf(u+)$ and $u>\\hg(v+)$ we have the bound\n\\[\n-\\ashift \\le \\Omega^{-1}\\Bigl(\\frac{1-v}{1-\\hf(u+)}+\\Bigr)\n+\n\\Omega^{-1} \\Bigl(\\frac{1-u}{1-\\hg(v+)}+\\Bigr)\\,.\n\\]\n\\end{lemma}\n\\begin{IEEEproof}\nWe will show the first bound, the second is similar.\nFor any $x_1,x_2\\in\\reals$ we have\n\\begin{align*}\n\\fS(x_1) &= \\int_{\\minfty}^{\\pinfty} \\ff(x)\\smthker(x_1-x) dx \\\\\n&\\ge \\int_{x_2}^{\\pinfty} \\ff(x)\\smthker(x_1-x) dx \\\\\n&\\ge \\ff(x_2+) \\int_{x_2}^{\\pinfty} \\smthker(x_1-x) dx \\\\\n&= \\ff(x_2+) \\int_{\\minfty}^{x_1-x_2} \\smthker(x) dx\\,.\n\\end{align*}\nThus, we obtain the inequality\n\\[\nx_1-x_2 \\le \\Omega^{-1} \\Bigl(\\frac{\\fS(x_1)}{\\ff(x_2+)}+\\Bigr)\n\\]\nChoose $x_1$ so that $\\fS(x_1)=v.$\nThen we have $\\fg(x_1+) \\ge \\fg(x_1) \\ge \\hg(v-).$\nChoose $x_2$ so that $\\gSa(x_2)=\\gS(x_2+\\ashift)=u.$\nThen we have $\\ff(x_2+) \\ge \\ff(x_2) \\ge \\hf(u-).$\n\nApplying the above inequality we obtain\n\\[\nx_1-x_2 \\le \\Omega^{-1} \\Bigl(\\frac{\\fS(x_1)}{\\ff(x_2+)}+\\Bigr)\n \\le \\Omega^{-1} \\Bigl(\\frac{v}{\\hf(u-)}+\\Bigr)\n\\]\nand\n\\[\nx_2+\\ashift-x_1 \\le \\Omega^{-1}\\Bigl(\\frac{\\gS(x_2+\\ashift)}{\\fg(x_1+)}+\\Bigr)\n\\le \\Omega^{-1}\\Bigl(\\frac{u}{\\hg(v-)}+\\Bigr)\n\\]\nSumming, we obtain\n\\[\n\\ashift \\le \\Omega^{-1}\\Bigl(\\frac{v}{\\hf(u-)}+\\Bigr)\n+\n\\Omega^{-1} \\Bigl(\\frac{u}{\\hg(v-)}+\\Bigr)\\,.\n\\]\n\\end{IEEEproof}\n\n\\begin{lemma}\\label{lem:stposbound}\nLet $(\\hf,\\hg) \\in \\exitfns^2$ satisfy the strictly positive gap condition.\nThen there exists $(u,v)$ with $v < \\hf(u-)$ and $u<\\hg(v-)$\nand $(u,v)$ with $v > \\hf(u+)$ and $u<\\hg(v+).$\n\\end{lemma}\n\\begin{IEEEproof}\nLet $G^+ = \\{(u,v):v > \\hf(u+) \\text{ and } u>\\hg(v+)\\}$\nand\n$G^- = \\{(u,v):v < \\hf(u-) \\text{ and } u<\\hg(v-)\\}.$\nThen\n\\[\nA(\\hf,\\hg) = \\mu(G^+) - \\mu(G^-)\n\\]\nwhere $\\mu(G)$ denotes the 2-D Lebesgue measure of $G.$\nSince the strictly positive gap condition is satisfied, there\nexists $(u^*,v^*) \\in \\intcross(\\hf,\\hg)$ and\n$\\altPhi(\\hf,\\hg;u^*,v^*) > \\max \\{0,A(\\hf,\\hg)\\}.$\nLet $R = [0,u^*]\\times[0,v^*].$ Now\n\\[\n\\altPhi(\\hf,\\hg;u^*,v^*)\n=\n \\mu(G^+ \\cap R ) - \\mu(G^- \\cap R )\n\\]\nhence $\\mu(G^+ \\cap R ) > \\max \\{0,A(\\hf,\\hg)\\}$\nand it follows that $\\mu(G^-) > 0.$\n\\end{IEEEproof}\n\\begin{corollary}\\label{cor:regshiftbound}\nLet $\\ff,\\fg \\in \\sptfns$ be $(0,1)$-interpolating.\nIf $( h_{[\\ff,\\gSa]},h_{[\\fg,\\fS]})$ satisfies the strictly positive gap condition\nand $\\smthker$ is regular\nthen $\\ashift < 2W.$\n\\end{corollary}\n\\begin{IEEEproof}\nThis combines Lemma \\ref{lem:shiftupperbound} with Lemma 
\ref{lem:stposbound}.\n\end{IEEEproof}\n\n\subsection{Monotonicity of $\altPhi$ and the Gap Conditions}\n\nIn this section we collect some basic results on $\altPhi$ and the component DE that are useful for constructing spatial wave solutions.\n\n\begin{lemma}\label{lem:descend}\nLet $\hf,\hg \in \exitfns.$\nIf $(u,v) \in [0,1]^2$ satisfies $v <\hf(u-)$ and $u < \hg(v-),$\nthen there exists a minimal element $(u^*,v^*) \in \cross(\hf,\hg)$ with\n$(u^*,v^*)>(u,v)$ component-wise and $\altPhi(u^*,v^*) < \altPhi(u,v).$\n\nSimilarly, if $(u,v)$ satisfies $v >\hf(u+)$ and $u > \hg(v+),$\nthen there exists a maximal element $(u^*,v^*) \in \cross(\hf,\hg),$\nwith $(u^*,v^*)<(u,v)$ (component-wise) and $\altPhi(u^*,v^*) < \altPhi(u,v).$\n\end{lemma}\n\begin{IEEEproof}\nWe show only the first case since the other case is analogous.\nAssuming $v <\hf(u-)$ and $u < \hg(v-)$ we have $\hginv(u+) < v < \hf(u-)$ and we\nsee that there is no crossing point $(u',v')$ with $u'=u.$ Similarly,\nthere is no crossing point with $v'=v.$ \nSince $\cross(\hf,\hg)$ is closed, the set $(u,1]\times(v,1] \cap \cross(\hf,\hg)$ is closed.\nBy Lemma \ref{lem:crossorder} $\cross(\hf,\hg)$ is ordered so there exists a minimal element $(u^*,v^*)$ in\n$(u,1]\times(v,1] \cap \cross(\hf,\hg).$\nSet $(u^0,v^0)=(u,v)$ and consider the sequence of points\n$(u^0,v^0),(u^0,v^1),(u^1,v^1),(u^1,v^2),(u^2,v^2),\ldots$ as determined by \eqref{eqn:DE}.\nIt follows easily from \eqref{eqn:DE} that this sequence is non-decreasing.\nIf $u^t < u^*$ then $v^{t+1} \le v^*$\nand\nif $v^t < v^*$ then $u^{t} \le u^*.$\nThus we have either $(u^t,v^t) < (u^*,v^*)$ or there is some minimal $t$ where at least one of the coordinates\nis equal.\nIf $u^t < u^*$ and $v^t < v^*$ for all $t$ then the sequence must converge to $(u^*,v^*)$\nsince the limit is in $\cross(\hf,\hg)$ by continuity and $(u^*,v^*)$ is minimal.\nIt then follows by continuity of $\altPhi(\hf,\hg;\cdot,\cdot)$ and Lemma \ref{lem:monotonic} that \n\[\n\altPhi(\hf,\hg;u^*,v^*) \le \altPhi(\hf,\hg;u^0,v^1) < \altPhi(\hf,\hg;u^0,v^0)\,.\n\]\nAssume now that $u^t = u^*$ for some $t.$ Then $t>0$ and Lemma \ref{lem:monotonic} gives\n\[\n\altPhi(\hf,\hg;u^*,v^*) = \altPhi(\hf,\hg;u^t,v^{t+1}) < \altPhi(\hf,\hg;u^0,v^0)\,.\n\]\nFinally, assume that $v^t = v^*$ for some $t.$ Then $t>0$ and Lemma \ref{lem:monotonic} gives\n\[\n\altPhi(\hf,\hg;u^*,v^*) = \altPhi(\hf,\hg;u^t,v^{t}) < \altPhi(\hf,\hg;u^0,v^0)\,.\n\]\nThis completes the proof.\n\end{IEEEproof}\n\n\begin{lemma}\label{lem:crosspointmono}\nLet $(\hf,\hg)\in \exitfns^2$ and \nlet $(u,v) \in [0,1]^2.$\nWe then have the following trichotomy:\n\begin{itemize}\n\item\nIf $\hg(\hf(u))=u$ then $(u,\hf(u))\in\cross(\hf,\hg).$\n\item\nIf $\hg(\hf(u))>u$ then \n\(\n\altPhi(\hf,\hg;u^*,v^*)\n\le\n\altPhi(\hf,\hg;u,v)\,\n\)\nwhere $(u^*,v^*)\in\cross(\hf,\hg)$ is coordinate-wise minimal with\n $(u^*,v^*)\ge(u,\hf(u)).$ \n\item\nIf $\hg(\hf(u))<u$ then\n\(\n\altPhi(\hf,\hg;u^*,v^*)\n\le\n\altPhi(\hf,\hg;u,v)\,\n\)\nwhere $(u^*,v^*)\in\cross(\hf,\hg)$ is coordinate-wise maximal with\n$(u^*,v^*)\le(u,\hf(u)).$\n\end{itemize}\n\end{lemma}\n\begin{IEEEproof}\nThe first case is immediate from the definition of $\cross(\hf,\hg),$ so we consider the second case.\nSince $\hg(\hf(u))>u,$ we now have $\hg(\hf(u)-)>u.$\nWe also have $\hginv(u+) < \hf(u).$\n\nLet $(u^*,v^*)\in\cross(\hf,\hg)$ be the minimal element such that $(u^*,v^*)\ge(u,\hf(u)).$\nIt follows that $u^* >u$ which implies $v^* \ge \hf(u+).$\nFor all $\epsilon>0$ sufficiently small we have\n$u+\epsilon < \hg((\hf(u)-\epsilon)-)$ and we obviously have\n$\hf(u)-\epsilon <\hf((u+\epsilon)-).$\nAssuming $\epsilon$ sufficiently small $(u^*,v^*)$ is the minimal element in 
$\\cross(\\hf,\\hg)$\nwith $(u^*,v^*)> (u+\\epsilon,\\hf(u)-\\epsilon)$ and\nby Lemma \\ref{lem:descend} we have\n$\\altPhi(u^*,v^*) < \\altPhi(u+\\epsilon,\\hf(u)-\\epsilon).$\nLetting $\\epsilon$ tend to $0$ we obtain\n$\\altPhi(u^*,v^*) \\le \\altPhi(u,\\hf(u)).$\n\nThe argument for the case $\\hg(\\hf(u))0.$\nBy \\eqref{eqn:altPhiderivatives} we see that $\\altPhi(\\hf,\\hg;0,v)=0$ for all\n$v \\in [0,\\hf(0+)).$\nIt follows that $\\hginv(0+)=0$ or we obtain a contradiction with the strictly positive gap condition.\nHence $\\hg(\\hf(0+))>0$ and\nthen for $u\\in(0,\\hg(\\hf(0+))$ we have $\\hg(\\hf(u))>u.$\nBy Lemma \\ref{lem:crosspointmono} the minimal crossing point $(u^*,v^*) \\ge (0,\\hf(u+))$\nsatisfies $\\altPhi(\\hf,\\hg;u^*,v*)<0.$ By the strictly positive gap condition\n$(u^*,v^*) < (1,1)$ and we obtain a contradiction.\nTherefore, we must have $\\hf(0+) = 0.$\n\nAll other conditions, $\\hg(0+) = 0, \\hf(1-) = 1,$ and $\\hg(1-) = 1$ can be shown similarly.\n\\end{IEEEproof}\n\n\\begin{lemma}\\label{lem:Sstructure}\nLet $(\\hf,\\hg) \\in \\exitfns^2$ satisfy the strictly positive gap condition.\nIf $A(\\hf,\\hg) \\ge 0$ then $\\altPhi(\\hf,\\hg;u,v) >0$ \nfor $(u,v)\\in [0,1]^2\\backslash\\{(0,0),(1,1)\\}.$\nIf $A(\\hf,\\hg) > 0$ then there exists a minimal point $(u^*,v^*) \\in \\intcross(\\hf,\\hg)$\nand the set\n\\[\nS(\\hf,\\hg) = \\{(u,v): \\altPhi(\\hf,\\hg;u,v) < A(\\hf,\\hg)\\}\n\\]\nis simply connected \nand $\\closure{S(\\hf,\\hg)} \\subset[0,u^*)\\times[0,v^*).$\nMoreover,\n\\[\n\\{(u,v): \\altPhi(\\hf,\\hg;u,v) \\le A(\\hf,\\hg)\\}\n= \\closure{S(\\hf,\\hg)}\\cup \\{(1,1)\\}.\n\\]\n\\end{lemma}\n\\begin{IEEEproof} \nAssume $(\\hf,\\hg) \\in \\exitfns^2$ satisfies the strictly positive gap condition and that\n$A(\\hf,\\hg) \\ge 0.$\nIt follows from Lemma \\ref{lem:miniscross} that $\\altPhi(\\hf,\\hg;u,v)$\nachieves its minimum on $\\cross(\\hf,\\hg),$\nhence, the strictly positive gap condition implies $\\altPhi(\\hf,\\hg;) \\ge 0.$\nLemma \\ref{lem:miniscross} now further implies that if there exists $(u,v)$ with\n$\\altPhi(\\hf,\\hg;u,v) = 0$ then $(u,v) \\in \\cross(\\hf,\\hg).$\nThus, we have $\\altPhi(\\hf,\\hg;u,v) > 0$ for $(u,v) \\not\\in \\{(0,0),(1,1)\\}.$\n\nAssume now that $A(\\hf,\\hg) > 0.$ \nLet $(u^*,v^*)$ be the infimum of $\\intcross(\\hf,\\hg).$\nThen $(u^*,v^*)\\in \\cross(\\hf,\\hg)$ and\n$(u^*,v^*) \\neq (0,0)$ \nand therefore $(u^*,v^*) \\in \\intcross(\\hf,\\hg).$\nBy Lemma \\ref{lem:zocontinuity} we also have $u^*>0$ and $v^*>0.$\n\nLet $(u,v) \\in S(\\hf,\\hg)$ and assume that $u \\neq 0$ and $v\\neq 0.$\nBy Lemma \\ref{lem:monotonic} and Lemma \\ref{lem:crosspointmono}\nwe see that we must have $\\hg(\\hf(u)) u^*,$ and, consequently, $v^* \\le \\hf(u).$ \nThen, by Lemma \\ref{lem:monotonic} we have\n$(u,\\hf(u)) \\in S(\\hf,\\hg).$ Lemma \\ref{lem:crosspointmono}\nnow implies the existence of $(u',v') \\in \\cross(\\hf,\\hg)$ with $(u',v') \\ge (u^*,v^*)$ and\n$\\altPhi(\\hf,\\hg;u',v') < A(\\hf,\\hg),$ which contradicts the strictly positive gap condition.\nHence $S(\\hf,\\hg) \\subset [0,u^*) \\times [0,v^*).$\nFurthermore, Lemma \\ref{lem:monotonic} implies that $\\altPhi(\\hf,\\hg;u^*,v) > A(\\hf,\\hg)$ for all $v\\in [0,1]$\nand $\\altPhi(\\hf,\\hg;u,v^*) > A(\\hf,\\hg)$ for all $u\\in [0,1]$\nso $\\closure{S(\\hf,\\hg)} \\subset [0,u^*) \\times [0,v^*).$\n\nAssume there exists $(u,v)\\not\\in \\closure{S(\\hf,\\hg)} \\cup \\{ (1,1)\\}$ with $\\altPhi(\\hf,\\hg;u,v) = A(\\hf,\\hg).$\nThen $(u,v)$ is a local minimum of 
$\\altPhi(\\hf,\\hg;u,v)$\nwhich, by Lemma \\ref{lem:miniscross}, implies $(u,v) \\in \\cross(\\hf,\\hg),$\ncontradicting the strictly positive gap condition.\n\\end{IEEEproof}\n\n\\subsection{Inverse Formulation.\\label{sect:inverse}}\n\nIt is instructive in to the analysis to view the system in terms of inverse functions.\nLet $\\fg(x) = h((\\ff\\otimes \\smthker) (x))$ with $f \\in \\sptfns.$\nThen, for almost all $u\\in [0,1]$ we have\n\\(\n\\hg^{-1}(u) = \\int_0^1 \\intsmthker (\\fg^{-1}(u)-\\ff^{-1}(v)) \\text{d}v\\,.\n\\)\nTo show this we first integrate by parts to write\n\\(\n(\\ff\\otimes \\smthker) (x) = \\int_{-\\infty}^\\infty \\intsmthker(x-y) d\\ff(y)\n\\)\nand then make the substitutions $v=f(y)$ and $u=g(x).$\nIt follows that, up to equivalence, the recursion \\eqref{eqn:gfrecursion} may be expressed as\n\\begin{equation}\\label{eqn:gfrecursionInv}\n\\begin{split}\n\\hginv(u) & = \\int_0^1 \\intsmthker ((\\fg^t)^{-1}(u)-(\\ff^t)^{-1}(v)) \\text{d}v, \\\\\n\\hfinv(v) & = \\int_0^1 \\intsmthker ((\\ff^{t+1})^{-1}(v)-(\\fg^t)^{-1}(u)) \\text{d}u\\,.\n\\end{split}\n\\end{equation}\nSince $\\smthker$ is even we have $\\Omega(x) = 1-\\Omega(-x),$ so we immediately observe\nthat if $(\\ff,\\fg)\\in\\sptfns^2$ is a $(0,1)$-interpolating fixed point of the above system then\n\\begin{align*}\n1 & =\n\\int_0^1 \\hginv(u) \\text{d}u + \\int_0^1 \\hfinv(v)\\text{d}v \\\\\n& =\n\\int_0^1 \\hg(u) \\text{d}u + \\int_0^1 \\hf(v)\\text{d}v \\,.\n\\end{align*}\nThis puts a very strong requirement on $(\\hf,\\hg)$ to admit a $(0,1)$-interpolating spatial fixed point.\n\nAssume that $\\ff$ and $\\fg,$ both in $\\sptfns,$ form a $(0,1)$-interpolating spatial fixed point.\nConsider perturbing the inverse functions by $\\delta \\ffinv$ and $\\delta \\fginv$ respectively.\nWe could then perturb $\\hfinv$ and $\\hginv,$ by $\\delta\\hfinv$ and $\\delta\\hginv$ respectively so that\nthe perturbed system would remain a fixed point. 
To first order we will have from \\eqref{eqn:gfrecursionInv},\n\\begin{equation}\\label{eqn:delgfrecursionInv}\n\\begin{split}\n\\delta\\hginv(u) & = \\int_0^1 \\smthker (\\fginv(u)-\\ffinv(v)) (\\delta\\fginv(u)-\\delta\\ffinv(v)) \\text{d}v, \\\\\n\\delta\\hfinv(v) & = \\int_0^1 \\smthker (\\ffinv(v)-\\fginv(u)) (\\delta\\ffinv(v)-\\delta\\fginv(u))\\text{d}u\\,.\n\\end{split}\n\\end{equation}\n\nThis formulation is at the heart of the analysis in the next section.\n\n\n\\subsection{Existence: The Piecewise Constant Case}\\label{sec:PCcase}\nIn this section we focus on the case where $\\hf$ and $\\hg$ are piecewise constant.\nIn this case the spatially coupled system is finite dimensional.\nWe further assume that $\\smthker$ is strictly positive on $\\reals.$\nThis ensures in a simple way that no degeneracy occurs when determining EXIT functions\nfrom spatial functions since $\\frac{d}{dx} \\gS(x) > 0$ for any non-constant $g \\in \\sptfns.$\n\nWe will write piecewise constant functions $\\hf,\\hg \\in \\exitfns$ as\n\\begin{align*}\n\\hf(u) &= \\sum_{j=1}^{\\Kf} \\delhf_j \\,\\unitstep(u - \\uf_j) \\\\\n\\hg(u) &= \\sum_{i=1}^{\\Kg} \\delhg_i \\,\\unitstep(u - \\ug_i)\n\\end{align*}\nwhere $\\unitstep$ is the unit step (Heaviside)\n function\\footnote{The regularity assumptions on $\\smthker$ ensure that the precise value of $\\hf$ and $\\hg$\nat points of discontinuity has no impact on the analysis.}\nand where we assume $\\delhf_j,\\delhg_i > 0,$ and $\\sum_{j=1}^{\\Kf} \\delhf_j = 1$\nand $\\sum_{i=1}^{\\Kg} \\delhg_i = 1.$\n\nGenerally we will have\n$0 < \\uf_1 \\le \\uf_2 \\le \\cdots \\le \\uf_{\\Kf} < 1$\nand\n$0 < \\ug_1 \\le \\ug_2 \\le \\cdots \\le \\ug_{\\Kg} < 1$\nbut the ordering is actually not critical to the definition.\nWe generally view the vectors $\\delhf$ and $\\delhg$ as fixed and \nto indicate the dependence on $\\uf = (\\uf_1,\\ldots,\\uf_{\\Kf})$ and $\\ug$ we will\nwrite $\\hf(u;\\uf)$ and $\\hg(u;\\ug).$\n\nPiecewise constant $\\hf$ and $\\hg$ also have piecewise constant inverses.\nGiven $\\hf$ as above we have\n\\[\n\\hfinv (v) = \\sum_{j=1}^{\\Kf} (\\uf_j-\\uf_{j-1})\\unitstep(v-\\sum_{k=1}^{j} \\delhf_j)\n\\]\nwhere we set $\\uf_0=0.$\n\n\nIf $\\fg \\in \\sptfns$ is continuous, strictly increasing, and $(0,1)$-interpolating\nand $\\hf$ is piecewise constant as above then $\\ff \\in\\sptfns$ defined by\n$\\ff(x) = \\hf(\\fg(x))$ is also piecewise constant and\ncan be written as\n\\[\n\\ff(x) = \\sum_{i=1}^{\\Kf} \\delhf_i \\,\\unitstep(x - \\zf_i)\n\\]\nwith\n$-\\infty < \\zf_1 \\le \\zf_2 \\le \\cdots \\le \\zf_{\\Kf} < \\infty$\nsatisfying $\\uf_i = \\fg(\\zf_i).$ \nAs before, we have\n\\[\n\\ffinv(v) = \\sum_{j=1}^{\\Kf} (\\zf_j-\\zf_{j-1})\\unitstep(v-\\sum_{k=1}^{j} \\delhf_j) \\,.\n\\]\nwhere we set $\\zf_0 =0$\n\n\nThe purpose of this section is to prove a special case of Theorem\n\\ref{thm:mainexist} under piecewise constant assumptions on the EXIT functions and\nregularity conditions on\n$\\smthker.$ In this special case we obtain in addition uniqueness and continuous dependence of the solution. 
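As a concrete illustration of this parameterization, the short sketch\nbelow (in Python) evaluates a piecewise constant EXIT function and its\ninverse from the jump sizes and jump locations; the names are ours, the\nstep convention at the jump points is immaterial here, and the inner sum\nin the displayed inverse is read as the cumulative sum\n$\sum_{k=1}^{j}\delhf_k.$\n\begin{verbatim}\nimport numpy as np\n\n# piecewise constant h(u) = sum_j d[j] * step(u - uj[j]); d sums to 1\ndef h_pc(u, d, uj):\n    return np.array([d[uj <= x].sum() for x in np.atleast_1d(u)])\n\ndef h_pc_inv(v, d, uj):\n    # the inverse is again piecewise constant: jumps of size uj[j]-uj[j-1]\n    # located at the cumulative sums of d\n    cum = np.cumsum(d)\n    gaps = np.diff(np.concatenate([[0.0], uj]))\n    return np.array([gaps[cum <= y].sum() for y in np.atleast_1d(v)])\n\nd = np.array([0.3, 0.3, 0.4])    # jump sizes\nuj = np.array([0.2, 0.5, 0.8])   # ordered jump locations\ngrid = np.linspace(0.0, 1.0, 11)\nprint(h_pc(grid, d, uj))\nprint(h_pc_inv(grid, d, uj))\n\end{verbatim}\nIn the continuation argument that follows only the jump locations move;\nthe jump sizes stay fixed throughout, which is what makes the coupled\nsystem finite dimensional.\n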
For convenience we state the main result here.\n\\begin{theorem}\\label{thm:PCexist}\nAssume $\\smthker$ is a strictly positive and $C^1$ averaging kernel.\nLet $(\\hf,\\hg)$ be a pair of piecewise constant functions in $\\exitfns$\nsatisfying the strictly positive gap condition.\nThen there exists unique (up to translations) $(0,1)$-interpolating functions\n$\\tmplF,\\tmplG \\in\\sptfns$ and $\\ashift \\in \\reals$ satisfying $\\sgn (\\ashift) = \\sgn (A(\\hf,\\hg)),$ such that\nsetting\n$\\ff^t(x) = \\tmplF(x-\\ashift t)$ and\n$\\fg^t(x) = \\tmplG(x-\\ashift t)$\nsolves \\eqref{eqn:gfrecursion}.\nFurther,\n$\\tmplF^{-1}(v)-\\tmplG^{-1}(u)$ depends continuously on the vectors $\\uf,\\ug.$\n\\end{theorem}\n\nThe remainder of this section is dedicated to the proof of this result.\nOur proof constructs the solutions $\\tmplF$ and $\\tmplG$ by a method of continuation.\nIn the case where $\\hf$ and $\\hg$ are unit step functions it is easy to find\nthe solution: $\\tmplF$ and $\\tmplG$ are also unit step functions.\nStarting there we deform the solution to\narrive at a solution for a given $\\hf$ and $\\hg.$\nWe do this in two stages where in the first stage $\\ashift = 0$ and in the second\nis $\\ashift$ varied while $\\hg$ is held fixed.\nThe deformation is obtained as a solution to a differential equation.\nTo set up the equation we require a detailed description of the dependence\nof $\\uf$ and $\\ug$ on $\\zf,\\zg$ and $\\ashift.$\n\n\nLet us first consider the case $\\ashift=0.$\nThus, let $\\ff(x;\\zf)$ and $\\fg(x;\\zg)$ be a piecewise constant functions\nparameterized by their jump point locations\n$\\zf$ and $\\zg$ as\n\\begin{align}\n\\begin{split}\\label{eqn:fgtdef}\ng(x;\\zg) & = \\sum_{i=1}^{\\Kg} \\delhg_i \\,\\unitstep(x - \\zg_{i}) \\\\\nf(x;\\zf) & = \\sum_{j=1}^{\\Kf} \\delhf_j \\,\\unitstep(x - \\zf_{j})\n\\end{split}\n\\end{align}\nThen, from \\eqref{eqn:gfrecursionInv} we have\n\\begin{align}\n\\begin{split}\\label{eqn:discreteInv}\n\\ug_i &= \\fS(\\zg_i) = \\sum_{j=1}^{\\Kf} \\delhf_j \\Omega (\\zg_i-\\zf_j) \\\\\n\\uf_j &= \\gS(\\zf_j) =\\sum_{i=1}^{\\Kg} \\delhg_i \\Omega (\\zf_j-\\zg_i)\n\\end{split}\n\\end{align}\nNow, suppose we introduce smooth dependence on a parameter $t,$\ni.e., we are given smooth functions $\\zf(t)$ and $\\zg(t)$\nand then determine $\\uf(t)$ and $\\ug(t)$ from the above.\nBy differentiating we obtain\n\\begin{align}\\label{eqn:zeroAdiffeq}\n\\frac{d}{dt}\n\\begin{bmatrix}\n\\ug(t) \\\\\n\\uf(t)\n\\end{bmatrix}\n&=\nH(\\zf(t),\\zg(t))\n\\;\n\\frac{d}{dt}\n\\begin{bmatrix}\n\\zg(t) \\\\\n\\zf(t)\n\\end{bmatrix}\n\\end{align}\nwhere $H(\\zf(t),\\zg(t))$ is a $(\\Kg+\\Kf)\\times(\\Kg+\\Kf)$ matrix\n\\begin{align}\n\\label{eqn:matrixdif}\nH(\\zf,\\zg)\n& =\n\\begin{bmatrix}\n \\Df & -\\Bf \\\\\n-\\Bg & \\Dg\n\\end{bmatrix},\n\\end{align}\nwhich we rewrite as $H=D(I-M),$\nand where\n\\[\nD =\n\\begin{bmatrix}\n \\Df & 0 \\\\\n 0 & \\Dg\n\\end{bmatrix}\n\\text{ and }\nM =\n\\begin{bmatrix}\n 0 & (\\Df)^{-1}\\Bf \\\\\n(\\Dg)^{-1}\\Bg & 0\n\\end{bmatrix}\n\\]\nand where \n\\begin{itemize}\n\\item $\\Dg$ is the $\\Kf \\times \\Kf$ diagonal matrix with\n\\[\n\\Dg_{i,i} = \\gS_x(\\zf_i) = \\sum_{j=1}^{\\Kg} \\smthker(\\zg_j-\\zf_i) \\delhg_j,\\]\n\\item $\\Df$ is the $\\Kg \\times \\Kg$ diagonal matrix with\n\\[\n\\Df_{i,i} = \\fS_x(\\zg_i) = \\sum_{j=1}^{\\Kf} \\smthker(\\zf_j-\\zg_i) \\delhf_j,\n\\]\n\\item $\\Bga$ is the $\\Kf \\times \\Kg$ matrix with \n\\[\n\\Bg_{i,j} = -\\frac{\\partial \\gS(\\zf_i;\\zg)}{\\partial \\zg_j} = 
\\smthker(\\zg_j-\\zf_i) \\delhg_j,\n\\]\n\\item $\\Bf$ is the $\\Kg \\times \\Kf$ matrix with \n\\[\n\\Bf_{i,j} = \n-\\frac{\\partial \\fS(\\zg_i;\\zf)}{\\partial \\zf_j} \n \\smthker(\\zf_j-\\zg_i) \\delhf_j.\n\\]\n\\end{itemize}\n\nSince $\\Dg_{i,i} = \\sum_{j=1}^{\\Kg} \\Bg_{i,j}$ and\n$\\Df_{i,i} = \\sum_{j=1}^{\\Kf} \\Bf_{i,j}$\nwe observe that $M$ is a stochastic matrix: $\\sum_{j=1}^{\\Kf+\\Kg} M_{i,j} = 1.$\n\nOur strategy for constructing fixed points for a given $\\hf,\\hg$ involves solving\n\\eqref{eqn:zeroAdiffeq} for $\\zg(t),\\zf(t)$ given $\\uf(t),\\ug(t).$\nThe main difficulty we face is that $H(\\zf,\\zg)$ is not invertible.\nIn particular, $(I-M)\\vec{1} = 0.$\nThis is a consequence of the fact that\ntranslating $\\zf$ and $\\zg$ together does not alter $\\ug$ and $\\uf$ as defined\nby \\eqref{eqn:discreteInv}.\nThe corresponding left null eigenvector of $H(\\zf,\\zg)$ \narises from the fixed point condition $\\int_0^1 \\hg(x) dx + \\int_0^1\\hf(x) dx = 1$ which here reduces to\n\\[\n1 = \\sum_{j=1}^{\\Kf} (1-\\uf_j) \\delhf_j\n+ \\sum_{i=1}^{\\Kg} (1-\\uf_i) \\delhg_i,\n\\]\nhence\n\\[\n\\sum_{j=1}^{\\Kf} \\delhf_j \\frac{d\\uf_j}{dt}\n+ \\sum_{i=1}^{\\Kg} \\delhg_i \\frac{d\\ug_i}{dt}= 0\n\\]\nas can be verified directly.\n\nLet us consider the matrix\n\\[\nH(\\zf,\\zg) + \\vec{1}\\vec{\\delta}^T\n\\]\nwhere $\\vec{1}$ is the column vector of all $1$s and $\\vec{\\delta}$ is the column\nvector obtained by stacking $\\delhg$ on $\\delhf.$ \nThis matrix is invertible, i.e., its determinant is non-zero. \nTo see this note that $M$ is a positive stochastic matrix \n$M\\vec{1} = \\vec{1}$ and by the Perron-Frobenious theorem all other eigenvalues of $M$\nhave magnitude strictly less than $1.$\nIt follows that $\\vec{1}$ is the unique right null vector of $H(\\zf,\\zg)$ (up to scaling)\nand that $\\vec{\\delta}$ is the corresponding left null vector.\nThe left subspace orthogonal to $\\vec{1}$ is invariant under $H(\\zf,\\zg).$\nIt now follows that $H(\\zf,\\zg) + \\vec{1}\\vec{\\delta}^T$ has no left null vector and\nis therefore invertible.\n\nNow, consider the differential equation\n\\begin{align}\n\\frac{d}{dt}\n\\begin{bmatrix}\n\\zg(t) \\\\\n\\zf(t)\n\\end{bmatrix}\n&=\n(H(\\zf(t),\\zg(t))+ \\vec{1}\\vec{\\delta}^T)^{-1}\n\\;\n\\frac{d}{dt}\n\\begin{bmatrix}\n\\ug(t) \\\\\n\\uf(t)\n\\end{bmatrix}\n\\label{eqn:diffEQR}\n\\end{align}\nIf \n\\(\n\\frac{d}{dt}\n\\vec{\\delta}^T \n\\begin{bmatrix}\n\\ug(t) \\\\\n\\uf(t)\n\\end{bmatrix}=0\n\\)\nthen we obtain\n\\(\n\\frac{d}{dt}\n\\vec{\\delta}^T \n\\begin{bmatrix}\n\\zg(t) \\\\\n\\zf(t)\n\\end{bmatrix}=0\n\\)\nand we see that \\eqref{eqn:zeroAdiffeq} is satisfied.\n\n\\begin{lemma} \\label{lem:PCexitcont}\nLet $\\smthker$ be a strictly positive smoothing kernel.\nLet $\\uf(t)$ and $\\ug(t)$ be $C^2$ ordered vector valued functions on $[0,1]$\nsuch that $(\\hf(;\\uf(t)),\\hg(;\\ug(t))$ are satisfy the strictly positive\ngap condition uniformly\nin the sense that $\\min_{t\\in [0,1]} \\altPhi(\\hf(;\\uf(t)),\\hg(;\\ug(t);u,v) > 0$ \nfor all $(u,v) \\in [0,1]^2\\backslash \\{(0,0),(1,1)\\}$\nand $A(\\hf(;\\uf(t)),\\hg(;\\ug(t))=0$ for all $t\\in [0,1].$\nAssume further that $\\zf(0)$ and $\\zg(0)$\nare given so that $\\ff(;\\zf(0))$ and $\\fg(;\\zg(0))$ as defined by\n\\eqref{eqn:fgtdef}\nsatisfies $\\fg(x;\\zg(0)) = \\hg(\\fS(x;\\zf(0));\\ug(0))$\nand $\\ff(x;\\zg(0)) = \\hf(\\gS(x;\\zg(0));\\uf(0))$ for all $x\\in\\reals,$\ni.e., the determined functions form a $(0,1)$-interpolating spatial fixed point for the $t=0$ 
system.\nThen there exist bounded $C^1$ ordered vector valued functions\n$\zf(t)$ and $\zg(t)$ on $[0,1],$\nwith $\zf(0)$ and $\zg(0)$ as specified,\nsuch that $\fg(x;\zg(t)) = \hg(\fS(x;\zf(t));\ug(t))$\nand $\ff(x;\zf(t)) = \hf(\gS(x;\zg(t));\uf(t))$\nfor all $x\in \reals$ and $t\in [0,1].$\n\end{lemma}\n\begin{IEEEproof}\nConsider the differential equation \eqref{eqn:diffEQR} with initial condition\ngiven by $\zg(0)$ and $\zf(0).$ By translating (adding a constant to both) we can assume\n$\sum_{i=1}^{\Kg} \delhg_i \zg_i(0) +\sum_{j=1}^{\Kf} \delhf_j \zf_j(0) = 0.$\nBy standard results on differential equations, the equation\nhas a unique $C^1$ solution $(\zg(t),\zf(t))$ on $[0,T)$ for some maximal $T>0.$\nSince $\frac{d}{dt}(\sum_{i=1}^{\Kg} \delhg_i \ug_i(t) +\sum_{j=1}^{\Kf} \delhf_j \uf_j(t) ) = 0$\nwe have $\frac{d}{dt}(\sum_{i=1}^{\Kg} \delhg_i \zg_i(t) +\sum_{j=1}^{\Kf} \delhf_j \zf_j(t)) = 0$\nand therefore $\sum_{i=1}^{\Kg} \delhg_i \zg_i(t) +\sum_{j=1}^{\Kf} \delhf_j \zf_j(t) = 0.$\nFurther, the solution satisfies \eqref{eqn:zeroAdiffeq} and it follows that the determined\nfunctions $f(;\zf(t))$ and $g(;\zg(t))$ are corresponding $(0,1)$-interpolating spatial fixed points\nfor $\hf(;\uf(t)),\hg(;\ug(t)).$\nLemma \ref{lem:transitionPhiBounds} implies that $\zf(t)$ and $\zg(t)$ are bounded\nand we can conclude that the solution exists for all $t\in [0,1].$\n\end{IEEEproof}\n\nNow we consider adding a shift to the model.\nWe generalize \eqref{eqn:discreteInv} as follows\n\begin{align}\n\begin{split}\label{eqn:discreteInvShift}\n\ug_i &= \fS(\zg_i) = \sum_{j=1}^{\Kf} \delhf_j \Omega (\zg_i-\zf_j) \\\n\uf_j &= \gS(\zf_j+\ashift) =\sum_{i=1}^{\Kg} \delhg_i \Omega (\zf_j+\ashift-\zg_i)\n\end{split}\n\end{align}\nNow, let $\zg(t),\zf(t),\ashift(t)$ be smooth functions of time. Then\n\begin{align}\n\frac{d}{dt}\n\begin{bmatrix}\n\ug(t) \\\n\uf(t)\n\end{bmatrix}\n&=\nH(\zf(t),\zg(t))\n\;\n\frac{d}{dt}\n\begin{bmatrix}\n\zg(t) \\\n\zf(t)\n\end{bmatrix}\n+\n\begin{bmatrix}\n0 \\\n\Dga \vec{1}\n\end{bmatrix}\n\frac{d}{dt}\ashift(t)\n\label{eqn:diffEQ}\n\end{align}\nwhere $H(\zf(t),\zg(t))$ is a $(\Kg+\Kf)\times(\Kg+\Kf)$ matrix\n\begin{align}\n\label{eqn:matrixdifshift}\nH(\zf,\zg)\n& =\n\begin{bmatrix}\n \Df & -\Bf \\\n-\Bga & \Dga\n\end{bmatrix}\n\\\n& =\nD(I-M)\n\end{align}\nwhere\n\[\nD =\n\begin{bmatrix}\n \Df & 0 \\\n 0 & \Dga\n\end{bmatrix}\n\text{ and }\nM =\n\begin{bmatrix}\n 0 & (\Df)^{-1}\Bf \\\n(\Dga)^{-1}\Bga & 0\n\end{bmatrix}\n\]\nwhere $\Df$ and $\Bf$ are as before and\n\begin{itemize}\n\item $\Dga$ is the $\Kf \times \Kf$ diagonal matrix with\n\[\n\Dga_{i,i} = \gS_x(\zf_i+\ashift) = \sum_{j=1}^{\Kg} \smthker(\zg_j-(\zf_i+\ashift)) \delhg_j,\n\]\n\item $\Bga$ is the $\Kf \times \Kg$ matrix with \n\[\n\Bga_{i,j} = \n\smthker(\zg_j-(\zf_i+\ashift)) \delhg_j.\n\]\n\end{itemize}\n\nSince $\Dga_{i,i} = \sum_{j=1}^{\Kg} \Bga_{i,j}$ and\n$\Df_{i,i} = \sum_{j=1}^{\Kf} \Bf_{i,j}$\nwe observe that $M$ is a stochastic matrix: $\sum_{j=1}^{\Kf+\Kg} M_{i,j} = 1.$\n\n\nLet $P$ be the projection matrix which is the $(\Kf+\Kg) \times (\Kf+\Kg)$\nidentity matrix except that $P_{\Kf+\Kg , \Kf+\Kg}=0.$\nIt follows that $I-PMP$ is invertible and $PMP$ has spectral radius less than one.\nIndeed, let\n$\tilde{B_1}$ denote the matrix obtained 
from $(\\Df)^{-1}\\Bf$ be removing the rightmost column and\nlet $\\tilde{B_2}$ denote the matrix obtained from $(\\Dga)^{-1}\\Bga$ be removing the bottom row.\nLet $\\tilde{M}$ denote the upper left $\\Kf+\\Kg -1 \\times \\Kf+\\Kg-1$ submatrix of $M.$\nThen\n\\[\n\\tilde{M}^{2} =\n\\begin{bmatrix}\n(\\tilde{B_1}\\tilde{B_2})^{2k} & 0 \\\\\n0 & (\\tilde{B_2}\\tilde{B_1})^{2}\n\\end{bmatrix}\\,.\n\\]\nLet $\\xi < 1$ denote the maximum row sum from $\\tilde{B_1}.$\nBy the Perron-Frobenious theorem $\\tilde{B_2}\\tilde{B_1}$ has a maximal positive eigenvalue $\\lambda$\nwith positive left eigenvector $x.$ Then $x^T \\tilde{B_2}\\tilde{B_1} \\vec{1} = \\lambda x^T \\vec{1},$\nbut $\\tilde{B_2}\\tilde{B_1} \\vec{1} \\le \\xi \\vec{1}$ (component-wise) so $\\lambda \\le \\xi.$\nWe easily conclude that $\\| \\tilde{M}^2 \\|_2 \\le \\xi.$\nHence $(I-PMP)^{-1}$ exists and is strictly positive.\n\nGiven $\\zf(0),\\zg(0)$ we define $\\zg(t),\\zf(t)$ as the solution to\n\\begin{align}\n\\frac{d}{dt}\n\\begin{bmatrix}\n\\zg(t) \\\\\n\\zf(t)\n\\end{bmatrix}\n&=\n-(I-PM(\\zf(t),\\zg(t))P)^{-1}\n\\;\nP\n\\begin{bmatrix}\n0 \\\\\n\\vec{1}\n\\end{bmatrix}\\,.\n\\label{eqn:diffEQRshift}\n\\end{align}\nNote that \n\\[\nP\\frac{d}{dt}\n\\begin{bmatrix}\n\\zg(t) \\\\\n\\zf(t)\n\\end{bmatrix} =\n\\frac{d}{dt}\n\\begin{bmatrix}\n\\zg(t) \\\\\n\\zf(t)\n\\end{bmatrix}\\,.\n\\]\nNow we substitute the solution into \\eqref{eqn:diffEQ}\nand we obtain\n\\begin{align*}\nP\\frac{d}{dt}\n\\begin{bmatrix}\n\\ug(t) \\\\\n\\uf(t)\n\\end{bmatrix} \n&=\nD\\Biggl[\nP(I-M)\n\\frac{d}{dt}\n\\begin{bmatrix}\n\\zg(t) \\\\\n\\zf(t)\n\\end{bmatrix}\n+P\n\\begin{bmatrix}\n0 \\\\\n\\vec{1}\n\\end{bmatrix}\n\\Biggr] \\\\\n&=\nD\\Biggl[\nP(I-PMP)\n\\frac{d}{dt}\n\\begin{bmatrix}\n\\zg(t) \\\\\n\\zf(t)\n\\end{bmatrix}\n+P\n\\begin{bmatrix}\n0 \\\\\n\\vec{1}\n\\end{bmatrix}\n\\Biggr] \\\\\n& = 0\\,.\n\\end{align*}\n\n\\begin{lemma}\\label{lem:PCshiftexist}\nLet $\\zf(0),\\zg(0)$ be given, thereby defining\n$f(;\\zf(0))$ and $g(;\\zg(0))$ in $\\sptfns$\nand set\n\\[\n\\hg = \\sum_{i=1}^{\\Kg} \\delhg_j \\,\\unitstep( x- \\fS(\\zg_{i}(0);\\zf(0)) )\\,.\n\\]\nLet $\\ashift(0) \\ge 0$ also be given and\nlet $\\hf(;r)$ be parametrized piecewise constant functions defined by\n$\\hf(x;r) = \\hf(x;\\uf(r))=\\sum_{i=1}^{\\Kf} \\delhf_i \\,\\unitstep( x - \\uf_i(r) )$\nwhere\n\\[\n\\uf_i(r) =\n\\begin{cases}\n\\gS(\\zf_i(0)+\\ashift(0);\\zg(0)) & i < \\Kf \\\\\n\\gS(\\zf_{\\Kf}(0)+\\ashift(0);\\zg(0)) +r & i = \\Kf\\,.\n\\end{cases}\n\\]\nNote that $f(;\\zf(0))$ and $g(;\\zg(0))$ form a traveling wave solution\nfor $\\hf(;0),\\hg$ with shift $\\ashift(0).$\n\nLet $r'>0$ be such that $\\uf_{\\Kf}( r') < 1$\nand assume that $(\\hf(;r'),\\hg)$ satisfies\nthe strictly positive gap condition\nwith $A(\\hf(;r'),\\hg) > 0.$\nDefine $\\ashift(t) = \\ashift(0) + t.$\n\nThen there exists $C^1$ functions $\\zf(t)$ and $\\zg(t)$\nfor $t\\in [0,T],$ where $T<\\infty,$\nwith $\\zf(0)$ and $\\zg(0)$ as given, such that \n$f(;\\zf(t)),g(;\\zg(t)) \\in \\sptfns,$\nform a spatial wave solution with shift $\\ashift(t)$ for\n$(\\hf(;r(t)),\\hg)$\nand where $r(t)$ is an increasing $C^1$ function\nwith $r(0)=0,$ and $r(T)=r'.$\n\\end{lemma}\n\\begin{IEEEproof}\nConsider the differential equation\n\\eqref{eqn:diffEQRshift}.\nBy standard results on differential equations\na unique solution exists on $[0,T')$ for some maximal $T'>0.$\nNote that $\\zf(t)$ and $\\zg(t)$ are component-wise decreasing in $t,$\nexcept for $\\zf_{\\Kf}(t)$ which is constant.\n\nIt follows that 
\n\\begin{align*}\n& \\frac{d}{dt} \\gS(\\zf_{\\Kf}+\\ashift(t);\\zg(t)) \\\\\n& =\n\\gS_x(\\zf_{\\Kf}+\\ashift(t);\\zg(t))\\cdot \\\\\n&\\qquad\\Bigl(1 - \\sum_{j=1}^{\\Kg}\n\\smthker(\\zf_{\\Kf}+\\ashift(t)-\\zg_j(t))\\, \\delhg_j\n\\frac{d}{dt} \\zg_j(t)\n\\Bigr)\n\\end{align*}\nwhich is strictly positive for all $t\\in [0,T')$ since $\\frac{d}{dt} \\zg_j(t) \\le 0.$\nIt follows that $\\ff(;\\zf(t)),\\fg(;\\zg(t))$ form a spatial wave solution with\nshift $\\ashift(t)$ for $(\\hf(;r(t)),\\hg)$ for all $t\\in [0,T')$\nand that $r(t)=\\gS(\\zf_{\\Kf} + \\ashift(t);\\zg(t)) - \\gS(\\zf_{\\Kf} + \\ashift(0);\\zg(0))$\nis monotonically increasing on $[0,T').$\n\nWe now show that $r(t) \\le r'$ implies that $Z(t)$ is bounded.\nAs a first step we show that $\\ashift(t)$ is bounded.\nIf $r \\le r'$ then there exists $(u,v)$ with $v < \\hf(u-)$ and $u<\\hg(v-).$\nIn particular if we take $u>r'$ and $v > \\fS(\\zg_{\\Zg}(0);\\zf(0)) = \\fS(\\zg_{\\Zg}(t);\\zf(t))$\nthen we obtain\na finite upper bound on $\\ashift(t)$ from Lemma \\ref{lem:shiftupperbound}.\n\nAssume there exists $t_i \\rightarrow T'' \\le T'$\nfrom below such that $f(;t_i) \\rightarrow f,$\n$g(;t_i) \\rightarrow g$ and $r(t_i) \\rightarrow r \\le r'.$\nFrom Theorem \\ref{thm:mainlimit}\nit follows that $(f(\\minfty),g(\\minfty)) \\in \\cross(\\hf(;r),\\hg)$\nand $\\altPhi(\\hf(;r),\\hg;g(\\minfty),f(\\minfty)) = 0.$\nBy the assumptions on\n$\\hg$ we see that the crossing point cannot be interior.\nThus, $(f(\\minfty),g(\\minfty)) = (0,0)$\nand we conclude that $\\zg(t)$ and $\\zf(t)$ are bounded.\nHence $T'' < T'.$\n\nFinally, an upper bound on $\\|\\zg(t)\\|$ yields a positive lower bound on\n$\\frac{d}{dt} r(t)$ so we conclude that\nthere exists $T < T'$ such that $r(T)=r'.$\n\\end{IEEEproof}\n\n\\begin{IEEEproof}[Proof of Theorem \\ref{thm:PCexist}]\nLet us first consider the case $A(\\hf,\\hg)= 0.$\nLet the target EXIT functions be $\\hf=\\hf(;\\uf)$ and $\\hg=\\hg(;\\ug).$\nLet $B_{\\hf} = \\int_0^1 \\hfinv(x) dx = \\sum_j \\delhf_{j} \\uf_{j},$\nand $B_{\\hg} = \\int_0^1 \\hginv(x) dx = \\sum_i \\delhg_{i} \\ug_{i}= 1-B_{\\hf}.$\nFor $t \\in [0,1]$ define the vector valued functions\n\\begin{align}\n\\begin{split}\n\\uf (t) & = (1-t) B_{\\hf}\\vec{1} + t \\uf \\\\\n\\ug (t) & = (1-t) B_{\\hg}\\vec{1} + t \\ug\\,,\\label{eqn:pcscale}\n\\end{split}\n\\end{align}\nwhere $\\vec{1}$ denotes a vector of all $1$s (of appropriate length).\nNote that we have $\\uf(1)=\\uf$ and $\\ug(1)=\\ug.$\nNote that $\\hf(;\\uf(t))$ and $\\hg(;\\ug(t))$ are in $\\exitfns$ for all $t.$\nNote that $\\int_0^1 \\hfinv(v;\\uf(t)) dv = B_{\\hf}$\nand $\\int_0^1 \\hginv(u;\\ug(t)) du = B_{\\hg}$\nso $A (\\hf(;\\uf(t)),\\hg(;\\ug(t))) = 0$ for all $t \\in [0,1].$\n\nLet $h \\in \\exitfns$ and let $(u,v)$ be in the graph of $h.$ Then\n\\begin{align*}\n&\\altPhi(h,\\hg(;\\ug(t));u,v) - \\altPhi(h,\\hg(;\\ug(1));u,v)\\\\\n= &\n\\int_0^u (\\hginv(z;\\ug(t))-\\hginv(z;\\ug(1)))\\text{d}z \\\\\n= &\n(1-t)(u B_{\\hg} - \\int_0^u \\hginv(z;\\ug(1)))\\text{d}z \\\\\n\\ge & 0\n\\end{align*}\nwhere the last inequality holds since we have equality at $u=0$ and $u=1$\nand $(u B_{\\hg} - \\int_0^u \\hginv(z;\\ug(1)))\\text{d}z$ is concave.\nThis implies that if $(h,\\hg(;\\ug(1)))$ satisfies the strictly positive gap condition\nwith $A(h,\\hg(;\\ug(1))) = 0$ then\n$(h,\\hg(;\\ug(t)))$ also satisfies the strictly positive gap condition\nwith $A(h,\\hg(;\\ug(t))) = 0$ for all $t \\in [0,1].$\n\nThe above argument shows that $(\\hf(;\\uf(1)),\\hg(;\\ug(t)))$\nsatisfies 
the strictly positive gap condition for all $t\\in [0,1].$\nApplying the argument analogously to $\\hf$ we see that\n$(\\hf(;\\uf(s)),\\hg(;\\ug(t)))$\nsatisfies the strictly positive gap condition for all $s,t\\in [0,1],$\nand, in particular, \n$(\\hf(;\\uf(t)),\\hg(;\\ug(t)))$\nsatisfies the strictly positive gap condition for all $t\\in [0,1].$\n\nAll that remains to apply Lemma \\ref{lem:PCexitcont} over $t \\in [0,1]$ and\nconclude the proof for the case $A(\\hf,\\hg)=0$\nis to find $\\zf(0)$ and $\\zg(0).$\nSet $\\zf(0) = 0$ so that\n$f(x;\\zf(0)) = \\,\\unitstep(x).$\nLet $y$ be the unique point such that $\\fS(y;\\zf(0)) = B_{\\hg}$ and\nset each component of $\\zg(0)$ to $y$ so that $\\fg(x;\\zg(0)) = \\unitstep(x-y).$\nIt follows that $\\fg(x;\\zg(0)) = \\hg(\\fS(x;\\zf(0));\\ug(0))$ and that\n$\\ff(x;\\zf(0)) = \\hf(\\gS(x;\\zg(0));\\uf(0)).$\n\nApplying Lemma \\ref{lem:PCexitcont} for $t \\in [0,1]$ we obtain\n$\\ff(;\\zf(t))$ and $\\fg(;\\zg(t))$ such that\n$\\fg(x;\\zg(t)) = \\hg(\\fS(x;\\zf(t));\\ug(t))$ and\n$\\ff(x;\\zf(t)) = \\hf(\\gS(x;\\zg(t));\\uf(t))$\ncompleting the proof when $A(\\hf,\\hg)=0.$\n\nWe now consider the case $A(\\hf,\\hg) \\neq 0.$\nWithout loss of generality we assume $A(\\hf,\\hg) > 0.$\nThe case $A(\\hf,\\hg) < 0$ is equivalent to the case $A(\\hf,\\hg) > 0$\nunder symmetry.\n\nLet us introduce a modification of $\\hf(;\\uf(t))$ as follows.\nFor $r \\in (0,1)$ define $\\uf(t;r)$ by $\\uf_i (t;r) = \\min \\{ \\uf_{i}(t), r \\}.$\nThen $\\int_0^u \\hfinv(x;\\uf(t;r)) dx$ is non-decreasing in $r$ for all $u \\in [0,1].$\nSince $\\int_0^1 \\hfinv(x) dx > 1 -\\int_0^1 \\hginv(x) dx$ and\n$\\int_0^1 \\hfinv(x;\\uf(t;0)) dx = 0$\nthere exists a unique positive $r_0 < \\uf_{\\Kf}$ such that\n$\\int_0^1 \\hginv(x;\\uf(t;r_0)) dx = 1-\\int_0^1 \\hginv(x) dx.$\n\nWe claim that for all $r \\in [r_0,\\uf_{\\Kf}]$ the pair\n$(\\hf(;r),\\hg)$ satisfies the strictly positive gap condition.\nTo establish the claim we need to show that\n$\\altPhi(\\hf(;r),\\hg) > A(\\hf(;r),\\hg )$ on $\\intcross(\\hf(;r),\\hg ).$\nLet $ (u,v) \\in \\intcrossing (\\hf(;r),\\hg ).$\nNote that this implies $u \\le r$ since $\\hg$ is continuous at $1$ \nby Lemma \\ref{lem:zocontinuity}.\nIf $u < r$ then $(u,v) \\in \\intcrossing (\\hf,\\hg)$\nand we have\n\\(\n\\altPhi(\\hf(;r),\\hg;u,v) =\n\\altPhi(\\hf,\\hg;u,v) > A(\\hf,\\hg) \\ge A(\\hf(;r),\\hg)\n\\,.\n\\)\nIf $u = r$ then we have $v \\ge \\hf(u-;r)$ and\n$\\hfinv(v';r) = v$ for all $v' \\in (v,1].$\nApplying \\eqref{eqn:altPhiderivatives} we have\n$\\altPhi(\\hf(;r),\\hg;u,v) = \\altPhi(\\hf(;r),\\hg;u,1)$\nand applying \\eqref{eqn:altPhiderivatives} and using continuity of\n$\\hg$ at $1,$ we have\n$\\altPhi(\\hf(;r),\\hg;u,1) > \\altPhi(\\hf(;r),\\hg;1,1).$\nHence \n$\\altPhi(\\hf(;r),\\hg;u,v) > A(\\hf(;r),\\hg )$ and the claim is established.\n\nApplying our result for the $\\ashift = 0$ case\nwe can find $f,g \\in \\sptfns$ that\nform a $(0,1)$-interpolating spatial fixed point pair for $(\\hf, \\hg(x;r_0)).$\nWe now apply Lemma \\ref{lem:PCshiftexist} in a series of stages\nincreasing $\\ashift$ (linearly $\\ashift = t$) and adjusting\n$\\zf$ and $\\zg$ so that\n$h^{\\fS,g}$ is held fixed while\n$h^{\\gSa,f}$ tracks $\\hg(x;r(t)).$\nThe process can be decomposed into stages where in each stage\n$u_{2,i-1} \\le r(t) \\le u_{2,i}$ for some $i,$\nand where in the final stage $i = \\Kf.$\nIn each stage $\\ashift(t)$ and $r(t)$ is increasing until,\nat the end of the stage, $r(t) = u_{2,i}.$\nDuring the stage where $r(t)$ increases 
to $u_{2,i}$ we have\n$u_{2,j}(t) = r(t)$ for all $j \ge i.$\nWe can, in principle, simplify the notation by collapsing these indices to a single value, i.e.,\nto assume that $i = \Kf.$\nIn this way we reduce each stage to the case $i= \Kf$\n(while admitting an initial condition $\ashift_0 \ge 0$).\nLemma \ref{lem:PCshiftexist} therefore completes the proof.\n\end{IEEEproof}\n\n\subsubsection{Convergence}\n\nIn the piecewise constant case with strictly positive averaging kernel \nwe can also show convergence to the solution constructed above for all initial conditions.\nDefine $f_\lambda$ by \n\[\nf^{-1}_\lambda = \lambda f^{t+1,-1} + (1-\lambda) f^{t,-1}\n\]\nand set\n$g_\lambda = \hg \circ f^\smthker_\lambda.$\nThen by applying \eqref{eqn:delgfrecursionInv} we obtain\n\begin{align*}\n&g^{t+1,-1}(u) - g^{t,-1}(u) \n\\\n=&\n\int_0^1 \int_0^1 M(\lambda,u,v)(f^{t+1,-1}(v) - f^{t,-1}(v)) \,dv\,d\lambda\n\end{align*}\nwhere\n\begin{align*}\nM(\lambda,u,v)=\frac{\smthker(g^{-1}_\lambda(u)-f^{-1}_\lambda(v))}{\int_0^1 \smthker(g^{-1}_\lambda(u)-f^{-1}_\lambda(v')) \,dv'}\,.\n\end{align*}\nSince $\smthker$ is strictly positive we have\n for each $\lambda$ and $u$ that $M(\lambda,u,v)>0$ and $\int_0^1 M(\lambda,u,v)\,dv = 1.$\nSo we obtain\n\begin{align*}\n\sup_u (g^{t+1,-1}(u) - g^{t,-1}(u)) &\le \sup_v (f^{t+1,-1}(v) - f^{t,-1}(v))\, ,\n\\\n\inf_u (g^{t+1,-1}(u) - g^{t,-1}(u)) &\ge \inf_v (f^{t+1,-1}(v) - f^{t,-1}(v))\,.\n\end{align*}\nIn the piecewise constant case the inverse functions are bounded and with strictly positive $\smthker$ we see that\nthe inequalities are strict unless $f^{t+1,-1}(v) - f^{t,-1}(v)$ is a constant.\nIt is easy to conclude in this case that $f^{t+1,-1}(v) - f^{t,-1}(v)$ converges in $t$ to a constant in $v.$\nFrom this it follows that $f^t$ converges to the solution given above (with suitable translation).\n\n\n\subsection{Limit Theorems \label{sec:limitthms}}\n\nOne of the main tools we use to extend the existence results for the piecewise constant\ncase to the general case is taking limits. In this section we develop the basic results needed.\n\nLet us recall the notation $g^{{\smthker},\ashift}(x) = g^{\smthker}(x+\ashift).$\nThen, we have\n\begin{align*}\n&g^{{\smthker},\ashift}(x)\n-\ng^{\smthker',\ashift'}(x)\n=\n\\ &\n\int_{-\infty}^\infty (g(y-\ashift)-g(y-\ashift')) \smthker(x-y) \,dy\n\\&+\n\int_{-\infty}^\infty g(y-\ashift') (\smthker(x-y)-\smthker'(x-y)) \,dy\n\end{align*}\nand hence the bound\n\begin{align}\n\begin{split}\label{eqn:diffbound}\n|g^{{\smthker},\ashift}(x)\n-\ng^{\smthker',\ashift'}(x)|\n\le\n|\ashift-\ashift'| \|\smthker\|_\infty\n+\n\|\smthker-\smthker'\|_1 \,.\n\end{split}\n\end{align}\n\n\begin{theorem}\label{thm:mainlimit}\nLet $f_i,g_i,\ashift_i,\smthker_i,\; i=1,2,3,\ldots$ be sequences where $(f_i,g_i) \in \sptfns^2$\nare $(0,1)$-interpolating, \n$\ashift_i \in \reals,$ and $\smthker_i$ are averaging kernels. 
\nAssume\n\\[\nf_i \\rightarrow f,\\;\ng_i \\rightarrow g,\\;\n\\ashift_i \\rightarrow \\ashift,\\;\\text{ and }\n\\smthker_i \\rightarrow \\smthker \\text{ (in $L_1$) },\n\\]\nwhere $|\\ashift| < \\infty$\nand $\\smthker$ is an averaging kernel.\n(Note that we do not assume $f$ and $g$ are interpolating or that $\\smthker$ is regular.)\nFurther assume\n\\[\nh_{[\\ff_i,\\gSiai_i]}\\rightarrow \\hf\n,\\quad\nh_{[\\fg_i,\\fSi_i]}\\rightarrow \\hg\n\\]\nfor some $\\hf,\\hg \\in \\exitfns$ respectively.\nThen we have the following\n\\begin{itemize}\n\\item[A.] \n\\begin{align*}\n\\ff \\veq \\hf\\circ\\gS\n\\text{ and }\n\\fg \\veq \\hg\\circ\\fS\n\\end{align*}\n\\item[B.]\n\\[\n(f(\\minfty),g(\\minfty)),(f(\\pinfty),g(\\pinfty)) \\in \\cross (\\hf,\\hg)\n\\]\n\\item[C.]\nIf $\\ashift=0$ then\n\\begin{align*}\n0 = &\\altPhi(\\hf,\\hg;g(\\minfty),f(\\minfty)) \\\\ \n = &\\altPhi(\\hf,\\hg;g(\\pinfty),f(\\pinfty)) \\,.\n\\end{align*}\nand, for all $x_1,x_2$\n\\[\n\\altPhi(\\hf,\\hg;\\fg(x_2+),\\ff(x_1+)) = \n\\altPhiSI (\\smthker;f,g;x_1,x_2).\n\\]\n\\item[D.]\nFor $(u,v)\\in\\{(\\ff(\\minfty),\\fg(\\minfty)),(\\ff(\\pinfty),\\fg(\\pinfty))\\}$\n\\[\n\\min\\{0,A(\\hf,\\hg)\\}\\le\\altPhi(\\hf,\\hg;u,v) \\le \\max\\{0,A(\\hf,\\hg)\\}.\n\\]\n\\end{itemize}\n\\end{theorem}\n\\begin{IEEEproof}\nSince $\\fg_i \\rightarrow \\fg,$ $\\smthker_i \\rightarrow \\smthker$\nand $\\ashift_i \\rightarrow \\ashift$ we have from \\eqref{eqn:diffbound}\nthat $\\fg^{{{\\smthker}_i},\\ashift_i}_i \\rightarrow \\fg^{{\\smthker},\\ashift}$ point-wise.\n\nIf $x$ is a point of continuity of $f$ then $f_i(x) \\rightarrow f(x)$ and we have\n$(g^{{\\smthker}_i}(x+\\ashift_i),f_i(x))\\rightarrow (g^{\\smthker}(x+\\ashift),f(x))$ which implies\n$f(x) \\veq \\hf( \\gSa(x)).$ \nSince $\\gS$ is continuous we can extend this to all $x$ by taking limits.\nThis shows part A.\n\nPart B follows from part A by Lemma \\ref{lem:FPequal}.\n\nNow we consider part C where we assume $\\ashift=0.$\nBy Lemma \\ref{lem:FPequal} we need only show that\n$\\altPhi(\\hf,\\hg;g(\\pinfty),f(\\pinfty))=0.$\nFor any $\\epsilon>0$ \nwe can find $L$ large enough so that $\\int_L^\\infty \\smthker_i(x) dx < \\epsilon$ for all $i$ since $\\smthker_i\\rightarrow\\smthker.$\nNow choose $z$ large enough so that \n$f^\\smthker(z-L),f(z-L) > f(\\minfty)-\\epsilon$\nand\n$g^{\\smthker}(z-L),g(z-L) > g(\\minfty)-\\epsilon.$ \nIt follows that \n$\\Delta_L g(z) < \\epsilon$ and $\\Delta_L f(z) < \\epsilon.$\nFor all $i$ large enough we have $\\Delta_L g_i^{\\smthker_i}(z) < 2\\epsilon$ and \n$\\Delta_L f_i^{\\smthker_i}(z) < 2\\epsilon.$\n\nBy Lemma \\ref{lem:transitionPhiBounds} this implies\n\\[\n \\altPhi(\\hf^i,\\hg^i; \\fg_i^{\\smthker_i}(z),\\ff_i^{\\smthker_i}(z)) < 11\\epsilon\n\\]\nIt follows from \\eqref{eqn:diffbound} that $\\ff_i^{\\smthker_i} \\rightarrow \\fS$ \nand $\\fg_i^{\\smthker_i} \\rightarrow \\gS$ \npoint-wise\nand since \n$h_{[\\fg_i,\\ff_i^{\\smthker_i}]} \\rightarrow \\hg$\nand\n$h_{[\\ff_i,\\fg_i^{\\smthker_i}]} \\rightarrow \\hf$\nwe have for all $x_1,x_2,$\n\\[\n\\begin{split}\n& \\altPhi(\\smthker_i;\\hf^i,\\hg^i; g^{\\smthker_i}_i(x_2),f^{\\smthker_i}_i(x_1))\n\\\\ &\n\\rightarrow\n\\altPhi(\\smthker;\\hf,\\hg;g^{\\smthker}(x_2),f^{\\smthker}(x_1))\\,.\n\\end{split}\n\\]\nWe now obtain\n\\[\n \\altPhi(\\hf,\\hg; g^{\\smthker}(z),f^{\\smthker}(z)) \\le 11\\epsilon\\,.\n\\]\nBy Lipschitz continuity of $\\altPhi$ we have\n\\[\n \\altPhi(\\hf,\\hg; g^{\\smthker}(\\pinfty),f^{\\smthker}(\\pinfty)) < 13\\epsilon\n\\]\nand since 
$\\epsilon$ is arbitrary we obtain\n\\[\n \\altPhi(\\hf,\\hg; g^{\\smthker}(\\pinfty),f^{\\smthker}(\\pinfty)) = 0\n\\]\nIt now follows from Lemma \\ref{lem:twofint} and part A that\n\\[\n\\altPhi(\\hf,\\hg;g(x_2+),f(x_1+)) = \\altPhiSI(\\smthker;\\ff,\\fg;x_1,x_2)\n\\]\nfor all $x_1,x_2.$\n\nFinally, we show part D. \nIf $\\ashift=0$ then part $C$ gives part $D.$\nBy choosing a subsequence if necessary, we can assume that\n\\(\nh_{[\\ff_i,\\gSi_i]}\n\\)\nconverges to some $\\thf \\in \\exitfns.$\n\nWe assume $\\ashift > 0,$ the case $\\ashift<0$ is analogous.\nSince $\\thfinv \\le \\hfinv$ almost everywhere we have\n\\(\n\\altPhi(\\thf,\\hg;u,v) - \\altPhi(\\hf,\\hg;u,v) =\n\\int_0^{v} (\\thfinv(x) -\\hfinv(x))dx \\le 0\\,.\n\\)\nFor $(u,v)\\in\\{(\\ff(\\minfty),\\fg(\\minfty)),(\\ff(\\pinfty),\\fg(\\pinfty))\\}$\nwe have $\\altPhi(\\thf,\\hg;u,v)=0$ \nby part C,\nand therefore $\\altPhi(\\hf,\\hg;u,v) \\ge 0.$\n\nNow \n\\(\nA(\\hf,\\hg)-A(\\thf,\\hg) =\n\\int_0^{1} (\\hfinv(x) -\\thfinv(x))dx\n\\ge \n \\int_0^{v} (\\hfinv(x) -\\thfinv(x))dx\n\\)\nand since $A(\\thf,\\hg)=0,$ we have\n$\\altPhi(\\hf,\\hg;u,v) \\le A(\\hf,\\hg)$ for\n$(u,v)\\in\\{(\\ff(\\minfty),\\fg(\\minfty)),(\\ff(\\pinfty),\\fg(\\pinfty))\\}.$\nThis completes the proof.\n\\end{IEEEproof}\n\nThe following result is largely a corollary of the above but it is\nmore convenient to apply.\n\n\\begin{lemma}\\label{lem:limitexist}\nLet $(\\hf,\\hg) \\in \\exitfns^2$ satisfy the strictly positive gap condition.\nIf there exists a sequence of $(0,1)$-interpolating $f_i,g_i \\in \\sptfns$ and bounded $\\ashift_i$ such that \n$(\\hf^i,\\hg^i) \\rightarrow (\\hf,\\hg)$\nand $\\smthker_i \\rightarrow \\smthker$ in $L_1,$\nwhere $\\hf^i \\equiv h_{[\\ff_i,\\fg^{\\smthker_i,\\ashift_i}_i]}$ and\n$\\hg^i \\equiv h_{[\\fg_i,\\ff^{\\smthker_i}_i]},$\nthen\nthere exists $(0,1)$-interpolating $\\ff,\\fg \\in \\sptfns$ and finite $\\ashift,$ all limits of some translated subsequence,\nsuch that $\\hf = h_{[\\ff,\\gSa]}$ and\n$\\hg = h_{[\\fg,\\fS]}.$\n\\end{lemma}\n\\begin{IEEEproof}\nSince $\\smthker_i \\rightarrow \\smthker$ in $L_1$ and\n $(\\hf^i,\\hg^i) \\rightarrow (\\hf,\\hg)$ we conclude from \nLemma \\ref{lem:stposbound} and Lemma \\ref{lem:shiftupperbound}\nthat $|\\ashift_i|$ is bounded.\n\nBy translating $\\ff$ and $\\fg$ as necessary, we can assume that $\\ff^{\\smthker_i}(0) = 1\/2$ for each $i.$\nTaking subsequences as necessary, we can now assume that $f_i \\rarrowi f,$\n$g_i \\rarrowi g,$ and $\\ashift_i \\rarrowi \\ashift,$ for some finite $\\ashift.$\n\nWe claim that $\\ff$ and $\\fg$ are $(0,1)$-interpolating.\nFor all $(u,v) \\in \\intcross(\\hf,\\hg)$ we have\n$\\altPhi(\\hf,\\hg;u,v) >\\max \\{0,A(\\hf,\\hg)\\}$\nby assumption.\nBy Theorem \\ref{thm:mainlimit} parts B and D we now have\n$(\\ff(\\minfty),\\fg(\\minfty)) \\in \\cross(\\hf,\\hg) \\backslash \\intcross(\\hf,\\hg) = \\{ (0,0),(1,1) \\}.$\nSince $\\fS(0) = \\frac{1}{2}$ we must have $(\\ff(\\minfty),\\fg(\\minfty)) = (0,0)$\nand $(\\ff(\\pinfty),\\fg(\\pinfty)) = (1,1),$\nproving the claim.\n\\end{IEEEproof}\n\n\\subsection{Existence of Consistent Spatial Waves}\n\nIn Section \\ref{sec:PCcase} we proved Theorem\n\\ref{thm:PCexist},\na special case of Theorem \\ref{thm:mainexist}\nin which $\\hg$ and $\\hf$ are piecewise constant\nfunctions and $\\smthker$ is $C_1$ and strictly positive.\nIn this section we show how to remove the special conditions \nto arrive at the general results.\nWe make repeated use of the limit theorems of Section \\ref{sec:limitthms} 
and develop\nsome approximations for functions in $\\exitfns.$\nIt is quite simple to approximate $h \\in \\exitfns$ using piecewise constant functions.\nThe challenge is to approximate a pair $(\\hg,\\hf)$ so that the strictly positive gap\ncondition is preserved.\n\n\\subsubsection{Approximation by Tilting}\n\nIn a manner analogous to \\eqref{eqn:pcscale} we define a perturbation of\n$\\hf,\\hg$ as $\\hf(;t),\\hg(;t)$ for $t \\in [0,1]$ by\n\\begin{align}\n\\begin{split}\n\\hfinv (v;t) & = (1-t) B_{\\hf} + t \\hfinv(v) \\\\\n\\hginv (u;t) & = (1-t) B_{\\hg} + t \\hginv(u)\\,,\\label{eqn:genscale}\n\\end{split}\n\\end{align}\nwhere we recall $B_h = \\int_0^1 h^{-1}(x) \\,dx.$\nThis can also be expressed as\n\\begin{align}\n\\begin{split}\n\\hf (u;t) & = \\hf\\Bigl(\\frac{u-B_{\\hf}}{t} + B_{\\hf} \\Bigr) \\\\\n\\hg (v;t) & = \\hg\\Bigl(\\frac{v-B_{\\hg}}{t} + B_{\\hg}\\Bigr) \\,,\\label{eqn:genscalefor}\n\\end{split}\n\\end{align}\nwith appropriate extension of $\\hf$ and $\\hg$ outside of $[0,1],$\ni.e., $\\hf(x)=\\hg(x)=0$ for $x<0$ and\n$\\hf(x)=\\hg(x)=1$ for $x>1.$
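\nTo see how \\eqref{eqn:genscalefor} follows from \\eqref{eqn:genscale}, note that (formally, wherever the inverses are defined) setting $u = \\hfinv(v;t)$ in \\eqref{eqn:genscale} gives\n\\begin{align*}\nu = (1-t) B_{\\hf} + t\\, \\hfinv(v)\n\\quad\\Longleftrightarrow\\quad\n\\hfinv(v) = \\frac{u-B_{\\hf}}{t} + B_{\\hf}\\,,\n\\end{align*}\nso that $\\hf(u;t) = v = \\hf\\Bigl(\\frac{u-B_{\\hf}}{t} + B_{\\hf}\\Bigr),$ and similarly for $\\hg;$\nthe extension outside of $[0,1]$ is needed since the argument may leave $[0,1]$ when $t<1.$\n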
\n\nLetting $h$ denote either $\\hf$ or $\\hg,$ we clearly have\n\\begin{align}\n\\int_0^1 h^{-1} (x;t) dx = B_h\\label{eqn:slanteq}\n\\end{align}\nfor all $t.$\nNote also that \n\\(\nh^{-1}(v;t) -h^{-1} (v) =\n(1-t)(B_h-h^{-1} (v))\n\\)\nis non-increasing in $v.$\nIt follows that\n\\(\n\\int_0^v h^{-1}(x;t) dx \\ge \n\\int_0^v h^{-1} (x) dx \n\\)\nfor all $v\\in[0,1]$\nand we obtain\n\\begin{align}\n\\altPhi(\\hf(;t),\\hg(;t);)\\ge \\altPhi(\\hf,\\hg;)\\label{eqn:altslbound}\n\\end{align}\nfor all $t\\in [0,1].$\n\n\\begin{lemma}\\label{lem:smoothcompress}\nLet $(\\hg,\\hf) \\in \\exitfns^2$ \nsatisfy the strictly positive gap condition.\nThen, there exists $\\epsilon > 0$ such that\n$(\\slanta{\\hg},\\slanta{\\hf})$ \nsatisfies the strictly positive gap condition\nfor any $t \\in (1-\\epsilon,1].$\n\\end{lemma}\n\\begin{IEEEproof}[Proof of Lemma \\ref{lem:smoothcompress}]\nFor the case $A(\\hf,\\hg)=0$ equation \\eqref{eqn:slanteq}, inequality \\eqref{eqn:altslbound} and\nLemma \\ref{lem:monotonic} give\nthe result immediately.\nBy symmetry we now need only consider the case $A(\\hf,\\hg) > 0.$\n\nBy Lemma \\ref{lem:Sstructure} and \\eqref{eqn:altslbound}\nit is sufficient to show that \n$\\intcross(\\hf(;t),\\hg(;t)) \\cap \\closure{S(\\hf,\\hg)}=\\emptyset$\nfor $t\\in[1-\\epsilon,1].$\n\nAlso by Lemma \\ref{lem:Sstructure}, there exists a\nminimal and positive element $(u^*,v^*) \\in \\intcross(\\hf,\\hg).$\nThere exists a neighborhood ${\\cal N}$ of $(0,0),$ which we take to be a subset of\n$[0,u^*)\\times [0,v^*),$\nin which\n$\\hginv(u;t) \\ge \\hginv(u)$ and \n$\\hfinv(v;t) \\ge \\hfinv(v).$ \nIt follows that \n$\\intcross(\\hf(;t),\\hg(;t)) \\cap {\\cal N} =\\emptyset$\nfor all $t.$\n\nLet $\\delta>0$ be small enough so that $\\neigh{(0,0)}{\\delta} \\subset {\\cal N}$\nand $\\neigh{(u^*,v^*)}{\\delta} \\cap \\closure{S(\\hf,\\hg)}=\\emptyset.$\nFor $\\epsilon$ small enough and $t\\in[1-\\epsilon,1]$ we have\n$\\cross(\\hf(;t),\\hg(;t))\\subset \\neigh{\\cross(\\hf,\\hg)}{\\delta}$\nby Lemma \\ref{lem:crosspointlimit}\nand it now follows that\n$\\intcross(\\hf(;t),\\hg(;t)) \\cap \\closure{S(\\hf,\\hg)}=\\emptyset.$\n\\end{IEEEproof}\n\n\\subsubsection{Piecewise Constant Approximation}\nGiven $h \\in \\exitfns$ let us define a sequence of piecewise constant approximations\n$Q_n(h),$ $n=1,2,...$ by\n\\begin{align*}\nQ_n(h) (x) &= \\sum_{j=1}^n \\frac{1}{n} \\,\\unitstep(x- u_{n,j}) \n\\end{align*}\nwhere we set\n\\begin{align*}\nu_{n,j} \n& = n\\int_{(j-1)\/n}^{j\/n}h^{-1}(v)dv\n\\end{align*}\nand we have\n\\begin{align*}\n\\int_0^1 Q_n(h) (x) dx &\n= \\sum_{j=1}^n \\frac{1-u_{n,j}}{n} \\\\ \n& = \\int_0^1 (1-h^{-1})(x) dx \\\\\n&= \\int_0^1 h (x) dx.\n\\end{align*}\nIt also follows that $\\int_0^z Q_n(h) (x) dx \\le \\int_0^z h (x) dx$ for all\n$z \\in [0,1].$
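\nAs a simple illustration (not needed in the sequel), take $h(x) = x,$ which is non-decreasing from $[0,1]$ onto $[0,1].$\nThen $h^{-1}(v) = v$ and\n\\begin{align*}\nu_{n,j} = n\\int_{(j-1)\/n}^{j\/n} v\\, dv = \\frac{2j-1}{2n}\\,,\n\\end{align*}\nso $Q_n(h)$ is the $n$-step staircase with jumps of height $\\frac{1}{n}$ at the midpoints $\\frac{2j-1}{2n},$ and indeed\n$\\int_0^1 Q_n(h)(x) dx = \\sum_{j=1}^n \\frac{1}{n}\\bigl(1-\\frac{2j-1}{2n}\\bigr) = \\frac{1}{2} = \\int_0^1 h(x) dx.$\n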
\n\n\\begin{lemma}\\label{lem:PCapprox}\nLet $(\\hg,\\hf)$ be a pair of functions in $\\exitfns$ satisfying the strictly positive gap condition \nsuch that for some $\\eta>0$ we have\n$\\hg (x) =\\hf(x)= 0$ for\n$x \\in [0,\\eta)$ and $\\hg (x) =\\hf(x)= 1$ for $x \\in (1-\\eta,1].$\nThen, for all $n$ sufficiently large\n$(Q_n(\\hg),Q_n(\\hf))$ satisfies the strictly positive gap condition.\n\\end{lemma}\n\\begin{IEEEproof}\nWe have\n\\(\nA(Q_n(\\hf),Q_n(\\hg)) =\nA(\\hf,\\hg)\n\\)\nand\n\\(\n\\altPhi(Q_n(\\hf),Q_n(\\hg);\\cdot,\\cdot) \\ge\n\\altPhi(\\hf,\\hg;\\cdot,\\cdot)\n\\)\nso it suffices to show that \n\\(\n\\intcrossing (Q_n(\\hf),Q_n(\\hg)) \\cap \\closure{S(\\hf,\\hg)}=\\emptyset.\n\\)\n\nSince $\\hg$ and $\\hf$ are $0$ on $[0,\\eta)$ and $1$ on $(1-\\eta,1]$ it follows that \n$\\intcrossing (\\hf,\\hg) \\subset [\\eta,1-\\eta]^2$ and\n$\\intcrossing (\\hf,\\hg)$ is closed and by Lemma \\ref{lem:Sstructure} it is disjoint from\n$\\closure{S(\\hf,\\hg)}.$\nThus, for $\\delta$ sufficiently small we have $\\neigh{\\intcrossing(\\hf,\\hg)}{\\delta}\\cap \\closure{S(\\hf,\\hg)}=\\emptyset.$\n\nBy Lemma \\ref{lem:crosspointlimit} we now have\n\\(\n\\intcrossing (Q_n(\\hf),Q_n(\\hg)) \\cap \n\\closure{S(\\hf,\\hg)}=\\emptyset\n\\)\nfor all $n$ sufficiently large.\n\\end{IEEEproof}\n\nWe are now ready to prove the main result of this section.\n\n\\begin{lemma}\\label{lem:weakexistence}\nLet $(\\hf,\\hg)$ satisfy the strictly positive gap condition.\nThen there exists $(0,1)$-interpolating $(\\ff,\\fg) \\in \\sptfns^2$ and $\\ashift$ such that\n$\\hf = h_{[\\ff,\\gSa]}$\nand\n$\\hg = h_{[\\fg,\\fS]}.$\n\\end{lemma}\n\\begin{IEEEproof}\nThe simplest case is already established in Theorem \\ref{thm:PCexist}\nand we first generalize to arbitrary $\\smthker.$\nAssume that $(\\hg,\\hf)$ are both piecewise constant.\nDefine $\\smthker_k = \\smthker \\otimes G_k$ where \n$G_k(x) = \\frac{k}{\\sqrt{2\\pi}} e^{- (kx)^2\/2}.$\nIt follows that $\\smthker_k \\rightarrow \\smthker$ in $L_1$\nand $\\| \\smthker_k \\|_\\infty \\le \\| \\smthker\\|_\\infty.$\nFor each $\\smthker_k$ we apply Theorem \\ref{thm:PCexist}\nto obtain piecewise constant $f_k,g_k \\in \\sptfns$ \n(with corresponding $z^{f_k},z^{g_k}$) and constants $\\ashift_k$\nsuch that $ h_{[\\fg_k,\\ff_k^{{\\smthker}_k}]} = \\hg$ and $h_{[\\ff_k,g_k^{{\\smthker}_k,\\ashift_k}]} = \\hf.$\nWe can now apply Lemma \\ref{lem:limitexist} to conclude\nthat the result holds for piecewise constant $\\hf,\\hg$ and general $\\smthker.$\n\nNext, assume that for some $\\eta>0$ we have $\\hf(x)=\\hg(x)=0$ for $x\\in [0,\\eta)$\nand $\\hf(x)=\\hg(x)=1$ for $x\\in (1-\\eta,1].$\nConsider $Q_n(\\hf)$ and $Q_n(\\hg).$ \nWe apply Lemma \\ref{lem:PCapprox} and the preceding case already established\nto conclude that for all $n$ sufficiently large\nthere exists (piecewise constant) $(0,1)$-interpolating $f_n,g_n \\in \\sptfns$ and finite constants $\\ashift_n$ such that\n$h_{[\\fg_n,\\fS_n]} = Q_n(\\hg)$ and $h_{[\\ff_n,\\fg_n^{{\\smthker},\\ashift_n}]}= Q_n(\\hf).$\nSince $Q_n(\\hg)$ and $Q_n(\\hf)$ converge to $\\hg$ and $\\hf$ respectively,\nwe can apply Lemma \\ref{lem:limitexist} to conclude that the result holds for this case.\n\nFor arbitrary $(\\hg,\\hf)$ we consider $(\\slanta{\\hg},\\slanta{\\hf}).$\n\nBy Lemma \\ref{lem:smoothcompress} we can find a sequence $t_i \\rightarrow 1$\nsuch that $(\\hg(;t_i),\\hf(;t_i))$ satisfies the strictly positive gap condition for each $i.$\nBy the preceding case, there exists $f_{t_i},g_{t_i} \\in \\sptfns$ and finite constants $\\ashift_i$ such that\n$h_{[\\fg_{t_i},\\fS_{t_i}]} = \\hg(;t_i)$ and $h_{[\\ff_{t_i},\\fg_{t_i}^{{\\smthker},\\ashift_i}]} = \\hf(;t_i).$\nBy taking a subsequence if necessary we can assume that $\\ashift_i \\rightarrow \\ashift.$\nSince $(\\hf(;t_i),\\hg(;t_i))\\rightarrow (\\hf,\\hg)$ we can apply\nLemma \\ref{lem:limitexist} to obtain $(\\ff,\\fg) \\in \\sptfns^2$\nsuch that $\\hf = h_{[\\ff,\\gSa]}$ and $\\hg = h_{[\\fg,\\fS]}.$\n\\end{IEEEproof}\n\\subsection{Existence of Spatial Wave Solutions}\n\nIn the preceding section we established the existence of consistent spatial waves\nunder general conditions. In this section we refine the results to obtain full\nspatial wave solutions and thereby complete the proof of Theorem \\ref{thm:mainexist}.\n\n\\subsubsection{Analysis of Consistent Spatial Waves}\n\nFor $h \\in \\exitfns$ we use $\\jump{h}$ to denote the set of discontinuity points of\n$h,$ i.e.,\n\\[\n\\jump{h} = \\{ u\\in[0,1]:h(u-) < h(u+)\\}\\,.\n\\]\nFor $\\fg \\in \\sptfns$ we use $\\flats{\\fg}$ to denote the set of maximal non-trivial closed intervals\non which $\\fg$ is constant.\n\n\\begin{lemma}\\label{lem:notequal}\nLet $\\ff,\\fg \\in \\sptfns$ and $\\hf \\in \\exitfns$ satisfy $\\ff \\veq \\hf\\circ\\fg.$\nIf $\\ff \\neq \\hf\\circ\\fg$ on a set of positive measure\nthen there exists $I \\in \\flats{\\fg}$ with $\\fg(I) \\in \\jump{\\hf}$\nsuch that $\\ff \\neq \\hf\\circ\\fg$ on a subset of $I$ of positive measure.\n\\end{lemma}\n\\begin{IEEEproof}\nSince $\\ff \\veq \\hf\\circ\\fg$ we have $\\ff(x) = \\hf(\\fg(x))$ whenever $\\hf$ is continuous at $\\fg(x),$\nand $\\jump{\\hf}$ is countable.\nHence, if $\\mu(\\{x:\\ff(x) \\neq \\hf(\\fg(x))\\}) >0$ (where $\\mu$ is Lebesgue measure)\nwe have\n$\\mu (\\{x:\\ff(x) \\neq \\hf(\\fg(x))\\}\\cap [\\fginv(v-),\\fginv(v+)]) >0$\nfor some $v\\in \\jump{\\hf}.$\nThen $I = [\\fginv(v-),\\fginv(v+)] \\in \\flats{\\fg}.$\n\\end{IEEEproof}\n\n\n\n\n\\begin{lemma}\\label{lem:pathology}\nLet $\\ff,\\fg \\in \\sptfns$ be $(0,1)$-interpolating.\nIf $h_{[\\ff,\\gSa]} = \\hf$ and\n$h_{[\\fg,\\fS]} = \\hg$ \nthen $\\ff = \\hf\\circ\\gSa$\nand $\\fg = \\hg\\circ\\fS$\nin any of the following scenarios.\n\\begin{itemize}\n\\item[A.] $\\hf$ and $\\hg$ are continuous.\n\\item[B.] $\\smthker$ is positive on all $\\reals.$\n\\item[C.] $\\smthker$ is regular,\n$\\ashift = 0,$\nand $(\\hf,\\hg)$ satisfies the strictly positive gap condition.\n\\item[D.] 
$\\smthker$ is regular, \n$(\\hf,\\hg)$ satisfies the strictly positive gap condition\nand\n$\\jump{\\hf} \\cap \\jump{\\hginv}=\\emptyset$\nand\n$\\jump{\\hg} \\cap \\jump{\\hfinv}=\\emptyset.$\n\\end{itemize}\n\\end{lemma}\n\\begin{IEEEproof}\nIf $\\hf$ is continuous then $\\jump{\\hf}=\\emptyset$ and,\nby Lemma \\ref{lem:notequal},\n$\\ff \\veq\\hf \\circ \\gSa$ then implies\n$\\ff = \\hf \\circ \\gSa$\nThus, case A is clear.\n\nIf $\\smthker(x)>0$ for all $x$ then $\\gS(x)$ is increasing (strictly) on $\\reals$\nand takes values in $(0,1).$\nHence $\\gSinv(v-)=\\gSinv(v+)$ for all $v\\in (0,1)$ and we see by\nLemma \\ref{lem:notequal} that\n$\\ff \\veq\\hf \\circ \\gSa$ then implies\n$\\ff = \\hf \\circ \\gSa.$\nThis shows case B.\n\nAssume $\\smthker$ is regular\nand $\\ashift=0.$ \nIf $\\gSx(x_1) = 0$ then\nLemma \\ref{lem:transitionPhiBounds} yields\n$\\altPhiSI(\\smthker,\\ff,\\fg;x_1,x_1) = 0$\nwhich implies\n$\\altPhi(\\hf,\\hg;\\ff(x_1),\\fg(x_1))=0.$\nBy Lemma \\ref{lem:monotonic} this violates the strictly positive gap condition if $\\gS(x_1) \\in (0,1)$\nso the condition implies that $\\gS$ is strictly increasing on $\\{x: 0 < \\gS(x) < 1 \\}.$\nSince $\\hf$ is continuous at $0$ and $1$ by Lemma\n\\ref{lem:zocontinuity}, part C now follows from Lemma \\ref{lem:notequal}.\n\nTo show part D \nassume $\\smthker$ is regular and that\n$(\\hf,\\hg)$ satisfies the strictly positive gap condition.\nAssume $\\ff \\not\\equiv \\hf \\circ\\gSa.$\nWe have $\\ff \\veq \\hf \\circ\\gSa$ so\nwe apply Lemma \\ref{lem:notequal} to obtain $I-\\ashift \\in\\flats{\\gSa}$ (so $I \\in \\flats{\\gS}$)\nsuch that \n$\\gSa(I-\\ashift) \\in \\jump{\\hf}$\nand such that $\\ff(x) \\neq \\hf(U)$ on a set of positive measure in\n$I-\\ashift.$\nLet us denote $\\gSa(I-\\ashift) =\\gS(I)$ by $U.$\nWe claim that $\\fS$ is not constant on $\\neigh{I}{W}$ and hence we have\n$U \\in \\jump{\\hginv}.$\n\nTo prove the claim assume $\\fS$ is a constant $V$ on $\\neigh{I}{W}.$ Then $\\ff(x)=V$ on\n $\\neigh{I}{2W}$ which,\nsince $|\\ashift|<2W$ by Corollary \\ref{cor:regshiftbound},\n gives $\\ff(x)=V$ on $\\neigh{I-\\ashift}{\\delta}$\nfor some $\\delta>0.$ \nThis, however, implies $\\hf(U)=V$ and $U \\not\\in \\jump{\\hf},$ which is a contradiction.\n\nHence $\\ff \\neq \\hf \\circ\\gSa$ implies $\\jump{\\hf} \\cap \\jump{\\hginv}\\neq\\emptyset.$\nSimilarly,\n$\\fg \\neq \\hg \\circ\\fS$ implies $\\jump{\\hg} \\cap \\jump{\\hfinv}\\neq\\emptyset.$\n\\end{IEEEproof}\n\n\n\n\n\\subsubsection{Proof of Theorem \\ref{thm:mainexist}}\n\nSince we assume that $\\smthker$ is regular \nLemma \\ref{lem:pathology} shows that Lemma \\ref{lem:weakexistence}\nimplies Theorem \\ref{thm:mainexist} except in the case\n$\\jump{\\hf}\\cap\\jump{\\hginv} \\neq \\emptyset$ or\n$\\jump{\\hg}\\cap\\jump{\\hfinv} \\neq \\emptyset.$\nIt turns out that this case can be handled by constructing $(\\hf^i,\\hg^i) \\rightarrow (\\hf,\\hg)$\nwith certain properties. 
The argument is lengthy and is relegated to appendix \\ref{app:B}.\n\n\n\n\n\n\n\\appendices\n\n\\section{Continuum Spatial Fixed Point Integration}\\label{app:A}\n\n\n\\begin{IEEEproof}[Proof of Lemma \\ref{lem:twofint}]\n\nWe assume that the smoothing kernel $\\smthker$ has finite total variation hence \n$\\|\\smthker\\|_\\infty < \\infty.$ \nFor any $\\ff \\in \\sptfns$ a simple calculation shows that $\\fS(x)-\\fS(y) \\le \\|\\smthker\\|_\\infty |x-y|.$\nThis implies that and $\\fS$ is Lipschitz continuous with Lipschitz constant $\\|\\smthker\\|_\\infty.$ \n\nThus, $\\gSx$ and $\\fSx,$ the derivatives of $\\gS$ and $\\fS,$ exist for almost all $x$ and $\\gS$ and $\\fS$ are absolutely continuous.\nIf $\\smthker(x-y)$ and $g(y)$ do not have in common any points of discontinuity in $y$,\nthen $\\gSx(x) = \\int_{-\\infty}^\\infty \\smthker(x-y)\\,dg (y)$ where the right hand side is\na Lebesgue-Stieltjes integral. The integral is well defined as long as $\\smthker(x-y)$ and $g(y)$ do not have\nany discontinuity points (in $y$) in common. The set of $x$ at which this can occur is countable.\nMore generally in the sequal we will have integrals in the form \n$\\int_{(a,b]} g(x) df(x).$\nThe integral is well defined as long as $g(x)$ and $f(y)$ do not have\nany discontinuity points in common.\nWe define the integral so that \n$\\int_{(a,b]} df(x) = f(b+)-f(a+).$\nThis holds even if $a \\ge b.$\n\nWe now have\n\\begin{align*}\n& \\int_{\\gS(\\minfty)}^{\\gS(x_2)} h_{[\\ff,\\gS]}(u) du \\; \n\\\\ & = \n\\; \\int_{-\\infty}^{x_2} f(x) \\gSx(x) dx\n\\\\& =\n\\int_{-\\infty}^{x_2} f(x) \n\\Bigl(\n\\int_{-\\infty}^\\infty \\smthker(x-y)\\,\ndg (y)\n\\Bigr)\ndx\\,.\n\\end{align*}\nSince, $\\int_{-\\infty}^{x_2} f(x)\\smthker(x-y)dx \\le \\fS(y)$ we see that the Fubini theorem can\nbe applied \nand we obtain\n\\begin{align*}\n& \\int_{\\gS(\\minfty)}^{\\gS(x_2)} h_{[\\ff,\\gS]}(u) du \\; \n\\\\ & = \n\\int_{-\\infty}^\\infty \n\\Bigl(\n\\int_{-\\infty}^{x_2} f(x) \n\\smthker(x-y)\\,\ndx\\,\n\\Bigr)\ndg (y)\n\\\\ & = \n\\int_{-\\infty}^\\infty \n\\Bigl(\n\\int_{-\\infty}^{x_2-y} f(x+y) \n\\smthker(x)\\,\ndx\\,\n\\Bigr)\ndg (y)\n\\\\ & =\n\\int_{-\\infty}^\\infty\n\\smthker(x)\n\\Bigl(\n\\int_{-\\infty}^{x_2-x} \nf(y+x) \ndg(y)\\,\\Bigr)\ndx\\,\n\\end{align*}\nwhere the inner integral is defined for almost all $x.$\nSimilarly,\n\\begin{align*}\n& \\int_{\\fS(\\minfty)}^{\\fS(x_1)} h_{[\\fg,\\fS]}(v) dv \\; \n\\\\& = \n\\int_{-\\infty}^\\infty\n\\smthker(x)\n\\Bigl(\n\\int_{-\\infty}^{x_1-x} \ng(y+x) \ndf(y)\\,\\Bigr)\ndx\\,\n\\end{align*}\nWe can now exploit the evenness of $\\smthker$ to replace $x$ with $-x$ in the above\nand $\\smthker(-x)$ with $\\smthker(x)$ to write\n\\begin{align*}\n& \\int_{\\fS(\\minfty)}^{\\fS(x_1)} h_{[\\fg,\\fS]}(v) dv \n+ \\int_{\\gS(\\minfty)}^{\\gS(x_2)} h_{[\\ff,\\gS]}(u) du \n\\\\&=\n\\int_{-\\infty}^\\infty\n\\smthker(x)\n\\Bigl(\n\\int_{-\\infty}^{x_2-x} \nf(y+x) \ndg(y)\n+\\int_{-\\infty}^{x_1+x} \ng(y-x) \ndf(y)\n\\,\\Bigr)\ndx\\,\n\\\\ \n&\n=\n\\int_{-\\infty}^\\infty\n\\smthker(x)\n\\Bigl(\n\\int_{-\\infty}^{x_2+x} \nf(y-x) \ndg(y)\n+\\int_{-\\infty}^{x_1-x} \ng(y+x) \ndf(y)\n\\,\\Bigr)\ndx\\,\n\\end{align*}\nConsider the second form, in which we have the expression\n\\begin{align*}\n& \\quad \\int_{-\\infty}^{x_2+x} f(y-x) dg(y)\n+\n\\int_{-\\infty}^{x_1-x} g(y+x) df(y) \\,.\n\\end{align*}\nWith a slight abuse of notation, we may write \n\\(\n\\int_{-\\infty}^{x_1-x} g(y+x) df(y) \\,\n\\)\nas\n\\(\n\\int_{-\\infty}^{x_1} g(y) df(y-x) \\,\n\\)\nand we see that 
for almost all $x$ we have\n\\begin{align*}\n&\\int_{-\\infty}^{x_2+x} \nf(y-x) \ndg(y)\n+\\int_{-\\infty}^{x_1-x} \ng(y+x) \ndf(y)\n\\\\&=\ng(x_1+) f(x_1-x) - g(\\minfty)f(\\minfty) +\n\\int_{(x_1,x_2+x]} f(y-x) dg(y)\\,\n\\\\&=\ng(x_1+) f(x_1-x) + \nf(x_2+)(g(x_2+x)-g(x_1+))\n\\\\\n&\\quad - f(\\minfty)g(\\minfty) -\n\\int_{(x_1,x_2+x]} (f(x_2+)-f(y-x)) dg(y)\\,.\n\\end{align*}\nThus, we obtain\n\\begin{align*}\n& \\int_{\\fS(\\minfty)}^{\\fS(x_1)} h_{[\\fg,\\fS]}(v) dv \n+ \\int_{\\gS(\\minfty)}^{\\gS(x_2)} h_{[\\ff,\\gS]}(u) du \n\\\\=&\ng(x_1+) \\fS(x_1) \n+f(x_2+)\\gS(x_2)\n\\\\&-f(x_2+)g(x_1+)\n- f(\\minfty)g(\\minfty) \n\\\\& -\n\\int_{-\\infty}^\\infty\n\\smthker(x)\n\\Bigl(\n\\int_{(x_1,x_2+x]} (f(x_2+)-f(y-x)) dg(y)\\,\n\\,\\Bigr)\ndx\\,.\n\\end{align*}\nThe same analysis applied to the first form gives\n\\begin{align*}\n& \\int_{\\fS(\\minfty)}^{\\fS(x_1)}h_{[\\fg,\\fS]}(v) dv \n+ \\int_{\\gS(\\minfty)}^{\\gS(x_2)} h_{[\\ff,\\gS]}(u) du \n\\\\= &\ng(x_1+) \\fS(x_1) \n+f(x_2+)\\gS(x_2)\n\\\\ & -f(x_2+)g(x_1+)\n- f(\\minfty)g(\\minfty) \n\\\\& -\n\\int_{-\\infty}^\\infty\n\\smthker(x)\n\\Bigl(\n\\int_{(x_2,x_1+x]} (g(x_1+)-g(y-x)) df(y)\\,\n\\,\\Bigr)\ndx\\,.\n\\end{align*}\nFinally, by combining the two forms and integrating from $0$ to $\\infty$ we obtain\n\\begin{align*}\n& \\int_{\\fS(\\minfty)}^{\\fS(x_1)} h_{[\\fg,\\fS]}(v) dv \n+ \\int_{\\gS(\\minfty)}^{\\gS(x_2)}h_{[\\ff,\\gS]}(u) du \n\\\\= &\ng(x_1+) \\fS(x_1) \n+f(x_2+)\\gS(x_2)\n\\\\&-f(x_2+)g(x_1+)\n- f(\\minfty)g(\\minfty) \n\\\\& -\n\\int_{0}^\\infty\n\\smthker(x)\n\\Bigl(\n\\int_{(x_2,x_1+x]} (g(x_1+)-g(y-x)) df(y)\\,\n\\\\ & \\qquad +\n\\int_{(x_1,x_2+x]} (f(x_2+)-f(y-x)) dg(y)\\,\n\\,\\Bigr)\ndx\\,.\n\\end{align*}\nNow note that\n\\begin{align*}\n&\\int_{\\fS(\\minfty)}^{\\fS(x_1)} h_{[\\fg,\\fS]}(v) dv \n \\\\ = & \\fS(x_1)\\fg(x_1+) - \\fS(\\minfty)\\fg(\\minfty)\n\\\\ & -\\int_{\\fg(\\minfty)}^{\\fg(x_1+)} h^{-1}_{[\\fg,\\fS]}(u) du\n\\end{align*} \nand, similarly,\n\\begin{align*}\n&\\int_{\\gS(\\minfty)}^{\\gS(x_2)} h_{[\\ff,\\gS]}(u) du \n \\\\ = & \\gS(x_2)\\ff(x_2+) - \\gS(\\minfty)\\ff(\\minfty)\n\\\\ & -\\int_{\\ff(\\minfty)}^{\\ff(x_2+)} h^{-1}_{[\\ff,\\gS]}(u) du\n\\end{align*} \nand use the fact that $\\fS(\\minfty)=\\ff(\\minfty)$ and\n$\\gS(\\minfty)=\\fg(\\minfty)$\nand the desired result follows.\n\\end{IEEEproof}\n\n\\subsection{Fixed Point Bounds on Potentials}\n\n\\begin{IEEEproof}[Proof of Lemma \\ref{lem:transitionPhiBounds}]\n\nAs a first bound use the evenness of $\\smthker$ to write\n\\begin{align*}\n\\fS(x) &= \\int_{-\\infty}^{\\infty} \\smthker(y)f(x+y)\\text{d}y\n\\\\\n&= \\int_{-\\infty}^{L} \\smthker(y)f(x+y)\\text{d}y\n+ \\int_{L}^{\\infty} \\smthker(y)f(x+y)\\text{d}y\n\\\\\n&\\le (1-e_L) f((x+L)-) + e_L\n\\\\\n&\\le f((x+L)-) + e_L (1-f((x+L)-))\n\\\\\n&\\le f(x) + \\Delta_L f(x) + e_L (1- f(x))\n\\end{align*}\nand\n\\begin{align*}\n\\fS(x) &= \\int_{-\\infty}^{\\infty} \\smthker(y)f(x+y)\\text{d}y\n\\\\\n&\\ge\n\\int_{-L}^{\\infty} \\smthker(y)f(x+y)\\text{d}y\n\\\\\n&\\ge (1-e_L) f((x-L)+)\n\\\\\n&\\ge f(x) - \\Delta_L f(x) - e_L f(x)\\,.\n\\end{align*}\n\nTo get a bound on\n$\\altPhiSI(\\smthker;f,g;x_1,x_1)$ we proceed similarly\n\\begin{align*}\n&\n\\int_{-\\infty}^\\infty \\smthker(x)\n\\Bigl(\n\\int_{(x_1,x_1+x]} (g(x_1)-g(y-x))df(y)\n\\Bigr) dx\n\\\\\n\\le&\n\\int_{-L}^L \\smthker(x)\n\\Bigl(\n\\Delta_L g(x_1) |f(x_1+x)-f(x_1)|\n\\Bigr)\n\\text{d}x\\\\\n&+\\int_L^\\infty\\smthker(x) ( f(x_1+x)-f(x_1-x)) dx\n\\\\\n\\le&\n\\int_{-L}^L \\smthker(x)\n\\Bigl(\n \\Delta_L 
g(x_1) \\Delta_L f(x_1)\n\\Bigr) dx +e_L\n\\\\\n\\le&\\\n \\Delta_L g(x_1) \\Delta_L f(x_1)\n+e_L\n\\end{align*}\n\nThus, we obtain the bound\n\\begin{align*}\n\\altPhi(\\fg(x),\\ff(x))\n&\\le\n\\Delta_L f(x)\\Delta_L g(x)\n+ e_L\n\\end{align*}\nand since, by \\eqref{eqn:altPhiderivatives},\n\\begin{align*}\n|\\altPhi(\\fg(x),\\fS(x))\n-\\altPhi(\\fg(x),f(x)) |\n&\\le\n|\\fS(x)-f(x)|\n\\\\\n&\\le\n\\Delta_L f(x) + e_L\n\\end{align*}\nthe other bounds follow easily.\n\\end{IEEEproof}\n\n\\section{Discrete Spatial Fixed Point Integration}\\label{app:Aa}\n\nIn this section we show how spatial integration can be done directly in the \nspatially discrete setting.\nFirst, an unusual point of notation.\nLet $\\dv{g}_i$ be defined for all integers $i.$\nWe will occasionally write a sum as\n\\(\n\\sum_{i\\in (a,b]} \\dv{g}_i\\,.\n\\)\nIn the case where $ab$ the sum is\n\\(\n-\\sum_{i\\in (b,a]} \\dv{g}_i= -\\sum_{i=b+1}^a \\dv{g}_i\\,.\n\\)\nNote that\n\\(\n\\sum_{i\\in (a,b]} (\\dv{g}_i-\\dv{g}_{i-1}) = \\dv{g}_b - \\dv{g}_a\n\\) \nfor all choices of $a$ and $b.$\nFurther, we can write\n\\begin{align*}\n&\\sum_{i=-\\infty}^a \\dv{g}_i + \\sum_{i=-\\infty}^b \\dv{f}_i\n\\\\ =&\n\\sum_{i=-\\infty}^a (\\dv{g}_i+\\dv{f}_i) + \\sum_{i\\in(a,b]} \\dv{f}_i\n\\end{align*}\nregardless of $a$ and $b.$\n\nThe spatially smoothed $\\dv{g}$ will be denoted $\\dv{\\gSdisc}$ and is defined by\n\\[\n\\dv{\\gSdisc}_i = \\sum_{j=-W}^W \\discsmthker_j \\dv{g}_{i-j}\n\\]\nwhere we require $\\discsmthker_j = \\discsmthker_{-j}$ and $\\discsmthker_j \\ge 0$ and $\\sum_j \\discsmthker_j =1.$\n\nFirst we note\n\\begin{align*}\n& \\sum_{i=-\\infty}^{k} (\\dv{f}_i+\\dv{f}_{i-1}) (\\dv{g}_{i+j} - \\dv{g}_{i+j-1} )\n\\\\\n&\\qquad+\\sum_{i=-\\infty}^{k}( \\dv{g}_{i+j}+\\dv{g}_{i+j-1}) (\\dv{f}_{i} - \\dv{f}_{i-1} )\n\\\\ &\n= \\sum_{i=-\\infty}^{k} 2 \\dv{f}_i \\dv{g}_{i+j} - 2 \\dv{f}_{i-1} \\dv{g}_{i+j-1} \n- \\dv{f}_i \\dv{g}_{i+j-1} \n\\\\\n& \\qquad+ \\dv{f}_{i} \\dv{g}_{i+j-1} \n+ \\dv{f}_{i-1} \\dv{g}_{i+j} - \\dv{f}_{i-1} \\dv{g}_{i+j} \n\\\\ &\n= 2 (\\dv{f}_{k} \\dv{g}_{k+j}-\\dv{f}_{-\\infty} \\dv{g}_{-\\infty})\n\\end{align*}\nand we obtain\n\\begin{align}\n\\begin{split}\\label{eqn:sumcutA}\n&\\sum_{i=-\\infty}^{i_2}( \\dv{f}_i + \\dv{f}_{i-1})(\\dv{g}_{i+j} - \\dv{g}_{i+j-1} )\n\\\\+& \\sum_{i=-\\infty}^{i_1}( \\dv{g}_i + \\dv{g}_{i-1}) (\\dv{f}_{i-j} - \\dv{f}_{i-j-1} )\n\\\\ \n=\\quad &\\qquad\n 2 (\\dv{f}_{i_1-j} \\dv{g}_{i_1}-\\dv{f}_{-\\infty} \\dv{g}_{-\\infty})\n\\\\+&\\sum_{i\\in (i_1-j,i_2]} (\\dv{f}_i+\\dv{f}_{i-1}) (\\dv{g}_{i+j} - \\dv{g}_{i+j-1} ) \n\\end{split}\n\\end{align}\n\n\nWe now can write\n\\begin{align*}\n& \\sum_{i=-\\infty}^{i_2} (\\dv{f}_i+\\dv{f}_{i-1}) (\\dv{\\gSdisc}_i - \\dv{\\gSdisc}_{i-1})\n\\\\+ &\\sum_{i=-\\infty}^{i_1} (\\dv{g}_i+\\dv{g}_{i-1}) (\\dv{\\fSdisc}_i - \\dv{\\fSdisc}_{i-1})\n\\\\ =\n\\sum_{j=-W}^W \\discsmthker_j \n \\Bigl( &\n \\sum_{i=-\\infty}^{i_2} (\\dv{f}_i+\\dv{f}_{i-1}) (\\dv{g}_{i+j} - \\dv{g}_{i+j-1})\n\\\\+& \\sum_{i=-\\infty}^{i_1} (\\dv{g}_i+\\dv{g}_{i-1}) (\\dv{f}_{i+j} - \\dv{f}_{i+j-1})\n\\Bigr)\n\\\\ =\n\\sum_{j=-W}^W \\discsmthker_j \n\\Bigl( &\n 2(\\dv{f}_{i_1-j} \\dv{g}_{i_1} - \\dv{f}_{-\\infty} \\dv{g}_{-\\infty} )\n\\\\+ &\n\\sum_{i\\in(i_1-j,i_2]} (\\dv{f}_i+\\dv{f}_{i-1}) (\\dv{g}_{i+j} - \\dv{g}_{i+j-1} ) \n\\Bigr)\n\\\\\n=\\qquad\n&\n2( \\dv{\\fSdisc_{i_1}} \\dv{g}_{i_1}\n- \\dv{f}_{-\\infty} \\dv{g}_{-\\infty})\n\\\\+\\sum_{j=-W}^W \\discsmthker_j &\\sum_{i\\in (i_1-j,i_2]} (\\dv{f}_i+\\dv{f}_{i-1}) (\\dv{g}_{i+j} - \\dv{g}_{i+j-1} ) 
\n\\\\\n=\\qquad\n&\n 2(\\dv{\\fSdisc_{i_1}} \\dv{g}_{i_1}\n- \\dv{f}_{-\\infty} \\dv{g}_{-\\infty})\n\\\\+&2 \\dv{f}_{i_2}(\\dv{\\gSdisc_{i_2}} - \\dv{g}_{i_1})\n\\\\ +\\sum_{j=-W}^W \\discsmthker_j &\\sum_{i\\in (i_1-j,i_2]} (\\dv{f}_i+\\dv{f}_{i-1}-2\\dv{f}_{i_2}) (\\dv{g}_{i+j} - \\dv{g}_{i+j-1} ) \n\\\\\n=\\qquad\n&\n 2 (\\dv{\\fSdisc_{i_1}} \\dv{g}_{i_1} \n+ \\dv{\\gSdisc_{i_2}} \\dv{f}_{i_2} - \\dv{f}_{i_2}\\dv{g}_{i_1}\n- \\dv{f}_{-\\infty} \\dv{g}_{-\\infty})\n\\\\+\\sum_{j=-W}^W \\discsmthker_j&\\sum_{i\\in (i_1-j,i_2]} (\\dv{f}_i+\\dv{f}_{i-1}-2\\dv{f}_{i_2}) (\\dv{g}_{i+j} - \\dv{g}_{i+j-1} ) \n\\end{align*}\n\nAnd finally we have the result\n\\begin{align*}\n&2(\\dv{\\fSdisc_{i_1}} \\dv{\\gSdisc_{i_2}} - \\dv{f}_{-\\infty} \\dv{g}_{-\\infty})\n\\\\-\n& \\Bigl(\\sum_{i=-\\infty}^{i_2} (\\dv{f}_i+\\dv{f}_{i-1}) (\\dv{\\gSdisc}_i - \\dv{\\gSdisc}_{i-1})\n\\\\&+ \\sum_{i=-\\infty}^{i_1} (\\dv{g}_i+\\dv{g}_{i-1}) (\\dv{\\fSdisc}_i - \\dv{\\fSdisc}_{i-1})\\Bigr)\n\\\\ =\\quad\n&\n2( \\dv{\\fSdisc_{i_1}} -\\dv{f}_{i_2}) \n( \\dv{\\gSdisc_{i_2}} - \\dv{g}_{i_1})\n\\\\&+\\sum_{j=-W}^W \\discsmthker_j\\sum_{i\\in (i_1-j,i_2]} (2\\dv{f}_{i_2}-\\dv{f}_{i}-\\dv{f}_{i-1}) (\\dv{g}_{i+j} - \\dv{g}_{i+j-1} ) \n\\end{align*}\n\nNow, if in place of \\eqref{eqn:sumcutA} we write,\n\\begin{align*}\n&\\sum_{i=-\\infty}^{i_2}( \\dv{f}_i + \\dv{f}_{i-1})(\\dv{g}_{i-j} - \\dv{g}_{i-j-1} )\n\\\\+& \\sum_{i=-\\infty}^{i_1}( \\dv{g}_i + \\dv{g}_{i-1}) (\\dv{f}_{i+j} - \\dv{f}_{i+j-1} )\n\\\\ \n=\\quad&\\qquad 2( \\dv{g}_{i_2-j} \\dv{f}_{i_2}-\\dv{f}_{-\\infty} \\dv{g}_{-\\infty}) \n\\\\+&\\sum_{i\\in (i_2-j,i_1]} (\\dv{g}_i+\\dv{g}_{i-1}) (\\dv{f}_{i+j} - \\dv{f}_{i+j-1} ) \n\\end{align*}\nand subsequently proceed similarly, then we obtain\n\\begin{align*}\n&2(\\dv{\\fSdisc_{i_1}} \\dv{\\gSdisc_{i_2}} - \\dv{f}_{-\\infty} \\dv{g}_{-\\infty})\n\\\\-\n& \\Bigl(\\sum_{i=-\\infty}^{i_2} (\\dv{f}_i+\\dv{f}_{i-1}) (\\dv{\\gSdisc}_i - \\dv{\\gSdisc}_{i-1})\n\\\\&+ \\sum_{i=-\\infty}^{i_1} (\\dv{g}_i+\\dv{g}_{i-1}) (\\dv{\\fSdisc}_i - \\dv{\\fSdisc}_{i-1})\\Bigr)\n\\\\ =\\quad\n&\n2( \\dv{\\fSdisc_{i_1}} -\\dv{f}_{i_2}) \n( \\dv{\\gSdisc_{i_2}} - \\dv{g}_{i_1})\n\\\\&+\\sum_{j=-W}^W \\discsmthker_j\\sum_{i\\in (i_2-j,i_1]} (2\\dv{g}_{i_1}-\\dv{g}_{i}-\\dv{g}_{i-1}) (\\dv{f}_{i+j} - \\dv{f}_{i+j-1} ) \n\\end{align*}\n\n\\subsection{Discrete-Continuum Relation}\n\nIn this section we obtain tighter bounds on the dependence on $\\Delta.$\n\\begin{lemma}\\label{lem:discreteFPsum}\nFor a spatially discrete fixed point for the regular ensemble\nwe have\n\\[\n|A(\\dv{f}_\\infty,\\dv{g}_\\infty) - A(\\dv{f}_{-\\infty},\\dv{g}_{-\\infty})| \\le \n\\frac{1}{2} (\\|\\hf''\\|_\\infty+\\|\\hg''\\|_\\infty)\\|\\smthker\\|_\\infty^2{\\Delta^2}\n\\]\n\\end{lemma}\n\\begin{IEEEproof}\nAs before, we associate to the discrete spatial index $i$ the real valued point $x_i = i\\Delta.$\nWe assume that $\\smthker$ is the piecewise constant extension of\n$\\discsmthker.$ Thus, \\eqref{eqn:kerdiscretetosmth} holds trivially.\n\nAssume a spatially discrete fixed point $\\ff,\\fg.$\nLet $\\tff$ and $\\tfg$ be the piecewise extensions of $\\ff$ and $\\fg.$\nWe can now relate the discrete spatial EXIT sum to the corresponding continuum integral\nto arrive at approximate fixed point conditions for spatially discrete fixed points.\n\nThe discrete sum\n\\begin{align*}\n&\\frac{1}{2}\\sum_{i=-\\infty}^i\n(\\dv{f}_i+\\dv{f}_{i-1}) (\\dv{g}_i^\\discsmthker-\\dv{g}_{i-1}^\\discsmthker)\n\\\\& =\n\\int_{-\\infty}^{x_i}\n\\dv{\\tff}(x) \\text{d} 
\\dv{\\tgS}(x)\n\\\\ & =\n\\int_{-\\infty}^{x_i}\nh_{[\\tff,\\tgS]}\n(\\dv{\\tgS}(x)) \\text{d} \\dv{\\tgS}(x)\n\\end{align*}\nand, similarly,\n\\begin{align*}\n&\\frac{1}{2}\\sum_{i=-\\infty}^i\n(\\dv{g}_i+\\dv{g}_{i-1}) (\\dv{f}_i^\\discsmthker-\\dv{f}_{i-1}^\\discsmthker)\n\\\\& =\n\\int_{-\\infty}^{x_i}\n\\dv{\\tfg}(x) \\text{d} \\dv{\\tfS}(x)\n\\\\\n& =\n\\int_{-\\infty}^{x_i}\nh_{[\\tfg,\\tfS]}\n(\\dv{\\tfS}(x)) \\text{d} \\dv{\\tfS}(x)\n\\end{align*}\nWe want to compare\n\\(\n\\int_{x_{i-1}}^{x_i}\nh_{[\\tff,\\tgS]}\n(\\dv{\\tgS}(x)) \\text{d} \\dv{\\tgS}(x)\n\\)\nto\n\\(\n\\int_{x_{i-1}}^{x_i}\n\\hf\n(\\dv{\\tgS}(x)) \\text{d} \\dv{\\tgS}(x)\\,.\n\\)\n\nWe have\n\\begin{align*}\n&\\int_{x_{i-1}}^{x_i}\nh_{[\\tff,\\tgS]}\n(\\dv{\\tgS}(x)) \\text{d} \\dv{\\tgS}(x)\n\\\\& =\n\\frac{1}{2}(\n\\hf(\\dv{g}_i^\\discsmthker)+\n\\hf(\\dv{g}^\\discsmthker_{i-1}))(\\dv{g}_i^\\discsmthker-\\dv{g}_{i-1}^\\discsmthker)\n\\\\ & = \n\\int_0^1 (\\alpha\n\\hf(\\dv{g}_i^\\discsmthker)+\n\\bar{\\alpha} \n\\hf(\\dv{g}^\\discsmthker_{i-1})) \\text{d}\\alpha \\,(\\dv{g}_i^\\discsmthker-\\dv{g}_{i-1}^\\discsmthker)\n\\end{align*}\nand\n\\begin{align*}\n&\\int_{x_{i-1}}^{x_i}\n\\hf\n(\\dv{\\tgS}(x)) \\text{d} \\dv{\\tgS}(x)\n\\\\& =\n\\int_0^1 \n\\hf(\\alpha \\dv{g}_i^\\discsmthker +\n\\bar{\\alpha} \\dv{g}^\\discsmthker_{i-1}) \\text{d}\\alpha \\,(\\dv{g}_i^\\discsmthker-\\dv{g}_{i-1}^\\discsmthker)\n\\end{align*}\nwhere $\\bar{\\alpha}$ denotes $1-\\alpha.$\n\n\nLet $\\dv{g}(\\alpha) = \\alpha \\dv{g}_i^\\discsmthker+ \\bar{\\alpha} \\dv{g}^\\discsmthker_{i-1}.$\nThen, assuming $\\hf$ is $C^2$ we have by a simple application of the remainder theorem\n\\begin{align*}\n& |\\alpha \\hf(\\dv{g}_i^\\discsmthker) + \\bar{\\alpha} \\hf(\\dv{g}_{i-1}^\\discsmthker)\n-\n\\hf (\\alpha \\dv{g}_i^\\discsmthker + \\bar{\\alpha} \\dv{g}_{i-1}^\\discsmthker)|\n\\\\\\le &\n\\frac{C_i}{2}(\\dv{g}_i^\\discsmthker-\\dv{g}_{i-1}^\\discsmthker)^2\n\\end{align*}\nwhere $C_i$ is the maximum of $|\\hf''(u)|$ for $u$ in $[\\dv{g}_{i-1}^\\discsmthker,\\dv{g}_{i}^\\discsmthker].$\n\nWe now have\n\\begin{align*}\n&\\Bigl|\\int_{x_{i-1}}^{x_i}\n\\bigl(h_{[\\tff,\\tgS]}\n(\\dv{\\tgS}(x)) \n-\n\\hf(\\dv{\\tgS}(x))\\bigr)\n\\text{d} \\dv{\\tgS}(x)\\Bigr|\n\\\\& \\le\n\\frac{C_i}{2}(\\dv{g}_i^\\discsmthker-\\dv{g}_{i-1}^\\discsmthker)^3\\,.\n\\end{align*}\nSince $\\sum_i (\\dv{g}_i^\\discsmthker-\\dv{g}_{i-1}^\\discsmthker) \\le 1$\nand\n\\[\n\\dv{g}_i^\\discsmthker-\\dv{g}_{i-1}^\\discsmthker\n\\le \\Delta \\|\\smthker\\|_\\infty\n\\]\nwe obtain\n\\begin{align*}\n&\\Bigl|\\int_{0}^{1}\n\\bigl(h_{[\\tff,\\tgS]}\n(\\dv{\\tgS}(x)) \n-\n\\hf(\\dv{\\tgS}(x))\\bigr)\n\\text{d} \\dv{\\tgS}(x)\\Bigr|\n\\\\& \\le\n\\frac{\\|\\hf''\\|_\\infty}{2} ( \\|\\smthker\\|_\\infty)^2 \\Delta^2\n\\end{align*}\n\nA similar argument applies to $\\hg$ and $h_{[\\tfg,\\tfS]},$\nand the Lemma follows.\n\\end{IEEEproof}\n\nNote that a general inequality can also be derived based on\n\\begin{align*}\n& |\\alpha \\hf(\\dv{g}_i^\\discsmthker) + \\bar{\\alpha} \\hf(\\dv{g}_{i-1}^\\discsmthker)\n-\n\\hf (\\alpha \\dv{g}_i^\\discsmthker + \\bar{\\alpha} \\dv{g}_{i-1}^\\discsmthker)|\n\\\\\\le &\n\\hf(\\dv{g}_i^\\discsmthker)- \\hf(\\dv{g}_{i-1}^\\discsmthker)\n\\end{align*}\nfrom which we obtain\n\\begin{align*}\n&\\Bigl|\\int_{x_{i-1}}^{x_i}\n\\bigl(h_{[\\tff,\\tgS]}\n(\\dv{\\tgS}(x)) \n-\n\\hf(\\dv{\\tgS}(x))\\bigr)\n\\text{d} \\dv{\\tgS}(x)\\Bigr|\n\\\\& \\le\n(\\hf(\\dv{g}_i^\\discsmthker)- 
\\hf(\\dv{g}_{i-1}^\\discsmthker))\n(\\dv{g}_i^\\discsmthker-\\dv{g}_{i-1}^\\discsmthker)\n\\\\&\\le\n(\\hf(\\dv{g}_i^\\discsmthker)- \\hf(\\dv{g}_{i-1}^\\discsmthker))\n\\Delta\\|\\omega\\|_\\infty\n\\end{align*}\nand for $k\\ge 0,$\n\\begin{align}\n\\begin{split}\\label{eqn:discIntegralbnd}\n&\\Bigl|\\int_{x_{i-k}}^{x_i}\n\\bigl(h_{[\\tff,\\tgS]}\n(\\dv{\\tgS}(x)) \n-\n\\hf(\\dv{\\tgS}(x))\\bigr)\n\\text{d} \\dv{\\tgS}(x)\\Bigr|\n\\\\&\\le\n(\\hf(\\dv{g}_i^\\discsmthker)- \\hf(\\dv{g}_{i-k}^\\discsmthker))\n\\Delta\\|\\omega\\|_\\infty\n\\end{split}\n\\end{align}\nThis inequality proves Theorem \\ref{thm:discreteFPDelta}.\n\n\\section{Existence of Travelling Wave Solution: Final Case}\\label{app:B}\n\nIn this section we prove Theorem \\ref{thm:mainexist} for the case where\n$\\jump{\\hf}\\cap \\jump{\\hginv} \\neq \\emptyset$ or\n$\\jump{\\hg}\\cap \\jump{\\hfinv} \\neq \\emptyset$\nand $\\ashift \\neq 0.$\nWithout loss of generality we assume $\\ashift>0.$\n\n\nGiven an interval $I$ let \n$\\upperx{I}$ denote its right end point and\nlet\n$\\lowerx{I}$ denote its left end point.\nFor two closed intervals $I,I'$ we say $I \\le I'$ \nif $\\upperx{I} \\le \\lowerx{I'}.$\nThe interval $I+x$ denotes the interval $I$ translated by $x.$ \nFor a non-empty interval $I$ and $\\epsilon>0$ by $\\neigh{I}{-\\epsilon}$ we mean\n$(\\lowerx{I}-\\epsilon,\\lowerx{I}+\\epsilon).$\n\n\\begin{lemma}\\label{lem:flatseparate}\nLet $I,I' \\in \\flats{\\fS}$ be distinct where $\\ff\\in \\sptfns$ \nand $\\smthker$ is regular.\nThen $I \\le I'$ implies $I+2W \\le I'.$\n\\end{lemma}\n\\begin{IEEEproof}\nSince $I$ and $I'$ are both maximal we have $\\fS(I) < \\fS(I').$\nSince $\\smthker$ is regular, we have $\\ff(x)=\\fS(I)$ for $x\\in\\neigh{I}{W}$ \nand $\\ff(x)=\\fS(I')$ for $x\\in\\neigh{I'}{W}.$\nIt follows that $\\neigh{I}{W}$ and $\\neigh{I'}{W}$ are disjoint.\\end{IEEEproof}\n\nGiven regular $\\smthker$ and shift $\\ashift >0$ \nwe say $I \\in \\flats{\\fS}$ is {\\em linked to}\n$I' \\in \\flats{\\gS}$ if $\\upperx{I}+W +\\ashift \\in I',$\nand\nwe say $I' \\in \\flats{\\gS}$ is {\\em linked to}\n$I'' \\in \\flats{\\fS}$ if $\\upperx{I'}+W \\in I''.$\nIf we have a sequence $I_1,I_2,\\ldots$\nsuch that $I_j$ is linked to $I_{j+1}$ then we call this a {\\em chain}.\nNote that all intervals in a chain in either $\\flats{\\fS}$\nor $\\flats{\\gS}$ must be distinct.\nThe chain {\\em terminates} if the last element in the chain is not linked to \nanother interval.\n\n\n\n\\begin{lemma}\\label{lem:linkterminate}\nLet $(\\hf,\\hg) \\in \\exitfns^2$ satisfy the strictly positive gap condition \nwith $A(\\hf,\\hg)>0.$\nLet $(f,g)\\in\\sptfns^2$ be $(0,1)$-interpolating and let $\\smthker$ be regular.\nAssume $h_{[\\ff,\\gSa]} \\equiv\\hf$ and $h_{[\\fg,\\fS]} \\equiv\\hg$ \n(hence $\\ashift>0$),\nthen any chain in $\\flats{\\fS},\\flats{\\gS}$ terminates.\n\\end{lemma}\n\\begin{IEEEproof}\nLet $(u^*,v^*)$ be the minimal element in $\\intcross(\\hf,\\hg)$\nas guaranteed by Lemma \\ref{lem:Sstructure}.\nThere exists finite $y$ such that $g(y) \\ge u^*$ and \n$f(y) \\ge v^*.$\nBy Lemma \\ref{lem:phiflat} if $z \\in I \\in \\{\\flats{\\gS} \\cup \\flats{\\fS}\\}$\nthen $\\altPhi(\\hf,\\hg;g(z),f(z)) \\in [0,A(\\hf,\\hg)]$ and we therefore have\n$(g(z),f(z)) < (u^*,v^*)$ componentwise by Lemma \\ref{lem:Sstructure}.\nThus, we obtain $z 0$ (we cannot have $u_k=0$ or $u_k=1$ since $\\hf$ is continuous at $0$ and $1$ by Lemma \\ref{lem:zocontinuity}. 
\nFor each $i=1,2,\\ldots$ we define sequences $\\eta_{i,k},$\n$k=1,2,\\ldots$ such that \n\\[\n0 < \\eta_{i,k} < \\frac{1}{2}\\min \\{ 3^{-ik},d_k \\}\n\\] \nand such that \n\\[\n\\{ u_k \\pm \\eta_{i,k} \\} \\cap \\jump{ \\hginv } = \\emptyset\\,.\n\\]\nNote that $2\\sum_k \\eta_{i,k} \\le \\frac{1}{2^i}.$\n\nFor each $k$ we define $H_k = \\unitstep_{r_k}$ (which is a unit step function\nexcept that we set $H_k(0) =r_k$)\nwhere $r_k= \\frac{\\hf(u_k)-\\hf(u_k-)}{\\hf(u_k+)-\\hf(u_k-)}.$\nThis function represents the jump in $\\hf$ at $u_k.$\nWe will substitute for this a function continuous at $0$:\n\\[\nS_{i,k}(x) = \\begin{cases}\n0 & x < 1- \\eta_{i,k}\\\\\n0 \\vee (x-r_k) \\wedge 1 & |x| \\le \\eta_{i,k} \\\\\n1 & x > 1+ \\eta_{i,k}\n\\end{cases}\n\\]\nwhere $0 \\vee z \\wedge 1 = \\min\\{ \\max \\{ 0,z\\} ,1\\}.$\nDefine\n\\[\n\\hf^i(x) = \\hf(x) - \\sum_k ( \\hf(u_k+)-\\hf(u_k-)) (H_k(x) - S_{i,k}(x))\\,.\n\\]\nNote that $\\sum_k ( \\hf(u_k+)-\\hf(u_k-)) \\le 1$ and $|H_k(x) - S_{i,k}(x)|\\le 1$\nso the sum is well defined.\nThe function $\\hf^i(x)$ can be expressed as the sum of two functions,\n\\[\nh_1(x)= \\hf(x) - \\sum_k ( \\hf(u_k+)-\\hf(u_k-)) H_k(x) \\,\n\\]\nand\n\\[\nh_{2,i}(x)= \\sum_k ( \\hf(u_k+)-\\hf(u_k-)) S_{i,k}(x)\\,,\n\\]\nboth of which are in $\\exitfns,$ i.e., both of which are non-decreasing.\nThe function $h_1$ is continuous for all $u \\in \\jump{\\hginv} \\cap \\jump{\\hf}$\nsince $H_k(u+)-H_k(u-)=1$ if $u=u_k$ and $H_k(u+)-H_k(u-)=0$ if $u \\neq u_k.$\nIf $u \\in \\jump{\\hginv} \\backslash \\jump{\\hf}$ then $\\hf$ is continuous at $u$ and\ntherefore $h_1$ is continuous at $u.$\nIf follows that $\\hf^i \\in \\exitfns$ and $\\hf^i \\xrightarrow{i\\rightarrow\\infty} \\hf.$\nWe assume a similar definition of $\\hg^i.$\n\nWe will now show that properties A through E hold for this sequence.\nEach property has two essentially equivalent forms (through the symmetry of substitution of $f$ and $g$). In each case we will show the first form.\n\nConsider part A. Let $v \\in \\jump{\\hfiinv}.$ There is non-empty interval\n$I =(x_1,x_2)$ such that $\\hf^i$ is evaluates to $v$ on $I.$\nSince both $h_1$ and $h_{2,i}$ are non-decreasing it follows that both are constant\non $I.$ From the fact that $h_{2,i}$ is constant on $I$ we easily obtain that\n$\\sum_k ( \\hf(u_k+)-\\hf(u_k-)) H_{k}(x)\\,$ is also constant on $I$ and \nwe deduce that $\\hf$ is constant on $I.$\nHence $v \\in \\jump{\\hfinv}$\nand part A is proved.\n\nConsider part B. We have $S_{i,k}(u+) - S_{i,k}(u-) =0$ unless $u = u_j \\pm \\eta_{j,k}$ for some $j.$\nBy construction, $u_j \\pm \\eta_{j,k} \\not\\in \\jump{\\hginv}.$\nHence, $h_{2,i}(u)$ is continuous at all $u \\in \\jump{\\hginv}.$\nSince $h_1$ is continuous, $\\hf^i$ is continuous at all $u\\in \\jump{\\hginv}$ and part B is proved.\n\nConsider part C. 
Let $u \\in\\jump{\\hginv}\\cap \\jump{\\hf}\\,,$ i.e., $u=u_j$ for some $j.$\nWe prove part C by showing that\n$H_k(u_j) - S_{i,k}(u_j) =0$ for all $k$ for all $i$ large enough.\nFor $k=j$ we have $H_k(u_j) - S_{i,k}(u_j) =0$ by definition.\nFor $k \\eta_{i,k}.$\nThis proves part C.\n\nConsider part D.\nAssume $v \\in \\jump{\\hfinv},$ then $\\hfinv(v-)<\\hfinv(v+).$\nSince $(\\hfinv(v-),\\hfinv(v+)) \\cap \\jump{\\hf} = \\emptyset$ and $\\eta_{i,k} < 2^{-i}$ it follows\nthat for $u \\in (\\hfinv(v-)+2^{-i},\\hfinv(v+)-2^{-i})$ we have\n$\\hf^i(u) = \\hf(u) = v$ and for \n$u \\not\\in (\\hfinv(v-)-2^{-i},\\hfinv(v+)+2^{-i})$\nwe have\n$\\hf^i(u) \\neq v.$\nPart D now follows.\n\nConsider part E.\nLet $v \\in \\jump{\\hfinv}$ and set $u=\\hfinv(v+).$\nIf $u \\in \\jump{\\hginv}\\cap \\jump{\\hf}$ then $u=u_k$ for some $k$ and\nproperty C implies property E.\nOtherwise, we have $u = t_k$ for some $k.$\nFor $j\\ge k$ we have $ |u_j-t_k| > \\eta_{i,k}$ for all $i.$\nFir $j < k$ we have $\\frac{2}{3^i} < \\min_{j0.$\nThen\n\\begin{align*}\n&\\altPhi(\\hf,\\hg;u,v)-\n\\altPhi(\\hf^\\delta,\\hg^\\delta;u,v)\\\\\n=&\n\\int_0^u (\\hginv(x)-{(\\hg^\\delta)}^{-1}(x)) dx\n+\n\\int_0^v (\\hfinv(x)-{(\\hf^\\delta)}^{-1}(x)) dx\n\\end{align*}\nwhich is non-negative and non-decreasing in $u$ and $v.$\nHence for $(u,v) \\in \\intcross(\\hf^\\delta,\\hg^\\delta)\\subset \\intcross(\\hf,\\hg)$ we have\n\\begin{align*}\n&\\altPhi(\\hf^\\delta,\\hg^\\delta;u,v) - A(\\hf^\\delta,\\hg^\\delta)\\\\\n\\ge&\n\\altPhi(\\hf,\\hg;u,v) - A(\\hf,\\hg)\\\\\n>&0\n\\end{align*}\nwhich establishes the claim. \nNow for each $\\delta>0$ we can choose $\\eta$ sufficiently small so that\n\\begin{align*}\n\\unitstep_0(x-\\eta) &\\wedge \\hf^\\delta \\\\\n\\unitstep_0(x-\\eta) &\\wedge \\hg^\\delta\n\\end{align*}\nhas a non-trivial crossing point, and it follows easily that the pair\nsatisfies the strictly positive gap condition with $A >0.$\n(The values of $\\altPhi$ at crossing points and $A$ increase by identical amounts.)\n\nLet us define $\\delta_j \\rightarrow 0$ and $\\eta_j \\rightarrow 0$ with \n$1-\\delta_j, \\eta_j \\not\\in \\jump{\\hginv} \\cup \\jump{\\hfinv}$ so that, for each $j,$\n\\begin{align*}\n\\hf^j = \\unitstep_0(x-\\eta_j) \\wedge \\hf(x) \\vee \\unitstep_1(x - (1-\\delta_j)) \\\\\n\\hg^j = \\unitstep_0(x-\\eta_j) \\wedge \\hg(x) \\vee \\unitstep_1(x - (1-\\delta_j)) \n\\end{align*}\nsatisfies the strictly positive gap condition with $A >0.$ Now, for each $i$ we define the sequence\n\\begin{align*}\n\\hf^{i,j}(x) &=\n\\unitstep_0(x-\\eta_j) \\wedge \\hf^i(x) \\vee \\unitstep_1(x - (1-\\delta_j)) \\\\\n\\hg^{i,j}(x) &=\n\\unitstep_0(x-\\eta_j) \\wedge \\hg^i(x) \\vee \\unitstep_1(x - (1-\\delta_j)) \\,.\n\\end{align*}\nThen we have\n\\begin{align*}\n\\hf^{i,j} &\\xrightarrow{\\i \\rightarrow \\infty} \\hf^j \\\\\n\\hg^{i,j} &\\xrightarrow{\\i \\rightarrow \\infty} \\hg^j \\,.\n\\end{align*}\nClearly $\\intcross(\\hf^j,\\hg^j) = \\intcross(\\hf,\\hg) \\cap [\\eta_j,1-\\delta_j]^2,$\nand it follows that, for each $j,$ $(\\hf^{i,j},\\hg^{i,j})$ satisfies the strictly positive gap condition\nwith $A(\\hf^{i,j},\\hg^{i,j}) >0$ for all $i$ large enough.\nProperties A and B still hold for all $i$ and $j.$\n\nFor each $j$ we can find $i(j)$ such that $(\\hf^{i,j},\\hg^{i,j})$\nsatisfies the strictly positive gap condition for all $i \\ge i(j).$\nWe can assume $i(j)$ is increasing in $j.$\nConsider the diagonal sequence \n$(\\hf^{i(j),j},\\hg^{i(j),j})\\, j=1,2,\\ldots.$\nLet us re-index this as 
\n$(\\hf^{i},\\hg^{i})\\, i=1,2,\\ldots$ with corresponding $\\delta_i,\\eta_i.$\nWe now show that properties C,D, and E continue to hold.\n\nProperty C holds since, by Lemma \\ref{lem:zocontinuity}, $u \\in \\jump{\\hf}$ implies $u \\in (0,1)$\nand $v \\in \\jump{\\hg}$ implies $v \\in (0,1).$\nNow we show property D.\nAssume $v \\in \\jump{\\hfinv}.$ If $v \\in (0,1)$ then\n$ [\\hfinv(v-),\\hfinv(v+)] \\subset (0,1)$ by Lemma \\ref{lem:zocontinuity} and property D clearly holds.\nIf $v=0$ then $\\hfinv(v-)=\\hfiinv(v-)=0$ and \n$\\hfinv(v+) < 1$ by Lemma \\ref{lem:zocontinuity}.\nSince $1-\\delta_i \\rightarrow 1$ we have $\\hfiinv(v+) \\rightarrow \\hfinv(v+).$ \nSimilarly, if $v=1$ then $\\hfinv(v+)=\\hfiinv(v+)=1$ and \n$\\hfinv(v-) >0$ and we have $\\hfiinv(v-) \\rightarrow \\hfinv(v-).$ \nThus, property D holds generally.\n\nFinally we consider property E.\nLet $v \\in \\jump{\\hfinv}$ and set $u=\\hfinv(v+).$\nThen $u>0$ and if $u<1$ then we clearly have\n$\\hf^i(u)=\\hf(u)$ for all $i$ large enough.\nIf $u=1$ then $\\hf(u)=1$\nand $\\hf^i(u)=1$ for all $i.$\nThus, property E holds.\n\\end{IEEEproof}\n\nFor $\\ff \\in \\sptfns$ we say that $\\ff$ is increasing to the right of $x$ if\n$z > x \\Rightarrow \\ff(z) > \\ff(x)$\nand\nwe say that $\\ff$ is increasing to the left of $x$ if\n$z < x \\Rightarrow \\ff(z) < \\ff(x).$\n\n\n\\begin{lemma}\\label{lem:rightincrease}\nLet $(\\hf,\\hg)$ satisfy the strictly positive gap condition with $A(\\hf,\\hg)>0$ and let $\\smthker$ be regular.\nLet $(\\ff,\\fg)$ be $(0,1)$-interpolating such that\n$\\ff \\veq \\hf \\circ \\gSa$ and \n$\\fg \\veq \\hg \\circ \\fS$ as guaranteed by Lemma \\ref{lem:weakexistence}.\nIf $I \\in \\flats{\\fS},$ \nthen $\\gSa$ is increasing to the right of $\\lowerx{I}-W.$\nIf $I' \\in \\flats{\\gS},$ \nthen $\\fS$ is increasing to the right of $\\lowerx{I'}-W.$\n\\end{lemma}\n\\begin{IEEEproof}\nAssume $I \\in \\flats{\\fS}.$ \nAssume $\\gSa$ is not increasing to the right of $\\lowerx{I}-W.$\nThen $\\lowerx{I}-W \\in \\hat{I}-\\ashift$ for some $\\hat{I} \\in \\flats{\\gS}$ with \n\\[\n\\lowerx{\\hat{I}-\\ashift} \\le \\lowerx{I}-W < \\upperx{\\hat{I}-\\ashift}.\n\\]\nIt follows that there exists $x \\in \\neigh{{I}}{W} \\cap (\\hat{I}-\\ashift)$ and,\nsince $\\smthker$ is regular, we have $\\ff(x) = \\fS({I}).$ \nHence we obtain $\\fS({I}) \\veq \\hf(\\gSa(\\hat{I}-\\ashift))$\nwhich is equivalent to $\\fS({I}) \\veq \\hf(\\gS(\\hat{I})).$\n\nSince $\\ashift<2W$ by Corollary \\ref{cor:regshiftbound},\nwe have\n\\[\n\\lowerx{\\hat{I}}-W \\le\n\\lowerx{I}+\\ashift-2W <\n\\lowerx{I} <\n \\upperx{\\hat{I}}+W\\,.\n\\]\nIt follows that there exists $x \\in \\neigh{\\hat{I}}{W} \\cap {I}$ and,\nsince $\\fg(x) = \\gS(\\hat{I}),$ \nwe obtain $\\gS(\\hat{I}) \\veq \\hg(\\fS({I})).$\n\nWe now have $(\\gS(\\hat{I'}),\\fS(I')) \\in \\cross( \\hf,\\hg),$ but since\n$\\altPhi(\\hf,\\hg;\\gSa(I),\\fS(I')) \\in [0,A]$ by Lemma \\ref{lem:phiflat}, this contradicts the strictly positive gap condition.\n\nThe argument for the second case is similar. 
\n\nAssume $I' \\in \\flats{\\gS}$ which is equivalent to\n$I'-\\ashift \\in \\flats{\\gSa}.$\nAssume $\\fS$ is not increasing to the right of $\\lowerx{I'}-W.$\nThen $\\lowerx{I'}-W \\in \\hat{I'}$ for some $\\hat{I'} \\in \\flats{\\fS}$ with \n\\[\n\\lowerx{\\hat{I'}} \\le \\lowerx{I'}-W < \\upperx{\\hat{I'}}.\n\\]\nIt follows that there exists $x \\in \\neigh{{I'}}{W} \\cap \\hat{I'}$ and,\nsince $\\smthker$ is regular, we have $\\fg(x) = \\gS({I'}).$ \nHence we obtain $\\gS({I'}) \\veq \\hg(\\fS(\\hat{I'})).$\n\nSince $\\ashift<2W$ by Corollary \\ref{cor:regshiftbound},\nwe have\n\\[\n\\lowerx{\\hat{I'}}-W \\le\n\\lowerx{I'}-2W <\n\\lowerx{I'}-\\ashift <\n\\lowerx{I'}< \\upperx{\\hat{I'}}+W\\,.\n\\]\nIt follows that there exists $x \\in \\neigh{\\hat{I'}}{W} \\cap {(I'-\\ashift)}$ and,\nsince $\\ff(x) = \\fS(\\hat{I'}),$ \nwe obtain $\\fS(\\hat{I'}) \\veq \\hf(\\gSa({I'}-\\ashift))$\nwhich is equivalent to $\\fS(\\hat{I'}) \\veq \\hf(\\gS({I'})).$\n\nThe rest of the argument is as before.\n\\end{IEEEproof}\n\n\\begin{lemma}\\label{lem:convprop}\nLet $(\\hf,\\hg) \\in \\exitfns^2$ satisfy the strictly positive gap condition \nwith $A(\\hf,\\hg) > 0$ and let $\\smthker$ be regular.\nAssume we have $\\ff,\\fg \\in \\sptfns$ such that\n$\\ff \\veq \\hf \\circ \\gSa$ and \n$\\fg \\veq \\hg \\circ \\fS$\nand sequences\n$\\ff_i \\rightarrow \\ff$ and $\\fg_i\\rightarrow \\fg$ and $\\ashift_i \\rightarrow \\ashift$ \nwhere\n$\\ff_i = \\hf^i \\circ \\gSai_i$ and \n$\\fg_i = \\hg^i \\circ \\fS_i$ for each $i,$\nand\n$(\\hf^i,\\hg^i) \\rightarrow (\\hf,\\hg)$ is given as in Lemma \\ref{lem:regularapprox}.\n\nThen $\\ashift>0$ and \nthe following properties hold for any $I\\in\\flats{\\gS}.$\n\\begin{itemize}\n\\item[A.]\nIf $I$ is not linked to an $I'\\in\\flats{\\fS}$\nthen for any $\\epsilon >0$ we have \n$\\gS_i(x) =\\gS(I)$ for $x\\in \\neigh{I}{-\\epsilon}$\nfor all $i$ large enough.\n\\item[B.]\nIf $I$ is linked to $I' \\in \\flats{\\fS}$ and\nfor any $\\delta>0$ we have\n$\\fS_i$ is a fixed constant, denoted $F,$ on $\\neigh{I'}{-\\delta}$ for all $i$ large enough,\nthen,\nfor any $\\epsilon >0$ we have \n$\\gS_i(x) =\\gS(I)$ for $x\\in \\neigh{I}{-\\epsilon}$\nfor all $i$ large enough.\n\\item[C.]\nAssume that for any $\\epsilon>0$ we have\n$\\gSai_i(x) = \\gS(I)$ for $x \\in \\neigh{I-\\ashift}{-\\epsilon}$ for all $i$ large enough.\nThen we have $\\ff(x) = \\hf(\\gSa(I-\\ashift))$ for all $x \\in (\\lowerx{I-\\ashift},\\upperx{I-\\ashift}).$\n\\end{itemize}\n\\end{lemma}\n\\begin{IEEEproof}\nConsider part A.\nLet $(\\hf^i,\\hg^i) \\rightarrow (\\hf,\\hg)$ be given as in Lemma \\ref{lem:regularapprox}.\nBy Lemma \\ref{lem:weakexistence} and Lemma \\ref{lem:pathology}\nthere exists $\\ff_i,\\fg_i$ such that\n$\\ff_i = \\hf^i \\circ \\gS_i$ and \n$\\fg_i = \\hg^i \\circ \\fS_i$ for each $i.$\nLet $\\ff$ and $\\fg$ be limits so that\n$\\ff \\veq \\hf \\circ \\gS$ and \n$\\fg \\veq \\hg \\circ \\fS$ as guaranteed by Lemma \\ref{lem:limitexist}.\n\nLet $I \\in \\flats{\\gS}.$\nWe have $\\fg(x)=\\gS(I)$ for all $x \\in \\neigh{I}{W}.$\nSince $\\fS$ is increasing to the right of $\\lowerx{I}-W$ by Lemma \\ref{lem:rightincrease}\nand increasing to the left of $\\upperx{I}+W$ by assumption (no linked interval),\nwe see by monotonicity of $\\hg$ that\n\\(\n\\hg(v) = \\gS(I)\n\\)\nfor all $v \\in (\\fS(\\lowerx{I}-W),\\fS(\\upperx{I}+W))$\nand $\\gS(I) \\in \\jump{\\hginv}.$\nGiven any $\\epsilon>0$ property D of Lemma \\ref{lem:regularapprox} now implies 
that\n\\(\n(\\fS(\\lowerx{I}-W+\\epsilon),\\fS(\\upperx{I}+W)-\\epsilon)\n\\subset\n[\\hgiinv(\\gS(I)-),\\hgiinv(\\gS(I)+)]\n\\)\nfor all $i$ large enough.\nWe conclude from this that $\\fg_i(x)=\\gS(I)$ for $x \\in \\neigh{I}{W-\\epsilon}$\nfor all $i$ large enough. \nThis implies that $\\gS_i(x) = \\gS(I)$ \nfor $x \\in \\neigh{I}{-\\epsilon}$\nfor all $i$ large enough, proving part A.\n\nConsider part B.\nLet $I$ be linked to $I' \\in \\flats{\\fS}.$\nWe have $\\gS(I) \\in \\jump{\\hginv}$ since\n $\\fS$ is increasing to the right of $\\lowerx{I}-W.$\nAs in the proof of part A we have\n\\(\n\\hg(v) = \\gS(I)\n\\)\nfor all $v \\in (\\fS(\\lowerx{I}-W),F)$\n\nWe have $F = \\hginv(\\gS(I)+)$ since $I$ is maximal.\nWe now apply property E of Lemma \\ref{lem:regularapprox} to conclude that $\\hg^i(F) = \\gS(I)$\nfor all $i$ large enough.\nGiven $\\epsilon>0$ we combine this\n with property D of Lemma \\ref{lem:regularapprox} to obtain\n\\(\n(\\fS(x_1-W+\\epsilon),F]\n\\subset\n[\\hgiinv(\\gS(I)-),\\hgiinv(\\gS(I)+)]\n\\)\nfor all $i$ large enough.\nLet $\\delta = \\epsilon,$ then for all $i$ large enough\nwe also have $\\fS_i(x) = F$ for $x \\in\\neigh{I'}{-\\epsilon}.$\nWe conclude that $\\fg_i(x)=\\gS(I)$ for $x \\in \\neigh{I}{W-\\epsilon}$\nfor all $i$ large enough. \nThis implies that $\\gS_i(x) = \\gS(I)$ \nfor $x \\in \\neigh{I}{-\\epsilon}$\nfor all $i$ large enough, proving part B.\n\nConsider part C. \nIf $\\hf$ is continuous at $\\gS(I)$ then we must have\n$f(x)=\\hf(\\gS(I))$ on $I-\\ashift.$\nAssume now that $\\gS(I) \\in \\jump{\\hf}.$\nWe have $\\gS(I) \\in \\jump{\\hginv}$ by \nLemma \\ref{lem:rightincrease} since $\\hf,\\hg$ satisfies the strictly\npositive gap condition.\nProperty $C$ of Lemma \\ref{lem:regularapprox} now gives $\\hf^i(\\gS(I))=\\hf(\\gS(I))$ \nfor all $i$ large enough.\nThis implies that for any $\\epsilon>0$ we now have\n$\\ff_i(x)=\\hf(\\gS(I))$ for all $x \\in \\neigh{I-\\ashift}{-\\epsilon}$\nfor all $i$ large enough.\nSince $\\ff_i \\rightarrow \\ff$ this proves part C.\n\\end{IEEEproof}\n\nLemma \\ref{lem:convprop} essentially completes the proof of Theorem \\ref{thm:mainexist} and we state the main\nresult as the following.\n\\begin{corollary}\nLet $(\\hf,\\hg)$ satisfy the strictly positive gap condition \nwith $A(\\hf,\\hg) > 0$ and let $\\smthker$ be regular.\nThere exists $(0,1)$-interpolating $\\ff,\\fg$ such that \n$\\ff = \\hf \\circ \\gS$ and \n$\\fg = \\hg \\circ \\fSa.$\n\\end{corollary}\n\\begin{IEEEproof}\nLet $(\\hf^i,\\hg^i) \\rightarrow (\\hf,\\hg)$ be given as in Lemma \\ref{lem:regularapprox}.\nBy Lemma \\ref{lem:weakexistence} and Lemma \\ref{lem:pathology}\nthere exists $\\ff_i,\\fg_i$ such that\n$\\ff_i = \\hf^i \\circ \\gS_i$ and \n$\\fg_i = \\hg^i \\circ \\fS_i$ for each $i.$\nLet $\\ff$ and $\\fg$ be limits so that\n$\\ff \\veq \\hf \\circ \\gSa$ and \n$\\fg \\veq \\hg \\circ \\fS$ as guaranteed by Lemma \\ref{lem:limitexist}.\n\nLemma \\ref{lem:linkterminate} states that any element in $I\\in \\flats{\\gS}$ must be part of a terminating\nchain.\nParts A and B of Lemma \\ref{lem:convprop} show that for any $\\epsilon > 0$ we have\n$\\gS_i(x)=\\gS(I)$ for all $x\\in \\neigh{I}{-\\epsilon}$ for all $i$ large enough.\nPart C of Lemma \\ref{lem:convprop} then shows that $\\ff(x) = \\hf(\\gSa(x))$ for all\n$x$ in the interior of $I.$\nLemma \\ref{lem:notequal} states that if $\\ff \\neq \\hf \\circ \\gSa$ then there exists\n$I\\in\\flats{\\gS}$ such that $\\ff(x) \\neq \\hf(\\gSa(x))$ on a subset of positive measure in 
$I-\\ashift.$\nHence, $\\ff \\equiv \\hf \\circ \\gSa.$\nA similar argument shows that $\\fg \\equiv \\hg \\circ \\fS.$\nWe can obtain equality by modifying $\\ff$ and $\\fg$ on a set of measure $0.$\n\\end{IEEEproof}\n\\section{Two Sided Termination with Positive Gap}\\label{app:C}\n\n\n\nIn this section we prove Theorems\n\\ref{thm:twoterminatedexist} and\n\\ref{thm:discretetwoterminatedexist}.\nThe two results have much in common and we begin with some\nconstructions that apply to both.\n\nWe assume that $\\smthker$ is regular and that\n$(\\hf,\\hg)$ satisfies the strictly positive gap condition with $A(\\hf,\\hg) < 0.$\n\n\n\n\n\nConsider first the parametric modification of $(\\hf,\\hg)$ given by\n\\begin{align}\n\\begin{split}\n\\hf(\\eta;u) &=(\\hf(u)-\\eta)^+\\,\\\\ \\label{eqn:FirstMod}\n\\hg(\\eta;v) &= (\\hg(v)-\\eta)^+ \\,.\n\\end{split}\n\\end{align}\n\nThe minimum of $\\altPhi(\\hf(\\eta;\\cdot),\\hg(\\eta;\\cdot);u,v)$ for $(u,v)\\in [0,1]\\times[0,1]$\nis achieved in $\\cross(\\hf(\\eta;\\cdot),\\hg(\\eta;\\cdot))\\backslash (1,1).$ \nLet $(u(\\eta),v(\\eta))$ denote the minimum point at which this minimum is achieved.\nThen $(u(\\eta),v(\\eta)) \\in \\cross(\\hf(\\eta;\\cdot),\\hg(\\eta;\\cdot))$\nand we have\n$\\altPhi(\\hf(\\eta;\\cdot),\\hg(\\eta;\\cdot);u(\\eta),v(\\eta)) \\rightarrow A(\\hf,\\hg)$ as $\\eta\\rightarrow 0.$\nIt follows that $(u(\\eta),v(\\eta)) \\rightarrow (1,1)$ as $\\eta \\rightarrow 0.$\nHence, given arbitrary $\\epsilon > 0$ we have for all $\\eta$ small enough that\n$u(\\eta),v(\\eta) > 1-\\epsilon$ and $\\altPhi(\\hf(\\eta;\\cdot),\\hg(\\eta;\\cdot);u(\\eta),v(\\eta)) < 0.$\nIf $\\hf(\\eta;u(\\eta)) > v(\\eta)$ then redefine $\\hf(\\eta;u(\\eta)) = v(\\eta)$\nand\nif $\\hg(\\eta; v(\\eta)) > u(\\eta)$ then redefine $\\hg(\\eta;v(\\eta)) = u(\\eta).$\n\nWe can now apply Theorem \\ref{thm:mainexist}\nto $(\\hf(\\eta;\\cdot),\\hg(\\eta;\\cdot))$ on $[0,u(\\eta)]\\times[0,v(\\eta)]$ to obtain\n ${\\tmplF},{\\tmplG} \\in \\Psi_{[-\\infty,\\infty]}$ \ninterpolating over $[0,u(\\eta)]\\times[0,v(\\eta)]$\nand $\\ashift < 0$ so that\nsetting $\\ff^t(x) = {\\tmplF}(x-\\ashift t)$ and\n$\\fg^t(x) = {\\tmplG}(x-\\ashift t)$ solves\n\\eqref{eqn:gfrecursion} for $(\\hf(\\eta;\\cdot),\\hg(\\eta;\\cdot)).$\nSince $\\hf$ and $\\hg$ are continuous at $0$\nwe have ${\\tmplF}(x) = 0$ on some maximal interval, which we may take to be $[-\\infty,0),$\nand ${\\tmplG}(x) = 0$ on some maximal interval $[-\\infty,x_g).$\n\nNote that we have\n\\begin{equation}\\label{eqn:ADFGbnd}\n\\hg(\\tmplF^{\\smthker}(x))\n\\ge\n\\tmplG(x) + \\eta \\unitstep_0 (x-x_g)\n\\end{equation}\nand\n\\begin{equation}\\label{eqn:ADGFbnd}\n\\hf(\\tmplG^{\\smthker}(x+\\ashift))\n\\ge\n\\tmplF(x) + \\eta \\unitstep_0 (x)\\,.\n\\end{equation}\n\nApplying Lemma \\ref{lem:shiftupperbound} together with Lemma \\ref{lem:stposbound} we can\nassert the existence of a bound $S<2W$ such that $-\\ashift \\le S$ for all $\\eta$ sufficiently small. 
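The following short numerical sketch is included only as an illustration of the truncation in \\eqref{eqn:FirstMod}; the particular functions and the value of $\\eta$ below are toy assumptions, and the minimizing point $(u(\\eta),v(\\eta))$ of $\\altPhi$ is taken as given rather than computed.\n\\begin{verbatim}\n# Hedged sketch of the truncation (h - eta)^+ used in (FirstMod); the\n# functions hf, hg and eta below are illustrative assumptions only.\nimport numpy as np\n\neta = 0.05\nhf = lambda u: np.clip(u, 0.0, 1.0) ** 2                  # toy choice\nhg = lambda v: 1.0 - (1.0 - np.clip(v, 0.0, 1.0)) ** 3    # toy choice\nhf_eta = lambda u: np.maximum(hf(u) - eta, 0.0)           # hf(eta; u)\nhg_eta = lambda v: np.maximum(hg(v) - eta, 0.0)           # hg(eta; v)\n\nu = np.linspace(0.0, 1.0, 1001)\n# The truncated pair never exceeds the original pair and stays monotone.\nassert np.all(hf_eta(u) <= hf(u)) and np.all(hg_eta(u) <= hg(u))\nassert np.all(np.diff(hf_eta(u)) >= 0) and np.all(np.diff(hg_eta(u)) >= 0)\n\\end{verbatim}\n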
\n\nWe assume $Z = Z(\\eta)$ large enough so that\n\\begin{align}\n{\\tmplF}(\\tfrac{1}{4}Z+\\ashift-\\Delta) & > v(\\eta)-\\frac{\\eta}{4},\\label{eqn:topF}\\\\\n{\\tmplG}(\\tfrac{1}{4}Z+\\ashift-\\Delta) & > u(\\eta)-\\frac{\\eta}{4} \\label{eqn:topG}\\,.\n\\end{align}\nIn the discrete case we will assume $Z = L\\Delta$ for an integer $L.$\n\nLet us define \n\\begin{equation}\\label{eqn:f0initial}\n\\ff^{0}(x) = \\tmplF(x+\\ashift) + \\eta \\unitstep_0(x+\\ashift)\n\\end{equation}\nfor $x \\le \\half Z$ and for $x > \\half Z$\ninitialize symmetrically using $\\ff^{0}(x) = \\ff^{0}(Z-x).$\nClearly this is symmetric about $\\half Z$ and we have\n$\\ff^0(x) \\le 1.$ \nFor $x \\in [\\tfrac{1}{4}Z,\\tfrac{1}{2}Z]$ we have\n${\\tmplF}(x+\\ashift) > v(\\eta)-\\frac{\\eta}{4}$ by \\eqref{eqn:topF}\nand for all $x$ we have ${\\tmplF}(x) \\le v(\\eta).$ This gives for all $x$ the bound\n\\begin{equation}\\label{eqn:f0initialbound}\n\\ff^{0}(x) \\ge \\tmplF(x+\\ashift) + \\tfrac{3}{4} \\eta \\unitstep_0(x+\\ashift)\n-\n\\unitstep_1(x-\\tfrac{3}{4}Z)\\,.\n\\end{equation}\n\n\n\\begin{IEEEproof}[Proof of Theorem \\ref{thm:twoterminatedexist}]\n\nWe assume $Z$ large enough so that\n\\begin{align}\n\\tfrac{3}{4}\\eta \\Omega(0)\n- \\Omega(-\\tfrac{1}{4}Z) & \\ge 0 \\label{eqn:lm15Obnd} \\\\\n\\tfrac{3}{4}\\eta \\Omega(-x_g+\\ashift)\n- \\Omega(-\\tfrac{1}{4}Z) & \\ge 0 \\label{eqn:lm15Obbnd} \n\\end{align}\n\nLet us initialize the system \\eqref{eqn:gfrecursion} with \n\\(\n\\ff^{0}(x) \n\\)\nas given in \\eqref{eqn:f0initial}.\nBy \\eqref{eqn:f0initialbound} we have\n\\[\n\\ff^{0,\\smthker}(x) \\ge \\tmplF^{\\smthker} (x+\\ashift) + \\tfrac{3}{4} \\eta \\Omega(x+\\ashift)\n- \\Omega(x-\\tfrac{3}{4}Z)\n\\] \nand for $x \\in [x_g-\\ashift,\\half Z]$ we have by \\eqref{eqn:lm15Obnd}\n\\[\n\\ff^{0,\\smthker}(x) \\ge \\tmplF^{\\smthker} (x+\\ashift) \\,.\n\\] \n\nConsider now \n\\(\ng^{0}(x) = \\hg(\\ftS{0}(x)).\n\\)\nWe have for $x\\in [x_g-\\ashift,\\half Z]$\n\\begin{align*}\ng^{0}(x) &= \\hg(\\ftS{0}(x)) \\\\\n& \\ge \\hg(\\tmplF^{\\smthker} (x+\\ashift)) \n\\\\ & \\stackrel{\\eqref{eqn:ADFGbnd}}{\\ge}\n\\tmplG(x+\\ashift)+\\eta\\unitstep_0(x-x_g+\\ashift)\n\\end{align*}\nand we observe that since the right hand side is $0$ for\n$x < x_g-\\ashift$ the inequality holds for all $x \\le \\half Z.$\n\nBy the same argument that gave \\eqref{eqn:f0initialbound}\nwe have for all $x$ the bound\n\\begin{align*}\ng^{0}(x) & \\ge\n\\tmplG(x+\\ashift)+\\tfrac{3}{4}\\eta\\unitstep_0(x-x_g+\\ashift) - \\unitstep_1(x-\\tfrac{3}{4}Z)\n\\end{align*}\nand we obtain\n\\begin{align*}\ng^{0,\\smthker}(x) \\ge \\tmplG^{\\smthker}(x+\\ashift) + \\tfrac{3}{4}\\eta \\Omega(x-x_g+\\ashift) - \\Omega(x-\\tfrac{3}{4}Z)\\,,\n\\end{align*}\nwhich, by \\eqref{eqn:lm15Obbnd}, gives\n$g^{0,\\smthker}(x) \\ge \\tmplG^{\\smthker}(x+\\ashift)$\nfor $x\\in [0,\\half Z].$\n\nNow, define $\\ff^1$ by\n$\\ff^1(x) = \\hf(\\gtS{0}(x))$ for $x\\in[0,Z]$ and\n$\\ff^1(x) = 0$ otherwise. 
For $x\\in[0,\\half Z]$ we have\n\\begin{align*}\nf^{1}(x) &= \\hf(\\gtS{0}(x)) \\\\\n& \\ge \\hf(\\tmplG^{\\smthker} (x+\\ashift)) \n\\\\ & \\stackrel{\\eqref{eqn:ADGFbnd}}{\\ge}\n\\tmplF(x)+\\eta\\unitstep_0(x)\n\\\\ & \\ge\nf^{0}(x)\n\\end{align*}\nThis implies the existence of a fixed point lower bounded by\n$f^0,g^0,$ which completes the proof.\n\\end{IEEEproof}\n\n\\begin{IEEEproof}[Proof of Theorem \\ref{thm:discretetwoterminatedexist}]\nThe proof is similar to the proof of Lemma \\ref{thm:twoterminatedexist}\nbut we require some stronger assumptions.\nFirst, we assume that $Z=L\\Delta$ for an integer $L.$\nIn addition we assume $\\eta$ small enough so that \n$\\altPhi(\\hf(\\eta;\\cdot),\\hg(\\eta,\\cdot);u(\\eta),v(\\eta))< -\\Delta\\|\\smthker\\|_\\infty.$\nTheorem \\ref{thm:mainexist} now implies\n$\\ashift <-\\Delta$ (actually we have $\\ashift < -\\Delta\/(u(\\eta) v(\\eta)$).\nFinally, we assume $Z$ large enough so that\n\\begin{align}\n\\tfrac{3}{4}\\eta \\Omega(0)\n- \\Omega(-\\tfrac{1}{4}Z+\\half \\Delta) & \\ge 0 \\label{eqn:lm16Obnd}\n\\\\\n\\tfrac{3}{4}\\eta \\Omega(-x_g+\\ashift-\\Delta)\n- \\Omega(-\\tfrac{1}{4}Z+ \\Delta) & \\ge 0 \\label{eqn:lm16Obbnd}\n\\end{align}\n\nLet us initialize the system \\eqref{eqn:discretegfrecursion} with \n\\(\n\\ff^{0}(x) \n\\)\nas given in \\eqref{eqn:f0initial}.\nBy \\eqref{eqn:f0initialbound} and Lemma \\ref{lem:disccontbnd} we have\n\\begin{align*}\n\\ff^{0,\\smthker}(x_i) \\ge &\\tmplF^{\\smthker} (x_i+\\ashift-\\half\\Delta) + \\tfrac{3}{4} \\eta \\Omega(x_i+\\ashift-\\half\\Delta)\\\\\n&- \\Omega(x_i-\\tfrac{3}{4}Z+\\half\\Delta)\n\\end{align*}\nand for $x_i \\in [x_g-\\ashift+\\half\\Delta,\\half Z]$ we have by \\eqref{eqn:lm16Obnd}\n\\[\n\\ff^{0,\\smthker}(x_i) \\ge \\tmplF^{\\smthker} (x_i+\\ashift-\\half\\Delta) \\,.\n\\] \n\nConsider now \n\\(\ng^{0}(x_i) = \\hg(\\ftS{0}(x_i)).\n\\)\nWe have for $x_i\\in [x_g-\\ashift+\\half\\Delta,\\half Z]$\n\\begin{align*}\ng^{0}(x_i) &= \\hg(\\ftS{0}(x_i)) \\\\\n& \\ge \\hg(\\tmplF^{\\smthker} (x_i+\\ashift-\\half\\Delta)) \n\\\\ & \\stackrel{\\eqref{eqn:ADFGbnd}}{\\ge}\n\\tmplG(x_i+\\ashift-\\half\\Delta)+\\eta\\unitstep_0(x_i-x_g+\\ashift-\\half\\Delta)\n\\end{align*}\nand we observe that since the right hand side is $0$ for\n$x_i < x_g-\\ashift+\\half\\Delta$ the inequality holds for all $x_i \\le \\half Z.$\n\nBy the same argument that gave \\eqref{eqn:f0initialbound}\nwe have for all $x$ the bound\n\\begin{align*}\ng^{0}(x_i) \\ge &\n\\tmplG(x_i+\\ashift-\\half\\Delta)+\\tfrac{3}{4}\\eta\\unitstep_0(x_i-x_g+\\ashift-\\half\\Delta)\n\\\\& - \\unitstep_1(x_i-\\tfrac{3}{4}Z)\n\\end{align*}\nand, applying Lemma \\ref{lem:disccontbnd}, we obtain\n\\begin{align*}\ng^{0,\\discsmthker}(x_i) \\ge& \\tmplG^{\\smthker}(x_i+\\ashift-\\Delta) + \\tfrac{3}{4}\\eta \\Omega(x_i-x_g+\\ashift-\\Delta)\n\\\\& - \\Omega(x_i-\\tfrac{3}{4}Z+\\Delta)\n\\end{align*}\nWhich, by \\eqref{eqn:lm16Obbnd}, gives\n$g^{0,\\smthker}(x_i) \\ge \\tmplG^{\\smthker}(x_i+\\ashift-\\Delta)$\nfor $x_i\\in [0,\\half Z].$\n\nNow, define $\\ff^1$ by\n$\\ff^1(x_i) = \\hf(\\gtS{0}(x_i))$ for $x_i\\in[0,Z]$ and\n$\\ff^1(x_i) = 0$ otherwise. 
For $x_i\\in[0,\\half Z]$ we have\n\\begin{align*}\nf^{1}(x_i) &= \\hf(\\gtS{0}(x_i)) \\\\\n& \\ge \\hf(\\tmplG^{\\smthker} (x_i+\\ashift-\\Delta)) \n\\\\ & \\stackrel{\\eqref{eqn:ADGFbnd}}{\\ge}\n\\tmplF(x_i-\\Delta)+\\eta\\unitstep_0(x_i-\\Delta)\n\\\\ & \\ge\nf^{0}(x_i)\n\\end{align*}\nwhere the last inequality uses $\\ashift <-\\Delta.$\nThis implies the existence of a fixed point lower bounded by\n$f^0,g^0,$ which completes the proof.\n\n\n\\end{IEEEproof}\n\n\n\\section{General Convergence Results}\\label{app:E}\n\nThe existence of interpolating wave solutions often implies \nglobal convergence of the spatially coupled system. The structure of $\\cross(\\hf,\\hg)$ can potentially be complicated\nenough to prevent direct application of the existence results for wave-like solutions.\nTypically, the existence results for spatial fixed points are easier to apply.\nOur technique to prove the general convergence results largely consists of modifying the\nsystem monotonically to elicit the existence of a fixed point solution for the modified system\nthat then initiates a monotonic sequence for the original system.\nMonotonicity implies convergence and necessary conditions on interpolating fixed points\nprovide the leverage to get the desired results.\nWe start with a Lemma that uses this approach in a canonical way and that we can\nthen use for the more complicated statements.\n\n\\begin{lemma}\\label{lem:liminfgap}\nLet $(\\hf,\\hg)$ be given with $A(\\hf,\\hg)<0$ and \n$\\altPhi(\\hf,\\hg;u,v) > A(\\hf,\\hg)$ for $(u,v) \\neq (1,1).$\nConsider the spatially continuous system \\eqref{eqn:gfrecursion}.\nIf $\\ff^0 \\in \\sptfns$ \nsatisfies $\\ff^0(\\pinfty) = 1$ then for all $x\\in\\reals$ we have\n\\[\n\\lim_{t\\rightarrow \\infty} \\ff^t(x) =1\\,,\\quad\n\\lim_{t\\rightarrow \\infty} \\fg^t(x) =1\n\\]\n\\end{lemma}\n\\begin{IEEEproof}\nConsider first the parametric modification of $(\\hf,\\hg)$ given \nin \\eqref{eqn:FirstMod}, i.e.,\n\\begin{align*}\n\\hf(\\eta;u) &=(\\hf(u)-\\eta)^+\\,\\\\\n\\hg(\\eta;v) &= (\\hg(v)-\\eta)^+ \\,.\n\\end{align*}\n\nAs before, $(u(\\eta),v(\\eta)) \\in \\cross(\\hf(\\eta;u),\\hg(\\eta;\\cdot))$ is the minimum point\nwhere $\\altPhi(\\hf(\\eta;u),\\hg(\\eta,\\cdot);u,v)$ achieves its minimum\nin $[0,1]\\times[0,1]$\nand $(u(\\eta),v(\\eta)) \\rightarrow (1,1)$ as $\\eta \\rightarrow 0.$\nGiven arbitrary $\\epsilon > 0$ we choose $\\eta$ small enough so that\n$u(\\eta),v(\\eta) > 1-\\epsilon$ and $\\altPhi(\\hf(\\eta;\\cdot),\\hg(\\eta;\\cdot);u(\\eta),v(\\eta)) < 0.$\nIf $\\hf(\\eta;u(\\eta)) > v(\\eta)$ then redefine $\\hf(\\eta;u(\\eta)) = v(\\eta)$\nand\nif $\\hg(\\eta; v(\\eta)) > u(\\eta)$ then redefine $\\hg(\\eta;v(\\eta)) = u(\\eta).$\n\nWe are interested in the modified pair $\\hf(\\eta;\\cdot),\\hg(\\eta;\\cdot)$ restricted to $[0,u(\\eta)] \\times [0,v(\\eta)].$\nNow consider a further parametric modification\n\\begin{align*}\n\\hf(\\eta',\\eta;u) &=v(\\eta)\\unitstep_1(u-(u(\\eta)-\\eta'))\\vee \\hf(\\eta;u)\\,\\\\\n\\hg(\\eta',\\eta;v) &=u(\\eta)\\unitstep_1(v-(v(\\eta)-\\eta'))\\vee \\hg(\\eta;v) \\,.\n\\end{align*}\nSince $\\hf(\\eta;u)$ is continuous at $u(\\eta)$ and\n$\\hg(\\eta;u)$ is continuous at $v(\\eta)$\n(by the minimization of $\\altPhi(\\hf(\\eta;\\cdot),\\hg(\\eta;\\cdot);\\cdot,\\cdot)$)\nwe can assume $\\eta' >0$ sufficiently small so that\n\\begin{align*}\n\\hf(u) &\\ge\\hf(\\eta',\\eta;u) \\,\\\\\n\\hg(v) &\\ge\\hg(\\eta',\\eta;v) \\,.\n\\end{align*}\n\nBy construction 
\n$\\altPhi(\\hf(\\eta',\\eta;\\cdot),\\hg(\\eta',\\eta;\\cdot);u,v )$ is uniquely minimized on\n $[0,u(\\eta)] \\times [0,v(\\eta)]$\nat $(u,v) = (u(\\eta),v(\\eta))$ and there exists $\\delta>0$ such\nthat\n\\(\n\\altPhi(\\hf(\\eta',\\eta;\\cdot),\\hg(\\eta',\\eta;\\cdot);u,v ) >\n\\altPhi(\\hf(\\eta',\\eta;\\cdot),\\hg(\\eta',\\eta;\\cdot);u(\\eta),v(\\eta))+\\delta\n\\)\nfor all $(u,v)\\in \\cross (\\hf(\\eta',\\eta;\\cdot),\\hg(\\eta',\\eta;\\cdot))\\backslash (u(\\eta),v(\\eta)).$\n\nNow we make one further parametric modification, further reducing the functions,\n\\begin{align*}\n\\hf(z,\\eta',\\eta;u) &=\\unitstep_1(u-z)\\wedge \\hf(\\eta',\\eta;u)\\,\\\\\n\\hg(z,\\eta',\\eta;v) &=\\unitstep_1(v-z)\\wedge \\hg(\\eta',\\eta;v) \\,,\n\\end{align*}\nwhere we choose $z>0$ so that \n$(\\hf(z,\\eta',\\eta;u),\\hg(z,\\eta',\\eta;v))$ satisfies the strictly positive gap condition\non $[0,u(\\eta)] \\times [0,v(\\eta)].$\nTo do this we choose $z$ so that\n\\(\n\\altPhi(\\hf(z,\\eta',\\eta;\\cdot),\\hg(z,\\eta',\\eta;\\cdot);u(\\eta),v(\\eta)) = - \\delta\/2\\,.\n\\)\n\nBy Theorem \\ref{thm:mainexist} there exists $\\tmplF,\\tmplG \\in \\sptfns$ \ninterpolating over $[0,u(\\eta)] \\times [0,v(\\eta)]$ and $\\ashift \\le -(\\delta\/2)\/\\|\\omega\\|_\\infty$\nsuch that $f^t(x)=\\tmplF(x-\\ashift t)$ and\n$g^t(x)=\\tmplG(x-\\ashift t)$ solves \\eqref{eqn:gfrecursion} for\nthe pair $\\hf(z,\\eta',\\eta;\\cdot),\\hg(z,\\eta',\\eta;\\cdot).$\n\nSince $\\tmplF(\\pinfty)=v(\\eta)<1$ and $\\tmplF(x)=0$ for some finite $x,$ we see that \nfor any $f^0 \\in \\sptfns$ with $f^0(\\pinfty) = 1$ we can find a $y$\nsuch that $f^0(x) \\ge \\tmplF(x-y)$ for all $x.$\nIt now follows that under \\eqref{eqn:gfrecursion} for the original pair $(\\hf,\\hg)$ we have $\\liminf_{t\\rightarrow\\infty} f^t(x)\\ge v(\\eta)\\ge 1-\\epsilon$\nand\n$\\liminf_{t\\rightarrow\\infty} g^t(x)\\ge u(\\eta)\\ge 1-\\epsilon$\nfor all $x.$\n\nSince $\\epsilon$ is arbitrary the proof is complete.\n\\end{IEEEproof}\n\nThe above proof can be easily adapted to the spatially discrete case.\n\\begin{lemma}\\label{lem:discreteliminfgap}\nLet $(\\hf,\\hg)$ be given with $A(\\hf,\\hg)<0$ and \n$\\altPhi(\\hf,\\hg;u,v) > A(\\hf,\\hg)$ for $(u,v) \\neq (1,1).$\nConsider the spatially discrete system \\eqref{eqn:discretegfrecursion}.\nFor any $\\epsilon>0,$ if $\\Delta$ is sufficiently small then\nfor all $x\\in\\reals$ we have\n\\[\n\\liminf_{t\\rightarrow \\infty} \\ff^t(x) \\ge 1-\\epsilon\\,,\\quad\n\\liminf_{t\\rightarrow \\infty} \\fg^t(x) \\ge 1-\\epsilon\n\\]\nfor any $\\ff^0 \\in \\sptfns$ satisfying $\\ff^0(\\pinfty) = 1.$\n\\end{lemma}\n\\begin{IEEEproof}\nWe use the construction from the proof of Lemma \\ref{lem:liminfgap}\nand recall\nthe existence of $\\tmplF,\\tmplG \\in \\sptfns$ \ninterpolating over $[0,u(\\eta)] \\times [0,v(\\eta)]$ and $\\ashift \\le -(\\delta\/2)\/\\|\\omega\\|_\\infty$\nsuch that $f^t(x)=\\tmplF(x-\\ashift t)$ and\n$g^t(x)=\\tmplG(x-\\ashift t)$ solves \\eqref{eqn:gfrecursion} for\nthe pair $\\hf(z,\\eta',\\eta;\\cdot),\\hg(z,\\eta',\\eta;\\cdot).$\nAssume $\\Delta \\le |\\ashift|.$ \n\nGiven $f^0$ satisfying $\\ff^0(\\pinfty) = 1$ we can find $y$ \nsuch that $f^0(x_i) \\ge \\tmplF(x_i-y)$ for all $x_i.$\nWe can apply Theorem \\ref{thm:mainquantize} and the inequalities\n$\\hf \\ge \\hf(z,\\eta',\\eta;\\cdot)$ and $\\hg \\ge \\hg(z,\\eta',\\eta;\\cdot)$\nto obtain\n$f^t(x_i) \\ge \\tmplF(x_i-y-(\\ashift+\\Delta)t)$ and\n$g^t(x_i) \\ge \\tmplG(x_i-y-(\\ashift+\\Delta)t).$\n\nThe Lemma now 
follows.\n\\end{IEEEproof}\n\n\n\nRecall that in the statement of Theorem \\ref{thm:globalconv} \nwe have $(0,0)\\le (u',v') \\le (u'',v'') \\le (1,1)$ and $\\altPhi$ is minimized on\n$(u',v')$ and $(u'',v'')$ where it takes the value $m(\\hf,\\hg).$\nFurthermore, $(u',v')$ and $(u'',v'')$ are the extreme points where the minimum is attained.\n\n\\begin{IEEEproof}[Proof of Theorem \\ref{thm:globalconv}]\nWe will prove the first statement in the Theorem, i.e.,\n\\(\n\\liminf_{t\\rightarrow \\infty} f^t(x) \\ge v'\\,,\n\\)\nthe other cases being similar.\n\nIf $u' =0$ or $v'=0$ then $m=0$ and $(u',v')=(0,0)$ and the result is immediate.\nLet us assume that $m<0$ and hence that $(u',v')>(0,0).$\nConsider the system restricted to $[0,u']\\times [0,v'].$\nIf $\\hf(u')>v'$ then let us redefine $\\hf(u')=v'$ and \nif $\\hg(v')>u'$ then let us redefine $\\hg(v')=u'.$\nThis makes $(u',v')$ a fixed point of the underlying system.\nThis reduction will not affect the remaining argument.\nLet us reduce $\\ff^0$ by saturating it at $v',$ i.e., replacing it with\n$\\ff^0 \\wedge v'.$\n\nWe can now apply Lemma \\ref{lem:liminfgap} to obtain $\\ff^t(x) \\rightarrow v'$\nand $\\fg^t(x) \\rightarrow u'.$\nSince $\\ff^t$ and $\\fg^t$ in the original system are only larger, the result follows.\n\\end{IEEEproof}\n\n\\begin{IEEEproof}[Proof of Theorem \\ref{thm:discreteglobalconv}]\nJust as Lemma \\ref{lem:liminfgap} was used to prove Theorem \\ref{thm:globalconv},\nwe can use Lemma \\ref{lem:discreteliminfgap} to prove Theorem \\ref{thm:discreteglobalconv}.\nThe argument is essentially the same so we omit it.\n\\end{IEEEproof}\n\n\n\\begin{IEEEproof}[Proof of Theorem \\ref{thm:terminatedzero}]\nThe case where $\\altPhi(\\hf,\\hg;u,v) > 0$ for $(u,v) \\neq (0,0)$\nfollows easily from Theorem \\ref{thm:globalconv}. 
We assume now that\n$\\altPhi(\\hf,\\hg;u,v) = 0$ and $\\hf$ and $\\hg$ are strictly positive on $(0,1].$\n\nDefine \n\\[\n\\ff^0(x)=\\unitstep_1(x)\\,.\n\\]\nWe will show that $\\ff^t\\rightarrow 0,$ which implies the same for arbitrary initial conditions.\nBy monotonicity in $t,$ $\\ff^t$ has a point-wise limit $\\ff^\\infty \\in \\sptfns$\nand $\\fg^t$ has a point-wise limit $\\fg^\\infty \\in \\sptfns.$\nBy continuity we have $(\\ff^\\infty(\\pinfty),\\fg^\\infty(\\pinfty)) \\in \\cross(\\hf,\\hg).$\n\nIn general $h_{[\\ff^\\infty,\\fg^{\\smthker,\\infty}]}$\nis well defined on\n$[0,\\fg^{\\infty}(\\pinfty)]$ and\n$h_{[\\fg^\\infty,\\ff^{\\smthker,\\infty}]}$\nis well defined on\n$[0,\\ff^{\\infty}(\\pinfty)]$ \nand we have\n\\begin{align}\n\\begin{split}\\label{eqn:AineqB}\n0 \\le &\\altPhi(\\hf,\\hg;\\fg^{\\infty}(\\pinfty),\\ff^{\\infty}(\\pinfty))\n\\\\ \\le &\n\\altPhi(h_{[\\ff^\\infty,\\fg^{\\smthker,\\infty}]},h_{[\\fg^\\infty,\\ff^{\\smthker,\\infty}]};\\fg^{\\infty}(\\pinfty),\\ff^{\\infty}(\\pinfty))\n\\\\ = &0\\,.\n\\end{split}\n\\end{align}\n\nAssume that $\\ff^\\infty \\neq 0.$\nLet $z = \\sup \\{x:\\ff^\\infty(x)=0\\}.$\nWe have $\\ff^{\\smthker,\\infty}(x)>0$ on $\\neigh{z}{W}$ and therefore \n$\\fg^{\\infty}(x)>0$ on $\\neigh{z}{W}$ and\n$\\fg^{\\smthker,\\infty}(x)>0$ on $\\neigh{z}{2W}.$\nHence $\\ff^\\infty(x) > 0$ for $x \\in \\neigh{z}{2W} \\cap (0,\\infty)$\nbut $\\ff^\\infty(x)=0$ for $x < z.$\nCombining this with \\eqref{eqn:AineqB} leads to a contradiction.\nHence $\\ff^\\infty = 0$ and we easily conclude that \n$(\\fg^{\\infty}(\\pinfty),\\ff^{\\infty}(\\pinfty)) = (0,0).$\n\\end{IEEEproof}\n\n\\begin{IEEEproof}[Proof of Theorem \\ref{thm:discreteterminatedexistB}]\nTheorem \\ref{thm:discreteterminatedexistB} can be proved along lines similar to\nLemma \\ref{lem:liminfgap} with some additional features introduced to handle the spatial discreteness.\n\nConsider a parametric modification of $(\\hf,\\hg)$ given by\n\\begin{align*}\n\\hf(z,\\eta;u) &=\\unitstep_1(u-z)\\wedge ( \\hf(u)-\\eta)^+ \\\\\n\\hg(z,\\eta;v) &=\\unitstep_1(v-z)\\wedge (\\hg(v)-\\eta)^+ \\,.\n\\end{align*}\nWe first fix $z=0.$\nDefine $m(\\eta) = \\min \\altPhi(\\hf(0,\\eta;\\cdot),\\hg(0,\\eta;\\cdot);\\cdot,\\cdot)$\nand let $(u(\\eta),v(\\eta))$ be the minimum point where $m(\\eta)$ is \nrealized.\nAs $\\eta\\rightarrow 0$ we have $(u(\\eta),v(\\eta))\\rightarrow (1,1)$\nand $m(\\eta)\\rightarrow A(\\hf,\\hg).$\nGiven $\\epsilon>0$ we may assume $\\eta$ small enough so that\n$(u(\\eta),v(\\eta)) \\ge (1-\\epsilon,1-\\epsilon)$ and we also assume\n$m(\\eta)<0.$\n\nNow we choose $z(\\eta) (>0)$ so that\n\\[\n \\altPhi\\bigl(\\hf(z(\\eta),\\eta;\\cdot),\\hg(z(\\eta),\\eta;\\cdot);u(\\eta),v(\\eta)\\bigr)=0\\,.\n\\]\nIf necessary we reduce $\\hf(z(\\eta),\\eta;u(\\eta))$ and $\\hg(z(\\eta),\\eta;v(\\eta))$\nso that they equal $v(\\eta)$ and $u(\\eta)$ respectively.\nThen \n$(\\hf(z(\\eta),\\eta;\\cdot),\\hg(z(\\eta),\\eta;\\cdot))$\nsatisfies the strictly positive gap condition on\n$[0,u(\\eta)]\\times[0,v(\\eta)].$\n\nBy Theorem \\ref{thm:mainexist} there exists\n $\\tmplF$ and $\\tmplG$ that form a fixed point for \n\\eqref{eqn:gfrecursion}\nwith $(\\tmplF(\\minfty),\\tmplG(\\minfty))=(0,0)$ and\n$(\\tmplF(\\pinfty),\\tmplG(\\pinfty))=(v(\\eta),u(\\eta)).$\n\nLet us translate the solution so that $0 =\\sup \\{x:\\tmplF(x)=0\\}$\nand let us then define $x_g = \\sup \\{x:\\tmplG(x)=0\\}.$\nIt follows that $|x_g| < W.$\nLet us choose $\\Delta$ sufficiently small so that\nthe following holds:\n\\begin{equation}\\label{eqn:9condB}\n\\eta\\intsmthker(-|x_g|-\\half\\Delta)\n\\ge\n\\half\\Delta 
\\|\\omega\\|_\\infty\\,.\n\\end{equation}\n\n\nConsider initializing \\eqref{eqn:discretegfrecursion} with\n\\begin{align*}\nf^{0}(x_i)&=\n\\tmplF(x_i) +\\eta \\unitstep_0(x_i)\\,.\n\\end{align*} \nApplying Lemma \\ref{lem:disccontbnd} this yields for $x_i\\ge x_g$\n\\begin{align*}\nf^{0,\\discsmthker}(x_i)&\\ge\n\\tmplF^\\smthker (x_i-\\half\\Delta) + \\eta\\intsmthker(x_i-\\half\\Delta) \n\\\\\n&\\stackrel{\\eqref{eqn:9condB}}{\\ge}\n\\tmplF^\\smthker (x_i-\\half\\Delta) +\\half\\Delta\\|\\omega\\|_\\infty\\,\n\\\\\n&{\\ge}\n\\tmplF^\\smthker (x_i) \\,.\n\\end{align*}\nWe now obtain for $x_i \\ge x_g$\n\\begin{align*}\ng^{0}(x_i)&=\\hg(f^{0,\\discsmthker}(x_i))\n\\\\\n&\\ge\n\\hg(\\tmplF^{\\smthker}(x_i))\n\\\\\n&\\ge\n\\hg(z,\\eta;\\tmplF^{\\smthker}(x_i))+\\eta\\unitstep_0(x_i-x_g)\n\\\\\n&\\ge\n\\tmplG(x_i)+\\eta\\unitstep_0(x_i-x_g)\n\\end{align*}\nand we observe that since $\\tmplG(x_i)=0$ for\n$x_i < x_g$ this bound holds for all $x_i.$\nWe now have\n\\begin{align*}\ng^{0,\\discsmthker}(x_i)&\\ge\n\\tmplG^\\smthker (x_i-\\half\\Delta)\n+\\eta\\intsmthker(x_i-x_g-\\half\\Delta)\\,.\n\\end{align*}\n For $x_i \\ge 0$ we obtain\n\\begin{align*}\ng^{0,\\discsmthker}(x_i)\n&\\stackrel{\\eqref{eqn:9condB}}{\\ge}\n\\tmplG^\\smthker (x_i-\\half\\Delta)\n+\\half\\Delta \\|\\omega\\|_\\infty\n\\\\ & \\ge \n\\tmplG^\\smthker (x_i)\\,.\n\\end{align*}\nThus we have \n\\begin{align*}\n\\ff^{1}(x_i)&=\\hf(\\fg^{0,\\discsmthker}(x_i))\n\\\\&\\ge \\hf(\\tmplG^{\\smthker}(x_i))\n\\\\&\\ge \\tmplF(x_i)+\\eta\\unitstep_0(x_i)\n\\\\&= \\ff^0(x_i)\n\\end{align*}\nand the consequently increasing sequence establishes the existence of the\ndesired fixed point.\n\\end{IEEEproof}\n\n\n\\begin{IEEEproof}[Proof of Theorem \\ref{thm:discretetwoterminatedexistGB}]\nWe assume $Z = L\\Delta$ for an integer $L$ and that\nthe termination $\\hf(x_i,\\cdot)=0$ holds for $x_i<0$ and $x_i > Z.$\nThis means that symmetry holds about $\\half Z.$\nThe proof follows that of Theorem \\ref{thm:discreteterminatedexistB}\nup to the point where requirements on $\\Delta$ are given.\nContinuing from there we\nchoose $\\Delta$ small enough and $Z$ large enough \nso that\nall of the following hold.\n\\begin{equation}\\label{eqn:14condB}\n\\tfrac{3}{4}\\eta\\intsmthker(-|x_g|-\\Delta)\n- \\intsmthker(-\\tfrac{1}{4}Z+\\half\\Delta )\n\\ge\n\\half\\Delta \\|\\omega\\|_\\infty\n\\end{equation}\n\\begin{equation}\\label{eqn:14condA}\n\\tmplF(\\tfrac{1}{4}Z) > v(\\eta)-\\eta\/4\n\\end{equation}\n\\begin{equation}\\label{eqn:14condE}\n\\tmplG(\\tfrac{1}{4}Z) > u(\\eta)-\\eta\/4\n\\end{equation}\n\nConsider initializing for $x_i \\le \\half Z$ with\n\\[\nf^{0}(x_i)=\n\\tmplF(x_i) +\\eta \\unitstep_0(x_i)\\,\n\\]\nand for $x_i > \\half Z$ initializing symmetrically with $\\ff^{0}(x_i)=\\ff^{0}(x_{L-i}).$\nAs in the derivation of \\eqref{eqn:f0initialbound}, this, by \\eqref{eqn:14condA}, implies for all $x_i$\n\\[\nf^{0}(x_i)\\ge\n\\tmplF(x_i) +\\tfrac{3}{4}\\eta \\unitstep_0(x_i)\n-\\unitstep_1(x_i - \\tfrac{3}{4}Z)\\,,\n\\]\nwhich gives, by Lemma \\ref{lem:disccontbnd},\n\\[\nf^{0,\\discsmthker}(x_i)\\ge\n\\tmplF^{\\smthker}(x_i-\\half\\Delta) +\\tfrac{3}{4}\\eta \\intsmthker(x_i-\\half\\Delta)\n-\\intsmthker(x_i - \\tfrac{3}{4}Z+\\half\\Delta)\\,.\n\\]\n\nThis yields for $x_i\\in[x_g-\\half\\Delta,\\tfrac{1}{2} Z]$\n\\begin{align*}\nf^{0,\\discsmthker}(x_i)\n&\\ge\n\\tmplF^\\smthker (x_i) + \\tfrac{3}{4}\\eta\\intsmthker(x_g-\\Delta) - \\intsmthker(-\\tfrac{1}{4} Z+\\half\\Delta )\n\\\\\n&\\stackrel{\\eqref{eqn:14condB}}{\\ge}\n\\tmplF^\\smthker (x_i) 
+\\half\\Delta\\|\\omega\\|_\\infty\\,\n\\\\\n&{\\ge}\n\\tmplF^\\smthker (x_i+\\half\\Delta) \\,.\n\\end{align*}\n\nWe now obtain for $x_i \\in [x_g-\\half \\Delta, \\half Z]$\n\\begin{align*}\ng^{0}(x_i)&=\\hg(f^{0,\\discsmthker}(x_i))\n\\\\\n&\\ge\n\\hg(\\tmplF(x_i+\\half\\Delta))\n\\\\\n&\\ge\n\\tmplG(x_i+\\half\\Delta)+\\eta\\unitstep_0(x_i+\\half\\Delta-x_g)\n\\end{align*}\nand we observe that since the right hand side is $0$ for\n$x_i+\\half\\Delta < x_g$ this bound holds for all $x_i \\le \\half Z.$\nAs in the derivation of \\eqref{eqn:f0initialbound}, by \\eqref{eqn:14condE} we now have\n\\[\ng^{0}(x_i)\\ge\n\\tmplG(x_i+\\half\\Delta)+\\tfrac{3}{4}\\eta\\unitstep_0(x_i+\\half\\Delta-x_g)-\\unitstep_1(x_i-\\tfrac{3}{4}Z)\n\\]\nwhich gives by Lemma \\ref{lem:disccontbnd}\n\\[\ng^{0,\\discsmthker}(x_i)\\ge\n\\tmplG^\\smthker(x_i)+\\tfrac{3}{4}\\eta\\Omega(x_i-x_g)-\\Omega(x_i-\\tfrac{3}{4}Z+\\half\\Delta)\n\\]\nwhich by \\eqref{eqn:14condB} yields for $x_i \\in [0,\\half Z],$\n\\[\ng^{0,\\discsmthker}(x_i)\\ge\n\\tmplG^\\smthker(x_i)\\,.\n\\]\n\n\nThus we have for $x_i \\in [0,\\half Z],$\n\\begin{align*}\n\\ff^{1}(x_i)&=\\hf(\\fg^{0,\\discsmthker}(x_i))\n\\\\&\\ge \\hf(\\tmplG^{\\smthker}(x_i))\n\\\\&\\ge \\tmplF(x_i)+\\eta\\unitstep_0(x_i)\n\\\\&= \\ff^0(x_i)\n\\end{align*}\nand the consequently increasing sequence establishes the existence of the\ndesired fixed point.\n\\end{IEEEproof}\n\n\\bibliographystyle{IEEEtran}\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proof for the General Case\\label{sec:general}}\n\nIn Section \\ref{sec:PCcase} we proved Theorem\n\\ref{thm:PCexist},\na special case of Theorem \\ref{thm:mainexist}\nin which $\\hg$ and $\\hf$ are piecewise constant\nfunctions and $\\smthker$ is $C_1$ and strictly positive.\nIn this section we show how to remove the speical conditions \nto arrive at the general results.\nWe make repeated use of the limit theorems of Section \\ref{sec:limitthms} and develop\nsome approximations for functions in $\\exitfns.$\nIt is quite simple to approximate $h \\in \\exitfns$ using piecewise constant functions.\nThe only difficulty is to approximate a pair $(\\hg,\\hf)$ so that the strictly positive gap\ncondition is preserved.\n\n\\subsection{Approximation by Scaling}\nLet $h \\in \\exitfns$ and denote $\\int_0^1 h(x) \\, dx$ by $A.$\nFor $0< a < 1$ we define \n\\[\n\\slanta{h} (x) = h\\Bigl(\\frac{x - (1-A)}{1-a} + (1-A)\\Bigr)\n\\]\nwhere we assume $h(x)=0$ for $x<0$ and \n$h(x)=1$ for $x>1.$ A quick calculation verifies\n$\\int_0^1 \\slanta{h} (x) dx = A.$\nNote that $h(x) \\ge \\slanta{h}(x)$ for $u \\in [0,1-A]$ and \nthat $h(x) \\le \\slanta{h}(x)$ for $x \\in [1-A,1].$\nIt follows that \n$0 \\le \\int_0^z (h(x)-\\slanta{h}(x))\\, dx \\le \\frac{a}{1-a}\\int_0^{1-A} h(x)\\,dx$\nfor all $z \\in [0,1],$\nwhich yields\n$\\Phi(\\hg,\\hf) \\le \\Phi(\\slanta{\\hg},\\slanta{\\hf})$ and\n$E(\\hg,\\hf) = E(\\slanta{\\hg},\\slanta{\\hf}).$\n\n\\begin{lemma}\\label{lem:smoothcompress}\nLet $(\\hg,\\hf) \\in \\exitfns^2$ \nsatisfy the strictly positive gap condition.\nThen, there exists $\\epsilon > 0$ such that\n$(\\slanta{\\hg},\\slanta{\\hf})$ \nsatisfies the strictly positive gap condition\nfor any $a \\in (0,\\epsilon).$\n\\end{lemma}\n\\begin{IEEEproof}[Proof of Lemma \\ref{lem:smoothcompress}]\nLet $A_1 = \\int_0^1 \\hg(x) dx,$ $A_2 = \\int_0^1 \\hf(x) dx$ so $E(\\hg,\\hf) = 1-A_1-A_2.$\nBy symmetry we need only consider the case $E \\ge 0.$\n\nFirst, we note that \n$[0,a (1-A_1)]\\times [0,a (1-A_2)]\\subset 
\\gapptsplus(\\slanta{\\hg},\\slanta{\\hf})$ \nand\n$[1-a A_1,1]\\times [1-a A_2,1]\\subset \\gapptsminus(\\slanta{\\hg},\\slanta{\\hf}).$\nThus, it remains only to show that $\\Phi(\\slanta{\\hg},\\slanta{\\hf}) > E$ on \n$\\intcross(\\slanta{\\hg},\\slanta{\\hf}).$\nAssume $v \\in \\intcross(\\slanta{\\hg},\\slanta{\\hf}).$ \n\nLet $\\eta$ be chosen with $0 < \\eta < \\min\\{A_1,A_2,1-A_1,1-A_2\\}.$\nDefine\n\\[\n\\tilde{\\cross}(\\hg,\\hf) = \\cross(\\hg,\\hf) \\backslash (\\ball_{\\eta\/2}(0,0) \\cup \\ball_{\\eta\/2}(1,1)).\n\\]\nBy continuity, there exists $\\gamma >0,$ which we may take to be less than $2\\eta,$\nsuch that\n$\\Phi(\\hg,\\hf) > \\gamma + E$ on $\\tilde{\\cross}(\\hg,\\hf).$\nSince $\\Phi$ is Lipschitz-1 in its last two arguments it follows that\n$\\Phi(\\hg,\\hf) > \\gamma\/2 + E$ on $\\neigh{\\tilde{\\cross}(\\hg,\\hf)}{\\gamma\/4}.$\nFor all $a$ sufficiently small we have\n$|\\Phi(\\hg,\\hf) - \\Phi(\\slanta{\\hg},\\slanta{\\hf})| < \\gamma\/2$ uniformly\nand it follows that \n$\\Phi(\\slanta{\\hg},\\slanta{\\hf}) > E$ on $\\neigh{\\tilde{\\cross}(\\hg,\\hf)}{\\gamma\/4}.$\nBy Lemma \\ref{lem:crosspointlimit}, for all $a$ sufficiently small we have\n\\(\n{\\cross}(\\slanta{\\hg},\\slanta{\\hf})\n\\subset \\neigh{{\\cross}(\\hg,\\hf)}{\\gamma\/4}\n\\)\nand since\n\\(\n\\neigh{{\\cross}(\\hg,\\hf)}{\\gamma\/4}\n\\subset\n\\ball_{\\eta}(0,0) \\cup \\ball_{\\eta}(1,1) \\cup \\neigh{\\tilde{\\cross}(\\hg,\\hf)}{\\gamma\/4}\n\\)\nwe have\n\\[\n{\\cross}(\\slanta{\\hg},\\slanta{\\hf})\n\\subset \n\\ball_{\\eta}(0,0) \\cup \\ball_{\\eta}(1,1) \\cup \\neigh{\\tilde{\\cross}(\\hg,\\hf)}{\\gamma\/4}\\,.\n\\]\nTherefore, it remains only to bound \n$\\Phi(\\slanta{\\hg},\\slanta{\\hf}; v)$ on $\\ball_{\\eta}(0,0) \\cup \\ball_{\\eta}(1,1).$\n\nSince $\\Phi(\\hg,\\hf) \\le \\Phi(\\slanta{\\hg},\\slanta{\\hf})$ it is sufficient to show\n$\\Phi(h_{1},h_{2};v_1,v_2) > E,$ or equivalently\n$\\hat{\\Phi}(h_{1},h_{2};v_1,v_2) > 0.$\nIf $v\\in \\cross(\\hg,\\hf)$ then the inequality is immediate, so we assume\n$v\\not\\in \\cross(\\hg,\\hf).$\nIf $(v_1,v_2) \\le (1-A_1,1-A_2)$ \nthen since $v_1 \\in \\closure{h}_{2,a}(v_2)$ we have\n$v_1 \\le \\hf(v_2^+)$ and, similarly, $v_2 \\le \\hg(v_1^+),$ hence\n$v \\in \\gapptsminus(\\hg,\\hf).$\nSimilarly, if $(v_1,v_2) \\ge (1-A_1,1-A_2)$ \nthen $v \\in \\gapptsplus(\\hg,\\hf).$\n\nSince $\\closeM[\\hg,\\hf](v) \\in \\cross(\\hg,\\hf)$ and \n$v\\not\\in \\cross(\\hg,\\hf),$ \nLemma \\ref{lem:verphigap} and Lemma \\ref{lem:Rvverify}\nyield\n${\\Phi}(\\hg,\\hf;\\closeM(v)) > {\\Phi}(\\hg,\\hf;v)$\nand, equivalently,\n$\\hat{\\Phi}(\\hg,\\hf;\\closeM(v)) < \\hat{\\Phi}(\\hg,\\hf;v).$\n\nAssume $v\\in \\ball_\\eta(0,0).$ \nThen $v \\in \\gapptsminus(\\hg,\\hf)$ and $\\closeM(v) \\le v.$\nWe cannot have $\\closeM(v)=(0,0)$ since ${\\Phi}(\\hg,\\hf;0,0)=0$ and\n${\\Phi}(\\hg,\\hf;v) > 0$ (by Lemma \\ref{lem:PhiEquiv}).\nHence $\\closeM[\\hg,\\hf](v) \\in \\intcross(\\hg,\\hf),$ \nwhich implies $\\hat{\\Phi}(\\hg,\\hf;\\closeM(v)) > 0,$ and we obtain\n$\\hat{\\Phi}(\\hg,\\hf;v) > 0.$\n\nNow consider the case $v\\in \\ball_\\eta(1,1).$ \nThen $v \\in \\gapptsplus(\\hg,\\hf)$ and $\\closeM(v) \\ge v.$\nIt follows directly that $\\hat{\\Phi}(\\hg,\\hf;\\closeM(v)) \\ge 0$ (although we could show\nthat $\\closeM(v) \\neq (1,1)$ and that the inequality is strict)\nand we obtain $\\hat{\\Phi}(\\hg,\\hf;v) > 0.$\n\\end{IEEEproof}\n\n\\subsection{Piecewise Constant Approximation}\nGiven any $h \\in \\exitfns$ let us define a sequence of piecewise constant 
approximations\n$Q_n(h),$ $n=1,2,...$ by\n\\(\nQ_n(h) (x) = \\sum_{j=1}^n \\frac{1}{n} \\,1_{x \\ge u_{n,j}}\n\\)\nwhere we set \n\\[\nu_{n,j} = \\int_0^1 \\min\\{ (n h(x) - (j-1))^+,1 \\} dx.\n\\]\nIf $h$ is invertible then we have $u_{n,j} = n\\int_{(j-1)\/n}^{j\/n}(1-h^{-1}(x))dx$\nand \n\\(\n\\int_0^1 Q_n(h) (x) dx = \\sum_{j=1}^n \\frac{1-u_{n,j}}{n} = \\int_0^1 (1-h^{-1})(x) dx = \\int_0^1 h (x) dx.\n\\)\nIn general, it holds that\n\\(\n\\int_0^1 Q_n(h) (x) dx = \\int_0^1 h (x) dx.\n\\)\nSince $h$ is non-decreasing, it also follows that $\\int_0^z Q_n(h) (x) dx \\le \\int_0^z h (x) dx$ for all\n$z \\in [0,1].$\n\n\\begin{lemma}\\label{lem:PCapprox}\nLet $(\\hg,\\hf)$ be a pair of functions in $\\exitfns$ satisfying the strictly positive gap condition \nsuch that for some $\\eta>0$ we have\n$\\hg (x) =\\hf(x)= 0$ for\n$x \\in [0,\\eta)$ and $\\hg (x) =\\hf(x)= 1$ for $x \\in (1-\\eta,1].$\nThen, for all $n$ sufficiently large,\n$(Q_n(\\hg),Q_n(\\hf))$ satisfies the strictly positive gap condition.\n\\end{lemma}\n\\begin{IEEEproof}\nBy symmetry we need only consider the case $E(\\hg,\\hf) \\ge 0.$\n\nSince $\\hg$ and $\\hf$ are $0$ on $[0,\\eta)$ and $1$ on $(1-\\eta,1]$ it follows that \n$\\intcrossing (\\hg,\\hf) \\subset [\\eta,1-\\eta]^2.$\nFurther, $\\intcrossing (\\hg,\\hf)$ is closed and there exists $\\gamma > 0$ such that \n$\\Phi(\\hg,\\hf;v_1,v_2) \\ge \\gamma + E$ on $\\intcrossing (\\hg,\\hf).$\nSince $\\Phi(\\hg,\\hf)$ is Lipschitz-1 componentwise it follows that \n$\\Phi(\\hg,\\hf) > \\gamma\/2 + E$ on $\\neigh{\\intcrossing (\\hg,\\hf)}{\\gamma\/4}.$\nObserve that $\\Phi(Q_n(h_{1}),Q_n(h_{2}))$ converges in $n$ uniformly to $\\Phi(\\hg,\\hf).$\n(In fact we have $0 \\le |\\Phi(\\hg,\\hf)-\\Phi(Q_n(h_{1}),Q_n(h_{2}))| \\le \\frac{1}{n}.$)\nSo, for $n$ sufficiently large, we have $\\Phi(Q_n(h_{1}),Q_n(h_{2})) > E$ on \n$\\neigh{\\intcrossing (\\hg,\\hf)}{\\gamma\/4}.$\nBy construction we also have $\\intcrossing (Q_n(h_{1}),Q_n(h_{2})) \\subset [\\eta,1-\\eta]^2.$\nBy Lemma \\ref{lem:crosspointlimit} we now have \n\\(\n\\intcrossing (Q_n(h_{1}),Q_n(h_{2})) \\subset \n\\neigh{\\intcrossing (h_{1},h_{2})}{\\gamma\/4}\n\\)\nfor all $n$ sufficiently large so $\\Phi(Q_n(h_{1}),Q_n(h_{2})) > E$ on\n$\\intcrossing (Q_n(h_{1}),Q_n(h_{2})).$\n\nFinally, we note that $[0,\\eta]^2 \\subset \\gapptsplus(Q_n(h_{1}),Q_n(h_{2}))$\nand $[1-\\eta,1]^2 \\subset \\gapptsminus(Q_n(h_{1}),Q_n(h_{2})).$\nThis completes the proof.\n\\end{IEEEproof}\n\n\\subsection{Proof of Main Results}\n\nWe are now ready to prove our main results.\n\n\\begin{IEEEproof}[Proof of Theorem \\ref{thm:mainexist}]\nThe proof proceeds by repeated application of Lemma \\ref{lem:limitexist}\nto establish existence of $f,g \\in \\Psi$ and constant $\\ashift$ such that\n$h^{\\fS,g} = \\hg$ and $h^{\\gSa,f} = \\hf$ for a series of \nincreasingly generalized cases of $(\\hg,\\hf)$ and $\\smthker.$\nThe simplest case is already established in Theorem \\ref{thm:PCexist}.\n\nWe first generalize to arbitrary $\\smthker.$\nAssume that $(\\hg,\\hf)$ are both piecewise constant.\nDefine $\\smthker_i = \\smthker \\otimes G_i$ where \n$G_i(x) = \\frac{i}{\\sqrt{2\\pi}} e^{- (ix)^2\/2}.$\nIt follows that $\\smthker_i \\rightarrow \\smthker$ in $L_1$\nand $\\| \\smthker_i \\|_\\infty \\le \\| \\smthker\\|_\\infty.$\nFor each $\\smthker_i$ we apply Theorem \\ref{thm:PCexist}\nto obtain piecewise constant $f_i,g_i \\in \\Psi$ \n(with corresponding $\\zfi,\\zgi$) and constants $\\ashift_i$\nsuch that $h^{f_i^{{\\smthker}_i},g_i} = \\hg$ and 
$h^{g_i^{{\\smthker}_i,\\ashift_i},f_i} = \\hf.$\nWe can now apply Lemma \\ref{lem:limitexist} as indicated above to conclude\nexistence for general $\\smthker.$\n\nWe now generalize $(\\hg,\\hf)$ and to\nrequire, beyond the conditions of the Theorem statement, only that there exists $\\eta > 0$ such that $\\hg (x) =\\hf(x)= 0$ for\n$x \\in [0,\\eta)$ and $\\hg (x) =\\hf(x)= 1$ for $x \\in (1-\\eta,1].$\nWe apply Lemma \\ref{lem:PCapprox} and the preceding case already established\nto conclude that for all $n$ sufficiently large\nthere exists (piecewise constant) $f_n,g_n \\in \\Psi$ and finite constants $\\ashift_n$ such that\n$h^{f_n^{{\\smthker}},g_n} = Q_n(\\hg)$ and $h^{g_n^{{\\smthker},\\ashift_n},f_n} = Q_n(\\hf).$\nSince $Q_n(\\hg)$ and $Q_n(\\hf)$ converge to $\\hg$ and $\\hf$ respectively,\nwe can apply Lemma \\ref{lem:limitexist} to conclude existence for this case.\n\nFor arbitrary $(\\hg,\\hf)$ we consider $(\\slanta{\\hg},\\slanta{\\hf}).$\nWe apply Lemma \\ref{lem:smoothcompress} and the preceding case to conclude that for all $a$ sufficiently small\nthere exists $f_a,g_a \\in \\Psi$ and finite constants $\\ashift_a$ such that\n$h^{f_a^{{\\smthker}},g_a} = \\slanta{\\hg}$ and $h^{g_a^{{\\smthker},\\ashift_a},f_a} = \\slanta{\\hf}.$\nWe make a final application of Lemma \\ref{lem:limitexist} to obtain a solution for the general case.\n\nIn all cases above the approximations used preserve $E.$\nIn particular, if $E(\\hg,\\hf) = 0$ then we have respectively,\n$\\ashift_i=0$ or $\\ashift_n=0$ of $\\ashift_a=0$ and we obtain $\\ashift=0$\nfrom the limit construction in Lemma \\ref{lem:limitexist}.\nIf $E(\\hg,\\hf) \\neq 0$ then we cannot have $\\ashift=0$ by Lemma \\ref{lem:PPhi}.\nMoreover, we must have $\\sgn (E) = \\sgn (\\ashift).$\nFinally, the bound $|\\ashift| \\ge |E|\/\\|\\smthker\\|_\\infty$ was proved in Lemma \\ref{lem:shiftbound}.\n\\end{IEEEproof}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nMachine learning at its core involves solving stochastic optimization (SO) problems of the form\n\\begin{equation}\n\\label{eqn:SO_prob}\n\t\\min_{\\mathbf{x} \\in X} \\psi(\\mathbf{x}) \\triangleq \\min_{\\mathbf{x} \\in X} E_\\xi[\\phi(\\mathbf{x},\\xi)]\n\\end{equation}\nto learn a ``model'' $\\mathbf{x} \\in X \\subset \\mathbb{R}^n$ that is then used for tasks such as dimensionality reduction, classification, clustering, regression, and\/or prediction. A primary challenge of machine learning is to find a solution to the SO problem \\eqref{eqn:SO_prob} without knowledge of the distribution $P(\\xi)$. This involves finding an approximate solution to \\eqref{eqn:SO_prob} using a sequence of $T$ training samples $\\{\\xi(t) \\in \\Upsilon\\}_{t=1}^T$ drawn independently from the distribution $P(\\xi)$, which is supported on a subset of $\\Upsilon$. There are, in particular, two main categorizations of training data that, in turn, determine the types of methods that can be used to find approximate solutions to the SO problem. These are ($i$) \\emph{batch} training data and ($ii$) \\emph{streaming} training data.\n\nIn the case of {\\em batch} training data, where all $T$ samples $\\{\\xi(t)\\}$ are pre-stored and simultaneously available, a common strategy is {\\em sample average approximation} (SAA) (also referred to as {\\em empirical risk minimization} (ERM)), in which one minimizes the empirical average of the ``risk'' function $\\phi(\\cdot,\\cdot)$ in lieu of the true expectation. 
In the case of {\\em streaming} data, by contrast, the samples $\\{\\xi(t)\\}$ arrive one-by-one, cannot be stored in memory for long, and should be processed as soon as possible. In this setting, {\\em stochastic approximation} (SA) methods---the most well known of which is stochastic gradient descent (SGD)---are more common. Both SAA and SA have a long history in the literature; see~\\cite{Kushner.Book2010} for a historical survey of SA methods, \\cite{Kim.etal.HSO2015} for a comparative review of SAA and SA techniques, and \\cite{Pereyra.etal.JSTSP16} for a recent survey of SO techniques.\n\nAmong other trends, the rapid proliferation of sensing and wearable devices, the emergence of the internet-of-things (IoT), and the storage of data across geographically-distributed data centers have spurred a renewed interest in development and analysis of new methods for learning from {\\em fast-streaming} and {\\em distributed} data. The goal of this paper is to find a fast and efficient solution to the SO problem \\eqref{eqn:SO_prob} in this setting of distributed, streaming data. In particular, we focus on geographically-distributed nodes that collaborate over {\\em rate-limited} communication links (e.g., wireless links within an IoT infrastructure) and obtain independent streams of training data arriving at a constant rate.\n\nThe relationship between the rate at which communication takes place between nodes and the rate at which streaming data arrive at individual nodes plays a critical role in this setting. If, for example, data samples arrive much faster than nodes can communicate among themselves, it is difficult for the nodes to exchange enough information to enable an SA iteration on existing data in the network before new data arrives, thereby overwhelming the network. In order to address the challenge of distributed SO in the presence of a mismatch between the communications and streaming rates, we propose and analyze two distributed SA techniques, each based on distributed averaging consensus and stochastic mirror descent. In particular, we present bounds on the convergence rates of these techniques and derive conditions---involving the number of nodes, network topology, the streaming rate, and the communications rate---under which our solutions achieve order-optimum convergence speed.\n\n\\subsection{Relationship to Prior Work}\nSA methods date back to the seminal work of Robbins and Monro~\\cite{Robbins.Monro.AMS51}, and recent work shows that, for stochastic convex optimization, SA methods can outperform SAA methods~\\cite{Nemirovski.etal.JOO09,Juditsky.etal.SS11}. Lan~\\cite{Lan.MP12} proposed {\\em accelerated stochastic mirror descent}, which achieves the best possible convergence rate for general stochastic convex problems. This method, which makes use of noisy subgradients of $\\psi(\\cdot)$ computed using incoming training samples, satisfies\n\\begin{equation}\\label{eqn:smd.rate}\n\tE[\\psi(\\mathbf{x}(T)) - \\psi(\\mathbf{x}^*)] \\leq O(1)\\left[\\frac{L}{T^2} + \\frac{\\mathcal{M}+\\sigma}{\\sqrt{T}} \\right],\n\\end{equation}\nwhere $\\mathbf{x}^*$ denotes the minimizer of \\eqref{eqn:SO_prob}, $\\sigma^2$ denotes variance of the subgradient noise, and $\\mathcal{M}$ and $L$ denote the Lipschitz constants associated with the non-smooth (convex) component of $\\psi$ and the gradient of the smooth (convex) component of $\\psi$, respectively. 
Further assumptions such as smoothness and strong convexity of $\\psi(\\cdot)$ and\/or presence of a structured regularizer term in $\\psi(\\cdot)$ can remove the dependence of the convergence rate on $\\mathcal{M}$ and\/or improve the convergence rate to $O(\\sigma\/T)$ \\cite{Hu.etal.NIPS09,Nemirovski.etal.JOO09,Xiao.JMLR10,Chen.etal.NIPS12}.\n\nThe problem of {\\em distributed} SO goes back to the seminal work of Tsitsiklis et al.~\\cite{Tsitsiklis.etal.ITAC1986}, which presents distributed first-order methods for SO and gives proofs of their asymptotic convergence. Myriad works since then have applied these ideas to other settings, each with different assumptions about the type of data, how the data are distributed across the network, and how distributed units process data and share information among themselves. In order to put our work in context, we review a representative sample of these works. A recent line of work was initiated by {\\em distributed gradient descent} (DGD) \\cite{Nedic.Ozdaglar.ITAC2009}, in which nodes descend using gradients of local data and collaborate via averaging consensus \\cite{Dimakis.etal:IEEE2010}. More recent works incorporate accelerated methods, time-varying or directed graphs, data structure, etc. \\cite{Srivastava.Nedic.IJSTSP2011,Tsianos.etal.Conf2012,Mokhtari.Ribeiro.JMLR2016,Bijra.etal.arxiv2016,LiChenEtAl.N16}. These works tend not to address the SA problem directly; instead, they suppose a linearly separable function consistent with SAA using local, independent and identically distributed (i.i.d.) data. The works \\cite{Ram.etal.JOTA2010,Duchi.etal.ITAC2012,RaginskyBouvrie.ConfCDC12,DuchiAgarwalEtAl.SJO12} do consider SA directly, but suppose that nodes engage in a single round of message passing per stochastic subgradient sample.\n\nWe conclude by discussing two lines of works \\cite{Dekel.etal.JMLR2012,Rabbat.ConfCAMSAP15,tsianos.rabbat.SIPN2016} that are most closely related to this work. In \\cite{Dekel.etal.JMLR2012}, nodes perform distributed SA by forming distributed mini-batch averages of stochastic gradients and using stochastic dual averaging.\nThe main assumption in this work is that nodes can compute {\\em exact} stochastic gradient averages (e.g., via {\\tt AllReduce} in parallel computing architectures). Under this assumption, it is shown in this work that there is an appropriate mini-batch size for which the nodes' iterates converge at the optimum (centralized) rate. However, the need for exact averages in this work is not suited to rate-limited (e.g., wireless) networks, in which mimicking the {\\tt AllReduce} functionality can be costly and challenging.\n\nThe need for exact stochastic gradient averages in~\\cite{Dekel.etal.JMLR2012} has been relaxed recently in~\\cite{tsianos.rabbat.SIPN2016}, in which nodes carry out distributed stochastic dual averaging by computing {\\em approximate} mini-batch averages of dual variables via distributed consensus. In addition, and similar to our work,~\\cite{tsianos.rabbat.SIPN2016} allows for a mismatch between the communications rate and the data streaming rate. Nonetheless, there are four main distinctions between \\cite{tsianos.rabbat.SIPN2016} and our work.\nFirst, we provide results for stochastic {\\em composite} optimization, whereas \\cite{Dekel.etal.JMLR2012,tsianos.rabbat.SIPN2016} suppose a differentiable objective. Second, we consider distributed {\\em mirror descent}, which allows for a limited generalization to non-Euclidean settings. 
Third, we explicitly examine the impact of slow communications rate on performance, in particular highlighting the need for large mini-batches and their impact on convergence speed when the communications rate is slow. In \\cite{tsianos.rabbat.SIPN2016}, the optimum mini-batch size is first derived from \\cite{Dekel.etal.JMLR2012}, after which the communications rate needed to facilitate distributed consensus at the optimum mini-batch size is specified. While it appears to be possible to derive some of our results from a re-framing of the results of \\cite{tsianos.rabbat.SIPN2016}, it is crucial to highlight the trade-offs necessary under slow communications, which is not done in prior works. Finally, this work also presents a distributed {\\em accelerated} mirror descent approach to distributed SA; a somewhat surprising outcome is that acceleration substantially improves the convergence rate in networks with slow communications.\n\n\\subsection{Our Contributions}\nIn this paper, we present two strategies for distributed SA over networks with fast streaming data and slow communications links: distributed stochastic approximation mirror descent (D-SAMD) and accelerated distributed stochastic approximation mirror descent (AD-SAMD). In both cases, nodes first locally compute mini-batch stochastic subgradient averages to accommodate a fast streaming rate (or, equivalently, a slow communications rate), and then they collaboratively compute approximate network subgradient averages via distributed consensus. Finally, nodes individually employ mirror descent and accelerated mirror descent, respectively, on the approximate averaged subgradients for the next set of iterates.\n\nOur main theoretical contribution is the derivation of \\textit{upper} bounds on the convergence rates of D-SAMD and AD-SAMD. These bounds involve a careful analysis of the impact of imperfect subgradient averaging on individual nodes' iterates. In addition, we derive sufficient conditions for order-optimum convergence of D-SAMD and AD-SAMD in terms of the streaming and communications rates, the size and topology of the network, and the data statistics.\n\nTwo key findings of this paper are that distributed methods can achieve order-optimum convergence with small communication rates, as long as the number of nodes in the network does not grow too quickly as a function of the number of data samples each node processes, and that accelerated methods seem to offer order-optimum convergence in a larger regime than D-SAMD, thus potentially accommodating slower communications links relative to the streaming rate. By contrast, the convergence speeds of {\\em centralized} stochastic mirror descent and accelerated stochastic mirror descent typically differ only in higher-order terms. We hasten to point out that we do {\\em not} claim superior performance of D-SAMD and AD-SAMD versus other distributed methods. Instead, the larger goal is to establish the existence of methods for order-optimum stochastic learning in the fast-streaming, rate-limited regimes. D-SAMD and AD-SAMD should be best regarded as a proof of concept towards this end.\n\n\\subsection{Notation and Organization}\nWe typically use boldfaced lowercase and boldfaced capital letters (e.g., $\\mathbf{x}$ and $\\mathbf{W}$) to denote (possibly random) vectors and matrices, respectively. Unless otherwise specified, all vectors are assumed to be column vectors. We use $(\\cdot)^T$ to denote the transpose operation and $\\mathbf{1}$ to denote the vector of all ones. 
Further, we denote the expectation operation by $E[\\cdot]$ and the field of real numbers by $\\mathbb{R}$. We use $\\nabla$ to denote the gradient operator, while $\\odot$ denotes the Hadamard product. Finally, given two functions $p(r)$ and $q(r)$, we write $p(r) = O(q(r))$ if there exists a constant $C$ such that $\\forall r, p(r) \\leq C q(r)$, and we write $p(r) = \\Omega(q(r))$ if $q(r) = O(p(r))$.\n\nThe rest of this paper is organized as follows. In Section~\\ref{sect:setting}, we formalize the problem of distributed stochastic composite optimization. In Sections~\\ref{sect:mirror.descent} and \\ref{sect:accelerated.mirror.descent}, we describe D-SAMD and AD-SAMD, respectively, and also derive performance guarantees for these two methods. We examine the empirical performance of the proposed methods via numerical experiments in Section~\\ref{sect:numerical}, and we conclude the paper in Section~\\ref{sect:conclusion}. Proofs are provided in the appendix.\n\n\\section{Problem Formulation}\\label{sect:setting}\nThe objective of this paper is order-optimal, distributed minimization of the composite function\n\\begin{equation}\n\t\\psi(\\mathbf{x}) = f(\\mathbf{x}) + h(\\mathbf{x}),\n\\end{equation}\nwhere $\\mathbf{x} \\in X \\subset \\mathbb{R}^n$ and $X$ is convex and compact. The space $\\mathbb{R}^n$ is endowed with an inner product $\\langle \\cdot , \\cdot \\rangle$ that need not be the usual one and a norm $\\norm{\\cdot}$ that need not be the one induced by the inner product. In the following, the minimizer and the minimum value of $\\psi$ are denoted as:\n\\begin{equation}\n\t\\mathbf{x}^* \\triangleq \\arg\\min_{\\mathbf{x} \\in X} \\psi(\\mathbf{x}), \\quad \\text{and} \\quad \\psi^* \\triangleq \\psi(\\mathbf{x}^*).\n\\end{equation}\n\nWe now make a few assumptions on the smooth ($f(\\cdot)$) and non-smooth ($h(\\cdot)$) components of $\\psi$. 
The function $f: X \\to \\mathbb{R}$ is convex with Lipschitz continuous gradients, i.e.,\n\\begin{equation}\n\t\\norm{\\nabla f(\\mathbf{x}) - \\nabla f(\\mathbf{y})}_* \\leq L\\norm{\\mathbf{x} - \\mathbf{y}}, \\ \\forall \\ \\mathbf{x},\\mathbf{y} \\in X,\n\\end{equation}\nwhere $\\norm{\\cdot}_*$ is the dual norm associated with $\\langle \\cdot, \\cdot \\rangle$ and $\\norm{\\cdot}$:\n\\begin{equation}\n\t\\norm{\\mathbf{g}}_* \\triangleq \\sup_{\\norm{\\mathbf{x}} \\leq 1} \\langle \\mathbf{g}, \\mathbf{x} \\rangle.\n\\end{equation}\nThe function $h: X \\to \\mathbb{R}$ is convex and Lipschitz continuous:\n\\begin{equation}\n\t|h(\\mathbf{x}) - h(\\mathbf{y})| \\leq \\mathcal{M}\\norm{\\mathbf{x} - \\mathbf{y}}, \\forall \\ \\mathbf{x},\\mathbf{y} \\in X.\n\\end{equation}\nNote that $h$ need not have gradients; however, since it is convex we can consider its {\\em subdifferential}, denoted by $\\partial h(\\mathbf{y})$:\n\\begin{equation}\n\t\\partial h(\\mathbf{y}) = \\{\\mathbf{g}: h(\\mathbf{z}) \\geq h(\\mathbf{y}) + \\mathbf{g}^T(\\mathbf{z} - \\mathbf{y}), \\forall \\ \\mathbf{z} \\in X\\}.\n\\end{equation}\n\nAn important fact that will be used in this paper is that the \\emph{subgradient} $\\mathbf{g} \\in \\partial h$ of a Lipschitz-continuous convex function $h$ is bounded~\\cite[Lemma~2.6]{Shalev-Shwartz.Book2012}:\n\\begin{equation}\n\t\\norm{\\mathbf{g}}_* \\leq \\mathcal{M}, \\ \\forall \\mathbf{g} \\in \\partial h(\\mathbf{y}), \\ \\mathbf{y} \\in X.\n\\end{equation}\nConsequently, the gap between the subgradients of $\\psi$ is bounded: $\\forall \\mathbf{x},\\mathbf{y} \\in X$ and $\\mathbf{g}_\\mathbf{x} \\in \\partial h(\\mathbf{x})$, $\\mathbf{g}_\\mathbf{y} \\in \\partial h(\\mathbf{y})$, we have\n\\begin{align}\n\t\\norm{\\partial \\psi(\\mathbf{x}) - \\partial \\psi(\\mathbf{y})}_* &= \\norm{\\nabla f(\\mathbf{x}) - \\nabla f(\\mathbf{y}) + \\mathbf{g}_\\mathbf{x} - \\mathbf{g}_\\mathbf{y}}_* \\notag \\\\\n &\\leq \\norm{\\nabla f(\\mathbf{x}) - \\nabla f(\\mathbf{y})}_* + \\norm{\\mathbf{g}_\\mathbf{x} - \\mathbf{g}_\\mathbf{y}}_* \\notag\\\\\n &\\leq L\\norm{\\mathbf{x} - \\mathbf{y}} + 2\\mathcal{M}. \\label{eqn:subgradient.bound}\n\\end{align}\n\n\\subsection{Distributed Stochastic Composite Optimization}\nOur focus in this paper is minimization of $\\psi(\\mathbf{x})$ over a network of $m$ nodes, represented by the undirected graph $G=(V,E)$. To this end, we suppose that nodes minimize $\\psi$ collaboratively by exchanging subgradient information with their neighbors at each communications round. Specifically, each node $i \\in V$ transmits a message at each communications round to each of its neighbors, defined as\n\\begin{equation}\n \\mathcal{N}_i = \\{j \\in V: (i,j) \\in E\\},\n\\end{equation}\nwhere we suppose that a node is in its own neighborhood, i.e., $i \\in \\mathcal{N}_i$. We assume that this message passing between nodes takes place without any error or distortion. Further, we constrain the messages between nodes to be members of the dual space of $X$ and to satisfy causality; i.e., messages transmitted by a node can depend only on its local data and previous messages received from its neighbors.\n\nNext, in terms of data generation, we suppose that each node $i \\in V$ queries a first-order stochastic ``oracle'' at a fixed rate---which may be different from the rate of message exchange---to obtain noisy estimates of the subgradient of $\\psi$ at different query points in $X$. 
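\nAs a concrete illustration of this oracle model (an example chosen for exposition, not an assumption made in this paper), suppose that $\\phi(\\mathbf{x},\\xi) = \\tfrac{1}{2}(\\langle \\mathbf{a}(\\xi), \\mathbf{x}\\rangle - b(\\xi))^2 + \\lambda\\norm{\\mathbf{x}}_1$ for bounded features $\\mathbf{a}(\\xi)$, bounded labels $b(\\xi)$, and a regularization weight $\\lambda \\geq 0$, so that $f$ is the expected squared loss and $h(\\mathbf{x}) = \\lambda\\norm{\\mathbf{x}}_1$. Upon acquiring a sample $\\xi$, a node can return\n\\begin{equation}\nG(\\mathbf{x},\\xi) = \\mathbf{a}(\\xi)\\big(\\langle \\mathbf{a}(\\xi), \\mathbf{x}\\rangle - b(\\xi)\\big) + \\lambda\\,\\mathrm{sign}(\\mathbf{x}),\n\\end{equation}\nwhich is an unbiased estimate of a subgradient of $\\psi$ at $\\mathbf{x}$ and has bounded variance whenever $\\mathbf{a}(\\xi)$, $b(\\xi)$, and $X$ are bounded.\n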
Formally, we use `$t$' to index time according to {\\em data-acquisition} rounds and define $\\{\\xi_i(t) \\in \\Upsilon\\}_{t \\geq 1}$ to be a sequence of independent (with respect to $i$ and $t$) and identically distributed (i.i.d.) random variables with unknown probability distribution $P(\\xi)$. At each data-acquisition round $t$, node $i$ queries the oracle at search point $\\mathbf{x}_i(s)$ to obtain a point $G(\\mathbf{x}_i(s),\\xi_i(t))$ that is a noisy version of the subgradient of $\\psi$ at $\\mathbf{x}_i(s)$. Here, we use `$s$' to index time according to \\emph{search-point update} rounds, with possibly multiple data-acquisition rounds per search-point update. The reason for allowing the search-point update index $s$ to be different from the data-acquisition index $t$ is to accommodate the setting in which data (equivalently, subgradient estimates) arrive at a much faster rate than the rate at which nodes can communicate with each other; we will elaborate further on this in the next subsection.\n\nFormally, $G(\\mathbf{x},\\xi)$ is a Borel function that satisfies the following properties:\n\\begin{align}\n\tE[G(\\mathbf{x},\\xi)] &\\triangleq \\mathbf{g}(\\mathbf{x}) \\in \\partial \\psi(\\mathbf{x}), \\quad \\text{and}\\\\\n E[\\norm{G(\\mathbf{x},\\xi) - \\mathbf{g}(\\mathbf{x})}_*^2] &\\leq \\sigma^2,\n\\end{align}\nwhere the expectation is with respect to the distribution $P(\\xi)$. We emphasize that this formulation is equivalent to that in which the objective function is $\\psi(\\mathbf{x}) \\triangleq E[\\phi(\\mathbf{x},\\xi)]$, and where nodes in the network acquire data point $\\{\\xi_i(t)\\}_{i\\in V}$ at each data-acquisition round $t$ that are then used to compute the subgradients of $\\phi(\\mathbf{x},\\xi_i(t))$, which---in turn---are noisy subgradients of $\\psi(\\mathbf{x})$.\n\n\\subsection{Mini-batching for Rate-Limited Networks}\nA common technique to reduce the variance of the (sub)gradient noise and\/or reduce the computational burden in centralized SO is to average ``batches'' of oracle outputs into a single (sub)gradient estimate. This technique, which is referred to as \\emph{mini-batching}, is also used in this paper; however, its purpose in our distributed setting is to both reduce the subgradient noise variance \\emph{and} manage the potential mismatch between the communications rate and the data streaming rate. Before delving into the details of our mini-batch strategy, we present a simple model to parametrize the mismatch between the two rates. Specifically, let $\\rho >0$ be the {\\em communications ratio}, i.e. the fixed ratio between the rate of communications and the rate of data acquisition. That is, $\\rho \\geq 1$ implies nodes engage in $\\rho$ rounds of message exchanges for every data-acquisition round. Similarly, $\\rho < 1$ means there is one communications round for every $1\/\\rho$ data-acquisition rounds. We ignore rounding issues for simplicity.\n\nThe mini-batching in our distributed problem proceeds as follows. Each mini-batch round spans $b \\geq 1$ data-acquisition rounds and coincides with the search-point update round, i.e., each node $i$ updates its search point at the end of a mini-batch round. 
In each mini-batch round $s$, each node $i$ uses its current search point $\\mathbf{x}_i(s)$ to compute an average of oracle outputs\n\\begin{equation}\n\t\\theta_i(s) = \\frac{1}{b}\\sum_{t = (s-1)b +1}^{sb} G(\\mathbf{x}_i(s),\\xi_i(t)).\n\\end{equation}\nThis is followed by each node computing a new search point $\\mathbf{x}_i(s+1)$ using $\\theta_i(s)$ and messages received from its neighbors.\n\nIn order to analyze the mini-batching distributed SA techniques proposed in this work, we need to generalize the usual averaging property of variances to non-Euclidean norms.\n\\begin{lemma}\\label{lem:average.variance}\n\tLet $\\mathbf{z}_1,\\dots,\\mathbf{z}_k$ be i.i.d. random vectors in $\\mathbb{R}^n$ with $E[\\mathbf{z}_i] = 0$ and $E[\\norm{\\mathbf{z}_i}^2_*] \\leq \\sigma^2$. There exists a constant $C_* \\geq 0$, which depends only on $\\norm{\\cdot}$ and $\\langle \\cdot, \\cdot \\rangle$, such that\n \\begin{equation}\n \tE\\left[\\norm{\\frac{1}{k}\\sum_{i=1}^k \\mathbf{z}_i }_*^2\\right] \\leq \\frac{C_* \\sigma^2}{k}.\n \\end{equation}\n\\end{lemma}\n\\begin{IEEEproof}\n\tThis follows directly from the property of norm equivalence in finite-dimensional spaces.\n\\end{IEEEproof}\nIn order to illustrate Lemma~\\ref{lem:average.variance}, notice that when $\\norm{\\cdot} = \\norm{\\cdot}_1$, i.e., the $\\ell_1$ norm, and $\\langle \\cdot, \\cdot \\rangle$ is the standard inner product, the associated dual norm is the $\\ell_\\infty$ norm: $\\norm{\\cdot}_* = \\norm{\\cdot}_\\infty$. Since $\\norm{\\mathbf{x}}^2_\\infty \\leq \\norm{\\mathbf{x}}^2_2 \\leq n\\norm{\\mathbf{x}}_\\infty^2$, we have $C_* = n$ in this case. Thus, depending on the norm in use, the extent to which averaging reduces subgradient noise variance may depend on the dimension of the optimization space.\n\nIn the following, we will use the notation $\\mathbf{z}_i(s) \\triangleq \\theta_i(s) - \\mathbf{g}(\\mathbf{x}_i(s))$. Then, $E[\\norm{\\mathbf{z}_i(s)}_*^2] \\leq C_*\\sigma^2\/b$. We emphasize that the subgradient noise vectors $\\mathbf{z}_i(s)$ depend on the search points $\\mathbf{x}_i(s)$; we suppress this notation for brevity.\n\n\\subsection{Problem Statement}\nIt is straightforward to see that mini-batching induces a performance trade-off: Averaging reduces subgradient noise and processing time, but it also reduces the rate of search-point updates (and hence slows down convergence). This trade-off depends on the relationship between the streaming and communications rates. In order to carry out distributed SA in an order-optimal manner, we will require that the nodes collaborate by carrying out $r \\geq 1$ rounds of averaging consensus on their mini-batch averages $\\theta_i(s)$ in each mini-batch round $s$ (see Section~\\ref{sect:mirror.descent} for details). In order to complete the $r$ communication rounds in time for the next mini-batch round, we have the constraint\n\\begin{equation}\n\tr \\leq b \\rho.\n\\end{equation}\nIf communications is faster, or if the mini-batch rounds are longer, nodes can fit in more rounds of information exchange between each mini-batch round or, equivalently, between each search-point update. But when the mismatch factor $\\rho$ is small, the mini-batch size $b$ needed to enable sufficiently many consensus rounds may be so large that the reduction in subgradient noise is outstripped by the reduction in search-point updates and the resulting convergence speed is sub-optimum. 
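To make the timing bookkeeping concrete, the following sketch (reusing the hypothetical oracle above and ignoring rounding, as in the text) forms the mini-batch average $\\theta_i(s)$ from $b$ oracle outputs and computes the largest number of consensus rounds compatible with $r \\leq b\\rho$:
\\begin{verbatim}
import numpy as np

def minibatch_average(oracle, x, b):
    # theta_i(s): average of b oracle outputs at the same search point x_i(s)
    return np.mean([oracle(x) for _ in range(b)], axis=0)

def max_consensus_rounds(b, rho):
    # largest integer r with r <= b*rho (message exchanges per mini-batch round)
    return int(np.floor(b * rho))

# With rho = 1/2 and b = 4 (the values used in Figure 1), at most r = 2
# consensus rounds fit into each mini-batch round.
assert max_consensus_rounds(4, 0.5) == 2
\\end{verbatim}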
In this context, our main goal is the specification of sufficient conditions on $\\rho$ such that the resulting convergence speeds of the proposed distributed SA techniques are order-optimum.\n\n\\section{Distributed Stochastic Approximation\\\\Mirror Descent}\\label{sect:mirror.descent}\nIn this section we present our first distributed SA algorithm, called \\emph{distributed stochastic approximation mirror descent} (D-SAMD). This algorithm is based upon stochastic approximation mirror descent, which is a generalized version of stochastic subgradient descent. Before presenting D-SAMD, we review a few concepts that underlie mirror descent.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=0.95\\textwidth]{figures\/timing_diag.png}\n\t\\caption{The different time counters for the rate-limited framework of this paper, here with $\\rho = 1\/2$, $b=4$, and $r=2$. In this particular case, over $T=8$ total data-acquisition rounds, each node receives $8$ data samples and computes $8$ (sub)gradients; it averages those (sub)gradients into $S=2$ mini-batch (sub)gradients; it then engages in $r=2$ rounds of consensus averaging to produce $S=2$ {\\em locally averaged} (sub)gradients; each of those (sub)gradients is finally used to update the search points twice, once for each $1 \\leq s \\leq S$. Note that, while not explicitly shown in the figure, the search point $\\mathbf{x}_i(1)$ is used for all computations spanning the data-acquisition rounds $5 \\leq t \\leq 8$.}\\label{fig:timing_diag}\n\\end{figure*}\n\n\\subsection{Stochastic Mirror Descent Preliminaries}\nStochastic mirror descent, presented in \\cite{Lan.MP12}, is a generalization of stochastic subgradient descent. This generalization is characterized by a {\\em distance-generating function} $\\omega: X \\to \\mathbb{R}$ that generalizes the Euclidean norm. The distance-generating function must be continuously differentiable and strongly convex with modulus $\\alpha$, i.e.,\n\\begin{equation}\n\t\\langle \\nabla \\omega(\\mathbf{x}) - \\nabla \\omega(\\mathbf{y}), \\mathbf{x} - \\mathbf{y} \\rangle \\geq \\alpha \\norm{\\mathbf{x} - \\mathbf{y}}^2, \\forall \\ \\mathbf{x},\\mathbf{y} \\in X.\n\\end{equation}\nIn the convergence analysis, we will require two measures of the ``radius'' of $X$, defined as follows:\n\\begin{equation*}\n\tD_\\omega \\triangleq \\sqrt{\\max_{\\mathbf{x} \\in X}\\omega(\\mathbf{x}) - \\min_{\\mathbf{x} \\in X} \\omega(\\mathbf{x})}, \\quad \\Omega_\\omega \\triangleq \\sqrt{\\frac{2}{\\alpha} D_\\omega}.\n\\end{equation*}\n\nThe distance-generating function induces the {\\em prox function}, or the Bregman divergence $V : X \\times X \\to \\mathbb{R}_+$, which generalizes the Euclidean distance:\n\\begin{equation}\n\tV(\\mathbf{x},\\mathbf{z}) = \\omega(\\mathbf{z}) - (\\omega(\\mathbf{x}) + \\langle \\nabla \\omega(\\mathbf{x}), \\mathbf{z}-\\mathbf{x} \\rangle).\n\\end{equation}\nThe prox function $V(\\mathbf{x},\\cdot)$ inherits strong convexity from $\\omega(\\cdot)$, but it need not be symmetric or satisfy the triangle inequality. 
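As a quick illustration (a sketch assuming NumPy; the function names are ours), the following code evaluates $V(\\mathbf{x},\\mathbf{z})$ for two standard choices of $\\omega$: the Euclidean choice, which recovers half the squared Euclidean distance, and the negative-entropy choice on the unit simplex, which recovers the Kullback-Leibler divergence discussed below:
\\begin{verbatim}
import numpy as np

def bregman(omega, grad_omega, x, z):
    # V(x,z) = omega(z) - omega(x) - <grad omega(x), z - x>
    return omega(z) - omega(x) - grad_omega(x) @ (z - x)

# Euclidean choice: omega(x) = 0.5*||x||_2^2 gives V(x,z) = 0.5*||z - x||_2^2.
omega_euc, grad_euc = lambda x: 0.5 * x @ x, lambda x: x

# Entropy choice: omega(x) = sum_i x_i log x_i gives the K-L divergence D(z||x)
# for strictly positive x, z on the unit simplex.
omega_ent, grad_ent = lambda x: np.sum(x * np.log(x)), lambda x: np.log(x) + 1.0

x, z = np.array([0.2, 0.3, 0.5]), np.array([0.1, 0.6, 0.3])
assert np.isclose(bregman(omega_euc, grad_euc, x, z), 0.5 * np.sum((z - x) ** 2))
assert np.isclose(bregman(omega_ent, grad_ent, x, z), np.sum(z * np.log(z / x)))
\\end{verbatim}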
We define the {\\em prox mapping} $P_{\\mathbf{x}}: \\mathbb{R}^n \\to X$ as\n\\begin{equation}\\label{eqn:prox.mapping}\n\tP_\\mathbf{x}(\\mathbf{y}) = \\argmin_{\\mathbf{z} \\in X} \\langle \\mathbf{y}, \\mathbf{z}-\\mathbf{x} \\rangle + V(\\mathbf{x},\\mathbf{z}).\n\\end{equation}\nThe prox mapping generalizes the usual subgradient descent step, in which one minimizes the local linearization of the objective function regularized by the Euclidean distance of the step taken. In (centralized) stochastic mirror descent, one computes iterates of the form\n\\begin{align*}\n\t\\mathbf{x}(s+1) &= P_{\\mathbf{x}(s)}(\\gamma_s \\mathbf{g}(s))\n\\end{align*}\nwhere $\\mathbf{g}(s)$ is a stochastic subgradient of $\\psi(\\mathbf{x}(s))$, and $\\gamma_s$ is a step size. These iterates have the same form as (stochastic) subgradient descent; indeed, choosing $\\omega(\\mathbf{x}) = \\frac{1}{2}\\norm{\\mathbf{x}}^2_2$ as well as $\\langle \\cdot, \\cdot \\rangle$ and $\\norm{\\cdot}$ to be the usual ones results in subgradient descent iterations.\n\nOne can speed up convergence by choosing $\\omega(\\cdot)$ to match the structure of $X$ and $\\psi$. For example, if the optimization space $X$ is the unit simplex over $\\mathbb{R}^n$, one can choose $\\omega(\\mathbf{x}) = \\sum_i x_i \\log(x_i)$ and $\\norm{\\cdot}$ to be the $\\ell_1$ norm. This leads to $V(\\mathbf{x},\\mathbf{z}) = D(\\mathbf{z} || \\mathbf{x})$, where $D(\\cdot || \\cdot)$ denotes the Kullback-Leibler (K-L) divergence between $\\mathbf{z}$ and $\\mathbf{x}$. %\nThis choice speeds up convergence on the order of $O\\left(\\sqrt{n\/\\log(n)}\\right)$ over using the Euclidean norm throughout. Along a similar vein, when $\\psi$ includes an $\\ell_1$ regularizer to promote a sparse minimizer, one can speed up convergence by choosing $\\omega(\\cdot)$ to be a $p$-norm with $p = \\log(n)\/(\\log(n)-1)$.\n\nIn order to guarantee convergence for D-SAMD, we need to restrict further the distance-generating function $\\omega(\\cdot)$. In particular, we require that the resulting prox mapping be 1-Lipschitz continuous in $\\mathbf{x},\\mathbf{y}$ pairs, i.e., $\\forall \\ \\mathbf{x},\\mathbf{x}^\\prime,\\mathbf{y},\\mathbf{y}^\\prime \\in \\mathbb{R}^n$,\n\\begin{equation*}\n\t\\norm{P_{\\mathbf{x}}(\\mathbf{y}) - P_{\\mathbf{x}^\\prime}(\\mathbf{y}^\\prime)} \\leq \\norm{\\mathbf{x}-\\mathbf{x}^\\prime} + \\norm{\\mathbf{y} - \\mathbf{y}^\\prime}.\n\\end{equation*}\nThis condition is in addition to the conditions one usually places on the Bregman divergence for stochastic optimization; we will use it to guarantee that imprecise gradient averages make a bounded perturbation in the iterates of stochastic mirror descent. The condition holds whenever the prox mapping is the projection of a 1-Lipschitz function of $\\mathbf{x},\\mathbf{y}$ onto $X$. For example, it is easy to verify that this condition holds in the Euclidean setting. One can also show that when the distance-generating function $\\omega(\\mathbf{x})$ is an $\\ell_p$ norm for $p > 1$, the resulting prox mapping is 1-Lipschitz continuous in $\\mathbf{x}$ and $\\mathbf{y}$ as required.\n\nHowever, not all Bregman divergences satisfy this condition. One can show that the K-L divergence results in a prox mapping that is not Lipschitz. 
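In the Euclidean setting, by contrast, the prox mapping with $X$ taken to be a Euclidean ball and $\\alpha=1$ reduces to a projection of $\\mathbf{x}-\\mathbf{y}$ onto $X$, and the 1-Lipschitz condition can be spot-checked numerically (a minimal sketch of the Euclidean case only, assuming NumPy), since projections onto convex sets are nonexpansive:
\\begin{verbatim}
import numpy as np

def prox_euclidean(x, y, radius=1.0):
    # P_x(y) = argmin_z <y, z - x> + 0.5*||z - x||_2^2 over X = {||z||_2 <= radius},
    # i.e. the Euclidean projection of x - y onto X.
    z = x - y
    nz = np.linalg.norm(z)
    return z if nz <= radius else radius * z / nz

rng = np.random.default_rng(0)
for _ in range(1000):
    x, xp, y, yp = (rng.standard_normal(5) for _ in range(4))
    lhs = np.linalg.norm(prox_euclidean(x, y) - prox_euclidean(xp, yp))
    assert lhs <= np.linalg.norm(x - xp) + np.linalg.norm(y - yp) + 1e-12
\\end{verbatim}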
Consequently, while we present our results in terms of general prox functions, we emphasize that the results do not apply in all cases.\\footnote{We note further that it is possible to relax the constraint that the best Lipschitz constant be no larger than unity. This worsens the scaling laws---in particular, the required communications ratio $\\rho$ grows in $T$ rather than decreases---and we omit this case for brevity's sake.} One can think of the results primarily in the setting of Euclidean (accelerated) stochastic subgradient descent---for which case they are guaranteed to hold---with the understanding that one can check on a case-by-case basis to see if they hold for a particular non-Euclidean setting.\n\n\\subsection{Description of D-SAMD}\\label{sect:dsamd.description}\nHere we present in detail D-SAMD, which generalizes stochastic mirror descent to the setting of distributed, streaming data. In D-SAMD, nodes carry out iterations similar to stochastic mirror descent as presented in \\cite{Lan.MP12}, but instead of using local stochastic subgradients associated with the local search points, they carry out approximate consensus to estimate the {\\em average} of stochastic subgradients across the network. This reduces the subgradient noise at each node and speeds up convergence.\n\nLet $\\mathbf{W}$ be a symmetric, doubly-stochastic matrix consistent with the network graph $G$, i.e., $[\\mathbf{W}]_{ij} \\triangleq w_{ij}= 0$ if $(i,j) \\notin E$. Further suppose that $\\mathbf{W} - \\mathbf{1}\\mathbf{1}^T\/n$ has spectral radius strictly less than one, i.e. the second-largest eigenvalue magnitude is strictly less than one. This condition is guaranteed by choosing the diagonal elements of $\\mathbf{W}$ to be strictly greater than zero.\n\nNext, we focus on the case of constant step size $\\gamma$ and set it as $0 < \\gamma \\leq \\alpha\/(2L)$.\\footnote{It is shown in \\cite{Lan.MP12} that a constant step size is sufficient for order-optimal performance, so we adopt such a rule here.} For simplicity, we suppose that there is a predetermined number of data-acquisition rounds $T$, which leads to $S=T\/b$ mini-batch rounds. We detail the steps of D-SAMD in Algorithm \\ref{alg:standard}. 
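To complement Algorithm~\\ref{alg:standard}, the following sketch shows one mini-batch round of D-SAMD at a glance; the Metropolis-type construction of $\\mathbf{W}$ and the Euclidean prox mapping are illustrative assumptions rather than requirements of the analysis:
\\begin{verbatim}
import numpy as np

def metropolis_weights(adj):
    # One standard way to obtain a symmetric, doubly-stochastic W that is
    # consistent with the graph (0/1 adjacency, no self-loops) and has a
    # strictly positive diagonal.
    deg = adj.sum(axis=1)
    W = np.where(adj > 0, 1.0 / (1.0 + np.maximum.outer(deg, deg)), 0.0)
    W += np.diag(1.0 - W.sum(axis=1))
    return W

def dsamd_round(W, X_pts, oracle, b, r, gamma, prox):
    # One mini-batch round: local mini-batch averages, r consensus rounds,
    # then a prox step at every node (cf. Algorithm 1).
    H = np.stack([np.mean([oracle(x) for _ in range(b)], axis=0) for x in X_pts])
    for _ in range(r):
        H = W @ H                          # h_i <- sum_j w_ij h_j
    return [prox(x, gamma * h) for x, h in zip(X_pts, H)]
\\end{verbatim}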
Further, in Figure \\ref{fig:timing_diag} we illustrate the data acquisition round, mini-batch round, communication round, and search point update counters and their role in the D-SAMD algorithm.\n\n\\begin{algorithm}[t]\n \\caption{Distributed stochastic approximation mirror descent (D-SAMD)\n \\label{alg:standard}}\n \\begin{algorithmic}[1]\n \\Require Doubly-stochastic matrix $\\mathbf{W}$, step size $\\gamma$, number of consensus rounds $r$, batch size $b$, and stream of mini-batched subgradients $\\theta_i(s)$.\n \\For{$i=1:m$}\n \t\\State $\\mathbf{x}_i(1) \\gets \\min_{\\mathbf{x} \\in X} \\omega(\\mathbf{x})$ \\Comment{Initialize search points}\n \\EndFor\n \\For {$s=1:S$}\n \\State $\\mathbf{h}_i^0(s) \\gets \\theta_i(s)$ \\Comment{Get mini-batched subgradients}\n \\For{$q=1:r$, $i=1:m$}\n \t\\State $\\mathbf{h}_i^q(s) \\gets \\sum_{j \\in \\mathcal{N}_i} w_{ij}\\mathbf{h}_j^{q-1}(s)$ \\Comment{Consensus rounds}\n \\EndFor\n \\For{$i=1:m$}\n \t\\State $\\mathbf{x}_i(s+1) \\gets P_{\\mathbf{x}_i(s)}(\\gamma \\mathbf{h}_i^r(s))$ \\Comment{Prox mapping}\n \\State $\\mathbf{x}_i^\\mathrm{av}(s+1) \\gets \\frac{1}{s}\\sum_{k=1}^s\\mathbf{x}_i(k)$ \\Comment{Average iterates}\n \\EndFor\n \\EndFor\n \\end{algorithmic}\n \\Return $\\mathbf{x}_i^{\\mathrm{av}}(S+1), i=1,\\dots,m.$\n\\end{algorithm}\n\nIn D-SAMD, each node $i$ initializes its iterate at the minimizer of $\\omega(\\cdot)$, which is guaranteed to be unique due to strong convexity. At each mini-batch round $s$, each node $i$ obtains its mini-batched subgradient and nodes engage in $r$ rounds of averaging consensus to produce the (approximate) average subgradients $\\mathbf{h}_i^r(s)$. Then, each node $i$ takes a mirror prox step, using $\\mathbf{h}^r_i(s)$ instead of its own mini-batched estimate. Finally, each node keeps a running average of its iterates, which is well-known to speed up convergence \\cite{Polyak.Juditsky.JCO1992}.\n\n\\subsection{Convergence Analysis}\nThe convergence rate of D-SAMD depends on the bias and variance of the approximate subgradient averages $\\mathbf{h}_i^r(s)$. In principle, averaging subgradients together reduces the noise variance and speeds up convergence. However, because averaging consensus using only $r$ communications rounds results in {\\em approximate} averages, each node takes a slightly different mirror prox step and therefore ends up with a different iterate. At each mini-batch round $s$, nodes then compute subgradients at different search points, leading to bias in the averages $\\mathbf{h}^r_i(s)$. This bias accumulates at a rate that depends on the subgradient noise variance, the topology of the network, and the number of consensus rounds per mini-batch round.\n\nTherefore, the first step in bounding the convergence speed of D-SAMD is to bound the bias and the variance of the subgradient estimates $\\mathbf{h}^r_i(s)$, which we do in the following lemma.\n\\begin{lemma}\\label{lem:consensus.error.norm}\n\tLet $0 \\leq \\lambda_2 < 1$ denote the magnitude of the second-largest (ordered by magnitude) eigenvalue of $\\mathbf{W}$. 
Define the matrices\n \\begin{align*}\n \t\\mathbf{H}(s) &\\triangleq [\\mathbf{h}_1^r(s), \\dots, \\mathbf{h}_m^r(s)], \\\\\n \\mathbf{G}(s) &\\triangleq [\\mathbf{g}(\\mathbf{x}_1(s)), \\dots, \\mathbf{g}(\\mathbf{x}_m(s))], \\text{ and} \\\\\n \\mathbf{Z}(s) &\\triangleq [\\mathbf{z}_1(s), \\dots, \\mathbf{z}_m(s)],\n \\end{align*}\n recalling that the subgradient noise $\\mathbf{z}_i(s)$ is defined with respect to the mini-batched subgradient $\\theta_i(s)$. Also define\n \\begin{align*}\n \t\\overline{\\mathbf{g}}(s) &\\triangleq \\frac{1}{m}\\sum_{i=1}^m \\mathbf{g}(\\mathbf{x}_i(s)), \\quad \\overline{\\mathbf{G}}(s) \\triangleq [\\overline{\\mathbf{g}}(s), \\dots, \\overline{\\mathbf{g}}(s)], \\text{ and}\\\\\n \\overline{\\mathbf{z}}(s) &\\triangleq \\frac{1}{m}\\sum_{i=1}^m\\mathbf{z}_i(s), \\quad \\overline{\\mathbf{Z}}(s) \\triangleq [\\overline{\\mathbf{z}}(s), \\dots, \\overline{\\mathbf{z}}(s)],\n \\end{align*}\n where the matrices $\\overline{\\mathbf{G}}(s), \\overline{\\mathbf{Z}}(s) \\in \\mathbb{R}^{n \\times m}$ have identical columns. Finally, define the matrices\n \\begin{align*}\n \t\\mathbf{E}(s) &\\triangleq \\mathbf{G}(s)\\mathbf{W}^r - \\overline{\\mathbf{G}}(s) \\text{ and} \\\\\n \\tilde{\\mathbf{Z}}(s) &\\triangleq \\mathbf{Z}(s)\\mathbf{W}^r - \\overline{\\mathbf{Z}}(s)\n \\end{align*}\n of average consensus error on the subgradients and subgradient noise, respectively. Then, the following facts are true. First, one can write $\\mathbf{H}(s)$ as\n \\begin{equation}\\label{eqn:gradient.decomposition}\n \t\\mathbf{H}(s) = \\overline{\\mathbf{G}}(s) + \\mathbf{E}(s) + \\overline{\\mathbf{Z}}(s) + \\tilde{\\mathbf{Z}}(s).\n \\end{equation}\n Second, the columns of $\\overline{\\mathbf{Z}}(s)$ satisfy\n \\begin{equation}\n \tE[\\norm{\\overline{\\mathbf{z}}(s)}_*^2] \\leq \\frac{C_*\\sigma^2}{mb}.\n \\end{equation}\n Finally, the $i$th columns of $\\mathbf{E}(s)$ and $\\tilde{\\mathbf{Z}}(s)$, denoted by $\\mathbf{e}_i(s)$ and $\\tilde{\\mathbf{z}}_i(s)$, respectively, satisfy\n \\begin{equation}\\label{eqn:gradient.average.norm}\n \t\\norm{\\mathbf{e}_i(s)}_* \\leq m^2\\sqrt{C_*}\\lambda_2^r \\max_{j,k} \\norm{\\mathbf{g}_j(s) - \\mathbf{g}_k(s)}_*\n \\end{equation}\n and\n \\begin{equation}\\label{eqn:average.gradient.noise.variance}\n \tE[\\norm{\\tilde{\\mathbf{z}}_i(s)}_*^2] \\leq \\frac{\\lambda_2^{2r}m^2 C_* \\sigma^2}{b},\n \\end{equation}\n where we have used $\\mathbf{g}_j(s)$ as a shorthand for $\\mathbf{g}(\\mathbf{x}_j(s))$.\n\\end{lemma}\n\nThe next step in the convergence analysis is to bound the distance between iterates at different nodes. As long as iterates are not too far apart, the subgradients computed at different nodes have sufficiently similar means that averaging them together reduces the overall subgradient noise.\n\\begin{lemma}\\label{lem:iterate.gap}\n\tLet $a_s \\triangleq \\max_{i,j} \\norm{\\mathbf{x}_i(s) - \\mathbf{x}_j(s)}$. 
The moments of $a_s$ follow:\n \\begin{align}\n \tE[a_s] &\\leq \\frac{\\mathcal{M}+\\sigma\/\\sqrt{b}}{L}((1+\\alpha m^2 \\sqrt{C_*} \\lambda_2^r)^{s}-1), \\\\\n E[a_s^2] &\\leq \\frac{(\\mathcal{M}+\\sigma\/\\sqrt{b})^2}{L^2}((1+\\alpha m^2 \\sqrt{C_*} \\lambda_2^r)^{s}-1)^2.\n \\end{align}\n\\end{lemma}\n\nNow, we bound D-SAMD's expected gap to optimality.\n\\begin{theorem}\\label{thm:mirror.descent.convergence.rate}\n\tFor D-SAMD, the expected gap to optimality at each node $i$ satisfies\n \\begin{multline}\\label{eqn:DSAMD.convergence.rate}\n \tE[\\psi(\\mathbf{x}_i^{\\mathrm{av}}(S+1))] - \\psi^* \\leq \\\\ \\frac{2L\\Omega_\\omega^2}{\\alpha S} + \\sqrt{\\frac{2(4\\mathcal{M}^2 + 2\\Delta_S^2)}{\\alpha S}} + \\sqrt{\\frac{\\alpha}{2}}\\frac{\\Xi_S D_\\omega}{L},\n \\end{multline}\nwhere\n\\begin{align}\n\t\\Xi_s &\\triangleq \\left(\\mathcal{M}+\\frac{\\sigma}{\\sqrt{b}}\\right)(1+ m^2 \\sqrt{C_*} \\lambda_2^r)\\times\\nonumber\\\\\n&\\qquad\\qquad\\qquad ((1+\\alpha m^2 \\sqrt{C_*} \\lambda_2^r)^{s}-1) + 2\\mathcal{M}\n\\end{align}\nand\n\\begin{align}\n \\Delta_s^2 &\\triangleq 2 \\left(\\mathcal{M}+\\frac{\\sigma}{\\sqrt{b}}\\right)^2(1+m^4 C_* \\lambda_2^{2r})\\times\\nonumber\\\\\n &\\qquad\\qquad((1+\\alpha m^2 \\sqrt{C_*} \\lambda_2^r)^{s}-1)^2 + 4C_*\\sigma^2\/(mb) \\nonumber\\\\\n &\\qquad\\qquad\\qquad+ 4\\lambda_2^{2r}C_*\\sigma^2 m^2\/b +4\\mathcal{M}\n\\end{align}\nquantify the moments of the effective subgradient noise.\n\\end{theorem}\n\nThe convergence rate proven in Theorem \\ref{thm:mirror.descent.convergence.rate} is akin to that provided in \\cite{Lan.MP12}, with $\\Delta_s^2$ taking the role of the subgradient noise variance. A crucial difference is the presence of the final term involving $\\Xi_s$. In \\cite{Lan.MP12}, this term vanishes because the intrinsic subgradient noise has zero mean. However, the equivalent gradient error in D-SAMD does not have zero mean in general. As nodes' iterates diverge, their subgradients differ, and the nonlinear mapping between iterates and subgradients results in noise with nonzero mean.\n\nThe critical question is how fast communication needs to be for order-optimum convergence speed, i.e., the convergence speed that one would obtain if nodes had access to other nodes' subgradient estimates at each round. After $S$ mini-batch rounds, the network has processed $mT$ data samples. Centralized mirror descent, with access to all $mT$ data samples in sequence, achieves the convergence rate \\cite{Lan.MP12}\n\\begin{equation*}\n\tO(1)\\left[\\frac{L}{mT} + \\frac{\\mathcal{M} + \\sigma}{\\sqrt{mT}} \\right].\n\\end{equation*}\nThe final term dominates the error as a function of $m$ and $T$ if $\\sigma^2 > 0$. 
In the following corollary we derive conditions under which the convergence rate of D-SAMD matches this term.\n\\begin{corollary}\\label{cor:mirror.descent.consensus.rounds}\n\tThe optimality gap for D-SAMD satisfies\n \\begin{equation}\n E[\\psi(\\mathbf{x}_i^\\mathrm{av}(S+1))] - \\psi^* = O\\left(\\frac{\\mathcal{M} + \\sigma}{\\sqrt{mT}} \\right),\n \\end{equation}\n provided the mini-batch size $b$, the communications ratio $\\rho$, the number of users $m$, and the Lipschitz constant $\\mathcal{M}$ satisfy\n \\begin{align*}\n \tb &= \\Omega\\left(1 + \\frac{\\log(mT)}{\\rho\\log(1\/\\lambda_2)}\\right), \\quad b = O\\left(\\frac{\\sigma T^{1\/2}}{m^{1\/2}}\\right),\\\\\n \\rho &= \\Omega\\left(\\frac{m^{1\/2}\\log(mT)}{\\sigma T^{1\/2}\\log(1\/\\lambda_2)}\\right), \\quad T = \\Omega\\left(\\frac{m}{\\sigma^2}\\right), \\text{ and}\\\\\n \\mathcal{M} &= O\\left(\\min\\left\\{\\frac{1}{m},\\frac{1}{\\sqrt{ m \\sigma^2 T}}\\right\\}\\right).\n \\end{align*}\n\\end{corollary}\n\n\\subsection{Discussion}\nCorollary \\ref{cor:mirror.descent.consensus.rounds} gives new insights into influences of the communications and streaming rates, network topology, and mini-batch size on the convergence rate of distributed stochastic learning. In \\cite{Dekel.etal.JMLR2012}, a mini-batch size of $b=O(T^{1\/2})$ is prescribed---which is sufficient whenever gradient averages are perfect---and in \\cite{tsianos.rabbat.SIPN2016} the number of imperfect consensus rounds needed to facilitate the mini-batch size $b$ prescribed in \\cite{Dekel.etal.JMLR2012} is derived. By contrast, we derive a mini-batch condition sufficient to drive the effective noise variance to $O(\\sigma^2\/(mT))$ while taking into consideration the impact of imperfect subgradient averaging. This condition depends not only on $T$ but also on $m$, $\\rho$, $\\lambda_2$, and $\\sigma^2$---indeed, for all else constant, the optimum mini-batch size is merely $\\Omega(\\log(T))$. Then, the condition on $\\rho$ essentially ensures that $b = O(T^{1\/2})$ as specified in \\cite{Dekel.etal.JMLR2012}.\n\nWe note that Corollary \\ref{cor:mirror.descent.consensus.rounds} imposes a strict requirement on $\\mathcal{M}$, the Lipschitz constant of the non-smooth part of $\\psi$. Essentially the non-smooth part must vanish as $m$, $T$, or $\\sigma^2$ becomes large. This is because the contribution of $h(\\mathbf{x})$ to the convergence rate depends only on the number of iterations taken, not on the noise variance. Reducing the effective subgradient noise via mini-batching has no impact on this contribution, so we require the Lipschitz constant $\\mathcal{M}$ to be small to compensate.\n\nFinally, we note that Corollary \\ref{cor:mirror.descent.consensus.rounds} dictates the relationship between the size of the network and the number of data samples obtained at each node. Leaving the terms besides $m$ and $T$ constant, Corollary \\ref{cor:mirror.descent.consensus.rounds} requires $T = \\Omega(m)$, i.e. the number of nodes in the network should scale no faster than the number of data samples processed per node. This is a relatively mild condition for big data applications; many applications involve data streams that are large relative to the size of the network. 
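As a rough numerical illustration of these conditions (the absolute constants below are arbitrary, since Corollary~\\ref{cor:mirror.descent.consensus.rounds} only fixes the scaling), one can check the feasibility of the prescribed mini-batch size for given $(m,T,\\rho,\\lambda_2,\\sigma)$:
\\begin{verbatim}
import numpy as np

def dsamd_feasibility(m, T, rho, lam2, sigma, c=1.0):
    # Mini-batch size from the lower scaling b = Omega(1 + log(mT)/(rho*log(1/lam2)))
    b = int(np.ceil(c * (1.0 + np.log(m * T) / (rho * np.log(1.0 / lam2)))))
    r = int(np.floor(b * rho))            # consensus rounds fitting into one mini-batch
    b_max = sigma * np.sqrt(T / m)        # upper scaling b = O(sigma * sqrt(T/m))
    return b, r, b <= b_max

# Example: an expander-like topology (lambda_2 bounded away from one).
print(dsamd_feasibility(m=100, T=10**6, rho=0.5, lam2=0.5, sigma=1.0))
\\end{verbatim}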
Furthermore, ignoring the $\\log(mT)$ term and assuming $\\lambda_2$ and $\\sigma$ to be fixed, Corollary \\ref{cor:mirror.descent.consensus.rounds} indicates that a communications ratio of $\\rho = \\Omega\\big(\\sqrt{m\/T}\\big)$ is sufficient for order optimality; i.e., nodes need to communicate at least $\\Omega\\big(\\sqrt{m\/T}\\big)$ times per data sample. This means that if $T$ scales faster than $m$, then the required communications ratio approaches zero as $m,T \\to \\infty$. In particular, fast stochastic learning is possible in expander graphs, for which the spectral gap $1-\\lambda_2$ is bounded away from zero, even in communication rate-limited scenarios. For graph families that are poor expanders, however, the required communications ratio depends on the scaling of $\\lambda_2$ as a function of $m$.\n\n\\section{Accelerated Distributed Stochastic Approximation Mirror Descent}\\label{sect:accelerated.mirror.descent}\nIn this section, we present {\\em accelerated} distributed stochastic approximation mirror descent (AD-SAMD), which distributes the accelerated stochastic approximation mirror descent proposed in \\cite{Lan.MP12}. The centralized version of accelerated mirror descent achieves the optimum convergence rate of\n\\begin{equation*}\n\tO(1)\\left[\\frac{L}{T^2}+\\frac{\\mathcal{M} + \\sigma}{\\sqrt{T}} \\right].\n\\end{equation*}\nConsequently, we will see that the convergence rate of AD-SAMD has $1\/S^2$ as its first term. This faster convergence in $S$ allows for more aggressive mini-batching, and the resulting conditions for order-optimal convergence are less stringent.\n\n\\subsection{Description of AD-SAMD}\nThe setting for AD-SAMD is the same as in Section \\ref{sect:mirror.descent}. We again suppose a distance-generating function $\\omega: X \\to \\mathbb{R}$, its associated prox function\/Bregman divergence $V: X \\times X \\to \\mathbb{R}$, and the resulting (Lipschitz) prox mapping $P_x: \\mathbb{R}^n \\to X$.\n\nAs in Section \\ref{sect:dsamd.description}, we suppose a mixing matrix $\\mathbf{W} \\in \\mathbb{R}^{m \\times m}$ that is symmetric, doubly stochastic, consistent with $G$, and has nonzero spectral gap. The main distinction between accelerated and standard mirror descent is the way one averages iterates. Rather than simply average the sequence of iterates, one maintains several distinct sequences of iterates, carefully averaging them along the way. This involves two sequences of step sizes $\\beta_s \\in [1,\\infty)$ and $\\gamma_s \\in \\mathbb{R}$, which are not held constant. Again we suppose that the number of mini-batch rounds $S=T\/b$ is predetermined. 
We detail the steps of AD-SAMD in Algorithm \\ref{alg:accelerated}.\n\n\\begin{algorithm}[t]\n \\caption{Accelerated distributed stochastic approximation mirror descent (AD-SAMD)\n \\label{alg:accelerated}}\n \\begin{algorithmic}[1]\n \\Require Doubly-stochastic matrix $\\mathbf{W}$, step size sequences $\\gamma_s$, $\\beta_s$, number of consensus rounds $r$, batch size $b$, and stream of mini-batched subgradients $\\theta_i(s)$.\n \\For{$i=1:m$}\n \t\\State $\\mathbf{x}_i(1),\\mathbf{x}^\\mathrm{md}_i(1),\\mathbf{x}^\\mathrm{ag}_i(1) \\gets \\min_{\\mathbf{x} \\in X} \\omega(\\mathbf{x})$ \\Comment{Initialize search points}\n \\EndFor\n \\For {$s=1:S$}\n \\For {$i=1:m$}\n \t\\State $\\mathbf{x}_i^\\mathrm{md}(s) \\gets \\beta_s^{-1}\\mathbf{x}_i(s) + (1-\\beta^{-1}_s)\\mathbf{x}_i^\\mathrm{ag}(s)$\n \\State $\\mathbf{h}_i^0(s) \\gets \\theta_i(s)$ \\Comment{Get mini-batched subgradients}\n \\EndFor\n\n \\For{$q=1:r$, $i=1:m$}\n \t\\State $\\mathbf{h}_i^q(s) \\gets \\sum_{j \\in \\mathcal{N}_i} w_{ij}\\mathbf{h}_j^{q-1}(s)$ \\Comment{Consensus rounds}\n \\EndFor\n \\For{$i=1:m$}\n \t\\State $\\mathbf{x}_i(s+1) \\gets P_{\\mathbf{x}_i(s)}(\\gamma_s \\mathbf{h}_i^r(s))$ \\Comment{Prox mapping}\n \\State $\\mathbf{x}^\\mathrm{ag}_i(s+1) \\gets \\beta_s^{-1}\\mathbf{x}_i(s+1) + (1-\\beta_s^{-1})\\mathbf{x}_i^\\mathrm{ag}(s)$\n \\EndFor\n \\EndFor\n \\end{algorithmic}\n \\Return $\\mathbf{x}_i^{\\mathrm{ag}}(S+1), i=1,\\dots,m.$\n\\end{algorithm}\n\nThe sequences of iterates $\\mathbf{x}_i(s)$, $\\mathbf{x}_i^{\\mathrm{md}}(s)$, and $\\mathbf{x}^\\mathrm{ag}_i(s)$ are interrelated in complicated ways; we refer the reader to \\cite{Lan.MP12} for an intuitive explanation of these iterations.\n\n\\subsection{Convergence Analysis}\nAs with D-SAMD, the convergence analysis relies on bounds on the bias and variance of the averaged subgradients. To this end, we note first that Lemma \\ref{lem:consensus.error.norm} also holds for AD-SAMD, where $\\mathbf{H}(s)$ has columns corresponding to noisy subgradients evaluated at $\\mathbf{x}_i^\\mathrm{md}(s)$. Next, we bound the distance between iterates at different nodes. This analysis is somewhat more complicated due to the relationships between the three iterate sequences.\n\\begin{lemma}\\label{lem:accelerated.iterate.gap}\n\tLet\n \\begin{align*}\n \ta_s &\\triangleq \\max_{i,j}\\norm{\\mathbf{x}^\\mathrm{ag}_i(s) - \\mathbf{x}^\\mathrm{ag}_j(s)}, \\\\\n b_s &\\triangleq \\max_{i,j}\\norm{\\mathbf{x}_i(s) - \\mathbf{x}_j(s)}, \\text{ and} \\\\\n c_s &\\triangleq \\max_{i,j}\\norm{\\mathbf{x}^\\mathrm{md}_i(s) - \\mathbf{x}^\\mathrm{md}_j(s)}.\n \\end{align*}\n Then, the moments of $a_s$, $b_s$, and $c_s$ satisfy:\n \\begin{align*}\n \tE[a_s],E[b_s],E[c_s] &\\leq \\frac{\\mathcal{M} \\!\\!+\\! \\sigma\/\\sqrt{b}}{L}((1 \\!\\!+\\! 2\\gamma_s m^2 \\sqrt{C_*}L\\lambda_2^r)^s \\!-\\! 1), \\\\\n E[a_s^2],E[b_s^2],E[c_s^2] &\\leq \\frac{(\\mathcal{M} \\!\\!+\\! \\sigma\/\\sqrt{b})^2}{L^2}(\\!(1 \\!\\!+\\! 2\\gamma_s m^2 \\!\\!\\sqrt{C_*} L\\lambda_2^r)^s \\!\\!-\\!\\! 
1)^2.\n \\end{align*}\n\\end{lemma}\n\nNow, we bound the expected gap to optimality of the AD-SAMD iterates.\n\\begin{theorem}\\label{thm:accelerated.optimality.gap}\n\tFor AD-SAMD, there exist step size sequences $\\beta_s$ and $\\gamma_s$ such that the expected gap to optimality satisfies\n \\begin{multline}\n \tE[\\psi(\\mathbf{x}_i^\\mathrm{ag}(S+1))] - \\psi^* \\leq \\frac{8 L D_{\\omega,X}^2}{\\alpha S^2} + \\\\ 4 D_{\\omega,X}\\sqrt{\\frac{4\\mathcal{M} + \\Delta_S^2}{\\alpha S}} + \\sqrt{\\frac{32}{\\alpha}}D_{\\omega,X}\\Xi_S,\n \\end{multline}\n where\n \\begin{multline*}\n \t\\Delta_\\tau^2 = 2(\\mathcal{M}+\\sigma\/\\sqrt{b})^2((1+ 2\\gamma_\\tau m^2 \\sqrt{C_*} L\\lambda_2^r)^\\tau-1)^2 + \\\\ \\frac{4 C_* \\sigma^2}{b}(\\lambda_2^{2r}m^2 + 1\/m) + 4\\mathcal{M}\n \\end{multline*}\n and\n \\begin{multline*}\n \\Xi_\\tau = (\\mathcal{M} + \\sigma\/\\sqrt{b})(1+\\sqrt{C_*}m^2\\lambda_2^r) \\times \\\\ ((1+2\\gamma_\\tau m^2\\sqrt{C_*}L\\lambda_2^r)^\\tau-1) + 2\\mathcal{M}.%\n \\end{multline*}\n\\end{theorem}\n\nAs with D-SAMD, we study the conditions under which AD-SAMD achieves order-optimum convergence speed. The centralized version of accelerated mirror descent, after processing the $mT$ data samples that the network sees after $S$ mini-batch rounds, achieves the convergence rate\n\\begin{equation*}\n\tO(1)\\left[ \\frac{L}{(mT)^2} + \\frac{\\mathcal{M}+\\sigma}{\\sqrt{mT}}\\right].\n\\end{equation*}\nThis is the optimum convergence rate under any circumstance. In the following corollary, we derive the conditions under which the convergence rate matches the second term, which usually dominates the error when $\\sigma^2 > 0$.\n\\begin{corollary}\\label{cor:accelerated.descent.consensus.rounds}\n\tThe optimality gap satisfies\n \\begin{equation*}\n \tE[\\psi(\\mathbf{x}^\\mathrm{ag}_i(S+1))] - \\psi^* = O\\left(\\frac{\\mathcal{M} + \\sigma}{\\sqrt{mT}} \\right),\n \\end{equation*}\n provided\n \\begin{align*}\n \tb &= \\Omega\\left(1 + \\frac{\\log(mT)}{\\rho\\log(1\/\\lambda_2)}\\right), \\quad b = O\\left(\\frac{\\sigma^{1\/2}T^{3\/4}}{m^{1\/4}}\\right), \\\\\n \\rho &= \\Omega\\left( \\frac{m^{1\/4}\\log(m T)}{\\sigma T^{3\/4}\\log(1\/\\lambda_2)} \\right), \\quad T = \\Omega\\left(\\frac{m^{1\/3}}{\\sigma^2}\\right), \\text{ and}\\\\\n \\mathcal{M} &= O\\left(\\min\\left\\{\\frac{1}{m},\\frac{1}{\\sqrt{ m \\sigma^2 T}}\\right\\}\\right).\n \\end{align*}\n\\end{corollary}\n\n\\subsection{Discussion}\nThe crucial difference between the two schemes is that AD-SAMD has a convergence rate of $1\/S^2$ in the absence of noise and non-smoothness. This faster term, which is often negligible in centralized mirror descent, means that AD-SAMD tolerates more aggressive mini-batching without impact on the order of the convergence rate. As a result, while the condition on the mini-batch size $b$ is the same in terms of $\\rho$, the condition on $\\rho$ is relaxed by $1\/4$ in the exponents of $m$ and $T$. This is because the condition $b = O(T^{1\/2})$, which holds for standard stochastic SO methods, is relaxed to $b = O(T^{3\/4})$ for accelerated stochastic mirror descent.\n\nSimilar to Corollary \\ref{cor:mirror.descent.consensus.rounds}, Corollary \\ref{cor:accelerated.descent.consensus.rounds} prescribes a relationship between $m$ and $T$, but the relationship for AD-SAMD is $T~=~\\Omega(m^{1\/3})$, holding all but $m,T$ constant. This again is due to the relaxed mini-batch condition $b = O(T^{3\/4})$ for accelerated stochastic mirror descent. 
Furthermore, ignoring the $\\log$ term, Corollary \\ref{cor:accelerated.descent.consensus.rounds} indicates that a communications ratio $\\rho = \\Omega\\left(\\frac{m^{1\/4}}{T^{3\/4}}\\right)$ is needed for well-connected graphs such as expander graphs. In this case, as long as $T$ grows faster than the cube root of $m$, order-optimum convergence rates can be obtained even for small communications ratio. Thus, the use of accelerated methods increases the domain in which order optimum rate-limited learning is guaranteed.\n\n\\section{Numerical Example: Logistic Regression}\\label{sect:numerical}\nTo demonstrate the scaling laws predicted by Corollaries \\ref{cor:mirror.descent.consensus.rounds} and \\ref{cor:accelerated.descent.consensus.rounds} and to investigate the empirical performance of D-SAMD and AD-SAMD, we consider supervised learning via binary logistic regression. Specifically, we assume each node observes a stream of pairs $\\xi_i(t) = (y(t),l(t))$ of\ndata points $y_i(t) \\in \\mathbb{R}^d$ and their labels $l_i(t) \\in \\{0,1\\}$, from which it learns a classifier with the log-likelihood function\n\\begin{equation*}\n\tF(\\mathbf{x},x_0,\\mathbf{y},l) = l (\\mathbf{y}^T\\mathbf{x} + x_0) - \\log(1+\\exp(\\mathbf{y}^T\\mathbf{x} + x_0))\n\\end{equation*}\nwhere $\\mathbf{x} \\in \\mathbb{R}^d$ and $x_0 \\in \\mathbb{R}$ are regression coefficients.\n\nThe SO task is to learn the optimum regression coefficients $\\mathbf{x},x_0$. In terms of the framework of this paper, $\\Upsilon = (\\mathbb{R}^d \\times \\{0,1\\})$, and $X = \\mathbb{R}^{d+1}$ (i.e., $n = d+1$). We use the Euclidean norm, inner product, and distance-generating function to compute the prox mapping. The convex objective function is the negative of the log-likelihood function, averaged over the unknown distribution of the data, i.e.,\n\\begin{equation*}\n\t\\psi(\\mathbf{x}) = -E_{\\mathbf{y},l}[F(\\mathbf{x},x_0,\\mathbf{y},l)].\n\\end{equation*}\nMinimizing $\\psi$ is equivalent to performing maximum likelihood estimation of the regression coefficients \\cite{bishop:book06}.\n\nWe examine performance on synthetic data so that there exists a ``ground truth'' distribution with which to compute $\\psi(\\mathbf{x})$ and evaluate empirical performance. We suppose that the data follow a Gaussian distribution. For $l_i(t)~\\in~\\{0,1\\}$, we let $\\mathbf{y}_i(t)~\\sim~\\mathcal{N}(\\mu_{l_i(t)},\\sigma_r^2\\mathbf{I})$, where $\\mu_{l(t)}$ is one of two mean vectors, and $\\sigma_r^2 > 0$ is the noise variance.\\footnote{The variance $\\sigma_r^2$ is distinct from the resulting gradient noise variance $\\sigma^2$.} For this experiment, we draw the elements $\\mu_0$ and $\\mu_1$ randomly from the standard normal distribution, let $d=20$, and choose $\\sigma_r^2=2$. We consider several network topologies, as detailed in the next subsections.\n\nWe compare the performance of D-SAMD and AD-SAMD against several other schemes. As a best-case scenario, we consider {\\em centralized} mirror descent, meaning that at each data-acquisition round $t$ all $m$ data samples and their associated gradients are available at a single machine, which carries out stochastic mirror descent and {\\em accelerated} stochastic mirror descent. These algorithms naturally have the best average performance. As a baseline, we consider {\\em local} (accelerated) stochastic mirror descent, in which nodes simply perform mirror descent on their own data streams without collaboration. 
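For reference, a minimal sketch of the data model and of the per-sample stochastic gradient used as the oracle in this experiment is given below; the Bernoulli$(1\/2)$ label distribution and the routine names are our own assumptions, as only the Gaussian class-conditional model is specified above:
\\begin{verbatim}
import numpy as np

def sample_stream(mu0, mu1, sigma_r, n_samples, rng):
    # Labels drawn uniformly from {0,1}; features Gaussian around the class mean.
    l = rng.integers(0, 2, size=n_samples)
    mus = np.where(l[:, None] == 1, mu1, mu0)
    y = mus + sigma_r * rng.standard_normal((n_samples, mu0.size))
    return y, l

def logistic_gradient(w, y, l):
    # Stochastic gradient of psi = -E[F] for a single sample, with w = (x, x0).
    z = y @ w[:-1] + w[-1]
    p = 1.0 / (1.0 + np.exp(-z))       # model probability of label 1
    return np.concatenate([(p - l) * y, [p - l]])
\\end{verbatim}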
This scheme does benefit from an insensitivity to the communications ratio $\\rho$, and no mini-batching is required, but it represents a minimum standard for performance in the sense that it does not require collaboration among nodes.\n\nFinally, we consider a communications-constrained adaptation of {\\em distributed gradient descent} (DGD), introduced in \\cite{Nedic.Ozdaglar.ITAC2009}, where local subgradient updates are followed by a single round of consensus averaging on the search points $\\mathbf{x}_i(s)$. DGD implicitly supposes that $\\rho=1$. To handle the $\\rho < 1$ case, we consider two adaptations: {\\em naive} DGD, in which data samples that arrive between communications rounds are simply discarded, and {\\em mini-batched} DGD, in which nodes compute {\\em local} mini-batches of size $b=1\/\\rho$, take gradient updates with the local mini-batch, and carry out a consensus round. While it is not designed for the communications rate-limited scenario, DGD has good performance in general, so it represents a natural alternative against which to compare the performance of D-SAMD and AD-SAMD.\n\n\\subsection{Fully Connected Graphs}\nFirst, we consider the simple case of a fully connected graph, in which $E$ is the set of all possible edges, and the obvious mixing matrix is $\\mathbf{W}= \\mathbf{1}\\mathbf{1}^T\/n$, which has $\\lambda_2 = 0$. This represents a best-case scenario in which to validate the theoretical claims made above. We choose $\\rho = 1\/2$ to examine the regime of low communications ratio, and we let $m$ and $T$ grow according to two regimes: $T = m$, and $T = \\sqrt{m}$, which are the regimes in which D-SAMD and AD-SAMD are predicted to give order-optimum performance, respectively. The constraint on mini-batch size per Corollaries \\ref{cor:mirror.descent.consensus.rounds} and \\ref{cor:accelerated.descent.consensus.rounds} is trivial, so we take $b=2$ to ensure that nodes can average each mini-batch gradient via (perfect) consensus. We select the following step-size parameter $\\gamma$: $0.5$ and $2$ for (local and centralized) stochastic mirror descent (MD) and accelerated stochastic mirror descent (A-MD), respectively; 5 for both variants of DGD; and $5$ and $20$ for D-SAMD and AD-SAMD, respectively.\\footnote{While the accelerated variant of stochastic mirror descent makes use of two\nsequences of step sizes, $\\beta_s$ and $\\gamma_s$, these two sequences can be expressed as a function of a single parameter $\\gamma$; see, e.g., the proof of Theorem~\\ref{thm:accelerated.optimality.gap}.} These values were selected via trial-and-error to give good performance for all algorithms; future work involves the use of adaptive step size rules such as AdaGrad and ADAM~\\cite{Duchi:JMLR2011,Kingma:ICLR2015}.\n\n\\begin{figure}[htb]\n \\centering\n \\begin{subfigure}{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/fullConnT_m.eps}\n \\caption{$T = m$}\n \\end{subfigure}\n \\begin{subfigure}{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/fullConnT_Sq_m.eps}\n \\caption{$T = \\sqrt{m}$}\n \\end{subfigure}\n \\caption{Performance of different schemes for a fully connected graph on $\\log$-$\\log$ scale. 
The dashed lines (without markers) in (a) and (b) correspond to the asymptotic performance upper bounds for D-SAMD and AD-SAMD predicted by the theoretical analysis.}\n\t\\label{fig:fully.connected}\n\\end{figure}\n\nIn Figure~\\ref{fig:fully.connected}(a) and Figure~\\ref{fig:fully.connected}(b), we plot the performance averaged over 1200 and 2400 independent instances of the problem, respectively. We also plot the order-wise theoretical performance $1\/\\sqrt{mT}$, which has a constant slope on the log-log axis. As expected, the distributed methods significantly outperform the local methods. The performance of the distributed methods is on par with the asymptotic theoretical predictions, as seen by the slope of the performance curves, with the possible exception of D-SAMD for $T = m$. However, we observe that D-SAMD performance is at least as good as predicted by theory for $T = \\sqrt{m}$, a regime in which optimality is not guaranteed for D-SAMD. This suggests the possibility that the requirement that $T = \\Omega(m)$ for D-SAMD is an artifact of the analysis, at least for this problem.\n\n\\subsection{Expander Graphs}\nFor a more realistic setting, we consider {\\em expander graphs}, which are families of graphs that have spectral gap $1-\\lambda_2$ bounded away from zero as $m \\to \\infty$. In particular, we use 6-regular graphs, i.e., regular graphs in which each node has six neighbors, drawn uniformly from the ensemble of such graphs. Because the spectral gap is bounded away from zero for expander graphs, one can more easily examine whether performance of D-SAMD and AD-SAMD agrees with the ideal scaling laws discussed in Corollaries \\ref{cor:mirror.descent.consensus.rounds} and \\ref{cor:accelerated.descent.consensus.rounds}. At the same time, because D-SAMD and AD-SAMD make use of imperfect averaging, expander graphs also allow us to examine non-asymptotic behavior of the two schemes. Per Corollaries \\ref{cor:mirror.descent.consensus.rounds} and \\ref{cor:accelerated.descent.consensus.rounds}, we choose $b = \\frac{1}{10}\\frac{\\log(mT)}{\\rho \\log(1\/\\lambda_2)}$. While this choice is guaranteed to be sufficient for optimum asymptotic performance, we chose the multiplicative constant $1\/10$ via trial-and-error to give good non-asymptotic performance.\n\n\\begin{figure}[htb]\n \\centering\n \\begin{subfigure}{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/expanderT_m.eps}\n \\caption{$T = m$}\n \\end{subfigure}\n \\begin{subfigure}{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/expanderT_Sq_m.eps}\n \\caption{$T = \\sqrt{m}$}\n \\end{subfigure}\n \\caption{Performance of different schemes for $6$-regular expander graphs on $\\log$-$\\log$ scale, for $\\rho = 1\/2$. Dashed lines once again represent asymptotic theoretical upper bounds on performance.}\n\t\\label{fig:expander}\n\\end{figure}\n\nIn Figure \\ref{fig:expander} we plot the performance averaged over 600 problem instances. We again take $\\rho = 1\/2$, and consider the regimes $T = m$ and $T = \\sqrt{m}$. The step sizes are the same as in the previous subsection except that $\\gamma = 2.5$ for D-SAMD when $T = \\sqrt{m}$, $\\gamma = 28$ for AD-SAMD when $T = m$, and $\\gamma= 8$ for AD-SAMD when $T = \\sqrt{m}$. Again, we see that AD-SAMD and D-SAMD outperform local methods, while their performance is roughly in line with asymptotic theoretical predictions. 
The performance of DGD, on the other hand, depends on the regime: For $T = m$, it appears to have order-optimum performance, whereas for $T = \\sqrt{m}$ it has suboptimum performance on par with local methods. The reason for the dependency on regime is not immediately clear and suggests the need for further study into DGD-style methods in the case of rate-limited networks.\n\n\\subsection{Erd\\H{o}s-R\\'enyi Graphs}\nFinally, we consider {\\em Erd\\H{o}s-R\\'enyi} graphs, in which a random fraction (in this case $0.1$) of the possible edges is chosen. These graphs are not expanders, and their spectral gaps are not bounded away from zero. Therefore, order-optimum performance is not easy to guarantee, since the conditions on the rate and the size of the network depend on $\\lambda_2$, which is not guaranteed to be well behaved. We again take $\\rho = 1\/2$, consider the regimes $T = m$ and $T = \\sqrt{m}$, and again we choose $b = \\frac{1}{10}\\frac{\\log(mT)}{\\rho \\log(1\/\\lambda_2)}$. The step sizes are chosen to be the same as for expander graphs in both regimes.\n\n\\begin{figure}[htb]\n \\centering\n \\begin{subfigure}{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/ER_T_m.eps}\n \\caption{$T = m$}\n \\end{subfigure}\n \\begin{subfigure}{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/ER_T_Sq_m.eps}\n \\caption{$T = \\sqrt{m}$}\n \\end{subfigure}\n \\caption{Performance of different schemes on Erd\\H{o}s-R\\'enyi graphs, for $\\rho = 1\/2$, displayed using $\\log$-$\\log$ scale.}\n\t\\label{fig:erdos}\n\\end{figure}\n\nOnce again, we observe a clear distinction in performance between local and distributed methods; in particular, all distributed methods (including DGD) appear to show near-optimum performance in both regimes. However, as expected, the performance is somewhat more volatile than in the case of expander graphs, especially for the case of $T = m$, and it is possible that the trends seen in these plots will change as $T$ and $m$ increase.\n\n\\section{Conclusion}\\label{sect:conclusion}\nWe have presented two distributed schemes, D-SAMD and AD-SAMD, for convex stochastic optimization over networks of nodes that collaborate via rate-limited links. Further, we have derived sufficient conditions for the order-optimum convergence of D-SAMD and AD-SAMD, showing that accelerated mirror descent provides a foundation for distributed SO that better tolerates slow communications links. These results characterize relationships between network communications speed and the convergence speed of stochastic optimization.\n\nA limitation of this work is that we are restricted to settings in which the prox mapping is Lipschitz continuous, which excludes important Bregman divergences such as the Kullback-Leibler divergence. Further, the conditions for optimum convergence restrict the Lipschitz constant of the non-smooth component of the objective function to be small. 
Future work in this direction includes study of the limits on convergence speed for more general divergences and composite objective functions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRecent developments, initiated in\n\\cite{Bagger:2006sk,Gustavsson:2007vu}, which led to important\nprogress in understanding the holographic duality between $D=3$\nsuperconformal theories and type IIA string\/M--theory on $AdS_4$\nhave revived an interest in studying strings and branes in\nsupergravity backgrounds whose bosonic subspace is $AdS_4\\times\nM^{6}$ and $AdS_4\\times M^{7}$, respectively, where $M^6$ is a\ncompactified manifold of $D=10$ type IIA supergravity and $M^7$ is\nits Hopf fibration counterpart in $D=11$ supergravity (or\nM--theory). Examples of interest include the supergravity solutions\nwith $M^6=CP^3$ and $M^7=S^7\/Z_k$ (with an integer $k$ being the\nChern--Simons theory level) and their squashings.\n\n\nIn particular, the ${\\cal N}=6$ Chern-Simons theory with the gauge\ngroup $U(N)_k\\times U(N)_{-k}$ \\cite{Aharony:2008ug} has been\nconjectured to describe, from the $CFT_3$ side, M--theory on $AdS_4\n\\times S^7\/Z_k$. In the limit of the parameter space of the ABJM\ntheory in which the 't Hooft coupling $\\lambda={N\/k}$ is\n$\\lambda^{5\/2}<>1$, the bulk description is given in\nterms of perturbative type IIA string theory on the $AdS_4\n\\times CP^3$ background. To analyze this new type of holographic\ncorrespondence from the bulk theory side, an explicit form of the\naction for the superstring in $AdS_4 \\times CP^3$ superspace is\nrequired.\n\nIn contrast to \\emph{e.g.} the case of type IIB string theory in\n$AdS_5 \\times S^5$ superspace which preserves the maximum\nnumber of 32 supersymmetries and is thus described by the supercoset\n$PSU(2,2|4)\/SO(1,4)\\times SO(5)$, the case of type IIA string theory\non $AdS_4 \\times CP^3$ is more complicated since $AdS_4 \\times CP^3$\npreserves only 24 of 32 supersymmetries. As a consequence, the\ncomplete type IIA superspace with 32 fermionic coordinates, that\nsolves the IIA supergravity constraints for the $AdS_4 \\times CP^3$\nvacuum solution, is not a coset superspace. This superspace has been\nconstructed in\n\\cite{Gomis:2008jt} by dimensional reduction\nof the $AdS_4\\times S^7\/Z_k$ solution of $D=11$ supergravity\ndescribed by the supercoset $OSp(8|4)\/SO(7)\\times SO(1,3)\\times Z_k$\nwith 32 fermionic coordinates. The construction of\n\\cite{Gomis:2008jt} has generalized to superspace the results of\n\\cite{Giani:1984wc,Nilsson:1984bj,Sorokin:1985ap} on the relation of\n$AdS_4 \\times M^6$ solutions of $D=10$ type IIA supergravity and\n$AdS_4\\times M^7$ solutions of $D=11$ supergravity by identifying the\ncompact manifolds $M^7$ as $S^1$ Hopf fibrations over corresponding\n$M^6$.\n\nIn \\cite{Gomis:2008jt} it has been shown that the supercoset space\n$OSp(6|4)\/U(3)\\times SO(1,3)$ with 24 fermionic directions, which\nhas been used in\n\\cite{Arutyunov:2008if}--\\cite{D'Auria:2008cw} to construct a superstring\nsigma model in $AdS_4 \\times CP^3$, is a subspace of the complete\nsuperspace and that the supercoset sigma--model action (being a\npartially gauge--fixed Green--Schwarz superstring action) describes\nonly a subsector of the complete type IIA superstring theory in\n$AdS_4 \\times CP^3$. 
The reason for this is that the\nkappa--symmetry gauge fixing condition which puts to zero eight\nfermionic modes corresponding to the 8 broken supersymmetries is not\nadmissible for all possible string configurations. So, in\nparticular, though the $OSp(6|4)\/U(3)\\times SO(1,3)$ sigma model\nsector of the theory is classically integrable\n\\cite{Arutyunov:2008if,Stefanski:2008ik} and there are\ngeneric arguments in favor of the integrability of the whole theory,\nthe direct proof of the integrability of the complete $AdS_4 \\times\nCP^3$ superstring still remains an open problem.\n\nThe knowledge of the explicit structure of the $AdS_4 \\times CP^3$\nsuperspace with 32 fermionic directions allows one to approach this\nand other problems. The form of the string action in the $AdS_4\n\\times CP^3$ superspace can be drastically simplified by choosing a\nsuitable description of the background supergeometry and an\nappropriate kappa--symmetry gauge, as was shown previously for the\ncases of the type IIB superstring, D3, M2 and M5--branes in the\ncorresponding $AdS\\times S$ backgrounds\n\\cite{Kallosh:1998qv}--\\cite{Pasti:1998tc}. A superconformal\nrealization and a kappa--symmetry gauge fixing of the $OSp(6|4)$\nsigma model sector of the $AdS_4 \\times CP^3$ superstring have been\nconsidered in \\cite{Uvarov:2008yi} and in a light--cone gauge in\n\\cite{Zarembo:2009au}.\n\n In this paper we perform an alternative\n$\\kappa$--symmetry gauge fixing of the complete $AdS_4\\times CP^3$\nsuperspace which is suitable for studying regions of the theory that\nare not reachable by the supercoset sigma model. In Subsection\n\\ref{FS} we apply this gauge fixing to simplify the superstring\naction in $AdS_4\\times CP^3$ and consider its T--dualization along a\n$3d$ translationally invariant subspace of $AdS_4$, similar to that\nperformed in\n\\cite{Kallosh:1998ji},\nwhich results in a simple action that contains fermions only up to\nthe fourth order. We also argue that, in contrast to the\n$AdS_5\\times S^5$ superstring\n\\cite{Ricci:2007eq,Berkovits:2008ic,Beisert:2008iq}, it is not possible to T--dualize\nthe fermionic sector of the superstring action in $AdS_4\\times\nCP^3$, which agrees with the conclusion of \\cite{Adam:2009kt}\nregarding the $OSp(6|4)$ supercoset subsector of the theory.\n\nIn addition to the superstring, also for certain configurations of\ntype IIA branes, \\emph{e.g.} D0-- and D2--branes considered in\nSection 4, the complete $AdS_4 \\times CP^3$ superspace should be\nused. An interesting example is a 1\/2 BPS probe D2--brane placed at\nthe $d=3$ Minkowski boundary of $AdS_4$. Upon gauge fixing\nworldvolume diffeomorphisms and kappa--symmetry, the effective\ntheory on the worldvolume of this D2--brane, which describes its\nfluctuations in $AdS_4\n\\times CP^3$, is an interacting $d=3$ gauge Born--Infeld--matter theory\npossessing the (spontaneously broken) superconformal symmetry\n$OSp(6|4)$. The model is superconformally invariant in spite of the\npresence on the $d=3$ worldvolume of the dynamical Abelian vector\nfield, since the latter is coupled to the $3d$ dilaton field\nassociated with the radial direction of $AdS_4$. The superconformal\ninvariance is spontaneously broken by a non--zero expectation value\nof the dilaton. 
This example is a type IIA counterpart of so called\nsingleton M2, tripleton M5 and doubleton D3--branes\n\\cite{deWit:1998tk,Claus:1998fh,Metsaev:1998hf,Pasti:1998tc} at the boundary of $AdS_{p+2}\\times S^{D-p-2}$ ($p=2,3$ and\n5), respectively, in $D=11$ supergravity and type IIB string theory\n(see\n\\cite{Duff:2008pa} for a corresponding brane scan and a review of\nrelated earlier work).\n\nAnother example of interest for the study of the $AdS_4\/CFT_3$\ncorrespondence is a D2--brane filling $AdS_2\\times S^1\\subset AdS_4$\n\\cite{Drukker:2008jm}. This BPS D2--brane configuration corresponds\nto a disorder loop operator in the ABJM theory. Other D--brane\nconfigurations, which are to be related to Wilson loop operators in\nthe ABJM theory, were considered \\emph{e.g.} in\n\\cite{Berenstein:2008dc}--\\cite{Rey:2008bh}.\nIn this paper we extend the bosonic action for a D2--brane wrapping\n$AdS_2\\times S^1$ to include the worldvolume fermionic modes.\n\nWe start our consideration with an overview of the geometry of the\n$AdS_4\\times CP^3$ superspace.\n\n\n\\section{$AdS_4 \\times CP^3$ superspace}\\label{superspace}\n\nThe superspace under consideration contains $AdS_4\\times CP^3$\nas its bosonic subspace and has 32 fermionic directions\n\\cite{Gomis:2008jt}. It is parametrized by the supercoordinates\n\\begin{equation}\\label{Z}\nZ^{\\mathcal M}=(x^{\\hat\nm},y^{m'},\\Theta^{\\underline\\alpha})=(x^{\\hat\nm},y^{m'},\\vartheta^{\\alpha a'},\\upsilon^{\\alpha i}),\n\\end{equation}\nwhere $x^{\\hat m}$ $(\\hat m=0,1,2,3)$ and $y^{m'}$ $(m'=1,\\cdots,6)$\nare, respectively, the coordinates of $AdS_4=SO(2,3)\/SO(1,3)$ and\n$CP^3=SU(4)\/SU(3)\\times U(1)$. $\\Theta^{\\underline\\alpha}$ are the\n32 fermionic coordinates which we split into the 24 coordinates\n$\\vartheta^{\\alpha a'}$, that correspond to the 24 unbroken\nsupersymmetries in the $AdS_4\\times CP^3$ background, and the 8\ncoordinates $\\upsilon^{\\alpha i}$ which correspond to the broken\nsupersymmetries. The indices $\\alpha=1,2,3,4$ are $AdS_4$ spinor\nindices, $a'=1,\\cdots,6$ correspond to a six--dimensional\nrepresentation of $SU(3)$ (note that the index $a'$ appearing on\nspinors is different from the same index appearing in bosonic\nquantities, see Appendix A.5) and $i=1,2$ are $SO(2)\\sim\nU(1)$ indices. For more details of our notation and conventions see\nAppendix A\n\\footnote{Our notation and conventions are close to those in\n\\cite{Gomis:2008jt}. The difference is that, in this paper we put a\n``hat\" on the $AdS_4$ vector indices and use a more conventional IIA\nsuperspace torsion constraint $T_{\\underline{\\alpha\\beta}}{}^A\n=-2i\\Gamma^A_{\\underline{\\alpha\\beta}}$ (instead of\n$T_{\\underline{\\alpha\\beta}}{}^A\n=2\\Gamma^A_{\\underline{\\alpha\\beta}}$) and corresponding constraints\non the gauge field strengths. We also restore the dependence of the\ngeometric quantities and fields on the $S^7$ radius $R$, the\neleven-dimensional Planck length\n$l_p=e^{\\frac{1}{3}<\\phi>}\\sqrt{\\alpha'}$ and the Chern--Simons\nlevel $k$, which were put equal to one in\n\\cite{Gomis:2008jt}.}. For\nthe reader's convenience, below we list some of the notation used in\nthe text:\n\\begin{enumerate}\n\\item $D=10$ $AdS_4\\times CP^3$ superspace with 24 fermions is the\nsupercoset $OSp(6|4)\/ U(3)\\times SO(1,3)$. 
The supervielbeins and\nconnections are denoted by\n\\begin{equation}\\label{notaA}\n\\Big(E^{\\hat a}, E^{a'}, E^{\\alpha a'}, \\Omega^{\\hat a\\hat b}, \\Omega^{a'b'}, A \\Big)\n\\end{equation}\nwhose expressions are given in Appendix B, eq. (B.1).\n\\item $D=11$ $AdS_4\\times S^7$ superspace with 24 fermions. This is obtained as a $U(1)$ bundle over\nthe $OSp(6|4)\/ U(3)\\times SO(1,3)$ supercoset with the fiber\ncoordinate denoted by $z$. It is the supercoset $OSp(6|4)\\times U(1)\n\/ U(3) \\times SO(1,3)$ whose supervielbeins and connections are\ndenoted by\n\\begin{equation}\\label{notaB}\n\\Big(\\hat E^{\\hat a}, \\hat E^{a'}, \\hat E^7, \\hat E^{\\alpha a'}, \\hat \\Omega^{\\hat a\\hat b}, \\hat\n\\Omega^{a'b'}\\Big)\\,.\n\\end{equation}\nThey are given in eqs. (\\ref{24thA}), see also\n\\cite{Gomis:2008jt}. ${\\hat E}^7$ stands for the 7th (fiber) direction of $S^7$\n(or, equivalently, the 11th direction in $D=11$).\n\\item $D=11$ $AdS_4\\times S^7$ superspace with 32 fermions.\nThis is the supercoset $OSp(8|4)\/SO(7)\\times SO(1,3)$. Its\nsupervielbeins and connections are denoted by\n\\begin{equation}\\label{notaC}\n\\Big(\\underline E^{\\hat a}, \\underline E^{a'}, \\underline E^7, \\underline E^{\\alpha a'},\\underline E^{\\alpha i},\n\\underline \\Omega^{\\hat a\\hat b},\n\\underline \\Omega^{a'b'}, \\underline \\Omega^{a' 7}\\Big)\\,.\n\\end{equation}\nTheir explicit expressions are given in (\\ref{upsilonfunctions}), (\\ref{ads4connection}) and (\\ref{so7connection}).\n\\item Finally, the $D=10$ $AdS_4\\times CP^3$ superspace\nwith 32 fermionic directions is obtained by performing a rotation of\n(\\ref{notaC}) in the $(\\hat a, 7)$--plane accompanied by the\ndimensional reduction to $D=10$ (see \\cite{Gomis:2008jt}). The\ngeometric quantities characterizing this superspace are denoted by\n\\begin{equation}\\label{notaD}\n\\Big({\\cal E}^{\\hat a}, {\\cal E}^{a'}, {\\cal E}^{\\alpha a'}, {\\cal E}^{\\alpha i},\n{\\mathcal O}^{\\hat a\\hat b}, {\\mathcal O}^{a'b'}, {\\cal A} \\Big).\n\\end{equation}\nThe supervielbeins have the following form\n\\end{enumerate}\n\\begin{equation}\\label{simplA}\n\\begin{aligned}\n{\\mathcal E}^{a'}(x,y,\\vartheta,\\upsilon)&=e^{\\frac{1}{3}\\phi(\\upsilon)}\\,\\left(E^{a'}(x,y,\\vartheta)+2i\\upsilon\\,{{\\sinh m}\\over\nm}\\gamma^{a'}\\gamma^5\\,E(x,y,\\vartheta)\\right) \\,,\n\\\\\n\\\\\n{\\mathcal E}^{\\hat a}(x,y,\\vartheta,\\upsilon) &=\ne^{{1\\over3}\\phi(\\upsilon)}\\,\\left(E^{\\hat\nb}(x,y,\\vartheta)+4i\\upsilon\\gamma^{\\hat b}\\,{{\\sinh^2{{\\mathcal M}\/\n2}}\\over{\\mathcal M}^2}\\,D\\upsilon\\right)\\Lambda_{\\hat b}{}^{\\hat\na}(\\upsilon)\n\\\\\n&{}\n\\hskip+1cm -e^{-{1\\over3}\\phi(\\upsilon)}\\,\\frac{R^2}{kl_p}\\left(A(x,y,\\vartheta)-\\frac{4}{R}\\upsilon\\,\\varepsilon\\gamma^5\\,{{\\sinh^2{{\\mathcal\nM}\/2}}\\over{\\mathcal M}^2}\\,D\\upsilon\\right) E_7{}^{\\hat\na}(\\upsilon)\\,,\n\\\\\n\\\\\n{\\mathcal E}^{\\alpha i}(x,y,\\vartheta,\\upsilon) &=\ne^{{1\\over6}\\phi(\\upsilon)}\\,\\left({{\\sinh{\\mathcal\nM}}\\over{\\mathcal M}}\\,D\\upsilon\\right)^{\\beta j}\\,S_{\\beta\nj}{}^{\\alpha i}\\,(\\upsilon)\n-ie^{\\phi(\\upsilon)}{\\mathcal A}_1(x,y,\\vartheta,\\upsilon)\\,(\\gamma^5\\varepsilon\\lambda(\\upsilon))^{\\alpha\ni}\\,,\n\\\\\n\\\\\n{\\mathcal E}^{\\alpha a'}(x,y,\\vartheta,\\upsilon) &=\ne^{{1\\over6}\\phi(\\upsilon)}\\,E^{\\gamma b'}(x,y,\\vartheta)\\,\\left(\n\\delta_{\\gamma}{}^{\\beta}-\\frac{8}{R}\\,\\left(\\gamma^5\\,\\upsilon\\,{{\\sinh^2{{m}\/2}}\\over{m}^2}\\right)_{\\gamma\ni}\\upsilon^{\\beta i} 
\\right)S_{\\beta b'}{}^{\\alpha\na'}\\,(\\upsilon)\\,.\n\\end{aligned}\n\\end{equation}\nThe new objects appearing in these expressions, $m$, $\\mathcal M$,\n$\\Lambda_{\\hat a}{}^{\\hat b}$, $E_7{}^{\\hat a}$ and\n$S_{\\underline\\alpha}^{\\underline\\beta}$, are functions of $\\upsilon$\nand their explicit forms are given in Appendix B.1 while the dilaton\n$\\phi$, dilatino $\\lambda$ and RR one--form $\\mathcal A_1$ are given\nbelow. Contracted spinor indices have been suppressed, \\emph{e.g.}\n$(\\upsilon\\varepsilon\\gamma^5)_{\\alpha i}=\\upsilon^{\\beta\nj}\\varepsilon_{ji}\\gamma^5_{\\beta\\alpha}$, where\n$\\varepsilon_{ij}=-\\varepsilon_{ji}$, $\\varepsilon_{12}=1$ is the\n$SO(2)$ invariant tensor. The covariant derivative is defined as\n\\begin{eqnarray}\\label{D}\nD\\upsilon=\\left(d+\\frac{i}{R}E^{\\hat\na}(x,y,\\vartheta)\\,\\gamma^5\\gamma_{\\hat a}-\\frac{1}{4}\\Omega^{\\hat a\n\\hat b}(x,y,\\vartheta)\\,\\gamma_{\\hat a\\hat b}\\right)\\upsilon \\,.\n\\end{eqnarray}\nThe type IIA RR one--form gauge superfield is\n\\begin{equation}\\label{simplB}\n\\begin{aligned}\n{\\mathcal A}_1(x,y,\\vartheta,\\upsilon) &=\nR\\,e^{-{4\\over3}\\phi(\\upsilon)}\\,\\left[\n\\left(A(x,y,\\vartheta)-\\frac{4}{R}\\upsilon\\,\\varepsilon\\gamma^5\\,{{\\sinh^2{{\\mathcal\nM}\/2}}\\over{\\mathcal\nM}^2}\\,D\\upsilon\\right)\\frac{R}{kl_p}\\,\\Phi(\\upsilon)\n\\right.\\\\\n&\\left.\\hspace{40pt}+\\frac{1}{kl_p}\\left(E^{\\hat\na}(x,y,\\vartheta)+4i\\upsilon\\gamma^{\\hat a}\\,{{\\sinh^2{{\\mathcal\nM}\/2}}\\over{\\mathcal M}^2}\\,D\\upsilon\\right)E_{7\\hat a}(\\upsilon)\n\\right]\\,.\n\\end{aligned}\n\\end{equation}\n\nThe RR four-form and the NS--NS three-form superfield strengths are\ngiven by\n\\begin{equation}\\label{f4h3}\n\\begin{aligned}\nF_4&=d{\\mathcal A}_3-{\\mathcal A}_1\\,H_3=-\\frac{1}{4!}{\\mathcal\nE}^{\\hat d}{\\mathcal E}^{\\hat c}{\\mathcal E}^{\\hat b}{\\mathcal\nE}^{\\hat a}\\left(\\frac{6}{kl_p}\\,e^{-2\\phi}\\Phi\\varepsilon_{\\hat\na\\hat b\\hat c\\hat d}\\right) -\\frac{i}{2}{\\mathcal E}^{B}{\\mathcal\nE}^{A}{\\mathcal E}^{\\underline\\beta}\n{\\mathcal E}^{\\underline\\alpha}e^{-\\phi}(\\Gamma_{AB})_{\\underline{\\alpha\\beta}}\\,,\\\\\nH_3&=dB_2=-\\frac{1}{3!}{\\mathcal E}^{\\hat c}{\\mathcal E}^{\\hat\nb}{\\mathcal E}^{\\hat a}(\\frac{6}{kl_p}e^{-\\phi}\\varepsilon_{\\hat a\\hat b\\hat\nc\\hat d}E_7{}^{\\hat d}) -i{\\mathcal E}^{A}{\\mathcal\nE}^{\\underline\\beta}{\\mathcal\nE}^{\\underline\\alpha}(\\Gamma_A\\Gamma_{11})_{\\underline{\\alpha\\beta}}\n+i{\\mathcal E}^{B}{\\mathcal E}^{A}{\\mathcal\nE}^{\\underline\\alpha}(\\Gamma_{AB}\\Gamma^{11}\\lambda)_{\\underline\\alpha}\n\\end{aligned}\n\\end{equation}\nand the corresponding gauge potentials are\n\\begin{equation}\\label{B2}\nB_2=b_2+\\int_0^1\\,dt\\,i_\\Theta H_3(x,y,t\\Theta)\\,,\\qquad \\Theta=(\\vartheta,\\upsilon)\\,\\\\\n\\end{equation}\n\\begin{equation}\\label{A3}\n\\hskip+1.9cm{\\mathcal\nA}_3=a_3+\\int_0^1\\,dt\\,i_\\Theta\\left(F_4+\\mathcal{A}_1H_3\\right)(x,y,t\\Theta)\\,,\n\\end{equation}\nwhere $b_2$ and $a_3$ are the purely bosonic parts of the gauge\npotentials and $i_\\Theta$ means the inner product with\n$\\Theta^{\\underline\\alpha}$. 
Note that $b_2$ is pure gauge in the\n$AdS_4\\times CP^3$ solution while $a_3$ is the RR three-form\npotential of the bosonic background.\n\n The dilaton superfield $\\phi(\\upsilon)$, which depends only on\nthe eight fermionic coordinates corresponding to the broken\nsupersymmetries, has the following form in terms of $E_7{}^{\\hat\na}(\\upsilon)$ and $\\Phi(\\upsilon)$\n\\begin{equation}\\label{dilaton1}\ne^{{2\\over\n3}\\phi(\\upsilon)}={R\\over{kl_p}}\\,\\sqrt{\\Phi^2+E_7{}^{\\hat\na}\\,E_7{}^{\\hat b}\\,\\eta_{\\hat a\\hat b}}\\,.\n\\end{equation}\nThe value of the dilaton at $\\upsilon=0$ is\n\\begin{equation}\ne^{\\frac{2}{3}\\phi(\\upsilon)}|_{\\upsilon=0}=e^{\\frac{2}{3}\\phi_0}=\\frac{R}{kl_p}\\,.\n\\end{equation}\nThe fermionic field $\\lambda^{\\alpha i}(\\upsilon)$ describes the\nnon--zero components of the dilatino superfield and is given by the\nequation \\cite{Howe:2004ib}\n\\begin{equation}\\label{dilatino1}\n\\lambda_{\\alpha i}=-\\frac{i}{3}D_{\\alpha i}\\,\\phi(\\upsilon)\\,.\n\\end{equation}\n\nIn the above expressions $E^{\\hat a}(x,y,\\vartheta)$, $E^{\na'}(x,y,\\vartheta)$ and $\\Omega^{\\hat a\\hat b}(x,y,\\vartheta)$ are\nthe supervielbeins and the $AdS_4$ part of the spin connection of\nthe supercoset $OSp(6|4)\/U(3)\\times SO(1,3)$ and $A(x,y,\\vartheta)$\nis the corresponding type IIA RR one--form gauge superfield, eq.\n(\\ref{notaA}), whose explicit form is given in Appendix B.\n\nAs mentioned above other quantities appearing in eqs.\n(\\ref{simplA})--(\\ref{dilatino1}), namely $\\mathcal M$, $m$,\n$\\Phi(\\upsilon)$, $E_7{}^{\\hat a}(\\upsilon)$, $\\Lambda_{\\hat a}{}^{\\hat b}(\\upsilon)$ and\n$S_{\\underline\\beta}{}^{\\underline\\alpha}(\\upsilon)$, whose geometrical and\ngroup--theoretical meaning has been explained in\n\\cite{Gomis:2008jt}, are also given in Appendix B.\n\n\\setcounter{equation}0\n\\section{Kappa--symmetry gauge fixing}\nWe shall now consider conditions for gauge fixing kappa--symmetry\nwhich are convenient for the description of configurations of\nsuperstrings and D-branes in the $AdS_4\\times CP^3$ superbackground\ndescribed above and for studying $AdS_4\/CFT_3$ correspondence\nproblems.\n\nSince the $AdS_4\/CFT_3$ holography is realized at the $3d$ Minkowski\nboundary of $AdS_4$ it is convenient to choose the $AdS_4\\times\nCP^3$ metric in the form\n\\begin{equation}\\label{ads4metric11}\nds^2=\\left(r\\over\n{R_{{CP^3}}}\\right)^4\\,dx^m\\,\\eta_{mn}\\,dx^n+\\left({R_{CP^3}}\\over\nr\\right)^2\\,dr^2+R_{CP^3}^2\\,ds^2_{_{CP^3}}\\,\n\\end{equation}\nwhere $m=0,1,2$ are indices corresponding to the coordinates of the\n$3d$ Minkowski boundary and $r$ is the 4th, radial, coordinate of\n$AdS_4$. So the $AdS_4$ coordinates are $x^{\\hat m}=(x^m,r)$. 
The\n$AdS_4$ radius is half of the $CP^3$ radius $R_{CP^3}$ which (in the\nstring frame) is related to the $S^7$ radius $R$ as follows\n\\begin{equation}\\label{R}\nR_{CP^3} = e^{\\frac{1}{3}\\phi_0}R =\\left(\\frac{R^3}{kl_p}\\right)^{1\/2}\\,.\n\\end{equation}\n\nIn the coordinate system associated with the metric\n(\\ref{ads4metric11}) (the bosonic part of) the RR field ${\\mathcal A}_3$, whose flux,\ntogether with $F_2=da_1={e^{-\\phi_0}\\over\n{R_{CP^3}}}\\,dy^{m'}dy^{n'}J_{m'n'}$ (where $dy^{m'}dy^{n'}J_{m'n'}$\nis the K\\\"ahler form on $CP^3$), ensures the compactification on\n$AdS_4\\times CP^{3}$\n\\cite{Watamura:1983hj,Nilsson:1984bj,Sorokin:1985ap}, has the\nfollowing form\n\\begin{equation}\\label{A31}\na_3=e^{-\\phi_0}\\left({r\\over\n{R_{CP^3}}}\\right)^6\\,dx^0\\,dx^1\\,dx^2\\,,\\qquad F_4={6\\over\nR_{CP^3}}e^{-\\phi_0}\\,\\left({r\\over\n{R_{CP^3}}}\\right)^5\\,dx^0\\,dx^1\\,dx^2\\,dr\\,.\n\\end{equation}\n(In our conventions the exterior derivative acts from the right.)\n\nInstead of the $AdS_4$ part of the metric (\\ref{ads4metric11}),\nwhich obscures a bit the fact that the metric of the conformal\nboundary is the flat Minkowski metric on $R^{1,2}$, one can use the\n$AdS_4$ metric in the conformally flat form\n\\begin{equation}\\label{ads4metric21}\nds^2_{_{AdS_4}}={1\\over\nu^2}(dx^m\\eta_{mn}dx^n+\\frac{R_{CP^3}^2}{4}\\,du^2)\\,,\n\\qquad u=\\left(R_{CP^3}\\over r\\right)^2\\,.\n\\end{equation}\nThis metric is associated with a simple coset representative $\ng=\\exp(x^m\\,\\Pi_m)$ $\\exp(R_{CP^3}\\,\\ln(u) D)$, where $\\Pi_m$ are\nthe generators of the Poincar\\'e translations along the Minkowski\nboundary $([\\Pi_m,\\,\\Pi_n]=0)$ and $D$ is the dilatation generator\n$[D,\\,\\Pi_m]=\\Pi_m$.\n\nNote that if the components of the vielbein associated with the\nmetric (\\ref{ads4metric11}) or (\\ref{ads4metric21}) are chosen to\nbe\\footnote{Note that the vielbeins $e^a$ and $e^3$ appearing in eq.\n(\\ref{ad4v}) correspond to the $AdS_4$ metric of the $D=11$\n$AdS_4\\times S^7$ solution characterized by the radius R which is\nrelated to the $CP^3$ radius in the string frame according to eq.\n(\\ref{R}). These bosonic vielbeins will appear in our explicit\nexpressions for the $AdS_4\\times CP^3$ supergeometry.}\n\\begin{equation}\\label{ad4v}\ne^{\\frac{\\phi_0}{3}}\\,e^a={r^2\\over\nR_{CP^3}^2}\\,dx^a=u^{-1}\\,dx^a\\,,\\qquad\ne^{\\frac{\\phi_0}{3}}\\,e^3=\\frac{R_{CP^3}}{r}\\,dr=-\\frac{R_{CP^3}}{2u}\\,du,\n\\end{equation}\nthe components of the $SO(1,3)$ spin connection are\n\\begin{equation}\\label{eaoa31}\n\\omega^{a3}=-\\frac{2}{R}\\,e^a\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{oab}\n\\omega^{ab}=0\\,.\n\\end{equation}\nWe shall use the relation (\\ref{eaoa31}) to simplify the form of the\ngauge fixed $AdS_4 \\times CP^3$ supergeometry. 
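As a consistency check of the two forms of the $AdS_4$ metric and of the statement that the $AdS_4$ radius equals $R_{CP^3}/2$, the following symbolic computation (a sketch in Python/SymPy; the mostly--plus signature and the standard curvature conventions used in it are our choices) verifies that the substitution $u=(R_{CP^3}/r)^2$ maps the $AdS_4$ part of (\ref{ads4metric11}) into the conformally flat form (\ref{ads4metric21}), and that the Ricci scalar of this metric takes the constant value $-12/(R_{CP^3}/2)^2$ appropriate for $AdS_4$ of radius $R_{CP^3}/2$.
\begin{verbatim}
import sympy as sp

x0, x1, x2, r, u, L = sp.symbols('x0 x1 x2 r u L', positive=True)  # L denotes R_{CP^3}
coords = [x0, x1, x2, r]

# AdS_4 part of the metric (ads4metric11): (r/L)^4 eta_mn dx^m dx^n + (L/r)^2 dr^2
g = sp.diag(-(r/L)**4, (r/L)**4, (r/L)**4, (L/r)**2)

# 1) coordinate change r -> u = (L/r)^2, i.e. r = L/sqrt(u): reproduces (ads4metric21)
r_of_u = L/sp.sqrt(u)
dr_du = sp.diff(r_of_u, u)
assert sp.simplify(g[3, 3].subs(r, r_of_u)*dr_du**2 - L**2/(4*u**2)) == 0  # (1/u^2)(L^2/4) du^2
assert sp.simplify(g[1, 1].subs(r, r_of_u) - 1/u**2) == 0                  # (1/u^2) dx^m eta_mn dx^n

# 2) Ricci scalar: expect R = -12/(L/2)^2 = -48/L^2 for AdS_4 of radius L/2
ginv = g.inv()
Gam = [[[sum(ginv[a, d]*(sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
             - sp.diff(g[b, c], coords[d])) for d in range(4))/2
         for c in range(4)] for b in range(4)] for a in range(4)]

def riemann(a, b, c, d):  # R^a_{bcd}
    return (sp.diff(Gam[a][d][b], coords[c]) - sp.diff(Gam[a][c][b], coords[d])
            + sum(Gam[a][c][e]*Gam[e][d][b] - Gam[a][d][e]*Gam[e][c][b] for e in range(4)))

def ricci(b, d):
    return sum(riemann(a, b, a, d) for a in range(4))

R_scalar = sp.simplify(sum(ginv[b, d]*ricci(b, d) for b in range(4) for d in range(4)))
assert sp.simplify(R_scalar + 48/L**2) == 0
print('AdS_4 curvature check passed:', R_scalar)
\end{verbatim}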
Note that the\ncondition (\\ref{eaoa31}) can always be imposed by performing an\nappropriate local $SO(1,3)$ transformations of the vielbein and\nconnection, though in general the $SO(1,2)$ components $\\omega^{ab}$\nof the connection will be non--zero.\n\nUsing the previous experience of gauge fixing kappa--symmetry of superstrings, D-branes and M-branes in AdS backgrounds\n\\cite{Kallosh:1998qv}--\\cite{Pasti:1998tc} we choose the\nkappa--symmetry gauge fixing condition in the form\n\\footnote{Such a gauge for fixing kappa--symmetry is analogous to\nthe so called Killing spinor gauge \\cite{Kallosh:1998qv}, or\nsupersolvable gauge\n\\cite{Dall'Agata:1998wz}, or the superconformal gauge \\cite{Pasti:1998tc}.}\n\\begin{equation}\\label{kappagauge1}\n\\Theta=\\frac{1}{2}(1\\pm\\gamma)\\Theta\\,\\quad \\Rightarrow\n\\quad \\vartheta^{a'}=\\frac{1}{2}(1\\pm\\gamma)\\vartheta^{a'}\\,,\\qquad\n\\upsilon^{i}=\\frac{1}{2}(1\\pm\\gamma)\\upsilon^{i},\n\\end{equation}\nwhere\n\\begin{equation}\\label{gamma}\n\\gamma=\\gamma^{012}\\qquad\\Rightarrow\\qquad\\gamma^2=1,\n\\quad\\{\\gamma,\\gamma^5\\}=[\\gamma,\\gamma^a]=0\\quad\\mbox{and}\\quad\\gamma\\gamma^3=-i\\gamma^5\\,,\n\\end{equation}\n$a=0,1,2$ are the indices of the 3d Minkowski boundary or of\n$AdS_2\\times S^1$ and $\\gamma^3$ is associated with the third\nspatial direction of $AdS_4$. Note that, in view of our definition\n(\\ref{Gamma10}) of the $D=10$ gamma--matrices the matrices defined\nin (\\ref{gamma}) can be regarded either as $4d$ gamma matrices or as\nthe $D=10$ matrices $\\Gamma^{\\hat a}=\\gamma^{\\hat a}\\otimes {\\bf 1}$\n$(\\hat a=0,1,2,3)$.\n\nThe condition (\\ref{kappagauge1}) is admissible for fixing\nkappa--symmetry if the projection matrix $\\frac{1}{2}(1\\mp\\gamma)$ either coincides or does not\ncommute with the kappa--symmetry projection matrix $\\frac{1}{2}(1+\\Gamma)$ of a given\nconfiguration of the superstring and D--branes.\nThis can be understood in the following way. To lowest order in fermions $\\Theta$ transforms under kappa--symmetry as\n\\begin{equation}\n\\delta_\\kappa\\Theta=\\frac{1}{2}(1+\\Gamma)\\kappa\\,,\n\\end{equation}\nwhere $\\frac{1}{2}(1+\\Gamma)$ is a projection matrix and\n$\\kappa(\\xi)$ is an arbitrary spinor parameter. It is then clear\nthat if the two projectors coincide, we can pick a $\\kappa$ such\nthat $\\frac{1}{2}(1+\\Gamma)\\Theta=0$, or equivalently\n$\\Theta=\\frac{1}{2}(1-\\Gamma)\\Theta$. In the case when the two\nprojection operators do not coincide a kappa--symmetry variation of\nthe gauge--fixing condition $\\frac{1}{2}(1\\mp\\gamma)\\Theta=0$ which\nleaves it intact gives\n\\begin{equation}\n0=\\frac{1}{4}(1\\mp\\gamma)(1+\\Gamma)\\kappa\n=\\frac{1}{8}(1+\\Gamma)(1\\mp\\gamma)(1+\\Gamma)\\kappa\\mp\\frac{1}{8}[\\gamma,\\Gamma](1+\\Gamma)\\kappa\n=\\mp\\frac{1}{8}[\\gamma,\\Gamma](1+\\Gamma)\\kappa\\,,\n\\end{equation}\nwhere in the last step we made use of the initial equation. This\nmeans that for the gauge--fixing to be complete, i.e. 
that the\nvariation of the gauge fixing condition vanishes if and only if all\nindependent kappa--symmetry parameters are put to zero, the\ncommutator $[\gamma,\Gamma]$ has to be an invertible matrix (when\nrestricted to the relevant subspace).\n\n As we shall see\nbelow, for any choice of the sign the condition (\ref{kappagauge1})\n is an admissible gauge-fixing in the case of\n arbitrary motion of D0--branes in $AdS_4 \times CP^3$, while in the\ncase of the superstring it is admissible (for both signs) for those\n configurations for which the projection of the string\nworldsheet on the $3d$ Minkowski boundary is a non--degenerate\ntwo--dimensional time--like surface. In the case of the D2--brane\nplaced at the Minkowski boundary of $AdS_4$, to gauge fix kappa--symmetry one must choose the condition (\ref{kappagauge1}) with the\nlower sign\n\cite{Pasti:1998tc}, while both signs are admissible when the\n D2--brane wraps an $AdS_2 \times S^1$ subspace of $AdS_4$. However,\n the choice of (\ref{kappagauge1}) with the upper sign yields\n the simplest gauge-fixed form of the string and\n brane actions in the $AdS_4\times CP^3$ superbackground.\n\n\nWhen the fermionic coordinates are restricted by the condition\n(\ref{kappagauge1}), the expressions for the supervielbeins and the\ngauge superfields of the $AdS_4 \times CP^3$ superspace drastically\nsimplify due to the identities satisfied by the projected fermionic\ncoordinates given in Appendix C. In particular, the functions of\n$\upsilon$ which enter the eqs. (\ref{simplA})--(\ref{dilaton1}),\nwhose explicit forms are given in Appendix B.1, reduce to\n\begin{equation}\label{Phi1}\n\Phi(\upsilon)=1+\frac{8}{R}\,\upsilon\,\varepsilon\gamma^5\,{{\sinh^2{{\mathcal M}\/2}}\over{\mathcal\nM}^2}\,\varepsilon\upsilon =1\,,\n\end{equation}\n\begin{eqnarray}\nE_7{}^a(\upsilon)&=&-\frac{8i}{R}\,\upsilon\gamma^a\,{{\sinh^2{{\mathcal M}\/2}}\over{\mathcal M}^2}\,\varepsilon\,{\upsilon}=-\frac{2i}{R}\upsilon\gamma^a\varepsilon\upsilon\,,\\\nE_7{}^3(\upsilon)&=&-\frac{8i}{R}\,\upsilon\gamma^3\,{{\sinh^2{{\mathcal\nM}\/2}}\over{\mathcal M}^2}\,\varepsilon\,{\upsilon}=0\,.\n\end{eqnarray}\nThe dilaton superfield (\ref{dilaton1}) takes the form\n\begin{equation}\label{gfdialton}\ne^{{2\over 3}\phi(\upsilon)}\n=\frac{R}{kl_p}(1-\frac{6}{R^2}(\upsilon\ups)^2)\n\quad \Rightarrow \quad\n\phi(\upsilon)=\frac{3}{2}(\log\frac{R}{kl_p}-\frac{6}{R^2}(\upsilon\ups)^2)\,,\n\end{equation}\nwhere $\upsilon\ups=\delta_{ij}C_{\alpha\beta}\upsilon^{\alpha i}\upsilon^{\beta j}$, and the dilatino becomes\n\begin{eqnarray}\n\lambda^{\alpha i}\n=\n\frac{2i}{R}\left(\frac{R}{kl_p}\right)^{-1\/4}((\gamma^5\upsilon)^{\alpha i}+\frac{3}{R}\upsilon^{\alpha i}\,\upsilon\ups)\,.\n\end{eqnarray}\nWe also find that\n\begin{equation}\n\Lambda_a{}^b\n= (1+\frac{2}{R^2}(\upsilon\ups)^2)\delta_a{}^b\,,\n\qquad\n\Lambda_3{}^a = \Lambda_a{}^3=0\,,\qquad\n\Lambda_3{}^3=1\,,\n\end{equation}\nand\n\begin{eqnarray}\label{S}\n&&S_{\underline\alpha}{}^{\underline\beta}=\n\left(\frac{R}{kl_p}\right)^{1\/4}e^{-\frac{1}{6}\phi}\,\delta_{\underline\alpha}{}^{\underline\beta}\n+\frac{i}{R}\upsilon\gamma^a\varepsilon\upsilon\,(\Gamma_a\Gamma_{11})_{\underline\alpha}{}^{\underline\beta}\,.\n\end{eqnarray}\n\n\n\n\subsection{$AdS_4 \times CP^3$ supergeometry with\n$\Theta={1\over 2}(1+\gamma)\Theta$}\label{theta-}\n\nThe supervielbeins 
(\\ref{simplA}) and the gauge superfields\n(\\ref{simplB}), (\\ref{B2}) and (\\ref{A3}) take the simplest form\nwhen the kappa--symmetry gauge condition (\\ref{kappagauge1}) is\nchosen with the upper sign. In virtue of eqs.\n(\\ref{Phi1})--(\\ref{S}) and expressions given in Appendix C, the\nsupervielbeins reduce to\n\\begin{equation}\\label{simple+v}\n\\begin{aligned}\n{\\mathcal\nE}^{a'}(x,y,\\vartheta,\\upsilon)&=\\Big(\\frac{R}{kl_p}\\Big)^{1\/2}e^{a'}(y)(1-\\frac{3}{R^2}(\\upsilon\\ups)^2)\\,,\n\\\\\n\\\\\n{\\mathcal E}^a(x,y,\\vartheta,\\upsilon) &=\\Big(\\frac{R}{kl_p}\\Big)^{1\/2}\n(e^a(x)+i\\Theta\\gamma^aD\\Theta)(1-\\frac{1}{R^2}(\\upsilon\\ups)^2)\\,,\n\\\\\n\\\\\n{\\mathcal E}^3(x,y,\\vartheta,\\upsilon)\n&=\\Big(\\frac{R}{kl_p}\\Big)^{1\/2}e^3(x)(1-\\frac{3}{R^2}(\\upsilon\\ups)^2)\\,,\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\\label{simplesv}\n\\begin{aligned}\n{\\mathcal E}^{\\alpha i}(x,y,\\vartheta,\\upsilon) &=\n\\Big(\\frac{R}{kl_p}\\Big)^{1\/4}\\Big( (D_8\\upsilon)^{\\alpha\ni}-\\frac{1}{R}\\upsilon\\gamma^a\\varepsilon\\upsilon\\,(D_8\\upsilon\\varepsilon\\gamma_a\\gamma_5)^{\\alpha\ni}\n-\\frac{4i}{R^2}(e^a(x)+i\\Theta\\gamma^aD\\Theta)(\\gamma_a\\upsilon)^{\\alpha\ni}\\,\\upsilon\\ups\n\\Big)\n\\,,\n\\nonumber\\\\\n\\nonumber\\\\\n{\\mathcal E}^{\\alpha a'}(x,y,\\vartheta,\\upsilon) &=\n\\Big(\\frac{R}{kl_p}\\Big)^{1\/4}\\Big( (D_{_{24}}\\vartheta)^{\\alpha a'}\n+\\frac{i}{R}\\upsilon\\gamma^a\\varepsilon\\upsilon\\,(D_{_{24}}\\vartheta\\gamma_a\\gamma_5\\gamma_7)^{\\alpha\na'}\\Big)\\,.\\nonumber\n\\end{aligned}\n\\end{equation}\nThe type IIA RR one--form gauge superfield is\n\\begin{equation}\\label{simple+A}\n\\begin{aligned}\n{\\mathcal A}_1(x,y,\\vartheta,\\upsilon) &=kl_p\\Big(\nA(y)-\\frac{2i}{R^2}(e^a(x)+i\\Theta\\gamma^aD\\Theta)\\,\\upsilon\\varepsilon\\gamma_a\\upsilon\\Big)\\,,\n\\end{aligned}\n\\end{equation}\nwhere $A(y)$ is the potential for the K\\\"ahler form on $CP^3$,\n\\emph{i.e.} $dA(y)=\\frac{1}{R^2}\\,dy^{m'}dy^{n'}\\,J_{m'n'}$, and the covariant derivatives are\n\\begin{eqnarray}\\label{D81}\nD\\Theta&=&(D_8\\upsilon,\\,D_{24}\\vartheta)\\\\\nD_8\\upsilon&=&{\\mathcal P_2}\\,\\left(d-\\frac{1}{R}e^3-\\frac{1}{4}\\omega^{ab}\\gamma_{ab}+2A(y)\\varepsilon\\right)\\upsilon\\nonumber\\\\\nD_{24}\\vartheta&=&{\\mathcal P}_6\\,(d-\\frac{1}{R}e^3\n+\\frac{i}{R}e^{a'}\\gamma_{a'} -\\frac{1}{4}\\omega^{ab}\\gamma_{ab}\n-\\frac{1}{4}\\omega^{a'b'}\\gamma_{a'b'})\\vartheta\\,,\\nonumber\n\\end{eqnarray}\nwhere ${\\mathcal P}_2$ and ${\\mathcal P}_6$ are projectors that\nsingle out from 32 $\\Theta$, respectively, 8 $\\upsilon$ and 24\n$\\vartheta$ (see Appendix A.5). The appearance of the $U(1)$ gauge potential $A(y)$ in the covariant\nderivative of $\\upsilon$ (\\ref{D81}) reflects the fact that\n$\\upsilon$ has $U(1)$ charge equal to 2.\n\\\\\nNote that\n\\begin{equation}\nD_8={\\mathcal P}_2\\, {\\mathcal D}\\,{\\mathcal P}_2\\,,\\qquad\nD_{24}={\\mathcal P}_6\\, {\\mathcal D}\\,{\\mathcal P}_6\\,,\n\\end{equation}\nwhere\n\\begin{equation}\n{\\mathcal D}=d-\\frac{1}{R}e^3+\\frac{i}{R}e^{a'}\\gamma_{a'}\n-\\frac{1}{4}\\omega^{ab}\\gamma_{ab}-\\frac{1}{4}\\omega^{a'b'}\\gamma_{a'b'}\\,.\n\\end{equation}\nThe NS--NS three-form, eq. 
(\\ref{f4h3}), becomes\n\\begin{eqnarray}\\label{H3ex}\nH_3&=&-\\frac{6i}{R^2} e^3\\, {\\mathcal E}^b\\,{\\mathcal\nE}^a\\,\\varepsilon_{abc}\\,\\upsilon\\gamma^c\\varepsilon\\upsilon\n-\\frac{2R}{kl_p}\\,\\Big[\n\\frac{i}{R}(e^b+i\\Theta\\gamma^bD\\Theta)(e^a+i\\Theta\\gamma^aD\\Theta)\\,D_8\\upsilon\\gamma_{ab}\\varepsilon\\upsilon\n\\nonumber\\\\\n&&{}\n+\\frac{1}{R}(e^a+i\\Theta\\gamma^aD\\Theta)\\,D\\Theta\\,\\gamma_{ab}\\,D\\Theta\\,\\upsilon\\gamma^b\\varepsilon\\upsilon\n+\\frac{1}{2}\\,e^3\\,D\\Theta\\,\\gamma_7\\,D\\Theta\n+\\frac{2i}{R}\\,e^3\\,e^{a'}\\,D\\Theta\\gamma_{a'}\\gamma_7\\upsilon\n\\nonumber\\\\\n&&{} +\\frac{i}{2}\\,e^{a'}\\,D\\Theta\\,\\gamma_{a'}\\gamma_7\\,D\\Theta\n+\\frac{1}{R}\\,e^{b'}\\,e^{a'}\\,D\\Theta\\,\\gamma_{a'b'}\\gamma_7\\upsilon\\,\\Big]\\,,\n\\end{eqnarray}\nwhere $\\Theta=(\\vartheta,\\upsilon)$ and\n$D\\Theta=(D_{24}\\vartheta,\\,D_8\\upsilon)$.\n\\\\\nWe now want to determine the potential of $H_3=dB_2$ using eq.\n(\\ref{B2}). Taking into account that $i_\\Theta\\mathcal E^{A}=0$, and\nthe fact that with the plus sign in the projector\n(\\ref{kappagauge1})\n$\\Theta\\gamma_{ab}D\\Theta=\\varepsilon_{abc}\\,\\Theta\\gamma^cD\\Theta$ etc., we\nget\n\\begin{eqnarray}\ni_\\Theta H_3 &=& 2\\frac{R}{kl_p}\n\\Big(\n-\\frac{i}{R}e^be^a\\,\\upsilon\\gamma_{ab}\\varepsilon\\upsilon\n-\\frac{2i}{R}e^3e^{a'}\\,\\vartheta\\gamma_{a'}\\gamma^7\\upsilon\n-\\frac{1}{R}e^{b'}e^{a'}\\,\\Theta\\gamma_{a'b'}\\gamma^7\\upsilon\n+e^3\\,\\Theta\\gamma^7D\\Theta\n\\nonumber\\\\\n&&{} +ie^{a'}\\,\\Theta\\gamma_{a'}\\gamma^7D\\Theta\n+\\frac{4}{R}e^b\\,\\Theta\\gamma^aD\\Theta\\,\\upsilon\\gamma_{ab}\\varepsilon\\upsilon\n+\\frac{3i}{R}\\Theta\\gamma^bD\\Theta\\,\\Theta\\gamma^aD\\Theta\\,\\upsilon\\gamma_{ab}\\varepsilon\\upsilon\n\\Big)\\,.\n\\end{eqnarray}\nThis gives the NS--NS two-form potential (see eq. (\\ref{B2}))\n\\begin{eqnarray}\\label{simple+B}\nB_2&=&\n\\frac{R}{kl_p}\n\\Big[\n-\\frac{i}{R}(e^b+i\\Theta\\gamma^bD\\Theta)\\,(e^a+i\\Theta\\gamma^aD\\Theta)\\,\n\\upsilon\\gamma_{ab}\\varepsilon\\upsilon\n-\\frac{2i}{R}e^3e^{a'}\\,\\vartheta\\gamma_{a'}\\gamma^7\\upsilon\n\\nonumber\\\\\n&&{\\hspace{20pt}}\n-\\frac{1}{R}e^{b'}e^{a'}\\,\\Theta\\gamma_{a'b'}\\gamma^7\\upsilon\n+e^3\\,\\Theta\\gamma^7D\\Theta\n+ie^{a'}\\,\\Theta\\gamma_{a'}\\gamma^7D\\Theta\n\\Big]\\,.\n\\end{eqnarray}\n\nNow we turn our attention to the RR four-form $F_4$ (\\ref{f4h3}) and\nits potential $\\mathcal A_3$ (\\ref{A3}). 
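The algebraic input used in these manipulations, in particular the projected identity $\Theta\gamma_{ab}D\Theta=\varepsilon_{abc}\,\Theta\gamma^cD\Theta$ and the properties listed in (\ref{gamma}), can be checked with explicit $4\times4$ gamma matrices. The sketch below (Python/NumPy; the Dirac-type representation, the mostly--plus signature and the phase of $\gamma^5$ are our choices, and the overall sign relating the two sides of the projected identity depends on the spinor conventions of Appendix A, which we do not reproduce, so the script determines that sign numerically rather than assuming it):
\begin{verbatim}
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# 4d gamma matrices with {gam^m, gam^n} = 2 eta^{mn}, eta = diag(-1,+1,+1,+1)
g_mm = [np.kron(s3, I2), np.kron(1j*s1, s1), np.kron(1j*s1, s2), np.kron(1j*s1, s3)]
gam = [1j*g for g in g_mm]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
for m in range(4):
    for n in range(4):
        assert np.allclose(gam[m] @ gam[n] + gam[n] @ gam[m], 2*eta[m, n]*np.eye(4))

gamma = gam[0] @ gam[1] @ gam[2]                 # gamma = gamma^{012}
gamma5 = 1j*gam[0] @ gam[1] @ gam[2] @ gam[3]    # phase chosen so that gamma gamma^3 = -i gamma^5
assert np.allclose(gamma @ gamma, np.eye(4))                               # gamma^2 = 1
assert all(np.allclose(gamma @ gam[a], gam[a] @ gamma) for a in range(3))  # [gamma, gamma^a] = 0
assert np.allclose(gamma @ gamma5 + gamma5 @ gamma, 0)                     # {gamma, gamma^5} = 0
assert np.allclose(gamma @ gam[3], -1j*gamma5)                             # gamma gamma^3 = -i gamma^5

# projected identity: P+ gamma_{ab} P+ is proportional to eps_{abc} P+ gamma^c P+
Pp = 0.5*(np.eye(4) + gamma)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0
gam_lo = [sum(eta[a, b]*gam[b] for b in range(4)) for a in range(4)]
for a in range(3):
    for b in range(3):
        if a == b:
            continue
        lhs = Pp @ (0.5*(gam_lo[a] @ gam_lo[b] - gam_lo[b] @ gam_lo[a])) @ Pp
        rhs = Pp @ sum(eps[a, b, c]*gam[c] for c in range(3)) @ Pp
        ratio = lhs[np.nonzero(rhs)][0]/rhs[np.nonzero(rhs)][0]
        assert np.isclose(abs(ratio), 1.0) and np.allclose(lhs, ratio*rhs)
print('projected identity verified, overall sign =', ratio.real)
\end{verbatim}
Up to the bilinear (charge--conjugation) conventions of Appendix A, this proportionality is what turns the $\gamma_{ab}$ structures into the $\varepsilon_{abc}\gamma^c$ ones in the bilinears above.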
$F_4$ simplifies to\n\\begin{equation}\\label{f4}\nF_4= -\\frac{1}{kl_p}\\,e^{-2\\phi}\\,{\\mathcal E}^3{\\mathcal\nE}^c{\\mathcal E}^b{\\mathcal E}^a\\,\\varepsilon_{abc}\n-\\frac{i}{2}e^{-\\phi}{\\mathcal E}^{B}{\\mathcal E}^{A}{\\mathcal\nE}^{\\underline\\beta} {\\mathcal\nE}^{\\underline\\alpha}(\\Gamma_{AB})_{\\underline{\\alpha\\beta}}\\,,\n\\end{equation}\nwhich gives\n\\begin{eqnarray}\ni_\\Theta F_4 &=& -i(\\frac{R}{kl_p})^{1\/4}e^{-\\phi}{\\mathcal\nE}^{B}{\\mathcal E}^{A} ({\\mathcal E}\\Gamma_{AB}\\Theta\n+\\frac{i}{R}{\\mathcal\nE}\\Gamma_{AB}\\gamma_a\\Gamma_{11}\\Theta\\,\\upsilon\\gamma^a\\varepsilon\\upsilon)\n\\nonumber\\\\\n&=&\n-i(e^b+i\\Theta\\gamma^bD\\Theta)(e^a+i\\Theta\\gamma^aD\\Theta)\\,(1+\\frac{12}{R^2}(\\upsilon\\ups)^2)(D\\Theta\\gamma_{ab}\\Theta\n+\\frac{4i}{R^2}e^c\\varepsilon_{abc}\\,(\\upsilon\\ups)^2 )\n\\nonumber\\\\\n&&{}\n+\\frac{4i}{R}e^3(e^a+i\\Theta\\gamma^aD\\Theta)\\,D\\Theta\\gamma^7\\Theta\\,\\upsilon\\gamma_a\\varepsilon\\upsilon\n-\\frac{4}{R}e^{a'}(e^a+i\\Theta\\gamma^aD\\Theta)\\,D\\Theta\\gamma_{a'}\\gamma^7\\Theta\\,\\upsilon\\gamma_a\\varepsilon\\upsilon\n\\nonumber\\\\\n&&{} +2e^3e^{a'}(D\\Theta\\gamma_{a'}\\Theta\n+\\frac{4i}{R^2}(e^a+i\\Theta\\gamma^aD\\Theta)\\upsilon\\gamma_a\\gamma_{a'}\\vartheta\\,\\upsilon\\ups\n)\n\\nonumber\\\\\n&&{} -ie^{b'}e^{a'}(D\\Theta\\gamma_{a'b'}\\Theta\n+\\frac{4i}{R^2}(e^a+i\\Theta\\gamma^aD\\Theta)\\upsilon\\gamma_a\\gamma_{a'b'}\\vartheta\\,\\upsilon\\ups)\\,.\n\\end{eqnarray}\nSince $i_\\Theta\\mathcal A_1=0$ the RR three--form potential\n(\\ref{A3}) becomes\n\\begin{eqnarray}\\label{A3-}\n\\mathcal A_3&=&a_3+\\int_0^1\\,dt(i_\\Theta F_4+\\mathcal A_1i_\\Theta H_3)(x,y,t\\Theta)\n\\nonumber\\\\\n&=& a_3 -\\frac{i}{2}e^be^a\\,D\\Theta\\gamma_{ab}\\Theta\n+e^3e^{a'}\\,D\\Theta\\gamma_{a'}\\Theta\n-\\frac{i}{2}e^{b'}e^{a'}\\,D\\Theta\\gamma_{a'b'}\\Theta\n+\\frac{1}{2}e^b\\Theta\\gamma^aD\\Theta\\,D\\Theta\\gamma_{ab}\\Theta\n\\nonumber\\\\\n&&{}\n+\\frac{i}{6}\\Theta\\gamma^bD\\Theta\\,\\Theta\\gamma^aD\\Theta\\,D\\Theta\\gamma_{ab}\\Theta\n+k\\,l_p\\,A(y)\\,B_2\\,.\n\\end{eqnarray}\nLooking at the purely bosonic part of $F_4$, eq. (\\ref{f4}) it is\neasy to see (compare also with eqs. (\\ref{A31})) that we can take\n\\begin{equation}\na_3=\\frac{1}{3!}e^ce^be^a\\,\\varepsilon_{abc}\\,.\n\\end{equation}\nNote that in the above expressions for the supervielbeins\n(\\ref{simple+v}), the RR one--form (\\ref{simple+A}), the three--form\n(\\ref{A3-}) and the NS-NS two--form (\\ref{simple+B}) the maximum\norder of the fermions is six.\n\n\n\\subsection{$AdS_4 \\times CP^3$ supergeometry with\n$\\Theta={1\\over 2}(1-\\gamma)\\Theta$}\\label{theta++}\n\nWhen the condition (\\ref{kappagauge1}) is chosen with the lower\nsign, in view of eqs. (\\ref{Phi1})--(\\ref{S}) and expressions given\nin Appendix C, the supervielbeins (\\ref{simplA}) and the RR\none--form gauge superfield (\\ref{simplB}) reduce to a form which is\nmore complicated than their gauge--fixed counterparts of the\nprevious Subsection. But, as we have already mentioned, one cannot\nuse the gauge fixing condition of Subsection \\ref{theta-} to\ndescribe the D2--brane at the Minkowski boundary of $AdS_4$, and\nshould impose $\\Theta={1\\over 2}(1-\\gamma)\\Theta$ instead. 
In this\ncase the supervielbeins take the following form\n\\begin{equation}\\label{-bv}\n\\begin{aligned}\n{\\mathcal E}^{a'}(x,y,\\vartheta,\\upsilon)&=\\Big(\\frac{R}{kl_p}\\Big)^{1\/2}\n\\Big(e^{a'}(y)-\\frac{2}{R}e^a(x)\\,\\Theta\\gamma^{a'}\\gamma_a\\Theta\\Big)\n(1-\\frac{3}{R^2}(\\upsilon\\ups)^2)\\,,\n\\\\\n\\\\\n{\\mathcal E}^a(x,y,\\vartheta,\\upsilon)\n&=\\Big(\\frac{R}{kl_p}\\Big)^{1\/2}\\,\\Big( e^a(x) +i\\Theta\\gamma^a D\\Theta\n+\\frac{1}{R^2}e^a(x)(\\vartheta\\vartheta-\\upsilon\\ups)^2\n\\Big)(1-\\frac{1}{R^2}(\\upsilon\\ups)^2)\\,,\n\\\\\n\\\\\n{\\mathcal E}^3(x,y,\\vartheta,\\upsilon)\n&=\\Big(\\frac{R}{kl_p}\\Big)^{1\/2}e^3(x)(1-\\frac{3}{R^2}(\\upsilon\\ups)^2)\\,,\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\\label{-fv}\n\\begin{aligned}\n{\\mathcal E}^{\\alpha i}(x,y,\\vartheta,\\upsilon) &=\n\\Big(\\frac{R}{kl_p}\\Big)^{1\/4}\\Big( (D_8\\upsilon)^{\\alpha i}\n-\\frac{1}{R}\\upsilon\\gamma^a\\varepsilon\\upsilon\\,(D_8\\upsilon\\varepsilon\\gamma_a\\gamma_5)^{\\alpha\ni}\n\\\\\n&{}\n\\hskip+1cm\n-\\frac{4i}{R^2}(e^a(x)+i\\Theta\\gamma^aD\\Theta\n+\\frac{1}{R^2}e^a(x)(\\vartheta\\vartheta-\\upsilon\\ups)^2\n)(\\gamma_a\\upsilon)^{\\alpha i}\\,\\upsilon\\ups\n\\Big)\n\\,,\n\\\\\n\\\\\n{\\mathcal E}^{\\alpha a'}(x,y,\\vartheta,\\upsilon) &=\n\\Big(\\frac{R}{kl_p}\\Big)^{1\/4}\\Big( (D_{_{24}}\\vartheta)^{\\alpha a'}\n+\\frac{i}{R}(D_{24}\\vartheta\\gamma_a\\gamma^5\\gamma^7)^{\\alpha\na'}\\,\\upsilon\\gamma^a\\varepsilon\\upsilon\n\\Big)\\,.\n\\end{aligned}\n\\nonumber\n\\end{equation}\nThe type IIA RR one--form gauge superfield is\n\\begin{equation}\\label{A2-}\n{\\mathcal A}_1(x,y,\\vartheta,\\upsilon)=kl_p\\Big(A(y)\n-\\frac{2}{R^2}e^a(x)\\,\\Theta\\gamma^7\\gamma_a\\Theta\n -\\frac{2i}{R^2}(e^a(x)+i\\Theta\\gamma^aD\\Theta\n+\\frac{1}{R^2}e^a(x)(\\vartheta\\vartheta)^2\n)\\upsilon\\varepsilon\\gamma_a\\upsilon\n\\Big)\\,.\n\\end{equation}\nIn the above expressions\n\\begin{eqnarray}\\label{D8}\nD\\Theta&=&(D_8\\upsilon\\,,D_{24}\\vartheta)\\,,\\nonumber\\\\\nD_8\\upsilon&=&(D-\\frac{2i}{R^2}\\upsilon\\ups\\,e^a\\,\\gamma_a+2A(x,y,\\vartheta)\\,\\varepsilon)\\upsilon\n\\\\\n&&\\hspace{-40pt}=\n\\Big(d+\\frac{2i}{R}e^a(\\gamma^5\\gamma_a+\\frac{1}{R}(\\vartheta\\vartheta-\\upsilon\\ups)\\gamma_a)\n+\\frac{1}{R}e^3\n-\\frac{1}{4}\\omega^{ab}\\gamma_{ab}\n+(2A(y)-\\frac{4}{R^2}e^a\\vartheta\\gamma^7\\gamma_a\\vartheta)\\varepsilon\n\\Big)\\upsilon,\n\\nonumber\n\\end{eqnarray}\n\\begin{eqnarray}\\label{D_{24}}\nD_{_{24}}\\vartheta\n&=&{\\mathcal P}_6\\,\\Big(d\n+\\frac{2i}{R}e^a(\\gamma^5\\gamma_a+\\frac{1}{R}(\\vartheta\\vartheta-\\upsilon\\ups)\\gamma_a)\n+\\frac{1}{R}e^3 +\\frac{i}{R}e^{a'}\\gamma_{a'}\n-\\frac{1}{4}\\omega^{ab}\\gamma_{ab}\n-\\frac{1}{4}\\omega^{a'b'}\\gamma_{a'b'}\\Big)\\vartheta\\,.\\nonumber\\\\\n\\end{eqnarray}\n(The shift of $D$ by $-\\frac{2i}{R^2}\\upsilon\\ups\\,e^a\\,\\gamma_a$ has\nbeen made for the expressions to have a nicer and more\ncovariant--looking form).\n\\\\\nThe NS--NS three-form, eq. (\\ref{f4h3}), becomes\n\\begin{equation}\nH_3= -\\frac{6i}{R^2}e^3{\\mathcal E}^b{\\mathcal\nE}^a\\,\\varepsilon_{abc}\\,\\upsilon\\gamma^c\\varepsilon\\upsilon -i{\\mathcal E}^{A}{\\mathcal\nE}^{\\underline\\beta}{\\mathcal\nE}^{\\underline\\alpha}(\\Gamma_A\\Gamma_{11})_{\\underline{\\alpha\\beta}}\n+i{\\mathcal E}^{B}{\\mathcal E}^{A}{\\mathcal\nE}^{\\underline\\alpha}(\\Gamma_{AB}\\Gamma^{11}\\lambda)_{\\underline\\alpha}\n\\,.\n\\end{equation}\nWe now would like to determine its potential according to eq.\n(\\ref{B2}). 
Using the fact that\n\\begin{equation}\ni_\\Theta{\\mathcal E}^{\\underline\\alpha}=\n\\Big(\\frac{R}{kl_p}\\Big)^{1\/4}(\\Theta^{\\underline\\alpha}\n+\\frac{i}{R}\\upsilon\\gamma^a\\varepsilon\\upsilon\\,(\\Theta\\Gamma_a\\Gamma_{11})^{\\underline\\alpha})\n\\end{equation}\nand $i_\\Theta\\mathcal E^{A}=0$ we get\n\\begin{eqnarray}\\label{iH3-}\ni_\\Theta H_3&=&\n\\frac{R}{kl_p}\\Big(\n-\\frac{2}{R}(e^b+i\\Theta\\gamma^bD\\Theta+\\frac{1}{R^2}e^b(\\vartheta\\vartheta-\\upsilon\\ups)^2)\n(e^a+i\\Theta\\gamma^aD\\Theta+\\frac{1}{R^2}e^a(\\vartheta\\vartheta-\\upsilon\\ups)^2)\\upsilon\\gamma_{ab}\\gamma^7\\upsilon\n\\nonumber\\\\\n&&{}\n+\\frac{4}{R}(e^a+i\\Theta\\gamma^aD\\Theta+\\frac{1}{R^2}e^a(\\vartheta\\vartheta-\\upsilon\\ups)^2)\nD\\Theta\\gamma_{ba}\\Theta\\,\\upsilon\\gamma^b\\varepsilon\\upsilon\n+\\frac{8}{R^2}e^3e^a\\vartheta\\vartheta\\,\\upsilon\\gamma_a\\varepsilon\\upsilon\n\\nonumber\\\\\n&&{}\n+\\frac{4}{R}(e^a+i\\Theta\\gamma^aD\\Theta+\\frac{1}{R^2}e^a(\\vartheta\\vartheta-\\upsilon\\ups)^2)e^b\\Theta\\gamma_{ba}\\gamma^7\\Theta\n+2e^3D\\Theta\\gamma^7\\Theta\n\\nonumber\\\\\n&&{}\n-2i(e^{a'}-\\frac{2}{R}e^a\\,\\Theta\\gamma^{a'}\\gamma_a\\Theta)D\\Theta\\gamma_{a'}\\gamma^7\\Theta\n+\\frac{4i}{R}e^3(e^{a'}-\\frac{2}{R}e^c\\,\\Theta\\gamma^{a'}\\gamma_c\\Theta)\\vartheta\\gamma_{a'}\\gamma^7\\upsilon\n\\nonumber\\\\\n&&{}\n-\\frac{2}{R}(e^{b'}-\\frac{2}{R}e^b\\,\\Theta\\gamma^{b'}\\gamma_b\\Theta)(e^{a'}-\\frac{2}{R}e^c\\,\\Theta\\gamma^{a'}\\gamma_c\\Theta)\\Theta\\gamma_{a'b'}\\gamma^7\\upsilon\n\\nonumber\\\\\n&&{}\n+\\frac{8i}{R^2}(e^{a'}-\\frac{2}{R}e^c\\,\\Theta\\gamma^{a'}\\gamma_c\\Theta)e^a\\Theta\\gamma_{a'}\\gamma_{ab}\\Theta\\,\\upsilon\\gamma^b\\varepsilon\\upsilon\n\\Big)\n\\end{eqnarray}\nand finally\n\\begin{eqnarray}\\label{B2-}\nB_2&=&\n\\frac{R}{kl_p}\\Big(\n\\frac{i}{R}(e^b+i\\Theta\\gamma^bD\\Theta+\\frac{1}{R^2}e^b(\\vartheta\\vartheta)^2)(e^a+i\\Theta\\gamma^aD\\Theta+\\frac{1}{R^2}e^a(\\vartheta\\vartheta)^2)\\,\\varepsilon_{abc}\\,\\upsilon\\gamma^c\\varepsilon\\upsilon\n\\nonumber\\\\\n&&{}\n+\\frac{2}{R}(e^a+\\frac{i}{2}\\Theta\\gamma^aD\\Theta+\\frac{1}{3R^2}e^a(\\vartheta\\vartheta-\\upsilon\\ups)^2)e^b\\,\\varepsilon_{abc}\\,\\Theta\\gamma^c\\gamma^7\\Theta\n+\\frac{2i}{R}e^3(e^{a'}-\\frac{1}{R}e^a\\,\\Theta\\gamma^{a'}\\gamma_a\\Theta)\\,\\vartheta\\gamma_{a'}\\gamma^7\\upsilon\n\\nonumber\\\\\n&&{}\n-\\frac{1}{R}(e^{b'}-\\frac{1}{R}e^b\\,\\Theta\\gamma^{b'}\\gamma_b\\Theta)(e^{a'}-\\frac{1}{R}e^a\\,\\Theta\\gamma^{a'}\\gamma_a\\Theta)\\,\\Theta\\gamma_{a'b'}\\gamma^7\\upsilon\n-\\frac{1}{3R^3}e^b\\,\\Theta\\gamma^{b'}\\gamma_b\\Theta\\,e^c\\,\\Theta\\gamma^{a'}\\gamma_c\\Theta\\,\\Theta\\gamma_{a'b'}\\gamma^7\\upsilon\n\\nonumber\\\\\n&&{} 
-e^3\\,\\Theta\\gamma^7D\\Theta\n+\\frac{i}{R^2}e^3e^a(\\vartheta\\gamma^7\\gamma_a\\vartheta-\\upsilon\\gamma_a\\gamma^7\\upsilon)\\,\\Theta\\Theta\n+i(e^{a'}-\\frac{1}{R}e^a\\,\\Theta\\gamma^{a'}\\gamma_a\\Theta)\\,\\Theta\\gamma_{a'}\\gamma^7D\\Theta\n\\nonumber\\\\\n&&{}\n+\\frac{2i}{R^2}(e^{a'}-\\frac{4}{3R}e^c\\,\\Theta\\gamma^{a'}\\gamma_c\\Theta)\\,e^a\\,\\varepsilon_{abc}\\,\\Theta\\gamma_{a'}\\gamma^b\\Theta\\,\\upsilon\\gamma^c\\varepsilon\\upsilon\n+\\frac{2}{R^2}(e^{a'}-\\frac{2}{3R}e^b\\,\\Theta\\gamma^{a'}\\gamma_b\\Theta)e^a\\,\\vartheta\\gamma^7\\gamma_a\\vartheta\\,\\vartheta\\gamma_{a'}\\upsilon\n\\nonumber\\\\\n&&{}\n+\\frac{1}{R^2}(e^{a'}-\\frac{2}{3R}e^b\\,\\Theta\\gamma^{a'}\\gamma_b\\Theta)e^a\\,\\Theta\\gamma_{a'}\\gamma^7\\gamma_a\\Theta\\,(\\vartheta\\vartheta-\\upsilon\\ups)\n-\\frac{2}{3R^3}e^be^a\\,\\varepsilon_{abc}\\,\\Theta\\gamma^c\\gamma^7\\Theta\\,((\\vartheta\\vartheta)^2-(\\upsilon\\ups)^2)\n\\nonumber\\\\\n&&{}\n+\\frac{4i}{3R^3}e^be^d\\,\\vartheta\\gamma_d\\gamma^7\\vartheta\\,\\vartheta\\gamma^a\\gamma^7\\vartheta\\,\\varepsilon_{abc}\\,\\upsilon\\gamma^c\\varepsilon\\upsilon\n\\Big)\\,.\n\\end{eqnarray}\nNote that the maximum order of the fermions in the above expressions\nis ten.\n\nUsing the form of $F_4$ in (\\ref{f4h3}) as well as the expressions\n(\\ref{A2-}) for ${\\mathcal A}_1$ and (\\ref{iH3-}) for $i_\\Theta H_3$\nthe quantity relevant for computing the RR three-form potential\n$\\mathcal A_3$ becomes\n\\begin{eqnarray}\n\\lefteqn{i_\\Theta F_4+\\mathcal A_1i_\\Theta H_3}\n\\nonumber\\\\\n&=&\n-(e^b+i\\Theta\\gamma^bD\\Theta+\\frac{1}{R^2}e^b(\\vartheta\\vartheta-\\upsilon\\ups)^2)\n(e^a+i\\Theta\\gamma^aD\\Theta+\\frac{1}{R^2}e^a(\\vartheta\\vartheta-\\upsilon\\ups)^2)\\varepsilon_{abc}\ni\\Theta\\gamma^cD\\Theta\n\\nonumber\\\\\n&&{}\n-2e^3(e^{a'}-\\frac{2}{R}e^a\\,\\Theta\\gamma^{a'}\\gamma_a\\Theta)D\\Theta\\gamma_{a'}\\Theta\n-\\frac{4}{R}e^3e^a\\Theta\\gamma_aD\\Theta\\,\\Theta\\Theta(1+\\frac{2}{R^2}(\\upsilon\\ups)^2)\n\\\\\n&&{}\n\\hspace{-10pt}-\\frac{4}{R}(e^{a'}-\\frac{2}{R}e^c\\,\\Theta\\gamma^{a'}\\gamma_c\\Theta)e^a\n(e^b+i\\Theta\\gamma^bD\\Theta+\\frac{1}{R^2}e^b(\\vartheta\\vartheta-\\upsilon\\ups)^2)\n\\Theta\\gamma_{a'}\\gamma_{ab}\\Theta(1+\\frac{2}{R^2}(\\upsilon\\ups)^2)\n\\nonumber\\\\\n&&{}\n\\hspace{-10pt}-i(e^{b'}-\\frac{2}{R}e^b\\,\\Theta\\gamma^{b'}\\gamma_b\\Theta)(e^{a'}-\\frac{2}{R}e^a\\,\\Theta\\gamma^{a'}\\gamma_a\\Theta)D\\Theta\\gamma_{a'b'}\\Theta\n+kl_p(A(y)-\\frac{2}{R^2}e^a\\,\\Theta\\gamma^7\\gamma_a\\Theta)i_\\Theta\nH_3\\,.\\nonumber\n\\end{eqnarray}\nOne can now substitute this together with the expression for\n$i_\\Theta H_3$ (\\ref{iH3-}) into eq. (\\ref{A3}) and compute the\nexplicit form of the RR three--form potential ${\\mathcal A}_3$ in\nthis gauge. Since we have not got a reasonably simple expression for\n${\\mathcal A}_3$ we shall not present it here.\n\n\n\\setcounter{equation}0\n\\section{Applications}\nWe can now use the kappa--gauge fixed form of the $AdS_4 \\times\nCP^3$ superbackground of Subsections \\ref{theta-} and \\ref{theta++}\nto simplify the actions for the type IIA superstring and D--branes.\nLet us note that the gauge fixing conditions (\\ref{kappagauge1}) can\nalso be used to simplify the actions for the $D=11$ superparticle,\nM2-- and M5--branes in the $AdS_4\\times S^7\/Z_k$ superbackground\n(\\ref{notaC}). 
We shall consider the example of the $D=11$ superparticle\nbelow.\n\n\n\\subsection{$D=11$ superparticle}\n\\def\\mathcal M{\\mathcal M}\n\nLet us consider a massless superparticle in the $AdS_4\\times\nS^7\/Z_k$ supergravity background. Recall that when $k=1,2$, the\nsupergravity background preserves the maximum number of 32\nsupersymmetries, while for $k>2$ it preserves only 24.\nThe superparticle action in the complete superspace\nwith 32 $\\Theta$ is constructed using the supervielbeins of the\n$OSp(8|4)\/SO(7)\\times SO(1,3)\\times Z_k$ supercoset derived in\n\\cite{Gomis:2008jt}\n\\begin{equation}\\label{upsilonfunctions}\n\\begin{aligned}\n\\underline E^{\\hat a}&=E^{\\hat a}(x,y,\\vartheta) + 4 i\n\\upsilon\\gamma^{\\hat a}\\,{{\\sinh^2{{\\mathcal M}\/ 2}}\\over{\\mathcal M}^2}\\,D\\upsilon\n+\\frac{R}{k}\\,dz\\,E_7{}^{\\hat a}(\\upsilon)\\\\\n\\underline E^{a'}&=E^{a'}(x,y,\\vartheta)+2i\\upsilon\\,{{\\sinh m}\\over m}\\gamma^{a'}\\gamma^5\\,E(x,y,\\vartheta)\\\\\n\\underline E^7&=\\frac{R}{k}\\,dz\\,\\Phi(\\upsilon)+\nR\\,\\left(A(x,y,\\vartheta)-\\frac{4}{R}\\,\\upsilon\\,\\varepsilon\\gamma^5\\,{{\\sinh^2{{\\mathcal\nM}\/2}}\\over{\\mathcal M}^2}\\,D\\upsilon\\,\\right) \\\\\n\\underline E^{\\alpha i}&=\\left({{\\sinh{\\mathcal M}}\\over{\\mathcal\nM}}\\,(D\\upsilon-\\frac{2}{k}\\,dz\\,\\varepsilon\\upsilon)\\right)^{\\alpha i}\n\\\\\n\\underline E^{\\alpha a'}&=E^{\\alpha a'}(x,y,\\vartheta)-\\frac{8}{R}E^{\\beta\na'}\\left(\\gamma^5\\,\\upsilon\\,{{\\sinh^2{{m}\/2}}\\over{m}^2}\\right)_{\\beta\ni}\\upsilon^{\\alpha i}\\,,\n\\end{aligned}\n\\end{equation}\nwhere $z$ is the 7th, $U(1)$ fiber, coordinate of\n$S^7$, $D\\upsilon$ has been given in (\\ref{D}) and the eight fermionic\ncoordinates $\\upsilon^{\\alpha i}$ correspond to the eight\nsupersymmetries broken by orbifolding with $k>2$.\n\nThe explicit form of the fermionic supervielbeins in\n(\\ref{upsilonfunctions}) and of the connections on\\\\\n$OSp(8|4)\/SO(7)\\times SO(1,3)\\times Z_k$ are not required for the\nconstruction of the Brink--Schwarz superparticle action but one\nneeds them for the construction of the pure--spinor superparticle\naction in curved superbackgrounds, so we present also the form of\nthe spin--connection below.\n\nThe $SO(1,3)$ connection is\n\\begin{equation}\\label{ads4connection}\n\\underline \\Omega^{\\hat a\\hat b}= \\Omega^{\\hat a\\hat b}(x,y,\\vartheta)\n+\\frac{8}{R} \\upsilon\\gamma^{\\hat a\\hat\nb}\\gamma^5\\,{{\\sinh^2{{\\mathcal M}\/ 2}}\\over{\\mathcal\nM}^2}\\,\\left(D\\upsilon-\\frac{2}{k}dz\\,\\varepsilon\\upsilon\\right)\n\\end{equation}\nand the $SO(7)$ connection is\n\\begin{eqnarray}\n\\underline \\Omega^{a'b'}\n&=&\n\\Omega^{a'b'}(x,y,\\vartheta)-\\frac{1}{R}\\,\\underline E^7\\,J^{a'b'}\n-\\frac{2}{R}\\,\\upsilon\\,{{\\sinh m}\\over m}\\gamma^{a'b'}\\gamma^5E\\,,\n\\nonumber\\\\\n\\label{so7connection}\n\\\\\n{\\underline\\Omega}^{a'7}\n&=&\n\\frac{1}{R}\\left(\\underline E^{b'}-4i\\upsilon\\,{{\\sinh m}\\over m}\\gamma^{b'}\\gamma^5E\\right)\\,J_{b'}{}^{a'}\\,.\\nonumber\n\\end{eqnarray}\nThe functions and forms appearing in\n(\\ref{upsilonfunctions})--(\\ref{so7connection}) are defined in\nAppendix B.\n\nThe first order form of the action for the massless superparticle in\nthe $OSp(8|4)\/SO(7)\\times SO(1,3)\\times Z_k$ superbackground is\n\\begin{eqnarray}\\label{11dsuper}\nS = \\int d\\tau \\left( P_{\\underline A} \\,{\\underline\nE}^{\\underline A}_\\tau +\n\\frac{{e}}{2} \\, P_{\\underline A} P_{\\underline B}\\, \\eta^{\\underline{AB}} 
\\right)\\,,\n\\end{eqnarray}\nwhere $ P_{\\underline A}$ ($\\underline A=0,1,\\dots,10$) is the\nparticle momentum, $e(\\tau)$ is the Lagrange multiplier which\nensures the mass shell condition $P^2=0$ and\n$$\n{\\underline E}^{\\underline A}_\\tau=\\partial_\\tau\nZ^{\\underline{\\mathcal M}}\\,{\\underline E}_{\\underline{\\mathcal\nM}}{}^{\\underline A}\\,, \\qquad Z^{\\underline{\\mathcal\nM}}=(x,y,z,\\vartheta,\\upsilon)\n$$\nis the pullback to the worldline of the supervielbeins\n(\\ref{upsilonfunctions}). The action is\ninvariant under local worldline diffeomorphisms and under the\nfermionic kappa--symmetry transformations\n\\begin{equation}\\label{kappaA}\n\\delta Z^{\\underline{\\mathcal M}} \\,{\\underline E}_{\\underline{\\mathcal M}}{}^{\\underline \\alpha}\n= P^{\\underline A} \\,(\\Gamma_{\\underline A}\\,\\kappa)^{\\underline\\alpha}\\,,\\qquad\n\\delta Z^{\\underline{\\mathcal M}} \\,{\\underline E}_{\\underline\n{\\mathcal M}}{}^{\\underline A} = 0\\,, \\qquad\n\\end{equation}\n\\begin{equation}\\label{kappaA1}\n\\delta e=-4i\\,{\\underline E}^{\\underline\n\\alpha}_\\tau\\,\\kappa_{\\underline\\alpha}\\,,\n\\qquad \\delta P_{\\underline A}=\n\\delta Z^{\\underline{\\mathcal M}}\\,\\underline\\Omega_{{\\underline{\\mathcal M}}\\underline A}{}^{\\underline B}\\,P_{\\underline B}.\n\\end{equation}\nInserting in the action the expressions for the vielbeins\n(\\ref{upsilonfunctions}), we get\n\\begin{eqnarray}\\label{11dsuper2}\nS = \\int d\\tau \\!\\!\\!\\!\\!\\!&& \\left[\nP_{\\hat a}\\left(\nE^{\\hat a}_\\tau + 4 i\n\\upsilon\\gamma^{\\hat a}\\,{{\\sinh^2{{\\mathcal M}\/ 2}}\\over{\\mathcal M}^2}\\,D_\\tau\\upsilon\n- 8 i \\upsilon\\gamma^{\\hat a}\\,{{\\sinh^2{{\\mathcal M}\/2}}\\over{\\mathcal M}^2} \\varepsilon {\\upsilon} \\,\n\\frac{\\partial_\\tau z}{k}\n\\right) \\right. \\nonumber \\\\\n&&\\left.+ P_{a'}\n\\left(E^{a'}_\\tau+2i\\upsilon\\,{{\\sinh m}\\over m}\\gamma^{a'}\\gamma^5\\,E_\\tau \\right)\\right. \\\\\n&&\\left. + P_7 \\left( R \\left( \\frac{\\partial_\\tau z}{k} + A\\right)\n-4\n\\upsilon \\varepsilon\\gamma^5\\,{{\\sinh^2{{\\mathcal M}\/2}}\\over{\\mathcal\nM}^2}\\, (D_\\tau \\upsilon\\, -2 \\varepsilon\\upsilon \\frac{\\partial_\\tau z}{k})\n\\right) +\n\\frac{{e}}{2} \\, P_{\\underline A} P_{\\underline B}\\, \\eta^{\\underline{AB}} \\right]\n\\,.\n\\nonumber\n\\end{eqnarray}\n\nThe action (\\ref{11dsuper2}) can be simplified by eliminating some\nor all pure--gauge fermionic modes using the kappa--symmetry\ntransformations (\\ref{kappaA}). For instance, when the momentum of\nthe particle is non--zero along a $CP^3$ direction inside $S^7$, the\nprojectors ${\\mathcal P}_6$ and ${\\mathcal P}_2$, defined in eqs.\n(\\ref{p6}) and (\\ref{p2}), do not commute with the kappa--symmetry\nprojector (\\ref{kappaA}) and one can use \\emph{e.g.} the 16\nkappa--symmetry transformations to eliminate 16 of the 24\n$\\vartheta$. After such a gauge fixing the action will contain 8\nremaining $\\vartheta$ and 8 $\\upsilon$.\n\nAlternatively, by partially gauge fixing the kappa--symmetry one can\neliminate all eight $\\upsilon$ keeping 24 $\\vartheta$. In the latter\ncase the action reduces to the form in which it describes the\ndynamics of a superparticle in a superspace with 11 bosonic\ncoordinates and 24 fermionic ones. 
This superspace, with 11 bosonic and 24 fermionic coordinates, has been\nintroduced in\n\cite{Gomis:2008jt} as a Hopf fibration of the supercoset $OSp(6|4)\/U(3)\times SO(1,3)$.\nIt is the supercoset\n\begin{equation}\label{24th}\n\frac{OSp(6|4) \times U(1)}{U(3) \times SO(1,3)\times\nZ_k}.\n\end{equation}\nThe geometry of (\ref{24th}) is described by the supervielbeins\n\begin{eqnarray}\label{24thA}\n&&\hat E^{\hat a} = E^{\hat a}(x,y,\vartheta)\,, \nonumber \\\n&&\hat E^{a'} = E^{a'}(x,y,\vartheta) \,,\\\n&&\hat E^{7} = R\Big(\frac{dz}{k} + A(x,y,\vartheta) \Big)\,,\n\nonumber \\\n&&\n\hat E^{\alpha a'} = E^{\alpha a'}(x,y,\vartheta)\,,\nonumber\n\end{eqnarray}\nwhere (as already mentioned) the explicit forms of the\nright--hand sides of (\ref{24thA}) are given in (\ref{cartan24}). Notice that now $z$ appears only in the vielbein $\hat\nE^7$ along the $U(1)$-fiber direction of $S^7$.\n\nThe first order form of the superparticle action in the superspace\n(\ref{24thA}) is\n\begin{eqnarray}\label{11dsuper24_A}\nS& = & \int d\tau \left( P_{\hat a} \,{\hat E}^{\hat a}_\tau +\n P_{a'} \,{\hat E}^{a'}_\tau +\n P_{7} \,{\hat E}^{7}_\tau +\n\frac{{e}}{2} \, P_{\underline A} P_{\underline B}\, \eta^{\underline{AB}} \right)\n\,\nonumber\\\n& =& \int d\tau \left( P_{\hat a} \,{E}^{\hat a}_\tau +\n P_{a'} \,{E}^{a'}_\tau +\n P_{7} \, R \left( \frac{\partial_\tau z}{k} + A_\tau\right) +\n\frac{{e}}{2} \, P_{\underline A} P_{\underline B}\, \eta^{\underline{AB}} \right)\,,\n\end{eqnarray}\nwhere now\n$$\n{\hat E}^{\underline A}_\tau=\partial_\tau Z^{{\mathcal M}}\,{\hat\nE}_{{\mathcal M}}{}^{\underline A}+\partial_\tau\,z\,{\hat\nE}_z{}^{\underline A}\,,\n\qquad Z^{{\mathcal M}}=(x,y,\vartheta)\n$$\nis the pullback to the worldline of the supervielbeins\n(\ref{24thA}).\n\nIt is easy to reduce the action (\ref{11dsuper24_A}) to $D=10$. Once this\nis done, one obtains the action for a D0--brane moving in the\nsupercoset $OSp(6|4)\/U(3)\times SO(1,3)$.\n\nAs we have mentioned above, the action (\ref{11dsuper24_A})\ndescribes a superparticle which has a non--zero momentum along the\n$CP^3$ base of the $S^7$ bundle. This is required by the consistency\nof the kappa--symmetry gauge fixing condition $\upsilon=0$. To\ndescribe other possible classical motions of the superparticle,\n\emph{e.g.} when $P^{a'}=0$, one should choose a different\nkappa--symmetry gauge.\n\nFor instance, if the superparticle has a non--zero spatial momentum\nalong the 7th (fiber) direction of $S^7$, one can use the gauge\nfixing condition corresponding to that of Subsection\n\ref{theta-}. In this case, in virtue of the gauge-fixed expressions of Appendix C,\n the action (\ref{11dsuper2}) simplifies to\n\begin{eqnarray}\label{11dsuper-gauged}\nS = \int d\tau \hspace{-.5cm}&&\left[\nP_{a} \,\Big( e^a_\tau(x)+i\vartheta\gamma^aD_{\tau}\vartheta +\ni \upsilon \gamma^aD_{\tau} \upsilon - 2 i\n\upsilon \gamma^a \varepsilon \upsilon \frac{\partial_\tau z}{k} \Big) \right. \nonumber \\\n&& \left. 
+P_{a'} \\, {e}^{a'}_\\tau(y) + P_{3} \\,{e}^{3}_\\tau(x) +\nP_{7}\n\\, R\n\\left(\\frac{\\partial_\\tau z}{k} + A_\\tau(y)\\right) +\n\\frac{{e}}{2} \\, P_{\\underline A} P_{\\underline B}\\, \\eta^{\\underline{AB}} \\right]\\,.\n\\end{eqnarray}\nThe dimensional reduction of the $D=11$ superparticle action\n(\\ref{11dsuper-gauged}) along $z$ results in the kappa--symmetry\ngauge--fixed action which describes an arbitrary motion of the type\nIIA D0--brane in $AdS_4\\times CP^{3}$ superspace.\n\nBefore considering the D0--brane, let us note that the action\n(\\ref{11dsuper2}) is the most appropriate starting point for the\nconstruction of the pure--spinor formulation of the $D=11$\nsuperparticle in the $AdS_4\\times CP^3$ supergravity background. The\npure--spinor condition $\\lambda\\Gamma^{\\underline A}\\lambda=0$ in\n$D=11$ implies that the 32--component bosonic pure spinor\n$\\lambda^{\\underline \\alpha}$ has 23 independent components\n\\cite{Berkovits:2002uc,Fre':2006es}. This counting ensures the correct\nnumber of bosonic and fermionic degrees of freedom.\n\nIn the cases of the actions (\\ref{11dsuper24_A}) and\n(\\ref{11dsuper-gauged}) that describe a particle motion in the\nreduced superspaces, one can also develop pure--spinor formulations\nin which the pure spinor $\\lambda$, in addition, is subject to the\nsame constraint as the one imposed on $\\Theta$ by kappa--symmetry\ngauge fixing, e.g. ${\\mathcal P_2}\\lambda=0$ in the case\n$\\upsilon={\\mathcal P_2}\\Theta=0$. This guarantees the correct\ncounting of the degrees of freedom in the pure--spinor formulation\n(similar to the cases considered in\n\\cite{Fre:2008qc,Bonelli:2008us}). That is, the difference between\nthe bosonic and fermionic degrees of freedom remains the same.\nIndeed, in the case of the pure--spinor formulation of the massless\n$D=11$ superparticle\n\\cite{Berkovits:2002uc} there are 11 bosonic $X^{\\underline A}$ plus 23 pure spinor\ndegrees of freedom and 32 fermionic $\\Theta$, while in the above\nexample of the reduced pure spinor formulation the pure spinor\neffectively contains $23-8=15$ degrees of freedom against 24\nfermionic ones, while the number of $X$ remains the same.\n\n When the pure spinor formulations of the superparticle\nin reduced superspaces correspond to the kappa--gauge fixed versions\nof the Brink--Schwarz superparticle whose consistency is limited to\nparticular subsectors of the classical configuration space of the\nfull theory, one may expect that the former will also describe only\nsubsectors of the pure--spinor superparticle model formulated in the\ncomplete superspace with 32 fermionic coordinates. As in the case of\nthe pure--spinor type IIA superstring in $AdS_4\\times CP^3$\n\\cite{Fre:2008qc,Bonelli:2008us}, these issues require additional\nanalysis.\n\n\\subsection{$D0$-brane}\nTo obtain the action for the $D0$--brane by dimensional\nreduction of the $D=11$ superparticle action, one should first perform the\nappropriate Lorentz transformation of the $D=11$\nsupervielbeins (as was explained in \\cite{Gomis:2008jt}) and make a\ncorresponding redefinition of the particle momentum. We shall not\nperform this dimensional reduction procedure since the result is\nwell known. 
The $D0$--brane action has the following first order\nform in the type IIA superbackground in the string frame (see eqs.\n(\\ref{simplA}) and (\\ref{simplB}))\n\\begin{equation}\\label{d0-action}\nS = \\int d\\tau e^{- \\phi} \\,\\left(P_A {\\cal E}^A_\\tau +\n\\frac{e}{2} (P_A P_B \\eta^{AB}+ m^2) \\right) + m\\, \\int {\\cal\nA}_1\\,,\n\\end{equation}\nwhere $m$ is the mass of the particle and the second term describes\nits coupling to the RR one--form potential ${\\cal A}_1$.\n\nIntegrating out the momenta $P_A$ and the auxiliary field $e(\\tau)$\nwe arrive at the action\n\\begin{equation}\\label{d0-action2}\nS = -m\\,\\int d\\tau e^{- \\phi} \\,\\sqrt{-{\\cal E}^A_\\tau {\\cal\nE}^B_\\tau\\,\\eta_{AB}} + m\\int {\\cal A}_1\\,.\n\\end{equation}\nThe action (\\ref{d0-action2}) is invariant under worldline\ndiffeomorphisms and the kappa-symmetry transformations (to\nverify the kappa-symmetry one needs the superspace constraints\non the torsion $T^A$ and on $F_2$ given in Appendix A.4)\n\\begin{eqnarray}\n&&\\delta_\\kappa Z^{\\mathcal M} {\\cal E}_{\\cal\nM}^{~~\\underline\\alpha} =\n\\frac{1}{2}(1 + \\Gamma)^{\\underline\\alpha}{}_{\\underline\\beta} \\,\\kappa^{\\underline\\beta}(\\tau)\\,, ~~~\n{\\underline\\alpha} = 1,\\dots,32\\,,\\qquad Z^{\\mathcal M}=(x,y,\\vartheta,\\upsilon)\\,\\nonumber\\\\\n\\\\\n&&\\delta_\\kappa Z^{\\mathcal M} {\\cal E}_{\\cal M}^{~~A} = 0\\,, ~~~\nA=0,1,\\dots,9\\,\n\\nonumber\n\\end{eqnarray}\nwhere\n\\begin{equation}\\label{GammaD0}\n\\Gamma = \\frac{1}{\\sqrt{-\\mathcal E^2_\\tau}}\\,{\\mathcal E}_\\tau^{~A} \\Gamma_A \\Gamma_{11}\\,,\n\\,\\qquad \\Gamma^2=1\\,.\n\\end{equation}\nComparing the form of the kappa--symmetry projector matrix\n(\\ref{GammaD0}) with the kappa--symmetry gauge fixing condition of\nSubsection \\ref{theta-}, we see that\n$\\gamma=\\Gamma^0\\Gamma^1\\Gamma^2$ (introduced in eq. (\\ref{gamma}))\ndoes not commute with $\\Gamma$ in (\\ref{GammaD0}) provided that the\nenergy $P^0\\sim{{\\mathcal E}^0_\\tau}$ of the massive particle is\nnonzero, which is always the case. Thus to simplify, \\emph{e.g.} the first\norder action (\\ref{d0-action}) we can use the gauge fixed form of\nthe supervielbeins and the RR one--form of Subsection\n\\ref{theta-}. The action takes the following explicit form, with an\nappropriately rescaled Lagrange multiplier $e(\\tau)$,\n\\begin{eqnarray}\\label{simply}\nS &=& \\left(\\frac{R}{kl_p}\\right)^{-1} \\,\\int d\\tau \\,\n\\left[\\Big( P_{a'}e_\\tau^{a'}(y)\n+ P_{3} \\,e_\\tau^3(x)\\Big)\\,(1+\\frac{6}{R^2}(\\upsilon\\ups)^2)\n\\right. \\nonumber \\\\\n&+& \\left. P_{a} \\,\n(e_\\tau^a(x)+i\\Theta\\gamma^aD_{\\tau}\\Theta)\\,(1+\\frac{8}{R^2}(\\upsilon\\ups)^2)\n+\\frac{e(\\tau)}{2} \\,(P_A P_B \\eta^{AB}+ m^2) \\nonumber\\right] \\\\\n&+& m kl_p\\,\\int \\, \\Big(\nA(y)-\\frac{2i}{R^2}(e^a(x)+i\\Theta\\gamma^aD\\Theta)\\,\n\\upsilon\\varepsilon\\gamma_a\\upsilon\\Big)\\,.\n\\end{eqnarray}\nThis action contains fermionic terms up to the 6th order in\n$\\Theta=(\\vartheta,\\upsilon)$.\n\n\\subsection{The fundamental string}\\label{FS}\n\nIn this section we use the geometry discussed above to construct\nthe Green-Schwarz model for the fundamental string. We will first\nreview the form of the superstring sigma model without gauge fixing\nand then impose the gauge fixing of the kappa--symmetry. 
This will\nprovide a calculable sigma model.\n\nThe action for the Green--Schwarz superstring has the following form\n\\begin{equation}\\label{cordaA}\nS = -\\frac{1}{4\\pi\\alpha'}\\,\\int d^2\\xi\\, \\sqrt {-h}\\, h^{IJ}\\,\n{\\cal E}_{I}^{A} {\\cal E}_{J}^{B} \\eta_{AB}\n-\\frac{1}{2\\pi\\alpha'}\\,\\int B_2\\,,\n\\end{equation}\nwhere $\\xi^I$ $(I,J=0,1)$ are the worldsheet coordinates,\n$h_{IJ}(\\xi)$ is a worldsheet metric and $B_2$ is the pull--back to\nthe worldsheet of the NS--NS 2--form.\n\nThe kappa--symmetry transformations which leave the superstring\naction (\\ref{cordaA}) invariant are\n\n\\begin{equation}\\label{kappastring}\n\\delta_\\kappa Z^{\\mathcal M}\\,{\\mathcal E}_{\\mathcal M}{}^{\\underline \\alpha}=\n{1\\over 2}(1+\\Gamma)^{\\underline \\alpha}_{~\\underline\\beta}\\,\n\\kappa^{\\underline\\beta}(\\xi),\\qquad {\\underline \\alpha}=1,\\cdots, 32\n\\end{equation}\n\\begin{equation}\\label{kA}\n\\hskip-2.5cm\\delta_\\kappa Z^{\\mathcal M}\\,{\\mathcal E}_{\\mathcal M}{}^A=0,\n\\qquad A=0,1,\\cdots,9\n\\end{equation}\nwhere $\\kappa^{\\underline\\alpha}(\\xi)$ is a 32--component spinor\nparameter, ${1\\over 2}(1+\\Gamma)^{\\underline\n\\alpha}_{~\\underline\\beta}$ is a spinor projection matrix with\n\\begin{equation}\\label{gbs}\n\\Gamma={1\\over {2\\,\\sqrt{-\\det{g_{IJ}}}}}\\,\\epsilon^{IJ}\\,{\\mathcal\nE}_{I}{}^A\\,{\\mathcal E}_{J}{}^B\\,\\Gamma_{AB}\\,\\Gamma_{11}, \\qquad\n\\Gamma^2=1\\,,\n\\end{equation}\nand the auxiliary worldsheet metric $h^{IJ}$ transforms as follows\n\\begin{eqnarray}\\label{deltah}\n\\lefteqn{\\delta_\\kappa\\,(\\sqrt{-h}\\,h^{IJ})}\n\\nonumber\\\\\n&=&2i\\,\\sqrt{-h}\\,(h^{IJ}\\,g^{KL}-2h^{K(I}\\,g^{J)L})\n\\left(\\delta_\\kappa\nZ^{\\mathcal M}\\,{\\mathcal E}_{\\mathcal M}\\,\\Gamma_A\\,{\\mathcal\nE}_K\\,{\\mathcal E}^A_L +\\frac{1}{2}g_{KL}\\delta_\\kappa Z^{\\mathcal\nM}\\,{\\mathcal E}_{\\mathcal M}{}^{\\alpha i}\\lambda_{\\alpha i}\n\\right)\n\\\\\n&&\\hspace{-20pt}\n-2i\\,\\sqrt{-h}\\,\\,\\frac{h^{IK'}g_{K'L'}h^{L'J}-\\frac{1}{2}h^{IJ}\\,h^{K'L'}g_{K'L'}}\n{\\frac{1}{2}\\,h^{K'L'}\\,g_{K'L'}+\\sqrt{\\frac{g}{h}}}\\,g^{KL}\\,\\left(\\delta_\\kappa\nZ^{\\mathcal M}\\,{\\mathcal E}_{\\mathcal M}\\,\\Gamma_A\\,{\\mathcal\nE}_K\\,{\\mathcal E}^A_L +\\frac{1}{2}g_{KL}\\delta_\\kappa Z^{\\mathcal\nM}\\,{\\mathcal E}_{\\mathcal M}{}^{\\alpha i}\\lambda_{\\alpha i}\n\\right)\\nonumber\n\\end{eqnarray}\nwhere\n\\begin{equation}\\label{im}\ng_{IJ}(\\xi)={\\mathcal E}_{I}{}^{A}\\, {\\mathcal\nE}_{J}{}^{B}\\,\\eta_{AB}\\,,\\qquad g^{IJ}\\equiv (g_{IJ})^{-1}\n\\end{equation}\nis the induced metric on the worldsheet of the string that on the\nmass shell coincides with the auxiliary metric $h_{IJ}(\\xi)$ modulo\na conformal factor. Finally, $g=\\det g_{IJ}$ and $h=\\det h_{IJ}$.\n\\\\\nUsing the identity\n\\begin{equation}\nh^{IJ}\\,g_{JK}\\,h^{KL}\\,g_{LI}-\\frac{1}{2}\\,(h^{IJ}\\,g_{IJ})^{2}\\equiv\n\\frac{1}{2}\\,(h^{IJ}\\,g_{IJ})^{2}-2\\,\\frac{g}{h}\\,\n\\end{equation}\none can check that eq. 
(\\ref{deltah}) multiplied by $g_{IJ}$ results\nin\n\\begin{equation}\\label{kappag}\n\\delta_\\kappa\\,(\\sqrt{-h}\\,h^{IJ})\\,g_{IJ}\n=4i(\\sqrt{-g}\\,g^{KL}-\\sqrt{-h}\\,h^{KL})\\,\n\\delta_\\kappa Z^{\\mathcal M}\\,{\\mathcal E}_{\\mathcal M}{}^{\\underline\\alpha}(\\Gamma_A\\,{\\mathcal E}_K\\,{\\mathcal E}^A_L\n+\\frac{1}{2}g_{KL}\\lambda )_{\\underline\\alpha}\\,,\n\\end{equation}\nwhich together with the variation (\\ref{kappastring}) and (\\ref{kA})\nof the superspace coordinates insures the invariance of the action\n(\\ref{cordaA}).\n\n Comparing the form of the kappa--symmetry projector\n(\\ref{kappastring}) with the kappa--symmetry gauge fixing condition\nof Subsection\n\\ref{theta-} we see that this gauge choice is admissible when the\nstring moves in such a way that the projection of its worldsheet on\nthe $3d$ subspace along the directions $e^a$ $(a=0,1,2)$ of the\ntarget space is a non--degenerate two--dimensional time--like\nsurface. Thus, it can be used to analyze the string dynamics in the\nsector which is not reachable by the supercoset model of\n\\cite{Arutyunov:2008if,Stefanski:2008ik,Fre:2008qc}. The latter is\nobtained from the action (\\ref{cordaA}) by gauge fixing to zero the\neight fermions $\\upsilon$, which is only possible when the string\nworldsheet extends in the $CP^3$ directions.\n\nIn the gauge of Subsection \\ref{theta-} we insert into the action\n(\\ref{cordaA}) the expressions (\\ref{simple+v}) and (\\ref{simple+B})\nfor the supervielbeins and $B_2$. This results in an action that\ncontains fermionic terms only up to the 8th order in\n$\\Theta=(\\vartheta,\\upsilon)$\n\\footnote{The factor in front of the\naction is unconventional due to our normalization of the vielbeins,\nwhich comes from the dimensional reduction of the eleven-dimensional\ngeometry. More conventional, unit radius string frame vielbeins\n$(\\hat e^{\\hat a},\\hat e^{a'})$ can be introduced by the following\nrescaling\n$$\n(e^{\\hat a},e^{a'})=\\left(\\frac{R}{k\nl_p}\\right)^{-1\/2}\\left(\\frac{R}{2}\\,\\hat e^{\\hat a},\\,R\\,\\hat\ne^{a'}\\right)\\,.\n$$\nThen the factor in front of the action becomes\n$\\frac{R^2}{4\\pi\\alpha'}=\\frac{(R\/l_p)^3}{4\\pi k}$.}\n\\begin{eqnarray}\\label{cordaB}\nS &=&-\\frac{1}{4\\pi\\alpha'}\\,\\frac{R}{k l_p}\n\\int\\,d^2\\xi\\,\\sqrt{-h}\\,h^{IJ}\\left[\n\\left(e^{a'}_I e^{b'}_{J} \\delta_{a'b'} +\ne^3_I e^3_{J}\\right)\\,(1-\\frac{6}{R^2}(\\upsilon\\ups)^2)\n\\right.\n\\nonumber \\\\\n&&{}\n\\hspace{4cm}\n+\\left.(e^a_I+i\\Theta\\gamma^aD_I\\Theta)\\,\n(e^b_{J}+i\\Theta\\gamma^bD_{J}\\Theta)\\,\\eta_{ab}\\,\n(1-\\frac{2}{R^2}(\\upsilon\\ups)^2)\\right]\n\\nonumber\\\\\n\\\\\n &-&\\frac{1}{2\\pi\\alpha'}\\frac{R}{kl_p}\\int\n\\Big[e^3\\,\\Theta\\gamma^7D\\Theta\n+ie^{a'}\\,\\Theta\\gamma_{a'}\\gamma^7D\\Theta\n-\\frac{2i}{R}e^3e^{a'}\\,\\vartheta\\gamma_{a'}\\gamma^7\\upsilon\n-\\frac{1}{R}e^{b'}e^{a'}\\,\\Theta\\gamma_{a'b'}\\gamma^7\\upsilon\n\\nonumber\\\\\n&&{\\hspace{60pt}}-\\frac{i}{R}(e^b+i\\Theta\\gamma^bD\\Theta)\\,(e^a+i\\Theta\\gamma^aD\\Theta)\\,\n\\upsilon\\gamma_{ab}\\,\\varepsilon\\upsilon\\,\n\\Big]\\,.\\nonumber\n\\end{eqnarray}\nTo avoid possible confusion, let us remind the reader that in eqs.\n(\\ref{cordaB})--(\\ref{cordaBT}) the covariant derivative\n$D\\Theta\\equiv (D_8\\upsilon,\\,D_{24}\\vartheta)$ is defined in eqs.\n(\\ref{D81}). 
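The admissibility criterion discussed after (\ref{kappagauge1}), namely the invertibility of $[\gamma,\Gamma]$ on the image of $\frac{1}{2}(1+\Gamma)$, can be illustrated numerically for the simplest configuration covered by the above statement: a worldsheet stretched along two of the boundary directions $e^a$, for which (\ref{gbs}) reduces to $\Gamma=\Gamma_{01}\Gamma_{11}$ up to a convention--dependent sign. The sketch below (Python/NumPy; the explicit $32\times32$ representation and the mostly--plus signature are our choices) runs the same check also for the D0--brane projector (\ref{GammaD0}) in the rest frame and for a worldsheet whose projection onto the boundary directions degenerates.
\begin{verbatim}
import numpy as np
from functools import reduce

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# D=10 gamma matrices with eta = diag(-1,+1,...,+1), and Gamma_11
E = []
for j in range(5):
    E.append(reduce(np.kron, [s3]*j + [s1] + [I2]*(4 - j)))
    E.append(reduce(np.kron, [s3]*j + [s2] + [I2]*(4 - j)))
Gam = [1j*E[0]] + E[1:]
eta = np.diag([-1.0] + [1.0]*9)
Gam11 = reduce(np.matmul, Gam)
assert np.allclose(Gam11 @ Gam11, np.eye(32))

def lower(A):
    return eta[A, A]*Gam[A]          # Gamma_A = eta_{AB} Gamma^B

gamma = Gam[0] @ Gam[1] @ Gam[2]     # D=10 form of the gauge-fixing matrix gamma = gamma^{012}
assert np.allclose(gamma @ gamma, np.eye(32))

def admissible(Gamma):
    # Gamma^2 = 1 and [gamma, Gamma] invertible on the image of (1 + Gamma)/2
    assert np.allclose(Gamma @ Gamma, np.eye(32))
    P = 0.5*(np.eye(32) + Gamma)
    comm = gamma @ Gamma - Gamma @ gamma
    return np.linalg.matrix_rank(comm @ P) == np.linalg.matrix_rank(P)

Gamma_F1 = lower(0) @ lower(1) @ Gam11   # string worldsheet along the boundary directions 0,1
Gamma_D0 = lower(0) @ Gam11              # D0-brane in its rest frame
Gamma_dg = lower(0) @ lower(4) @ Gam11   # worldsheet along e^0 and one CP^3 direction

print('F1 gauge admissible:        ', admissible(Gamma_F1))   # True
print('D0 gauge admissible:        ', admissible(Gamma_D0))   # True
print('degenerate case admissible: ', admissible(Gamma_dg))   # False
\end{verbatim}
In the last case $[\gamma,\Gamma]$ vanishes identically, so the kappa--variation of the gauge condition cannot fix all independent parameters, in agreement with the requirement that the worldsheet project non--degenerately onto the $3d$ boundary.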
Actually, in (\\ref{cordaB}) the vielbein\n$e^3$ does not contribute to the covariant derivative and the\nconnection $\\omega^{ab}$ is zero along the $3d$ Minkowski boundary\nof $AdS_4$, for the vielbeins chosen as in eqs. (\\ref{ad4v}). It is\nnot hard to check that the action (\\ref{cordaB}) is invariant\nunder twelve `linearly realized' supersymmetry transformations\n$$\n\\delta\\vartheta=\\epsilon, \\qquad \\delta\ne^a=-i\\epsilon\\,\\gamma^a\\,D_{24}\\vartheta\n$$\nwith parameters $\\epsilon=\\frac{1}{2}\\,(1+\\gamma)\\,\\epsilon$ being\n$CP^3$ Killing spinors\n$$\nD_{24}\\epsilon={\\mathcal P}_6\\,(d-\\frac{1}{R}e^3\n+\\frac{i}{R}e^{a'}\\gamma_{a'} -\\frac{1}{4}\\omega^{ab}\\gamma_{ab}\n-\\frac{1}{4}\\omega^{a'b'}\\gamma_{a'b'})\\epsilon=0\\,.\n$$\nThe other twelve supersymmetries of the $OSp(6|4)$ isometries of\n(\\ref{cordaB}) are non--linearly realized on the worldsheet fields\nand include compensating kappa--symmetry transformations required to\nmaintain the gauge $\\vartheta=\\frac{1}{2}\\,(1+\\gamma)\\,\\vartheta$.\n\nThe action (\\ref{cordaB}) is slightly more complicated than the\naction for the $AdS_5\\times S^5$ superstring in the analogous\nkappa--symmetry gauge\n\\cite{Pesando:1998fv,Kallosh:1998nx,Kallosh:1998ji},\nthat contains fermions only up to the fourth order, since\n$AdS_4\\times CP^3$ is less supersymmetric than $AdS_5\\times\nS^5$. The action (\\ref{cordaB}) takes a form similar to that of\n\\cite{Pesando:1998fv,Kallosh:1998nx,Kallosh:1998ji} when we formally\nput the broken supersymmetry fermions $\\upsilon^{\\alpha i}$ to zero.\n\nAs in the case of the string in $AdS_5\\times S^5$ it is possible to simplify the action further\nby performing a T--duality transformation on the worldsheet \\cite{Kallosh:1998ji}. Following\n\\cite{Kallosh:1998ji} we first rewrite the part of the action\n(\\ref{cordaB}) containing the vielbeins $e^a$ in the first order\nform\n\\begin{equation}\\label{first}\nS_1=\\frac{1}{2\\pi\\alpha'}\\,\\frac{R}{k l_p}\n\\int\\,d^2\\xi\\,\\left[P_a^I\\,(e^a_I+i\\Theta\\gamma^aD_I\\Theta)\n+\\frac{1-\\frac{6}{R^2}\\,(\\upsilon\\upsilon)^2}{2\\sqrt{-h}}\\,P_a^I\\,P_b^J\\,h_{IJ}\\,\\eta^{ab}\n-\\frac{i}{R}\\,\\frac{\\varepsilon_{IJ}}{-h}\\,P^I_a\\,P^J_b\\,\\upsilon\\gamma^{ab}\\varepsilon\\upsilon\\right]\\,.\n\\end{equation}\nThe equations of motion for the momenta $P_a^I$ imply that\n\\begin{equation}\\label{P}\nP^I_a=-\\sqrt{-h}\\,(1-\\frac{2}{R^2}(\\upsilon\\upsilon)^2)\\,\n\\Big(h^{IJ}\\eta_{ab}\n+\\frac{2i}{R\\sqrt{-h}}\\,\\varepsilon^{IJ}\\,\\upsilon\\gamma_{ab}\\varepsilon\\upsilon\\Big)\n\\,(e^b_J+i\\Theta\\gamma^bD_J\\Theta)\\,.\n\\end{equation}\nUsing the explicit form of the $AdS_4$ vielbeins given in eq.\n(\\ref{ad4v}) and varying the first order action (\\ref{first}) with\nrespect to $x^a$ we find that $P^I_a$ is proportional to the\nconserved current associated with translations along $x^a$\n\\begin{equation}\\label{trcurrent}\n\\partial_I\\,\\Big({r^2\\over R^2}\\,P^I_a\\Big)=0\\, \\qquad\\Rightarrow\n\\qquad P^I_a=\\frac{R^2}{r^2}\\,\\varepsilon^{IJ}\\partial_J\\,{\\tilde\nx}_a\\equiv \\varepsilon^{IJ}\\,{\\tilde e}_{Ja}\\,.\n\\end{equation}\nIf we now substitute eq. 
(\\ref{trcurrent}) into (\\ref{first}) the\nT--dualized version of the action (\\ref{cordaB}) for the string in\n$AdS_4\\times CP^3$ takes the form\n\\begin{eqnarray}\\label{cordaBT}\nS &=&-\\frac{1}{4\\pi\\alpha'}\\,\\frac{R}{k l_p}\n\\int\\,d^2\\xi\\,\\sqrt{-h}\\,h^{IJ}\n\\left(\n{\\tilde e}_I^a\\,{\\tilde e}_J^b\\,\\,\\eta_{ab}+e^3_I e^3_{J}\n+e^{a'}_I e^{b'}_{J} \\delta_{a'b'}\n\\right)\\,(1-\\frac{6}{R^2}(\\upsilon\\ups)^2)\n\\nonumber\\\\\n\\\\\n&-&\\frac{1}{2\\pi\\alpha'}\\frac{R}{kl_p}\\int\n\\Big[e^3\\,\\Theta\\gamma^7D\\Theta\n+ie^{a'}\\,\\Theta\\gamma_{a'}\\gamma^7D\\Theta\n-\\frac{2i}{R}e^3e^{a'}\\,\\vartheta\\gamma_{a'}\\gamma^7\\upsilon\n-\\frac{1}{R}e^{b'}e^{a'}\\,\\Theta\\gamma_{a'b'}\\gamma^7\\upsilon\n\\nonumber\\\\\n&&{\\hspace{60pt}} +i{\\tilde e}^a\\,\\Theta\\gamma_aD\\Theta\n-\\frac{i}{R}\\,\\,{\\tilde e}^a\\,{\\tilde e}^b\\,\\upsilon\\gamma_{ab}\\varepsilon\\upsilon\n\\Big]\\,.\\nonumber\n\\end{eqnarray}\n\nNote that in the T--dualized action the fermionic kinetic terms\nappear only in the Wess--Zumino term and that there are now terms of\nat most fourth order in fermions. Note also that the first (induced\nmetric) term of (\\ref{cordaBT}) acquires a common factor\n$(1-\\frac{6}{R^2}(\\upsilon\\ups)^2)$ in contrast to the corresponding\nterms in the original action (\\ref{cordaB}).\n\nTo preserve the conformal invariance of the dual action at the\nquantum level one should add to it a dilaton term\n$\\int\\,R^{(2)}\\,\\tilde\n\\phi$ (where $R^{(2)}$ is the worldsheet curvature), which is\ninduced by the functional integration of $P_a^I$ when passing to the\ndual action (see \\cite{Buscher:1987qj,Schwarz:1992te,Kallosh:1998ji}\nfor details). Here we should point out that in our case the original\n$AdS_4\\times CP^3$ superbackground already has a non--trivial\ndilaton which depends on $\\upsilon$ (see eqs. (\\ref{dilaton1}) and\n(\\ref{gfdialton})).\n\nThe following comment is now in order. As in the $AdS_5\\times S^5$\ncase \\cite{Kallosh:1998ji,Ricci:2007eq,Beisert:2008iq}, upon the\nT--duality along the three translational directions $x^a$ of $AdS_4$\nthe purely bosonic (classically integrable) $AdS_4\\times CP^3$\nsector of the type IIA superstring sigma model maps into an\nequivalent sigma model on a dual $AdS_4$ space, both models sharing\nthe same integrable structure \\cite{Ricci:2007eq}. The situation\nwith the fermionic sector of the $AdS_4\\times CP^3$ superstring is,\nhowever, different due to the fact that there is less supersymmetry\nthan in the $AdS_5\\times S^5$ case.\n\n In the case of the\n$AdS_5\\times S^5$ superstring sigma model, one can accompany the\nabove bosonic T--duality transformation by a fermionic one along\nfermionic directions in (complexified) superspace which have\ntranslational isometries \\cite{Berkovits:2008ic,Beisert:2008iq}.\nThis compensates the dilaton term generated by the bosonic\nT--duality and maps the $AdS_5\\times S^5$ superstring action to an\nequivalent (dual) one, which is also integrable\\footnote{In\n\\cite{Ricci:2007eq,Berkovits:2008ic,Beisert:2008iq} it has been shown that this\nduality property of the $AdS_5\\times S^5$ superstring is related to\nearlier observed dual conformal symmetry of maximally helicity\nviolating amplitudes of the ${\\mathcal N}=4$ super--Yang--Mills\ntheory and to the relation between gluon scattering amplitudes and\nWilson loops at strong and weak coupling.}. 
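\n\nAs an aside, let us record the purely bosonic content of the above elimination of $P^I_a$ (a schematic sketch in which the fermions and the $(\\upsilon\\upsilon)^2$ corrections are suppressed, so that the formulas below only indicate the structure of (\\ref{first})--(\\ref{cordaBT})). In this truncation the first order action (\\ref{first}), here denoted $S_1^{\\rm bos}$, reduces to\n$$\nS_1^{\\rm bos}=\\frac{1}{2\\pi\\alpha'}\\,\\frac{R}{k l_p}\n\\int\\,d^2\\xi\\,\\Big[P_a^I\\,e^a_I\n+\\frac{1}{2\\sqrt{-h}}\\,P_a^I\\,P_b^J\\,h_{IJ}\\,\\eta^{ab}\\Big]\\,,\n$$\nits $P_a^I$ equation of motion gives $P^I_a=-\\sqrt{-h}\\,h^{IJ}\\eta_{ab}\\,e^b_J$, i.e. the $\\Theta$--independent part of (\\ref{P}), and substituting this back yields\n$$\nS_1^{\\rm bos}=-\\frac{1}{4\\pi\\alpha'}\\,\\frac{R}{k l_p}\n\\int\\,d^2\\xi\\,\\sqrt{-h}\\,h^{IJ}\\,e^a_I\\,e^b_J\\,\\eta_{ab}\\,,\n$$\nwhich is the $\\eta_{ab}$ part of (\\ref{cordaB}), while inserting instead the dual expression $P^I_a=\\varepsilon^{IJ}\\,{\\tilde e}_{Ja}$ of (\\ref{trcurrent}) into the quadratic term reproduces the ${\\tilde e}^a_I\\,{\\tilde e}^b_J\\,\\eta_{ab}$ term of (\\ref{cordaBT}). This is, of course, just the standard bosonic T--duality; let us now return to the fermionic sector.\n\n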
However, in the\n$AdS_4\\times CP^3$ case under consideration the fermionic directions\nin superspace parametrized by $\\upsilon$ do not have translational\nisometries, since the action (\\ref{cordaB}) or (\\ref{cordaBT}) has\n$\\upsilon$--dependent fermionic terms which do not contain\nworldsheet derivatives. This just reflects the fact that the\nfermionic modes $\\upsilon$ correspond to the broken supersymmetries\nof the superbackground.\n\nAs far as the T--dualization of the supersymmetric fermionic modes\n$\\vartheta$ is concerned, it might be, in principle, possible (at\nleast in the absence of $\\upsilon$) if, as in the case of the\n$PSU(2,2|4)$ superstring sigma model\n\\cite{Berkovits:2008ic,Beisert:2008iq}, there existed a\nrealization of the $OSp(6|4)$ superalgebra in which 12 of the 24\n(complex conjugate) supersymmetry generators squared to zero and\nformed a representation of the bosonic subalgebra of $OSp(6|4)$. In\nother words the possibility of T--dualizing part of fermionic modes\n$\\vartheta$ (in the absence of $\\upsilon$) is related to the\nquestion of the existence of a chiral superspace representation of\nthe superalgebra $OSp(6|4)$. Such a realization of $OSp(6|4)$ seems\nnot to exist. In fact, it has been argued in \\cite{Adam:2009kt} that\nthe $OSp(6|4)$ supercoset subsector of the Green--Schwarz\nsuperstring in $AdS_4\\times CP^3$ does not have any fermionic\nT--duality symmetry since in $OSp(6|4)$ the dimension of the\nrepresentation of the supercharges under the R--symmetry is odd. The\nabsence of the fermionic T--duality of the superstring in\n$AdS_4\\times CP^3$ may have interesting manifestations in particular\nfeatures of the $AdS_4\/CFT_3$ holography.\n\nThe gauge--fixed actions (\\ref{cordaB}) or (\\ref{cordaBT}) can be\nused for studying different aspects of the $AdS_4\/CFT_3$\ncorrespondence and integrability on both of its sides, in\nparticular, for making two-- and higher--loop string computations\nfor testing the Bethe ansatz and the S--matrix\n\\cite{Minahan:2008hf}--\\cite{Minahan:2009te} in the dual planar ${\\mathcal N}=6$\nsuperconformal Chern--Simons--matter theory, which would extend the\nanalysis of\n\\cite{Nishioka:2008gz}--\\cite{Suzuki:2009sc},\\cite{Zarembo:2009au} and\nothers.\n\n\\subsection{D2--branes}\nLet us now consider the effective worldvolume theory of probe\nD2--branes moving in the $AdS_4 \\times CP^3$ superbackground. 
This\ncan be derived from the action for $D$--branes in a generic type IIA\nsuperbackground\n\\cite{Cederwall:1996ri,Aganagic:1996pe,Bergshoeff:1996tu} by\nsubstituting the explicit form of the $AdS_4 \\times CP^3$\nsupergeometry (\\ref{simplA})--(\\ref{A3}).\n\nThe action for a D2--brane in a generic type IIA supergravity\nbackground in the string frame has the following form\n\\begin{equation}\\label{DBIstring}\nS= -T\\int\\,d^{3}\\xi\\,e^{{ - } {\\phi}}\\sqrt{-\\det(g_{IJ}+{\\cal\nF}_{IJ})}+T \\int\\, ({\\mathcal A}_3 + {\\mathcal A}_1\\,{\\cal F}_2)\\,,\n\\end{equation}\nwhere $T$ is the tension of the D2--brane, $\\phi(Z)$ is the dilaton\nsuperfield,\n\\begin{equation}\\label{imb}\ng_{IJ}(\\xi)={\\mathcal E}_{I}{}^{A}\\, {\\mathcal\nE}_{J}{}^{B}\\,\\eta_{AB}\\qquad I,J=0,1,2;\\qquad A,B=0,1,\\cdots,9\n\\end{equation}\nis the induced metric on the D2--brane worldvolume with ${\\mathcal\nE}_{I}{}^{A}=\\partial_I\\,Z^{\\mathcal M}\\,{\\mathcal E}_{\\mathcal\nM}{}^{A}$ being the pullbacks of the vector supervielbeins of the\ntype IIA $D=10$ superspace and\n\\begin{equation}\\label{deltaQstring}\n{\\cal F}_2 = d{\\mathcal V} - {B}_2\n\\end{equation}\nis the field strength of the worldvolume Born--Infeld gauge field\n${\\mathcal V}_I(\\xi)$ extended by the pullback of the NS--NS\ntwo--form. ${\\mathcal A}_1$ and ${\\mathcal A}_3$ are the pullbacks\nof the type IIA supergravity RR superforms (\\ref{simplB}) and\n(\\ref{A3}).\n\nProvided that the superbackground satisfies the IIA supergravity\nconstraints, the action (\\ref{DBIstring}) is invariant under\nkappa--symmetry transformations of the superstring coordinates\n$Z^{\\mathcal M}(\\xi)$ of the form\n(\\ref{kappastring}), (\\ref{kA}), together with\n\\begin{equation}\n\\delta_\\kappa\\mathcal F_{IJ}=-\\mathcal E_J^{\\mathcal B}\\,\\mathcal E_I^{\\mathcal A}\\,\\delta_\\kappa Z^{\\mathcal M}\\mathcal E_{\\mathcal M}{}^{\\underline\\alpha}\\,H_{\\underline\\alpha\\mathcal{AB}}\n=-4i\\mathcal E_{[I}^A\\,\\mathcal E_{J]}\\Gamma_A\\Gamma_{11}\\delta_\\kappa\\mathcal E\n-2i\\mathcal E_J^B\\,\\mathcal E_I^A\\,\\delta_\\kappa\\mathcal E\\Gamma_{AB}\\Gamma_{11}\\lambda\\,,\n\\end{equation}\nwhere $\\delta_\\kappa\\mathcal E=\\delta_\\kappa Z^{\\mathcal M}\\mathcal E_{\\mathcal M}{}^{\\underline\\alpha}$.\n\nIn the case of the $D2$--brane the matrix $\\Gamma$ has the form\n\\begin{eqnarray}\\label{bargamma}\n\\Gamma&=-&{{1}\\over {\\sqrt{-\\det(g+{\\cal\nF})}}}\\,\\,\\varepsilon^{IJK}\\,({1\\over 3!}\\,{\\mathcal\nE}_I^{A}\\,{\\mathcal E}_J^{B}\\,{\\mathcal E}_K^{C}\\Gamma_{ABC}+{1\\over\n2}\\,{\\mathcal F_{IJ}}\\,{\\mathcal\nE}_K^{A}\\,\\Gamma_A\\,\\Gamma_{11})\\nonumber\\\\\n&&\\\\\n&=&-{{1}\\over {3!\\sqrt{-\\det(g+{\\cal\nF})}}}\\,\\,\\varepsilon^{IJK}\\,{\\mathcal E}_I^{A}\\,{\\mathcal\nE}_J^{B}\\,{\\mathcal E}_K^{C}\\,\\Gamma_{ABC}\\,\\,(1+{1\\over\n2}\\,{\\mathcal F^{IJ}}\\,{\\mathcal E}_I^{A}\\,{\\mathcal\nE}_J^{B}\\,\\Gamma_{AB}\\,\\Gamma_{11})\\,.\\nonumber\n\\end{eqnarray}\n\n\\subsubsection{D2 filling $AdS_2\\times S^1$ inside of\n$AdS_4$}\\label{d2ads2s1}\n\nLet us consider the D2--brane configuration which corresponds to a\ndisorder loop operator in the ABJM theory\n\\cite{Drukker:2008jm}. 
The 1\/2 BPS static solution of the equations of motion of the D2--brane on\n$AdS_2\\times S^1$ in the metric\n\\begin{equation}\\label{ads4}\nds^2=\\frac{R_{CP^3}^2}{4u^2}(-dx^0\\,dx^0+dr^2+r^2d\\varphi^2+du^2)+R_{CP^3}^2ds^2_{CP^3}\\,,\n\\end{equation}\nwhere\n\\begin{eqnarray}\nds^2_{CP^3}={1\\over 4}\\Big[d\\alpha^2+\\cos^2{\\alpha\\over\n2}(d\\vartheta_1^2+\\sin^2\\vartheta_1d\\varphi_1^2)+\\sin^2{\\alpha\\over\n2}(d\\vartheta_2^2+\\sin^2\\vartheta_2d\\varphi_2^2)\\cr\n+\\sin^2{\\alpha\\over 2}\\cos^2{\\alpha\\over 2}(d\\chi+\n\\cos\\vartheta_1d\\varphi_1-\\cos\\vartheta_2d\\varphi_2)^2\\Big]\\,,\\nonumber\n\\end{eqnarray}\nis characterized by the following embedding of the brane worldvolume\n\\begin{equation}\\label{ads2s11}\n\\xi^0=x^0,\\qquad \\xi_1=u,\\qquad \\xi_2=\\varphi\\,,\n\\qquad r=a\\,\\xi_1\n\\end{equation}\nwhich is supported by the non--zero (electric) Born--Infeld field\nstrength\n\\begin{equation}\nF=E {dx^0\\wedge du\\over u^2}\\,, \\qquad\nE=\\frac{R_{CP^3}^2}{4}\\sqrt{1+a^2}\\,,\n\\end{equation}\nwhere $a$ is an arbitrary constant. Note that the presence of the\nnon--zero DBI flux on the $AdS_2$ subspace of the D2--brane\nworldvolume is required to ensure the no--force condition, i.e.\nvanishing of the classical action (\\ref{DBIstring}) of this static\nD2--brane configuration, provided that also an additional BI flux\nboundary counterterm is added to the action (see\n\\cite{Drukker:2008jm} for more details). A natural explanation of\nthis boundary term is that it appears in the process of the\ndualization of the compactified 11th coordinate scalar field of the\nM2--brane into the BI vector field of the D2--brane.\n\nNote that in \\cite{Drukker:2008jm} this brane configuration was\nconsidered in a different coordinate system, in which $AdS_4$ is\nfoliated with $AdS_2\\times S^1$ slices instead of the flat $R^{1,2}$\nslices. This makes manifest the symmetries of the D2--brane\nconfiguration. An explicit form of the $AdS_4$ metric in this\nslicing is\n\\begin{equation}\\label{ads2s1}\nds^2_{_{AdS_4}}= {R_{CP^3}^2\\over 4}\\,(\\cosh^2 \\psi\n\\,ds^2_{_{AdS_2}}+d\\psi^2+\n\\sinh^2 \\psi\n\\, d\\varphi^2)\n\\end{equation}\nwhich is essentially a double analytic continuation of the usual\nglobal $AdS_4$ metric. The static D2--brane configuration is then\ncharacterized by the identification of the worldvolume coordinates\n$\\xi^a$ with those of $AdS_2$ and the $S^1$ angle $\\varphi$.\nHowever, for our choice of the kappa--symmetry gauge fixing\ncondition the use of the metric in the form (\\ref{ads4}) is more\nconvenient, since the associated $AdS_4$ vielbeins\n\\begin{equation}\\label{0123}\ne^0=\\frac{R_{CP^3}}{2u}\\,dx^0\\,,\\qquad\ne^1=\\frac{R_{CP^3}}{2u}\\,dr\\,,\\qquad\ne^2=\\frac{R_{CP^3}\\,r}{2u}\\,d\\varphi\\,,\\qquad\ne^3=-\\frac{R_{CP^3}}{2u}\\,du\\,\n\\end{equation}\nand the spin connection directly satisfy the relations\n(\\ref{eaoa31}) and (\\ref{oab}).\n\nOne can be interested in D2--brane bosonic and fermionic\nfluctuations around this 1\/2 BPS static D2--brane solution described\nby the action (\\ref{DBIstring}). To simplify the form of the\nfermionic terms, the kappa--symmetry gauge fixing for the D2--brane\nwrapping $AdS_2\n\\times S^1$ can be made in the simplest possible way considered in\nSubsection \\ref{theta-}. 
To get the gauge fixed D2--brane action in\nthis case one should substitute into (\\ref{DBIstring}) the\nexpressions for the vector supervielbeins (\\ref{simple+v}), the RR\none--form (\\ref{simple+A}) and the three--form (\\ref{A3-}), and the\nNS--NS two--form (\\ref{simple+B}).\n\n\\subsubsection{D2 at the Minkowski boundary of $AdS_4$}\nLet us now consider the supersymmetric effective worldvolume action\ndescribing a D2--brane placed at the Minkowski boundary of the\n$AdS_4$ space. In this case it is convenient to choose the\n$AdS_4\\times CP^3$ metric in the form (\\ref{ads4metric11}) or\n(\\ref{ads4metric21}).\n\n\nWhen the D2--brane is at the Minkowski boundary, we take the static\ngauge $\\xi^m=x^m$. The 1\/2 BPS ground state of the D2--brane is when\nits transverse scalar modes are constant and the Born--Infeld field\nand the fermionic modes are zero. As a consistency check, let us\nnote that with the choice of the background value of the RR 3--form\n(\\ref{A3}) and (\\ref{A31}) and of the corresponding (positive) $D2$--brane charge\n(characterized by the plus sign in front of the Wess--Zumino term\n(\\ref{DBIstring})), the action of the ground state of the D2--brane\nat the Minkowski boundary vanishes. This means that such a brane\nconfiguration is stable and does not experience any external force,\n\\emph{i.e.} it is a BPS state.\n\nIf, on the other hand, with the same choice of ${\\mathcal A}_3$\n(\\ref{A3}) and (\\ref{A31}), we considered an anti--$D2$--brane carrying a negative\n${\\mathcal A}_3$ charge (which would be characterized by a minus\nsign in front of the Wess--Zumino term in (\\ref{DBIstring})), the\nground state of this anti--$D2$--brane at the Minkowski boundary\nwould have a non--zero action\n$$\nS_{\\overline{D2}}=-2Te^{-\\phi_0}\\,\\int\\,d^{\\,3}x\\,\\left(r\\over {R_{CP^3}}\\right)^6\\,\n$$\nimplying that such a solution is unstable (as is well known to be\nthe case for a probe anti--D--brane in a background of D--branes).\nIt is, therefore, important for the consistency of the solution to\ntake care that the relative signs of the RR potential ${\\mathcal\nA}_3$ and the $D2$--brane charge (and, as a consequence, the sign of\nthe kappa--symmetry projector) ensure the no--force condition, i.e.\nvanishing of the static $D2$--brane action. 
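\n\nSchematically, and granting (as stated above for the ground state) that the Wess--Zumino contribution of the static configuration equals the Dirac--Born--Infeld one in absolute value, the two choices of the charge differ only in the relative sign,\n$$\nS_{D2^{\\pm}}=-T\\int\\,d^{3}\\xi\\,e^{-\\phi}\\sqrt{-\\det g}\\,\\pm\\,T\\int {\\mathcal A}_3\n=\\begin{cases} 0\\,,\\\\ -2T\\int\\,d^{3}\\xi\\,e^{-\\phi}\\sqrt{-\\det g}\\,,\\end{cases}\n$$\nwhere the upper sign corresponds to the BPS $D2$--brane and the lower one to the anti--$D2$--brane, which is the origin of the factor of two in $S_{\\overline{D2}}$ above.\n\n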
In the case of M2, M5\nand D3--branes at the Minkowski boundary of $AdS$ this issue was\ndiscussed in detail in\n\\cite{Pasti:1998tc}.\n\nFor the static D2--brane configuration the kappa--symmetry projector\n(\\ref{bargamma}) reduces to\n\\begin{equation}\\label{kappa}\nP=\\frac{1}{2}(1+\\gamma), \\qquad\n\\gamma=\\gamma^0\\gamma^1\\gamma^2=-\\gamma_0\\gamma_1\\gamma_2\n\\end{equation}\nSo the natural choice of the kappa--symmetry gauge fixing condition\nis\n\\begin{equation}\\label{kappagauge}\n\\Theta=\\frac{1}{2}(1-\\gamma)\\Theta\\,,\n\\end{equation}\n\\emph{i.e.} the gauge choice considered in detail in Subsection\n\\ref{theta++}.\nNote that in the case of the D2--brane at the Minkowski boundary we\ncannot use the simpler condition\n$\\Theta=\\frac{1}{2}(1+\\gamma)\\Theta$ of Subsection \\ref{theta-},\nbecause the kappa--symmetry projector (\\ref{kappa}) has the same\nsign.\n\nPlugging the kappa--symmetry gauge--fixed quantities of Subsection\n3.2 into the action (\\ref{DBIstring}), one can study the properties\nof the $OSp(6|4)$ invariant effective $3d$ gauge--matter field\ntheory on the worldvolume of the $D2$--brane placed at the Minkowski\nboundary of $AdS_4$, which from the point of view of M--theory\ncorresponds to an $M2$--brane pulled out to a finite distance from a\nstack of $M2$--branes probing $R^8\/Z_k$.\n\n\nThe effective theory on the worldvolume of this D2--brane, which\ndescribes its fluctuations in $AdS_4\n\\times CP^3$, is an interacting $d=3$ gauge Born--Infeld--matter theory\npossessing the (spontaneously broken) superconformal symmetry\n$OSp(6|4)$. The model is superconformally invariant in spite of the\npresence on the $d=3$ worldvolume of the dynamical Abelian vector\nfield, since the latter is coupled to the $3d$ dilaton field\nassociated with the radial direction of $AdS_4$. The superconformal\ninvariance is spontaneously broken by a non--zero expectation value\nof the dilaton. An ${\\mathcal N=3}$ superfield model with similar\nsymmetry properties was considered in the Appendix of\n\\cite{Buchbinder:2008vi}. To establish the explicit relation between\n the two models one should extract from the\nsuperfield action of \\cite{Buchbinder:2008vi} the component terms\ndescribing its physical sector and compare the result with\ncorresponding terms in the D2--brane action.\n\n\\section{Conclusion}\nIn this paper we have considered the gauge--fixing of\nkappa--symmetry of the superparticle, superstring and D2--brane\nactions in the complete $AdS_4\\times CP^3$ superspace which is\nsuitable, in particular, for studying regions of these theories that\nare not reachable by partially kappa--symmetry gauge fixed models\nbased on the supercoset $OSp(6|4)\/U(3)\\times SO(1,3)$. The\nsimplified form of these actions can be used to approach various\nproblems of the $AdS_4\/CFT_3$ correspondence. The gauge fixed form\nof the $AdS_4\\times CP^3$ supergeometry can also be used to consider\nthe actions for higher dimensional D4--, D6-- and D8--branes.\n\n\n\n\\section*{Acknowledgments}\nThe authors would like to thank Pietro Fr\\'e and Jaume Gomis for\ncollaboration at early stages of this project and for many fruitful\ndiscussions and comments. D.S. is also thankful to Soo--Jong Rey for\nuseful discussions. P.A.G. and D.S. are grateful to the Organizers\nof the Workshop Program ``Fundamental Aspects of Superstring Theory\"\nfor their hospitality at KITP, Santa Barbara, where their research\nwas supported in part by the National Science Foundation under Grant\nNo. PHY05-51164. 
Work of P.A.G., D.S. and L.W. was partially\nsupported by the INFN Special Initiative TV12. D.S. was also\npartially supported by the INTAS Project Grant 05-1000008-7928, an\nExcellence Grant of Fondazione Cariparo and the grant FIS2008-1980\nof the Spanish Ministry of Science and Innovation.\n\n\\def{}\n\\defC.\\arabic{equation}}\\label{C{A.\\arabic{equation}}\\label{A}\n\\section{Appendix A. Main notation and conventions}\n\\setcounter{equation}0\n\nThe convention for the ten and eleven dimensional metrics is the\n`almost plus' signature $(-,+,\\cdots,+)$. Generically, the tangent\nspace vector indices are labeled by letters from the beginning of\nthe Latin alphabet, while letters from the middle of the Latin\nalphabet stand for curved (world) indices. The spinor indices are\nlabeled by Greek letters.\n\n\\defC.2{A.1}\n\\subsection{$AdS_4$ space}\n\n$AdS_4$ is parametrized by the coordinates $x^{\\hat m}$ and its\nvielbeins are $e^{\\hat a}=dx^{\\hat m}\\,e_{\\hat m}{}^{\\hat a}(x)$,\n${\\hat m}=0,1,2,3;$ ${\\hat a}=0,1,2,3$. The $D=4$ gamma--matrices\nsatisfy:\n\\begin{equation}\\label{gammaa}\n\\{\\gamma^{\\hat a},\\gamma^{\\hat b}\\}=2\\,\\eta^{\\hat a\\hat b}\\,,\n\\qquad \\eta^{\\hat a\\hat b}={\\rm diag}\\,(-,+,+,+)\\,,\n\\end{equation}\n\\begin{equation}\\label{gamma5}\n\\gamma^5=i\\gamma^0\\,\\gamma^1\\,\\gamma^2\\,\\gamma^3, \\qquad\n\\gamma^5\\,\\gamma^5=1\\,.\n\\end{equation}\nThe charge conjugation matrix $C$ is antisymmetric, the matrices\n$(\\gamma^{\\hat a})_{\\alpha\\beta}\\equiv (C\\,\\gamma^{\\hat\na})_{\\alpha\\beta}$ and $(\\gamma^{\\hat a\\hat\nb})_{\\alpha\\beta}\\equiv(C\\,\\gamma^{\\hat a\\hat b})_{\\alpha\\beta}$ are\nsymmetric and $\\gamma^5_{\\alpha\\beta}\\equiv\n(C\\gamma^5)_{\\alpha\\beta}$ is antisymmetric, with\n$\\alpha,\\beta=1,2,3,4$ being the indices of a 4--dimensional spinor\nrepresentation of $SO(1,3)$ or $SO(2,3)$.\n\n\\defC.2{A.2}\n\\subsection{$CP^3$ space}\n\n$CP^3$ is parametrized by the coordinates $y^{m'}$ and its vielbeins\nare $e^{a'}=dy^{m'}e_{m'}{}^{a'}(y)$, ${m'}=1,\\cdots,6;$\n${a'}=1,\\cdots,6$. The $D=6$ gamma--matrices satisfy:\n\\begin{equation}\\label{gammaa'}\n\\{\\gamma^{a'},\\gamma^{b'}\\}=2\\,\\delta^{{a'}{b'}}\\,,\\qquad \\delta^{a'b'}={\\rm\ndiag}\\,(+,+,+,+,+,+)\\,,\n\\end{equation}\n\\begin{equation}\\label{gamma7}\n\\gamma^7={i\\over{6!}}\\,\\epsilon_{\\,a_1'a_2'a_3'a_4'a_5'a_6'}\\,\\gamma^{a_1'}\\cdots \\gamma^{a_6'} \\qquad\n\\gamma^7\\,\\gamma^7=1\\,.\n\\end{equation}\nThe charge conjugation matrix $C'$ is symmetric and the matrices\n$(\\gamma^{a'})_{\\alpha'\\beta'}\\equiv\n(C\\,\\gamma^{a'})_{\\alpha'\\beta'}$ and\n$(\\gamma^{a'b'})_{\\alpha'\\beta'}\\equiv(C'\\,\\gamma^{a'b'})_{\\alpha'\\beta'}$\nare antisymmetric, with $\\alpha',\\beta'=1,\\cdots,8$ being the\nindices of an 8--dimensional spinor representation of $SO(6)$.\n\n\\defC.2{A.3}\n\\subsection{ Type IIA $AdS_4\\times CP^3$ superspace}\n\nThe type IIA superspace whose bosonic body is $AdS_4\\times CP^3$ is\nparametrized by 10 bosonic coordinates $X^M=(x^{\\hat m},\\,y^{m'})$\nand 32-fermionic coordinates\n$\\Theta^{\\underline\\mu}=(\\Theta^{\\mu\\mu'})$\n($\\mu=1,2,3,4;\\,\\mu'=1,\\cdots,8$). These combine into the\nsuperspace supercoordinates $Z^{\\cal M}=(x^{\\hat\nm},\\,y^{m'},\\,\\Theta^{\\mu\\mu'})$. 
The type IIA supervielbeins are\n\\begin{equation}\\label{IIAsv}\n{\\mathcal E}^{\\mathcal A}=dZ^{\\mathcal M}\\,{\\mathcal E}_{\\mathcal\nM}{}^{\\mathcal A}(Z)=({\\mathcal E}^{A},\\,{\\mathcal\nE}^{\\underline\\alpha})\\,,\\qquad {\\mathcal E}^{A}(Z)=({\\mathcal\nE}^{\\hat a},\\,{\\mathcal E}^{a'})\\,,\\qquad {\\mathcal\nE}^{\\underline\\alpha}(Z)={\\mathcal E}^{\\alpha\\alpha'}\\,.\n\\end{equation}\n\\defC.2{A.4}\n\\subsection{Superspace constraints}\nIn our conventions the superspace constraint on the bosonic part of the torsion is\n\\begin{equation}\nT^A=-i\\mathcal E\\Gamma^A\\mathcal E+i\\mathcal E^A\\,\\mathcal\nE\\lambda+\\frac{1}{3}{\\mathcal E}^A\\,\\mathcal E^B\\nabla_B\\phi\\,,\n\\end{equation}\nwhile the constraints on the RR and NS--NS field strengths are\n\\begin{eqnarray}\nF_2&=&-i\\,e^{-\\phi}\\,\\mathcal E\\Gamma_{11}\\mathcal E\n+2i\\,e^{-\\phi}\\,\\mathcal E^A\\,\\mathcal E\\Gamma_A\\Gamma_{11}\\lambda+\\frac{1}{2}\\mathcal E^B\\mathcal E^A\\,F_{AB}\\,,\\\\\nF_4&=&-\\frac{i}{2}\\,e^{-\\phi}\\,{\\mathcal E}^B{\\mathcal E}^A\\,\\mathcal E\\Gamma_{AB}\\mathcal E\n+\\frac{1}{4!}{\\mathcal E}^D{\\mathcal E}^C{\\mathcal E}^B{\\mathcal E}^A\\,F_{ABCD}\n\\,,\\\\\nH_3&=&\n-i{\\mathcal E}^A\\,\\mathcal E\\Gamma_A\\Gamma_{11}\\mathcal E\n+i{\\mathcal E}^B{\\mathcal E}^A\\,\\mathcal E\\Gamma_{AB}\\Gamma^{11}\\lambda\n+\\frac{1}{3!}{\\mathcal E}^C{\\mathcal E}^B{\\mathcal E}^A\\,H_{ABC}\\,.\n\\end{eqnarray}\nThese differ from the conventional string frame constraints by the $\\lambda$--term in $T^A$ and related terms in $F_2$, $F_4$ and $H_3$. This is\na consequence of the dimensional reduction from eleven dimensions. They can be brought to a more conventional form by shifting the fermionic supervielbein $\\mathcal E^{\\underline\\alpha}$ by $-\\frac{1}{2}\\mathcal E^A(\\Gamma_A\\lambda)^{\\underline\\alpha}$ accompanied by a related shift in the connection.\n\\\\\n\\\\\n{\\bf The $D=10$ gamma--matrices $\\Gamma^A$} are given by\n\\begin{eqnarray}\\label{Gamma10}\n&\\{\\Gamma^A,\\,\\Gamma^B\\}=2\\eta^{AB},\\qquad\n\\Gamma^{A}=(\\Gamma^{\\hat a},\\,\\Gamma^{a'})\\,,\\nonumber\\\\\n&\\\\\n&\\Gamma^{\\hat a}=\\gamma^{\\hat a}\\,\\otimes\\,{\\bf 1},\\qquad\n\\Gamma^{a'}=\\gamma^5\\,\\otimes\\,\\gamma^{a'},\\qquad\n\\Gamma^{11}=\\gamma^5\\,\\otimes\\,\\gamma^7,\\qquad a=0,1,2,3;\\quad\na'=1,\\cdots,6\\,. \\nonumber\n\\end{eqnarray}\nThe charge conjugation matrix is ${\\mathcal C}=C\\otimes C'$.\n\nThe fermionic variables $\\Theta^{\\underline\\alpha}$ of IIA\nsupergravity carrying 32--component spinor indices of $Spin(1,9)$,\nin the $AdS_4\\times CP^3$ background and for the above choice of the\n$D=10$ gamma--matrices, naturally split into 4--dimensional\n$Spin(1,3)$ indices and 8--dimensional spinor indices of $Spin(6)$,\ni.e. $\\Theta^{\\underline\\alpha}=\\Theta^{\\alpha\\alpha'}$\n($\\alpha=1,2,3,4$; $\\alpha'=1,\\cdots,8$).\n\n\\defC.2{A.5}\n\\subsection{$24+8$ splitting of $32$ $\\Theta$}\n\n24 of $\\Theta^{\\underline\\alpha}=\\Theta^{\\alpha\\alpha'}$ correspond\nto the unbroken supersymmetries of the $AdS_4\\times CP^3$\nbackground. They are singled out by a projector introduced in\n\\cite{Nilsson:1984bj} which is constructed using the $CP^3$ K\\\"ahler\nform $J_{a'b'}$ and seven $8\\times 8$ antisymmetric gamma--matrices\n(\\ref{gammaa'}). 
The $8\\times 8$ projector matrix has the following\nform\n\\begin{equation}\\label{p6}\n{\\mathcal P}_{6}={1\\over 8}(6-J)\\,,\n\\end{equation}\nwhere the $8\\times 8$ matrix\n\\begin{equation}\\label{J}\nJ=-iJ_{a'b'}\\,\\gamma^{a'b'}\\,\\gamma^7 \\qquad {\\rm such~ that} \\qquad\nJ^2= 4J+12\n\\end{equation}\nhas six eigenvalues $-2$ and two eigenvalues $6$, \\emph{i.e.} its\ndiagonalization results in\n\\begin{equation}\\label{Jdia}\nJ=\\hbox{diag}(-2,-2,-2,-2,-2,-2,6,6)\\,.\n\\end{equation}\nTherefore, the projector (\\ref{p6}) when acting on an 8--dimensional\nspinor annihilates 2 and leaves 6 of its components, while the\ncomplementary projector\n\\begin{equation}\\label{p2}\n{\\mathcal P}_{2}={1\\over 8}(2+J)\\,,\\qquad\n\\mathcal{P}_2+\\mathcal{P}_6=\\mathbf 1\n\\end{equation}\nannihilates 6 and leaves 2 spinor components.\n\nThus the spinor\n\\begin{equation}\\label{24}\n\\vartheta^{\\alpha\\alpha'}=({\\mathcal P}_6\\,\\Theta)^{\\alpha\\alpha'} \\qquad \\Longleftrightarrow \\qquad\n\\vartheta^{\\alpha a'}\\, \\qquad a'=1,\\cdots, 6\n\\end{equation}\nhas 24 non--zero components and the spinor\n\\begin{equation}\\label{8}\n\\upsilon^{\\alpha\\alpha'}=({\\mathcal P}_2\\,\\Theta)^{\\alpha\\alpha'}\\qquad \\Longleftrightarrow \\qquad\n\\upsilon^{\\alpha i}\\, \\qquad i=1,2\n\\end{equation}\nhas 8 non--zero components. The latter corresponds to the eight\nsupersymmetries broken by the $AdS_4\\times CP^3$ background.\n\nTo avoid confusion, let us note that the index $a'$ on spinors is\ndifferent from the same index on bosonic quantities. They are\nrelated by the usual relation between vector and spinor\nrepresentations, \\emph{i.e.} given two $Spin(6)$ spinors\n$\\psi_1^{\\alpha'}$ and $\\psi_2^{\\alpha'}$, projected as in\n(\\ref{24}), their bilinear combination $v^{a'}=\\psi_1\\mathcal\nP_6\\gamma^{a'}\\mathcal P_6\\psi_2=\\psi_1^{b'}(\\mathcal\nP_6\\gamma^{a'}\\mathcal P_6)_{b'c'}\\psi_2^{c'}$ transforms as a\n6--dimensional 'vector'.\n\n\\defC.\\arabic{equation}}\\label{C{B.\\arabic{equation}}\\label{B}\n\\section{Appendix B. 
$OSp(6|4)\/U(3)\\times SO(1,3)$ supercoset realization\nand other ingredients of the $(10|32)$--dimensional $AdS_4\\times\nCP^3$ superspace}\n\\setcounter{equation}0\n\nThe supervielbeins and the superconnections of the\n$OSp(6|4)\/U(3)\\times SO(1,3)$ supercoset which appear in the\ndefinition of the geometric and gauge quantities of the $AdS_4\\times\nCP^3$ superspace in Section \\ref{superspace} are\n\\begin{equation}\\label{cartan24}\n\\begin{aligned}\nE^{\\hat a}&=e^{\\hat a}(x)+4i\\vartheta\\gamma^{\\hat\na}\\,{{\\sinh^2{{\\mathcal M}_{24}\/ 2}}\\over{\\mathcal M}^2_{24}}\\,\nD_{24}\\vartheta,\\\\\nE^{a'}&=e^{a'}(y)+4i\\vartheta\\gamma^{a'}\\gamma^5\\,{{\\sinh^2{{\\mathcal\nM}_{24}\/2}}\\over{\\mathcal M}_{24}^2}\\,D_{24}\\vartheta\\,,\n\\\\\nE^{\\alpha a'}&=\\left({{\\sinh{\\mathcal M}_{24}}\\over{\\mathcal\nM}_{24}}D_{24}\\vartheta\\right)^{\\alpha a'},\\\\\n\\Omega^{\\hat a\\hat b}&=\\omega^{\\hat a\\hat b}(x)+\\frac{8}{R}\\vartheta\\gamma^{\\hat a\\hat b}\\gamma^5\\,\n{{\\sinh^2{{\\mathcal M}_{24}\/2}}\\over{\\mathcal M}_{24}^2}D_{24}\\vartheta\\,,\\\\\n\\Omega^{a'b'}&=\\omega^{a'b'}(y)-\\frac{4}{R}\n\\vartheta(\\gamma^{a'b'}-iJ^{a'b'}\\gamma^7)\\gamma^5\\,{{\\sinh^2{{\\mathcal M}_{24}\/2}}\n\\over{\\mathcal M}_{24}^2}\\,D_{24}\\vartheta\\,,\\\\\nA&=\\frac{1}{8}J_{a'b'}\\Omega^{a'b'}=A(y)+\\frac{4i}{R}\\,\\vartheta\\gamma^7\\gamma^5\\,{{\\sinh^2{{\\mathcal\nM}_{24}\/2}}\\over{\\mathcal M}_{24}^2}\\,D_{24}\\vartheta\\,,\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\\label{M24}\nR\\,({\\mathcal M}_{24}^2)^{\\alpha a'}{}_{\\beta b'}=\n4\\vartheta^{\\alpha}_ {b'}\\,(\\vartheta^{a'}\\gamma^5)_\\beta\n-4\\delta^{a'}_{b'}\\vartheta^{\\alpha c'}(\\vartheta\\gamma^5)_{\\beta\nc'} -2(\\gamma^5\\gamma^{\\hat a}\\vartheta)^{\\alpha\na'}(\\vartheta\\gamma_{\\hat a})_{\\beta b'} -(\\gamma^{\\hat a\\hat b}\\vartheta)^{\\alpha\na'}(\\vartheta\\gamma_{\\hat a\\hat b}\\gamma^5)_{\\beta b'}\\,.\n\\end{equation}\nThe derivative appearing in the above equations is defined as\n\\begin{equation}\\label{D24}\nD_{24}\\vartheta={\\mathcal P_6}\\,(d +\\frac{i}{R}\\,e^{\\hat\na}\\gamma^5\\gamma_{\\hat a}\n+\\frac{i}{R}e^{a'}\\gamma_{a'}-\\frac{1}{4}\\omega^{\\hat a\\hat\nb}\\gamma_{\\hat a\\hat b}\n-\\frac{1}{4}\\omega^{a'b'}\\gamma_{a'b'})\\vartheta\\,,\n\\end{equation}\nwhere $e^{\\hat a}(x)$, $e^{a'}(y)$, $\\omega^{\\hat a\\hat b}(x)$, $\\omega^{a'b'}(y)$\nand $A(y)$ are the vielbeins and connections of the bosonic\nsolution. 
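\n\nLet us also note, for later use in Appendix C, the elementary expansion (understood as a power series in the matrix ${\\mathcal M}_{24}$)\n$$\n4\\,{{\\sinh^2{{\\mathcal M}_{24}\/2}}\\over{\\mathcal M}_{24}^2}\n=2\\,{{\\cosh{\\mathcal M}_{24}-1}\\over{\\mathcal M}_{24}^2}\n=1+\\frac{1}{12}\\,{\\mathcal M}_{24}^2+\\frac{1}{360}\\,{\\mathcal M}_{24}^4+\\cdots\\,,\n$$\nwhich is (up to normalization) the combination entering (\\ref{cartan24}); in the kappa--symmetry gauge (\\ref{kappagauge1}) the series truncates after the second term, since there ${\\mathcal M}_{24}^4=0$.\n\n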
The $U(3)$--connection $\\Omega^{a'b'}$ satisfies the\ncondition\n\\begin{equation}\n{(P^{-})_{a'b'}}^{c'd'}\\Omega_{c'd'}=\\frac{1}{2}\\,({\\delta_{[a'}}^{c'}\\,{\\delta_{b']}}^{d'}\\,-\\,\n{J_{[a'}}^{c'}\\,{J_{b']}}^{d'})\\Omega_{c'd'}=0\\,,\n\\end{equation}\nwhere $J_{a'b'}$ is the K\\\"ahler form on $CP^3$.\n\n\\defC.2{B.1}\n\\subsection{Other quantities appearing in the definition of the\n$AdS_4\\times CP^3$ superspace of Section \\ref{superspace}}\n\n\\begin{equation}\\label{M}\nR\\,({\\mathcal M}^2)^{\\alpha i}{}_{\\beta j}= 4(\\varepsilon\\upsilon)^{\\alpha\ni}(\\upsilon\\varepsilon\\gamma^5)_{\\beta j} -2(\\gamma^5\\gamma^{\\hat\na}\\upsilon)^{\\alpha i}(\\upsilon\\gamma_{\\hat a})_{\\beta j} -(\\gamma^{\\hat\na\\hat b}\\upsilon)^{\\alpha i}(\\upsilon\\gamma_{\\hat a\\hat\nb}\\gamma^5)_{\\beta j}\\,,\n\\end{equation}\n\\begin{equation}\n(m^2)^{ij}=-\\frac{4}{R}\\upsilon^i\\,\\gamma^5\\,\\upsilon^j\\,,\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n\\Lambda_{\\hat a}{}^{\\hat b}&=\n\\delta_{\\hat a}{}^{\\hat b}-\\frac{R^2}{k^2l_p^2}\\,\\cdot\\,\n\\frac{e^{-\\frac{2}{3}\\phi}}{e^{\\frac{2}{3}\\phi}\n+{R\\over{kl_p}}\\,\\Phi}\\,{E_{7\\hat a}}\\,E_7{}^{\\hat b}\\,,\n\\\\\n\\\\\nS_{\\underline\\beta}{}^{\\underline\\alpha}&=\n\\frac{e^{-\\frac{1}{3}\\phi}}{\\sqrt2}\\left(\\sqrt{e^{\\frac{2}{3}\\phi}\n+{R\\over{kl_p}}\\,\\Phi}-{R\\over{kl_p}}\\,\n\\frac{E_7{}^{\\hat a}\\,\\Gamma_{\\hat a}\\Gamma_{11}}{\\sqrt{e^{\\frac{2}{3}\\phi}\n+{R\\over{kl_p}}\\,\\Phi}}\n\\,\\right)_{\\underline\\beta}{}^{\\underline\\alpha}\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\\label{phiE7}\\begin{aligned}\nE_7{}^{\\hat a}(\\upsilon)&=-\\frac{8i}{R}\\,\\upsilon\\gamma^{\\hat a}\\,{{\\sinh^2{{\\mathcal\nM}\/ 2}}\\over{\\mathcal M}^2}\\,\\varepsilon\\,{\\upsilon}\\,,\n\\\\\n\\Phi(\\upsilon)&= 1+\\frac{8}{R}\\,\\upsilon\\,\\varepsilon\\gamma^5\\,{{\\sinh^2{{\\mathcal\nM}\/2}}\\over{\\mathcal M}^2}\\,\\varepsilon\\upsilon\\,.\n\\end{aligned}\n\\end{equation}\nLet us emphasise that the $SO(2)$ indices $i,j=1,2$ are raised and\nlowered with the unit matrices $\\delta^{ij}$ and $\\delta_{ij}$ so\nthat there is actually no difference between the upper and the\nlower $SO(2)$ indices, $\\varepsilon_{ij}=-\\varepsilon_{ji}$,\n$\\varepsilon^{ij}=-\\varepsilon^{ji}$ and\n$\\varepsilon^{12}=\\varepsilon_{12}=1$.\n\n\\defC.\\arabic{equation}}\\label{C{C.\\arabic{equation}}\\label{C}\n\\section{Appendix C. Identities for the kappa-projected fermions}\n\\setcounter{equation}0\n\n\nWhen the fermionic variables\n$\\Theta^{\\underline\\alpha}=(\\vartheta^{\\alpha a'},\\,\\upsilon^{\\alpha\ni})$ are subject to the constraint (\\ref{kappagauge1}), the\nfollowing identities hold.\n\n\\defC.2{C.1}\n\\subsection{Identities involving $\\upsilon^{\\alpha i}$}\n\\begin{equation}\\label{i1}\n\\upsilon^i\\gamma^5\\upsilon^j=\\upsilon^i\\gamma^3\\upsilon^j=0\\,,\\qquad \\upsilon^{\\alpha i}\\upsilon^{\\beta j}\\delta_{ij}=-\\frac{1}{4}((1\\pm\\gamma)C^{-1})^{\\alpha\\beta}\\upsilon\\ups\\,,\n\\end{equation}\nwhere $\\gamma=\\gamma^{012}$ and $\\upsilon\\ups=\\delta_{ij}\\upsilon^{\\alpha\ni}C_{\\alpha\\beta}\\upsilon^{\\beta j}$.\n\nAnother useful relation is ($\\varepsilon^{012}=-\\varepsilon_{012}=1$)\n\\begin{equation}\\label{gg}\n\\upsilon\\gamma_{ab}d\\upsilon=\\pm\\varepsilon_{abc}\\upsilon\\gamma^cd\\upsilon\\,,\n\\end{equation}\nwhich also holds for the kappa--projected $\\vartheta$ and\n$d\\vartheta$.\n\nUsing eqs. 
(\\ref{i1}) and (\\ref{gg}) we find that\n\\begin{equation}\n\\upsilon\\varepsilon\\gamma^a\\upsilon\\,\\upsilon\\varepsilon\\gamma_b\\upsilon=\\delta_b^a(\\upsilon\\ups)^2\\,,\n\\qquad \\upsilon\\varepsilon\\gamma^{ac}\\upsilon\\,\\upsilon\\varepsilon\\gamma_{cb}\\upsilon=2\\delta_b^a(\\upsilon\\ups)^2\\,,\n\\end{equation}\n\\begin{equation}\n(m^2)^{ij}=-\\frac{4}{R}\\upsilon^i\\,\\gamma^5\\,\\upsilon^j=0\n\\end{equation}\nand\n\\begin{eqnarray}\n({\\mathcal M}^2\\varepsilon\\upsilon)^{\\alpha i}=0\\,.\n\\end{eqnarray}\n A similar computation shows that\n\\begin{equation}\n\\upsilon\\varepsilon\\gamma^5{\\mathcal M}^2=0.\n\\end{equation}\nIt is also true in general (i.e. without fixing $\\kappa$--symmetry)\nthat\n\\begin{equation}\n{\\mathcal M}^2\\upsilon=0\\,,\\qquad \\upsilon\\gamma^5{\\mathcal M}^2=0.\n\\end{equation}\nUsing the above identities we find that for $\\upsilon$ satisfying\n(\\ref{kappagauge1})\n\\begin{equation}\\label{M2d}\n\\mathcal M^2D\\upsilon\n=\\frac{6i}{R^2}(E^a\\pm\\frac{R}{2}\\Omega^{a3})(\\gamma_a\\upsilon)\\,\\upsilon\\ups\n\\end{equation}\nwhich results in\n\\begin{equation}\\label{vmdv}\n4\\upsilon\\gamma^a \\frac{\\sinh^2(\\mathcal M\/2)}{\\mathcal\nM^2}D\\upsilon=\n\\upsilon\\gamma^a(1+\\frac{1}{12}\\mathcal M^2)D\\upsilon\n=\\upsilon\\gamma^a\\,(d-\\frac{1}{4}\\Omega^{bc}\\gamma_{bc})\\upsilon\n+\\frac{i}{2R^2}(E^a\\pm\\frac{R}{2}\\Omega^{a3})(\\upsilon\\ups)^2\\,,\n\\end{equation}\nwhere $E^a$, $\\Omega^{bc}$ and $\\Omega^{a3}$ are $AdS_4$ components\nof the supervielbein and connection of the supercoset\n$OSp(6|4)\/U(3)\\times SO(1,3)$ defined in eqs. (\\ref{cartan24}) and\nthe matrix ${\\mathcal M}^2$ is defined in eq. (\\ref{M}).\n\nWe also find that\n\\begin{equation}\n4\\upsilon\\varepsilon\\gamma^5{{\\sinh^2{{\\mathcal M}\/2}}\\over{\\mathcal M}^2}D\\upsilon\n=\\upsilon\\varepsilon\\gamma^5D\\upsilon=\\frac{i}{R}(E^a\\pm\\frac{R}{2}\\Omega^{a3})\\upsilon\\varepsilon\\gamma_a\\upsilon\\,.\n\\end{equation}\n\n\\defC.2{C.2}\n\\subsection{Identities involving $\\vartheta^{\\alpha a'}$ and the simplified\nform of the \\\\ $OSp(6|4)\/U(3)\\times SO(1,3)$ supergeometry}\n\nUsing the definition of $\\mathcal M_{_{24}}$, eq. (\\ref{M24}), and\nthe fact that\n\\begin{equation}\n[\\gamma^{012},\\gamma^{a'}]=0\n\\end{equation}\nwe find that\n\\begin{equation}\n(\\vartheta\\gamma'\\gamma^5{\\mathcal M}_{_{24}}^2)_{\\beta b'} =0\n\\qquad\n({\\mathcal M}_{_{24}}^2\\gamma'\\vartheta)^{\\alpha a'}=0\\,,\n\\end{equation}\nwhere $\\gamma'$ is any product of the gamma-matrices that commutes\nwith $\\gamma=\\gamma^{012}$,\n\\emph{e.g.} any product of $\\gamma^{a'}$ and $\\gamma^{a}$. 
A\nslightly longer computation, using the fact that\n\\begin{equation}\n\\gamma^3\\vartheta=\\pm\\gamma^3\\gamma^{012}\\vartheta=\\pm i\\gamma^5\\vartheta\\,,\\qquad\\vartheta\\gamma^3=\\mp i\\vartheta\\gamma^5\\qquad\\mbox{for}\\quad\\vartheta=\\frac{1}{2}(1\\pm\\gamma)\\vartheta\\,,\n\\end{equation}\nshows that with this projection of the $\\vartheta$s\n\\begin{equation}\n\\mathcal M_{_{24}}^4=0\\,.\n\\end{equation}\nUsing the identity\n\\begin{equation}\n\\vartheta^{\\alpha a'}\\vartheta^{\\beta b'}\\delta_{a'b'}=-\\frac{1}{4}((1\\pm\\gamma)C^{-1})^{\\alpha\\beta}\\vartheta\\vartheta\\,,\n\\end{equation}\nwhere $\\vartheta\\vartheta\\equiv\\vartheta^{\\alpha\na'}C_{\\alpha\\beta}\\vartheta^{\\beta b'}\\,\\delta_{a'b'}\\,,$\n one can further show that\n\\begin{equation}\n({\\mathcal M}^2_{_{24}}D_{_{24}}\\vartheta)^{\\alpha a'}\n=\\frac{6i}{R^2}(e^b\\pm\\frac{R}{2}\\omega^{b3})(\\gamma_b\\vartheta)^{\\alpha\na'}\\,\\vartheta\\vartheta,\n\\end{equation}\nwhere the covariant derivative $D_{24}$, defined in\n(\\ref{D24}), becomes\n\\begin{equation}\nD_{24}\\vartheta=\\mathcal P_6(d\n+\\frac{i}{R}(e^a\\pm\\frac{R}{2}\\omega^{a3})\\gamma^5\\gamma_a\n\\mp\\frac{1}{R}\\,e^3\n+\\frac{i}{R}e^{a'}\\gamma_{a'}\n-\\frac{1}{4}\\omega^{ab}\\gamma_{ab}\n-\\frac{1}{4}\\omega^{a'b'}\\gamma_{a'b'})\\vartheta\\,.\n\\end{equation}\nThis gives\n\\begin{eqnarray}\n\\vartheta\\gamma^a(1+\\frac{1}{12}{\\mathcal M}^2_{_{24}})D_{_{24}}\\vartheta\n&=&\n\\vartheta\\gamma^aD_{24}\\vartheta+\\frac{i}{2R^2}(e^a\\pm\\frac{R}{2}\\omega^{a3})(\\vartheta\\vartheta)^2\\,.\n\\end{eqnarray}\nUsing the above expressions one finds that the form of the\n$OSp(6|4)\/U(3)\\times SO(1,3)$ geometrical objects (\\ref{cartan24})\nsimplify to\n\\begin{equation}\n\\begin{aligned}\nE^a&=e^a(x)+i\\vartheta\\gamma^aD_{24}\\vartheta-\\frac{1}{2R^2}(e^a\\pm\\frac{R}{2}\\omega^{a3})(\\vartheta\\vartheta)^2,\n\\\\\nE^3&=e^3(x),\\\\\nE^{a'}&=e^{a'}(y)-\\frac{1}{R}(e^a\\pm\\frac{R}{2}\\omega^{a3})\\vartheta\\gamma^{a'}\\gamma_a\\vartheta\n\\\\\nE^{\\alpha a'}&=\n\\left(D_{_{24}}\\vartheta\\right)^{\\alpha a'}\n+\\frac{i}{R^2}(e^b\\pm\\frac{R}{2}\\omega^{b3})(\\gamma_b\\vartheta)^{\\alpha\na'}\\,\\vartheta\\vartheta,\n\\\\\n\\Omega^{ab}&=\\omega^{ab}(x)+\\frac{2i}{R^2}(e^c\\pm\\frac{R}{2}\\omega^{c3})\\vartheta\\gamma^{ab}{}_c\\vartheta,\n\\\\\n\\Omega^{a3}&=\\omega^{a3}(x)\n\\mp\\frac{2i}{R}\\vartheta\\gamma^aD_{24}\\vartheta\n\\pm\\frac{1}{R^3}(e^a\\pm\\frac{R}{2}\\omega^{a3})(\\vartheta\\vartheta)^2,\n\\\\\n\\Omega^{a'b'}&=\\omega^{a'b'}(y)-\\frac{i}{R^2}(e^a\\pm\\frac{R}{2}\\omega^{a3})\\vartheta(\\gamma^{a'b'}-iJ^{a'b'}\\gamma^7)\\gamma_a\\vartheta,\n\\\\\nA&=A(y)-\\frac{1}{R^2}(e^a\\pm\\frac{R}{2}\\omega^{a3})\\vartheta\\gamma^7\\gamma_a\\vartheta\n\\,,\n\\end{aligned}\n\\end{equation}\nand in particular\n\\begin{equation}\nE^a\\pm\\frac{R}{2}\\Omega^{a3}=e^a(x)\\pm\\frac{R}{2}\\omega^{a3}(x)\\,.\n\\end{equation}\nThus, in the chosen $\\kappa$--symmetry gauge the\n$OSp(6|4)\/U(3)\\times SO(1,3)$ supercoset geometry depends on the\nfermionic coordinates only up to the 4th power.\n\nNote that in all the above expressions the components $e^a(x)$\n$(a=0,1,2)$ and ${R\/2}\\,\\omega^{a3}(x)$ of the $AdS_4$ vielbein and\nconnection appear only in the combination $e^a(x)\\pm\n{R\/2}\\,\\omega^{a3}(x)$. This combination has a very clear\ngeometrical meaning. 
In the case, when the indices $a=0,1,2$ label\nthe directions of the 3d Minkowski slice of the $AdS_4$, $e^a(x)\\pm\n{R\/2}\\,\\omega^{a3}(x)$ corresponds to the generator\n$\\Pi_a=P_a\\mp{1\/2}\\,M_{a3}$ of the Poincar\\'e translations\n($[\\Pi_a,\\Pi_b]=0$) along the 3d Minkowski boundary which is the\nlinear combination of boosts and Lorentz rotations in $AdS_4$ (see\n\\cite{Pasti:1998tc} for more details). More precisely, $e^a(x)-\n{R\/2}\\,\\omega^{a3}(x)$ corresponds to the Poincar\\'e translation,\nwhile $e^a(x)+{R\/2}\\,\\omega^{a3}(x)$ corresponds to the conformal\nboosts in $M_3$, or vice versa, depending on the orientation.\n\nWhen the $AdS_4$ metric is chosen in the form (\\ref{ads4metric11})\nthe vielbein $e^a(x)$ and the connection $\\omega^{a3}(x)$ are\nproportional to each other, namely,\n\\begin{equation}\\label{eaoa3}\ne^a=-{R\\over 2}\\,\\omega^{a3}.\n\\end{equation}\nActually, this relation can be imposed for any\nform of the metric by performing an appropriate $SO(1,3)$\ntransformation of the $AdS_4$ vielbein and connection.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbhot b/data_all_eng_slimpj/shuffled/split2/finalzzbhot new file mode 100644 index 0000000000000000000000000000000000000000..d196f66e1b2ac1057acc97f258da56579b3c92db --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbhot @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{Intro}\n\\setcounter{equation}{0}\n\n\\ale{The COVID-19 outbreak that deeply affected the world starting from the first months of 2020 led to a new, strong interest by researchers towards the development of mathematical models of infectious diseases, allowing the assessment of different scenarios and aiming at assisting the complex political decision-making process during the pandemic.\nAs a consequence, many papers were recently published, proposing interesting modeling ideas (see, e.g., \\cite{Albi,Calleri,Gatto,Giordano,Jha,Linka,Parolini,Wang,Zohdi}), often based on compartmental models, where the considered population is divided into ``compartments'' based on their qualitative characteristics (like, e.g., ``susceptible'', ``infected'', ``recovered''), with different assumptions about the nature and rate of transfer across compartments. Despite this kind of models do not readily offer the possibility of a multiscale vision (as, e.g., proposed in \\cite{Bellomo}), that would be a preferred feature given the nature of the phenomena to be simulated, they have the advantage of allowing a relatively easy introduction of diffusion terms (in this context, compelling ideas on diffusion models can be found, among others, in \\cite{Bellomo2,Bellomo3,Winkler} and references therein). \nFor a recent overview of mathematical models for virus pandemic, interested readers are referred to the articles included in \\cite{Bellomo4}. Here, a compartmental model able to describe the spatial heterogeneity underlying the spread of an epidemic in a realistic city network is proposed in \\cite{Bertaglia}. 
Another recent example where a classical compartmental model is enhanced to take into account infected individuals traveling on lines of fast diffusion is instead given by \\cite{Berestycki}.\nIn fact, compartmental models are typically governed by a system of ordinary differential equations (ODEs) in time, which might possibly be enriched with coupling terms, additional equations, or other approaches to take into account also the spatial variation of the studied problem. An interesting alternative is to resort to compartmental models based on partial differential equations (PDEs), to accurately account for spatial variations at the continuum level.}\n\n\\ale{Following this latter idea, in the present contribution, we consider the PDE-based model introduced}\n\\oldgianni{in \\cite{Vig1, Vig2} \\pcol{(a variation of it with delay terms is discussed in \\cite{Guglielmi})}.\nIn particular, the following system has been considered} \n\\Begin{eqnarray}\n && \\partial_t s\n = \\alpha n - (1 - A_0\/n) \\beta_i si - (1 - A_0\/n) \\beta_e se - \\mu s + \\div(n \\nu_s \\nabla s) \n \\label{Ieqs}\n \\\\\n && \\partial_t e\n = (1 - A_0\/n) \\beta_i si + (1 - A_0\/n) \\beta_e se - \\sigma e - \\phi_e e - \\mu e + \\div(n \\nu_e \\nabla e) \n \\qquad\n \\label{Ieqe}\n \\\\\n && \\partial_t i\n = \\sigma e - \\phi_d i - \\phi_r i - \\mu i + \\div(n \\nu_i \\nabla i) \n \\label{Ieqi}\n \\\\\n && \\partial_t r = \\phi_r i + \\phi_e e - \\mu r + \\div(n \\nu_r \\nabla r) \n \\label{Ieqr}\n \\\\\n && \\partial_t d = \\phi_d i \\,.\n \\label{Ieqd}\n\\End{eqnarray}\n\\oldgianni{In the above equations, the symbols $s$, $e$, $i$, $r$, and $d$ denote\nthe susceptible population, the exposed population, the infected population, the recovered population, \nand the deceased population, respectively,\nand $n:=s+e+i+r$ is the living population.\nNote that $d$ refers only to deaths due to COVID-19. \nDue to the names of the compartments used, this model may be called a susceptible-exposed-infected-recovered-deceased (SEIRD) model.\nMoreover, equations \\accorpa{Ieqs}{Ieqr} are complemented by initial conditions\nand no-flux boundary conditions,\nwhile just an initial condition is associated to~\\eqref{Ieqd}.}\n\n\\oldgianni{In \\cite{Vig1} the authors presented a SEIRD mathematical model based on partial differential equations\ncoupled with a heterogeneous diffusion model. \nThe model was used to describe the spatio-temporal spread of the COVID-19 pandemic \nand to capture dynamics also based on human habits and geographical features. \nTo test the model, the outputs generated by a finite-element solver \nwere compared with measured data over the Italian region of Lombardy \nand the results showed a strong qualitative agreement \nbetween the simulated forecast of the spatio-temporal COVID-19 spread in Lombardy \nand epidemiological data collected at the municipality level.} \n\n\\oldgianni{In \\cite{Vig2} the authors proposed a formulation of compartmental models \nbased on partial differential equations through \\pier{familiar} continuum mechanics concepts, \ninterpreting such models in terms of fundamental equations of balance and compatibility, \njoined by a constitutive relation. \nSuch an interpretation was useful to aid understanding and interdisciplinary collaboration. \nThe authors formally derived the model sensitivity to diffusion, \ndescribed its growth and decay, and established its stability in the $L^1$ norm. 
\nAttention was paid to an ODE version of the model, \nusing it to derive a basic reproduction number $R_0$ as well as analyzing its spectrum. \nAdditionally, a series of numerical simulations showed the role that numerical methods, diffusion, \nand specific ingredients played in the behavior of the system. \nImplicit models were found to be effective in describing the temporal dynamics of the system, \nand \\pier{second-order} in-time methods in particular.}\n\\ale{The model proposed in \\cite{Vig2} and used for simulating the COVID-19 spread in Lombardy, was further exploited in \\cite{Grave,Grave2}. In particular, finite element simulations were carried out in the context of adaptive mesh refinement and coarsening, forecasting the COVID-19 spread also in the U.S. state of Georgia and in the Brazilian state of Rio de Janeiro. Good agreements with real-world epidemiological data were attained in both time and space.}\n\n\\oldgianni{Coming back to system \\accorpa{Ieqs}{Ieqd},}\nwe note that $d$ can be determined from \\eqref{Ieqd} and the associated initial condition\nwhenever the system of the first four equation in solved.\nThus, since we are just interested in well-posedness,\nwe do not consider the last equation\n\\oldgianni{and study system \\accorpa{Ieqs}{Ieqr}.\nNamely, we consider a slightly modified system.\nIndeed, we replace the terms $1-A_0\/n$ and $\\phi_d i$ appearing in \\accorpa{Ieqs}{Ieqe} and \\eqref{Ieqi}\nby a non-singular one of the form~$A(n)$ and by the product~$\\phi_d in$, respectively.\nOn the contrary, we can treat the degeneracy in the diffusion terms\nthat occur as $n$ approaches zero, by proving that $n$ is bounded away from zero. \nThis is done even in the more general situation of a coefficient of the form~$\\ka(n)$,\nwhere $\\ka$ is continuous and strictly positive on~$(0,+\\infty)$.\nFinally, we assume that the constants $\\nu_s$, $\\nu_e$, $\\nu_i$ and $\\nu_r$ are the same.\nFor this modified model, we prove an existence result under mild assumption on the initial data.\nFurthermore, we give two uniqueness results:\nthe main assumption for the first one is that $\\ka$ is a positive constant,\nwhile the second one requires that the nonlinearities and the initial data are smooth.\nUnder the latter assumptions, we prove a regularity result as well.} \n\n\\pier{We hope that our results could capture the interest for the resulting systems and give rise to new approximations and simulations, in order to deal with and test our variations in the framework of some geographical areas. Moreover, we claim that our arguments for well-posedness provide a serious mathematical validation to this class of SEIR models.}\n\nThe paper is organized as follows. 
\nIn the next section, we list our assumptions and notations\nand state our results.\nThe proof of \\oldgianni{Theorem~\\ref{Existence}} regarding the existence of a solution \nis given in Section~\\ref{EXISTENCE}\nand is prepared in Section~\\ref{DISCRETE} where an approximating problem\nobtained by time discretization is studied.\nFinally, Section~\\ref{UNIQUENESS} is devoted to the uniqueness of the solution.\n\n\n\\section{Statement of the problem and results}\n\\label{STATEMENT}\n\\setcounter{equation}{0}\n\nIn this section, we state precise assumptions and notations and present our results.\nFirst of all, the subset $\\Omega\\subset{\\mathbb{R}}^3$\n\\oldgianni{(lower-dimensional cases can be treated in the same way)} \nis~assumed to be bounded, connected and smooth.\nThe symbol $\\partial_\\nu$ denotes for the normal derivative on $\\Gamma:=\\partial\\Omega$.\nMoreover, we set for brevity\n\\Begin{equation}\n Q_t := \\Omega \\times (0,t)\n \\quad\\hbox{and}\\quad\n Q := \\Omega \\times (0,T).\n \\label{defQt}\n\\End{equation}\nIf $X$ is a Banach space, $\\norma\\,\\cdot\\,_X$ denotes both its norm and the norm of~$X^d$.\nThe only exception from this convention on the norms is given\nby the spaces $L^p$ ($1\\leq p\\leq\\infty$) constructed on $(0,T)$,\n$\\Omega$ and~$Q$, \nwhose norms are often denoted by~$\\norma\\,\\cdot\\,_p$,\nand by the space $H$ defined below and its powers, \nwhose norms are simply denoted by~$\\norma\\,\\cdot\\,$.\nWe~put\n\\Begin{eqnarray}\n && H := \\Lx 2 \\,, \\quad \n V := \\Hx 1 \n \\quad\\hbox{and}\\quad\n W := \\graffe{v \\in \\Hx 2: \\ \\partial_\\nu v = 0} .\n \\label{defspazi}\n\\End{eqnarray} \nMoreover, $V^*$~is the dual space of $V$ and $\\<\\,\\cdot\\,,\\,\\cdot\\,>$ is the dual pairing between $V^*$ and~$V$.\nIn the sequel, we work in the framework of the Hilbert triplet\n$(V,H,V^*)$ \\oldgianni{obtained by identifying $H$ with a subspace of $V^*$ in the usual way}.\nThus, by also using the symbol $(\\,\\cdot\\,,\\,\\cdot\\,)$ for the standard inner product of~$H$\n(this symbol will be used for any power of $H$ as well later~on),\nwe have\n$\\=(g,v)$\nfor every $g\\in H$ and $v\\in V$.\n\nNow, we list our assumptions on the structure of our system. 
\nWe just consider the functions $A$ and $\\ka$ mentioned in the introduction\nand the constants $\\alpha$ and $\\mu$,\nsince all the other constants appearing in \\accorpa{Ieqs}{Ieqr} will be normalized in the following.\nHowever, we keep them for a while, for the reader's convenience.\nWe assume~that\n\\Begin{eqnarray}\n && \\hbox{$A,\\ka:(0,+\\infty)\\to{\\mathbb{R}}$ are continuous, with }\n \\non\n \\\\\n && \\quad \\hbox{$A$ nonnegative and $\\ka$ strictly positive}\n \\label{hpAk}\n \\\\\n && \\hbox{$\\alpha$ and $\\mu$ are positive constants}\n \\label{hpam}\n\\End{eqnarray}\n\\Accorpa\\HPstruttura hpAk hpam\nand we observe that a term like $\\ka(n)=n$ is allowed in the equations.\nIn principle, the system we are interested in is the following \n\\Begin{eqnarray}\n && \\partial_t s + A(n) \\beta_i si + A(n) \\beta_e se + \\mu s - \\div(\\ka(n)\\nabla s)\n = \\alpha n \n \\label{eqs}\n \\\\\n && \\partial_t e - A(n) \\beta_i si - A(n) \\beta_e se + \\sigma e + \\phi_e e + \\mu e - \\div(\\ka(n)\\nabla e) \n = 0\n \\qquad\n \\label{eqe}\n \\\\\n && \\partial_t i + \\oldgianni{\\phi_d in} + \\phi_r i + \\mu i - \\div(\\ka(n)\\nabla i) \n = \\sigma e\n \\label{eqi}\n \\\\\n && \\partial_t r - \\phi_r i - \\phi_e e + \\mu r - \\div(\\ka(n)\\nabla\\oldgianni r) \n = 0 \n \\label{eqr}\n\\End{eqnarray}\nwhere each equation is complemented by no-flux boundary conditions and an initial condition.\nHowever, it is more convenient to consider an equivalent system in the unknowns\n$n$, $s$, $i$ and~$h$, where $n$ and $h$ are related to the \\oldgianni{other} unknowns~by\n\\Begin{equation}\n n := s + e + i + r\n \\quad\\hbox{and}\\quad\n h := s + e \\,.\n \\label{defnh}\n\\End{equation}\nThe new system is obtained from the previous one by summing up all the equations,\nkeeping \\eqref{eqs} and \\eqref{eqi} and adding \\eqref{eqe} to~\\eqref{eqs}.\nHence, it is given~by \n\\Begin{eqnarray}\n && \\partial_t n + \\phi_d i n - \\div(\\ka(n)\\nabla n) \n = (\\alpha-\\mu) n\n \\label{eqprima}\n \\\\\n && \\partial_t s + A(n) \\beta_i si + A(n) \\beta_e s (h-s) + \\mu s - \\div(\\ka(n)\\nabla s)\n = \\alpha n \n \\label{eqseconda}\n \\\\\n && \\partial_t i + \\oldgianni{\\phi_d in} + \\phi_r i + \\mu i - \\div(\\ka(n)\\nabla i) \n = \\sigma (h-s)\n \\label{eqterza}\n \\\\\n && \\partial_t h + \\mu h +(\\sigma+\\phi_e) h - \\div(\\ka(n)\\nabla h) \n = \\alpha n + (\\sigma+\\phi_e) s \\,.\n \\label{eqquarta}\n\\End{eqnarray}\nHowever, as said above, among all of the constants appearing in these equations,\njust $\\alpha$ and $\\mu$ will play a significant role in the mathematical treatment.\nHence, we replace the other constants and even some sums of them by~$1$, without loss of generality.\nAt this point, we are ready to give our notion of solution.\nFor the initial data we require that \n\\Begin{equation}\n n^0 \\,, s^0 \\,, i^0 \\,, h^0 \\in \\Lx\\infty\n \\quad \\hbox{satisfy} \\quad\n \\infn^0 > 0 , \\quad\n h^0 \\geq s^0 \\geq 0\n \\quad\\hbox{and}\\quad\n i^0 \\geq 0\n \\label{hpdati}\n\\End{equation}\nand we call solution a quadruplet $(n,s,i,h)$ enjoying the requirements\n\\Begin{eqnarray}\n && n \\,, s \\,, i \\,, h \\in \\H1V^* \\cap \\C0 H \\cap \\L2V \\cap \\LQ\\infty\n \\qquad\n \\label{regsoluz}\n \\\\\n && \\inf n > 0 ,\\quad\n h \\geq s \\geq 0 \n \\quad\\hbox{and}\\quad\n i \\geq 0\n \\quad \\checkmmode{a.e.\\ in~$Q$}\n \\label{segni}\n\\End{eqnarray}\n\\Accorpa\\Regsoluz regsoluz segni\nand satisfying, \\checkmmode{a.e.\\ in~$(0,T)$}\\ and for every $v\\in V$, the variational equations 
\n\\Begin{eqnarray}\n && \\< \\partial_t n , v >\n + \\int_\\Omega i n v\n + \\int_\\Omega \\ka(n) \\nabla n \\cdot \\nabla v\n = (\\alpha-\\mu) \\int_\\Omega n v\n \\label{prima}\n \\\\\n && \\< \\partial_t s , v >\n + \\int_\\Omega \\bigl( A(n)si + A(n) s (h-s) + \\mu s \\bigr) v\n + \\int_\\Omega \\ka(n) \\nabla s \\cdot \\nabla v\n \\non\n \\\\\n && \\quad {} = \\alpha \\int_\\Omega n v\n \\label{seconda}\n \\\\\n \\noalign{\\allowbreak}\n && \\< \\partial_t i , v >\n + (1+\\mu) \\int_\\Omega i v\n + \\int_\\Omega \\ka(n) \\nabla i \\cdot \\nabla v\n = \\int_\\Omega (h-s) v\n \\label{terza}\n \\\\\n && \\< \\partial_t h , v >\n + \\oldgianni{\\int_\\Omega n i v}\n + (1+\\mu) \\int_\\Omega h v\n + \\int_\\Omega \\ka(n) \\nabla h \\cdot \\nabla v\n = \\int_\\Omega (\\alpha n + s) v\n \\qquad\n \\label{quarta}\n\\End{eqnarray}\nand the initial conditions\n\\Begin{equation}\n (n,s,i,h)(0) = (n^0,s^0,i^0,h^0) \\,.\n \\label{cauchy}\n\\End{equation}\n\\Accorpa\\Pbl prima cauchy\n\n\\Begin{remark}\\rm\n\\label{IntPbl}\nWe notice that the regularity $\\C0H$ explicitly stated in \\eqref{regsoluz}\nin fact follows from the other conditions since\n$\\H1V^*\\cap\\L2V\\subset\\C0H$.\nSo we do not mind \\oldgianni{about} it in the following.\nWe also observe that the above variational equations are equivalent \nto their integrated versions with time dependent test functions.\nFor instance, \\eqref{prima} is equivalent~to\n\\Begin{eqnarray}\n && \\int_0^T \\< \\partial_t n(t) , v(t) > \\, dt\n + \\int_Q i n v\n + \\int_Q \\ka(n) \\nabla n \\cdot \\nabla v\n = (\\alpha-\\mu) \\int_Q n v\n \\non\n \\\\\n && \\quad \\hbox{for every $v\\in\\L2V$}.\n \\label{intprima}\n\\End{eqnarray}\n\\End{remark}\n\nHere is our \\oldgianni{first} result.\n\\Begin{theorem}\n\\label{Existence}\nAssume \\HPstruttura\\ and \\eqref{hpdati}.\nThen, there exists at least one quadruplet $(n,s,i,h)$\nthat satisfies the conditions \\Regsoluz\\ and solves problem \\Pbl.\n\\End{theorem}\n\nAs for uniqueness, we present two results.\nThe first one regards a particular case and its proof, given in Section~\\ref{UNIQUENESS1}, is elementary.\n\n\\Begin{theorem}\n\\label{Uniqueness1}\nIn addition to \\HPstruttura\\ and \\eqref{hpdati},\nassume that $A$ is locally Lip\\-schitz\\ continuous and that $\\ka$ is a \\oldgianni{positive} constant.\nThen, the solution to problem \\Pbl\\ satisfying \\Regsoluz\\ is unique.\n\\End{theorem}\n\nOur last result regards uniqueness in the case of a non-constant~$\\ka$.\nIts proof, given in Section~\\ref{UNIQUENESS2}, is much more involved and \nis strictly related to a high regularity of the solution.\nFor this reason, we have to assume that both the nonlinearities and the initial data are much smoother.\n\n\\Begin{theorem}\n\\label{Uniqueness2}\nIn addition to \\HPstruttura\\ and \\eqref{hpdati},\nassume that\n\\Begin{eqnarray}\n && \\hbox{$A$ is locally Lip\\-schitz\\ continuous and $\\ka$ is a $C^1$ function}\n \\label{hpnonlin}\n \\\\\n && n^0,\\, s^0,\\, i^0,\\, h^0 \\in \\Wx{2,\\infty} \n \\quad \\hbox{with zero normal derivatives on $\\Gamma$}.\n \\label{hpdatareg}\n\\End{eqnarray}\nThen, the solution to problem \\Pbl\\ satisfying \\Regsoluz\\ is unique\nand enjoys the following regularity properties\n\\Begin{equation}\n n,\\, s,\\, i,\\, h \\in \\spazio W{1,p}{\\Lx p} \\cap \\spazio L p{\\Wx{2,p}}\n \\quad \\hbox{for every $p\\in[1,+\\infty)$}.\n \\label{highreg}\n\\End{equation}\n\\End{theorem}\n\nThroughout the paper, we make use of\nthe H\\\"older\\ inequality and the Sobolev inequality 
\nrelated to the continuous embedding $V\\subset L^p(\\Omega)$ with $p\\in[1,6]$\n(since $\\Omega$ is three-dimensional bounded and smooth).\nWe also account for the elementary identity and inequalities\n\\Begin{eqnarray}\n \\hskip-1cm&& a (a-b)\n = \\frac 12 \\, a^2\n + \\frac 12 \\, (a-b)^2\n - \\frac 12 \\, b^2\n \\geq \\frac 12 \\, a^2\n - \\frac 12 \\, b^2\n \\quad \\hbox{for every $a,b\\in{\\mathbb{R}}$},\n \\label{elementare}\n \\\\\n \\hskip-1cm&& ab\\leq \\delta a^2 + \\frac 1 {4\\delta}\\,b^2\n \\quad \\hbox{for every $a,b\\in{\\mathbb{R}}$ and $\\delta>0$},\n \\label{young}\n\\End{eqnarray}\n\\Accorpa\\Elementari elementare young\nand quote \\eqref{young} as the Young inequality.\nFurthermore, we take advantage of the summation by parts formula\n\\Begin{equation}\n \\somma k0{m-1} a_{k+1} (b_{k+1} - b_k)\n = a_m b_m - a_1 b_0\n - \\somma k1{m-1} (a_{k+1} - a_k) b_k\\,,\n \\label{byparts}\n\\End{equation}\nwhich is valid for arbitrary real numbers $a_1,\\dots,a_m$ and $b_0,\\dots,b_m$.\n\n\n\\section{The discrete problem}\n\\label{DISCRETE}\n\\setcounter{equation}{0}\n\nIn this section, we prepare \\oldgianni{the proof of Theorem~\\ref{Existence}}\nby introducing and solving an approximating problem obtained by time discretization.\nHowever, the structural functions $A$ and $\\ka$ have to satisfy different assumptions\n and the initial data have to be smoother.\nIn the next section, by starting from the original structure and the original initial data,\nwe consider the discrete problem with structural functions and approximating initial data\nconstructed in order to satisfy the assumptions listed below.\n\\oldgianni{Two~constants $\\ka_*$ and $\\ka^*$} and two real functions $\\tilde A$ and $\\tilde\\ka$ defined in the whole of ${\\mathbb{R}}$ \nare given such~that\n\\Begin{eqnarray}\n && \\oldgianni{\\ka^* \\geq \\ka_* > 0}\n \\quad\\hbox{and}\\quad\n \\hbox{$\\tilde A,\\tilde\\ka:{\\mathbb{R}}\\to{\\mathbb{R}}$ are continuous with}\n \\non\n \\\\\n && \\quad\n \\tilde A(y) \\geq 0 \n \\quad\\hbox{and}\\quad\n \\oldgianni{\\ka_* \\leq \\tilde\\ka(y) \\leq \\ka^*}\n \\quad \\hbox{for every $y\\in{\\mathbb{R}}$}.\n \\label{hpAkdiscr}\n\\End{eqnarray}\n\n\\Begin{notation}\\rm\n\\label{Akatilde}\nHowever, we prefer to use the lighter symbols $A$ and $\\ka$ \ninstead of the heavy $\\tilde A$ and~$\\tilde\\ka$.\nIndeed, no confusion can arise since the original functions \n$A$ and $\\ka$ introduced in \\eqref{hpAk} will never appear within the section.\n\\End{notation}\nFor a fixed positive integer~$N$, we set $\\tau:=T\/N$.\nThen, the time-discretized problem we are going to study is the following:\ngiven \n\\Begin{equation}\n \\nz_\\tau \\,, \\sz_\\tau \\,, \\iz_\\tau \\,, \\hz_\\tau \\in V\\cap\\Lx\\infty\n \\quad \\hbox{with} \\quad\n \\nz_\\tau \\geq 0 , \\quad\n \\hz_\\tau \\geq \\sz_\\tau \\geq 0\n \\quad\\hbox{and}\\quad\n \\iz_\\tau \\geq 0\n \\label{hpdatiV}\n\\End{equation}\nwe look for four $(N+1)$-tuples \n\\Begin{eqnarray}\n && (n_0,n_1,\\dots,n_N), \\\n (s_0,s_1,\\dots,s_N), \\\n (i_0,i_1,\\dots,i_N), \\\n (h_0,h_1,\\dots,h_N) \n \\non\n \\\\\n && \\quad \\hbox{belonging to $(V\\cap\\Lx\\infty)^{N+1}$}\n \\label{tuples}\n\\End{eqnarray}\nsatisfying, for $k=0,\\dots,N-1$, the variational equations\n\\Begin{eqnarray}\n && \\bigl( \\frac {n_{k+1}-n_k}\\tau , v \\bigr)\n + (i_k n_{k+1} , v)\n + (\\ka(n_{k+1}) \\nablan_{k+1} , \\nabla v)\n = (\\alpha-\\mu) (n_{k+1} , v)\n \\label{primak}\n \\\\\n && \\bigl( \\frac {s_{k+1}-s_k}\\tau , v \\bigr)\n + \\bigl( A(n_{k+1})\\skpi_k 
+ A(n_{k+1})s_{k+1}(h_k-s_k) + \\mus_{k+1} , v \\bigr)\n \\non\n \\\\\n && \\quad {}\n + \\bigl( \\ka(n_{k+1}) \\nablas_{k+1} , \\nabla v \\bigr)\n = \\alpha (n_{k+1} , v)\n \\label{secondak}\n \\\\\n && \\bigl( \\frac {i_{k+1}-i_k}\\tau , v \\bigr)\n \\oldgianni{{}+(n_{k+1}i_{k+1} , v)}\n + (1+\\mu) (i_{k+1} , v)\n + \\bigl( \\ka(n_{k+1})\\nablai_{k+1} , \\nabla v \\bigr)\n \\non\n \\\\\n && \\quad {}\n = ( h_{k+1}-s_{k+1} , v)\n \\qquad\n \\label{terzak}\n \\\\\n && \\bigl( \\frac {h_{k+1}-h_k}\\tau , v \\bigr)\n + (1+\\mu) (h_{k+1} , v)\n + \\bigl( \\ka(n_{k+1})\\nablah_{k+1} , \\nabla v \\bigr)\n \\non\n \\\\\n && \\quad {}\n = (\\alphan_{k+1} + s_{k+1} , v)\n \\label{quartak}\n\\End{eqnarray}\nall for every $v\\in V$,\nas well as the sign and initial conditions\n\\Begin{eqnarray}\n && n_k \\geq 0 ,\\quad\n h_k \\geq s_k \\geq 0 \n \\quad\\hbox{and}\\quad\n i_k \\geq 0\n \\quad \\hbox{for $k=1,\\dots,N$}\n \\label{segnik}\n \\\\\n && n_0 = \\nz_\\tau \\,, \\quad\n s_0 = \\sz_\\tau \\,, \\quad\n i_0 = \\iz_\\tau \n \\quad\\hbox{and}\\quad\n h_0 = \\hz_\\tau .\n \\label{discrcauchy}\n\\End{eqnarray}\n\\Accorpa\\Pbltau primak discrcauchy\n\n\\Begin{theorem}\n\\label{Wellposednessdiscr}\nUnder assumptions \\eqref{hpAkdiscr} and \\eqref{hpam} on the structure,\nsuppose that\n\\Begin{equation}\n \\frac 1\\tau - \\alpha + \\mu > 0 .\n \\label{hptau}\n\\End{equation}\nThen, if the initial data satisfy \\eqref{hpdatiV}, problem \\Pbltau\\ has a unique solution.\n\\End{theorem}\n\nWe prepare an easy lemma.\n\n\\Begin{lemma}\n\\label{Gianni}\nLet $a_0$ and $b_0$ be positive constants and let $a,b,f\\in\\Lx\\infty$ satisfy \n\\Begin{equation}\n a \\geq a_0 \\,, \\quad b \\geq b_0 \\quad\\hbox{and}\\quad f \\geq 0 \\quad \\checkmmode{a.e.\\ in~$\\Omega$} .\n \\label{hpG}\n\\End{equation}\nThen, the problem of finding $u\\in V$ satisfying the variational problem\n\\Begin{equation}\n \\int_\\Omega a \\nabla u \\cdot \\nabla v\n + \\int_\\Omega b u v \n = \\int_\\Omega f v\n \\quad \\hbox{for every $v\\in V$}\n \\label{pblG}\n\\End{equation}\nhas a unique solution, and this solution satisfies\n\\Begin{equation}\n 0 \\leq u \\leq b_0^{-1} \\norma f_\\infty \\quad \\checkmmode{a.e.\\ in~$\\Omega$}.\n \\label{tesiG}\n\\End{equation}\nMoreover, if $\\lambda\\in{\\mathbb{R}}$ and $f\\geq\\lambda\\,b$ \\checkmmode{a.e.\\ in~$\\Omega$}, then $u\\geq\\lambda$ \\checkmmode{a.e.\\ in~$\\Omega$}.\n\\End{lemma}\n\n\\Begin{proof}\nThe existence of a unique solution is clear \nsince the left-hand side\\ of \\eqref{pblG} is an inner product in $V$ that is equivalent to the usual one.\nMoreover, the first inequality of \\eqref{tesiG} is given by the weak maximum principle\nand the same can be said for the last sentence of the statement, \nsince the equation obtained by replacing $f $ by $f-\\lambda\\,b$ is satisfied by~$u-\\lambda\\,$.\nNow, we prove the second inequality of~\\eqref{tesiG}.\n\\oldgianni{%\nWe set $w:=u-b_0^{-1}\\norma f_\\infty$. 
\nThen, we have for every $v\\in V$\n\\Begin{eqnarray}\n && \\int_\\Omega a \\nabla w \\cdot \\nabla v\n + \\int_\\Omega b w v\n = \\int_\\Omega a \\nabla u \\cdot \\nabla v\n + \\int_\\Omega b u v\n - \\int_\\Omega b \\, b_0^{-1} \\norma f_\\infty v\n \\non\n \\\\\n && = \\int_\\Omega \\bigl( f - b \\, b_0^{-1} \\norma f_\\infty \\bigr) v.\n \\non\n\\End{eqnarray}\nSince $f-b\\,b_0^{-1}\\norma f_\\infty\\leq f-\\norma f_\\infty\\leq0$,\nwe have $w\\leq0$ by the weak maximum principle, \ni.e., the desired inequality.}\n\\End{proof}\n\n\\step\nProof of Theorem~\\ref{Wellposednessdiscr}\n\nIt suffices to prove the following:\nfor $k=0,\\dots,N-1$, if $(n_k,s_k,i_k,h_k)$ belongs to $(V\\cap\\Lx\\infty)^4$ \nand satisfies the inequalities in \\eqref{segnik},\nthen system \\accorpa{primak}{quartak} has a unique solution $(n_{k+1},s_{k+1},i_{k+1},h_{k+1})$\nbelonging to $(V\\cap\\Lx\\infty)^4$ and satisfying the analogous inequalities,\ni.e., $n_{k+1}\\geq0$, $h_{k+1}\\geqs_{k+1}\\geq0$ and $i_{k+1}\\geq0$.\nWe recall \\eqref{hpdatiV} for the case $k=0$,\nfix $k$ and $(n_k,s_k,i_k,h_k)$ as said \nand show that we can find a unique solution to \\accorpa{primak}{quartak} with the proper sign conditions.\nThis is done in two steps.\n\n\\step\nSolution to the first equation\n\nWe introduce the function $K:{\\mathbb{R}}\\to{\\mathbb{R}}$ by setting\n\\Begin{equation}\n K(y) := \\int_0^y \\ka(z) \\, dz\n \\quad \\hbox{for $y\\in{\\mathbb{R}}$}\n \\label{defK}\n\\End{equation}\nand we would like to assume $u:=K(n_{k+1})$ as the new unknown for~\\eqref{primak}.\nDue to \\eqref{hpAkdiscr} for~$\\ka$,\nthe function $K$ is one-to-one, onto and Lip\\-schitz\\ continuous, and its inverse is Lip\\-schitz\\ continuous too.\nThus, equation \\eqref{primak} can be rewritten in term of~$u$.\n\\gianni{Namely, noting that $\\nabla u=\\nabla(K(n_{k+1}))=\\ka(n_{k+1})\\nablan_{k+1}$,\nwe can write it as the variational formulation of the homogeneous Neumann problem for the equation\n\\Begin{equation}\n \\lambda K^{-1}(u)\n - \\Delta u \n = (1\/\\tau)n_k\n \\quad \\hbox{where} \\quad\n \\lambda := \\frac 1\\tau + i_k - \\alpha + \\mu \\,.\n \\label{barbu}\n\\End{equation}\nNotice that every variational solution automatically belongs to $W$ \nand satisfies both \\eqref{barbu} and the homogeneous Neumann condition due to elliptic regularity.\nWe claim that the new problem has a unique solution.\nIndeed, it is equivalent to the minimization of the functional $J:V\\to{\\mathbb{R}}$ given~by\n\\Begin{equation}\n J(v)\n := \\frac 12 \\int_\\Omega |\\nabla v|^2\n + \\int_\\Omega \\lambda \\, \\calK(v)\n - \\frac 1\\tau \\int_\\Omega n_k \\, v \n \\label{minimo}\n\\End{equation}\nwhere \n\\Begin{equation}\n \\calK(r) := \\int_0^r K^{-1}(s) \\, ds\n \\quad \\hbox{for $r\\in{\\mathbb{R}}$}\n \\non\n\\End{equation}\nand we show that this problem has a unique solution.\nNotice that $J$ actually is well-defined, \nsince $K^{-1}$ is Lip\\-schitz\\ continuous, $\\lambda\\in\\Lx\\infty$ and $n_k\\in H$. 
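\nIndeed, at least formally, a direct computation of the first variation of $J$ at $u$ in the direction $v\\in V$ gives\n\\Begin{equation}\n \\frac d{d\\delta} \\, J(u+\\delta v) \\Big|_{\\delta=0}\n = \\int_\\Omega \\nabla u \\cdot \\nabla v\n + \\int_\\Omega \\lambda \\, K^{-1}(u) \\, v\n - \\frac 1\\tau \\int_\\Omega n_k \\, v \\,,\n \\non\n\\End{equation}\nand the vanishing of this expression for every $v\\in V$ is precisely the variational formulation of \\eqref{barbu}.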
\nMoreover, $J$~is strictly convex and coercive.\nTo see this, it suffices to recall that $\\inf\\lambda>0$ \n(since we are assuming \\eqref{hptau} and $i_k$ is nonnegative)\nand observe that\n$\\calK''=(K^{-1})'=1\/(\\ka\\circ K^{-1})\\geq1\/\\ka^*$.\nHence, the above problem has a unique solution $u$\nand we conclude that $n_{k+1}=K^{-1}(u)$ is the unique solution to~\\eqref{primak}.\nSince $u\\in W$, we deduce that $u$ is bounded,\nso that $n_{k+1}$ is bounded too.\nTo see that it is nonnegative, it suffices to rearrange \\eqref{primak} and apply Lemma~\\ref{Gianni}.}\n\n\\step\nSolution to the other equations and conclusion\n\nSince $n_{k+1}$ is known, we can solve \\eqref{secondak} for~$s_{k+1}$\nby applying the first part of Lemma~\\ref{Gianni}.\nWe can do the same to find $h_{k+1}$ and $i_{k+1}$ from \\eqref{quartak} and \\eqref{terzak} in this order.\nThe lemma also ensures that $s_{k+1}$, $h_{k+1}$ and $i_{k+1}$ are bounded and nonnegative,\n\\oldgianni{provided we can prove that} $h_{k+1}\\geqs_{k+1}$.\nTo this end, we set $e_k:=h_k-s_k$ and $e_{k+1}=h_{k+1}-s_{k+1}$, \ntake the difference between \\oldgianni{\\eqref{quartak} and \\eqref{secondak}}\nand write it in the form\n\\Begin{eqnarray}\n && \\int_\\Omega \\ka(n_{k+1}) \\nablae_{k+1} \\cdot \\nabla v\n + ((1\/\\tau)+1+\\mu) \\int_\\Omega e_{k+1} v\n \\non\n \\\\\n && = \\int_\\Omega \\bigl( A(n_{k+1})\\skpi_k + A(n_{k+1})\\skpe_k + (1\/\\tau)e_k \\bigr) v\n \\non\n\\End{eqnarray}\nfor every $v\\in V$.\nThen, Lemma~\\ref{Gianni} yields that $e_{k+1}\\geq0$.\nThis completes the proof.\n\n\n\\section{Existence}\n\\label{EXISTENCE}\n\\setcounter{equation}{0}\n\nThis section is devoted to the proof of Theorem~\\ref{Existence}.\nOur argument relies on a priori estimates on the solution to a suitably specified discrete problem\nand on the convergence of proper interpolating functions.\nSo, we introduce some notation concerning interpolation at once.\n\n\\Begin{notation}\\rm\n\\label{Interpol}\nLet $N$ be a positive integer and $Z$ be a Banach space.\nWe set $\\tau:=T\/N$ and $I_k:=((k-1)\\tau,k\\tau)$ for $k=1,\\dots,N$.\nGiven $z=(z_0,z_1,\\dots ,z_N)\\in Z^{N+1}$,\nwe define the piecewise constant and piecewise linear interpolating functions\n\\Begin{equation}\n \\overline z_\\tau \\in \\spazio L\\infty Z , \\quad\n \\underline z_\\tau \\in \\spazio L\\infty Z \n \\quad\\hbox{and}\\quad\n \\hat z_\\tau \\in \\spazio W{1,\\infty}Z\n \\non\n\\End{equation}\nby setting \n\\Begin{eqnarray}\n && \\hskip -2em\n \\overline z_\\tau(t) = z^k\n \\quad\\hbox{and}\\quad\n \\underline z_\\tau(t) = z^{k-1}\n \\quad \\hbox{for a.a.\\ $t\\in I_k$, \\ $k=1,\\dots,N$},\n \\label{pwconstant}\n \\\\\n && \\hskip -2em\n \\hat z_\\tau(0) = z_0\n \\quad\\hbox{and}\\quad\n \\partial_t\\hat z_\\tau(t) = \\frac {z^k-z^{k-1}} \\tau\n \\quad \\hbox{for a.a.\\ $t\\in I_k$, \\ $k=1,\\dots,N$}.\n \\qquad\n \\label{pwlinear}\n\\End{eqnarray}\n\\End{notation}\n\nFor the reader's convenience,\nwe summarize the relations between the finite set of values\nand the interpolating functions in the following proposition,\nwhose proof follows from straightforward\\ computation:\n\n\\Begin{proposition}\n\\label{Propinterp}\nWith Notation~\\ref{Interpol}, we have that\n\\Begin{eqnarray}\n && \\norma{\\overline z_\\tau}_{\\spazio L\\infty Z}\n = \\max_{k=1,\\dots,N} \\norma{z^k}_Z \\,, \\quad\n \\norma{\\underline z_\\tau}_{\\spazio L\\infty Z}\n = \\max_{k=0,\\dots,N-1} \\norma{z^k}_Z\\,,\n \\label{ouLinftyZ}\n \\\\\n && \\norma{\\partial_t\\hat z_\\tau}_{\\spazio 
L\\infty Z}\n = \\max_{0\\leq k\\leq N-1} \\norma{(z^{k+1}-z^k)\/\\tau}_Z\\,,\n \\label{dtzLinftyZ}\n \\\\\n \\noalign{\\allowbreak}\n && \\norma{\\overline z_\\tau}_{\\L2Z}^2\n = \\tau \\somma k1N \\norma{z^k}_Z^2 \\,, \\quad\n \\norma{\\underline z_\\tau}_{\\L2Z}^2\n = \\tau \\somma k0{N-1} \\norma{z^k}_Z^2 \\,,\n \\label{ouLdueZ}\n \\\\\n \\noalign{\\allowbreak}\n && \\norma{\\partial_t\\hat z_\\tau}_{\\L2Z}^2\n = \\tau \\somma k0{N-1} \\norma{(z^{k+1}-z^k)\/\\tau}_Z^2\\,, \n \\label{dtzLdueZ}\n \\\\\n && \\norma{\\hat z_\\tau}_{\\spazio L\\infty Z}\n = \\max_{k=1,\\dots,N} \\max\\{\\norma{z^{k-1}}_Z,\\norma{z^k}_Z\\}\n = \\max\\{\\norma{z_0}_Z,\\norma{\\overline z_\\tau}_{\\spazio L\\infty Z}\\}\\,,\n \\qquad\\qquad\n \\label{hzLinftyZ}\n \\\\\n && \\norma{\\hat z_\\tau}_{\\L2Z}^2\n \\leq \\tau \\somma k1N \\bigl( \\norma{z^{k-1}}_Z^2 + \\norma{z^k}_Z^2 \\bigr)\n \\leq \\tau \\norma{z_0}_Z^2\n + 2 \\norma{\\overline z_\\tau}_{\\L2Z}^2 \\,.\n \\label{hzLdueZ}\n\\End{eqnarray}\nMoreover, it holds that\n\\Begin{eqnarray}\n && \\norma{\\overline z_\\tau-\\hat z_\\tau}_{\\spazio L\\infty Z}\n = \\max_{k=0,\\dots,N-1} \\norma{z^{k+1}-{z^k}}_Z\n = \\tau \\, \\norma{\\partial_t\\hat z_\\tau}_{\\spazio L\\infty Z}\\,,\n \\qquad\n \\label{diffLinfty}\n \\\\\n && \\norma{\\overline z_\\tau-\\hat z_\\tau}_{\\L2Z}^2\n = \\frac \\tau 3 \\somma k0{N-1} \\norma{z^{k+1}-z^k}_Z^2\n \\non\n \\\\\n && \\oldgianni{{}= \\frac 13 \\, \\norma{\\overline z_\\tau - \\underline z_\\tau}_{\\L2Z}^2}\n = \\frac {\\tau^2} 3 \\, \\norma{\\partial_t\\hat z_\\tau}_{\\L2Z}^2\\,,\n \\label{diffLdue}\n\\End{eqnarray}\nand similar identities for the difference $\\underline z_\\tau-\\hat z_\\tau$.\nFinally, we have that\n\\Begin{eqnarray}\n && \\tau \\somma k0{N-1} \\norma{(z^{k+1}-z^k)\/\\tau}_Z^2 \n \\leq \\norma{\\partial_t z}_{\\L2Z}^2\n \\non\n \\\\\n && \\quad \\hbox{if $z\\in\\H1Z$\\quad\\hbox{and}\\quad $z^k=z(k\\tau)$ for $k=0,\\dots,N$}.\n \\label{interpH1Z}\n\\End{eqnarray}\n\\End{proposition}\n\nWe are now ready to properly specify the discrete problem.\nNamely, starting from $A$ and $\\ka$ as in~\\eqref{hpAk},\nwe introduce new functions $\\tilde A$ and $\\tilde\\ka$ obtained by a truncation operator $\\tilde T$ \nto be used in the discrete problem.\nWe~set\n\\Begin{eqnarray}\n && n^* := e^{2T(\\alpha-\\mu)^+} \\norman^0_\\infty \\,, \\quad\n s^* := \\normas^0_\\infty + T\\alphan^*\n \\non\n \\\\\n && \\quad\n h^* := \\normah^0_\\infty + T(\\alphan^*+s^*) \\quad\n i^* := \\norma{i_0}_\\infty + T(h^*+s^*) \\quad\n \\non\n \\\\\n && \\quad\\hbox{and}\\quad\n n_* := e^{-T(i^*+(\\mu-\\alpha)^+)} \\infn^0\n \\label{defnstar}\n\\End{eqnarray}\nand define $\\tilde A,\\tilde\\ka:{\\mathbb{R}}\\to{\\mathbb{R}}$ by setting for $y\\in{\\mathbb{R}}$\n\\Begin{equation}\n \\tilde A(y) = A(\\tilde T(y)) \n \\quad\\hbox{and}\\quad\n \\tilde\\ka(y) = \\ka(\\tilde T(y))\n \\quad \\hbox{where} \\quad\n \\tilde T(y) := \\max\\graffe{n_*, \\min\\graffe{y,n^*}} .\n \\label{kaAstar}\n\\End{equation}\nNext, we approximate the initial data $n^0$, $s^0$, $i^0$ and $h^0$ as in \\eqref{hpdati}\nby smoother functions $\\nz_\\tau$, $\\sz_\\tau$, $\\iz_\\tau$ and $\\hz_\\tau$ satisfying\n\\Begin{eqnarray}\n && \\nz_\\tau \\in V \\cap \\Lx\\infty \n \\quad\\hbox{and}\\quad\n 0 \\leq \\nz_\\tau \\leq \\norman^0_\\infty\n \\quad \\checkmmode{a.e.\\ in~$\\Omega$}\n \\label{hpdatitau}\n \\\\\n && \\norma\\nz_\\tau \\leq \\norman^0 \n \\quad\\hbox{and}\\quad\n \\tau\\normaV\\nz_\\tau^2 \\leq \\norman^0^2\n \\label{stimadatitau}\n \\\\\n && 
\\nz_\\tau \\to n^0 \n \\quad \\hbox{strongly in $H$ as $\\tau\\searrow0$}\n \\label{convdatitau}\n \\\\\n\\noalign{\\noindent and the analogues for $\\sz_\\tau$, $\\iz_\\tau$ and~$\\hz_\\tau$ as well as \\medskip}\n && \\hz_\\tau \\geq \\sz_\\tau\n \\quad\\hbox{and}\\quad\n \\nz_\\tau \\geq \\infn^0\n \\quad \\checkmmode{a.e.\\ in~$\\Omega$}.\n \\label{nztmin}\n\\End{eqnarray}\nThis can be done, e.g., by a singular perturbation argument.\nIndeed, if $\\tau\\in(0,1)$ and $u\\in\\Lx\\infty$ is nonnegative, \nthe unique solution $u_\\tau\\in V$ to the variational problem\n\\Begin{equation}\n \\int_\\Omega \\bigl( u_\\tau v + \\tau \\nabla u_\\tau \\cdot \\nabla v \\bigr)\n = \\int_\\Omega u v\n \\quad \\hbox{for every $v\\in V$}\n \\non\n\\End{equation}\n\\oldgianni{belongs to $W\\subset\\Lx\\infty$,}\nsatisfies \n$\\inf u\\lequ_\\tau\\leq\\norma u_\\infty$, $\\normau_\\tau\\leq\\norma u$ \nand \\oldgianni{$(1\/2)\\normau_\\tau^2+\\tau\\norma{\\nablau_\\tau}^2\\leq(1\/2)\\norma u^2$,\nwhence also $\\tau\\normaVu_\\tau^2\\leq\\norma u^2$,}\nand converges to $u$ strongly in $H$ as $\\tau\\searrow0$.\nFinally, we assume \n\\Begin{equation}\n \\tau \\in (0,\\tau_0)\n \\quad \\hbox{where} \\quad\n \\tau_0 \\in (0,1)\n \\quad\\hbox{and}\\quad\n \\tau_0 \\leq \\frac 1 {2(\\alpha-\\mu)}\n \\quad \\hbox{if $\\alpha>\\mu$}.\n \\label{hptauz}\n\\End{equation}\nThe functions $\\tilde A$ and $\\tilde\\ka$ satisfy \\eqref{hpAkdiscr} with\n\\Begin{equation}\n \\oldgianni{\\ka_* := \\min \\graffe{\\ka(y) : \\ n_*\\leq y\\leqn^*}\n \\quad\\hbox{and}\\quad\n \\ka^* := \\max \\graffe{\\ka(y) : \\ n_*\\leq y\\leqn^*}.}\n \\label{sceltakazkau}\n\\End{equation}\nMoreover, the requirements \\eqref{hpdatiV} are fulfilled by the approximating data.\nFurthermore, \\eqref{hptauz} implies~\\eqref{hptau}. 
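\nIndeed, if $\\alpha\\leq\\mu$, then $(1\/\\tau)-\\alpha+\\mu\\geq1\/\\tau>0$,\nwhile, if $\\alpha>\\mu$, then \\eqref{hptauz} gives\n\\Begin{equation}\n \\frac 1\\tau - \\alpha + \\mu\n > 2(\\alpha-\\mu) - (\\alpha-\\mu)\n = \\alpha - \\mu\n > 0 \\,.\n \\non\n\\End{equation}\nWe also remark that, e.g., for the choice $\\ka(n)=n$ allowed by \\eqref{hpAk},\nthe constants given by \\eqref{sceltakazkau} simply are $\\ka_*=n_*$ and $\\ka^*=n^*$.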
\nHence, we can solve the discretized problem \nwith the specified structure and the approximating initial data.\nWe have the following basic result, \nwhich removes the supplementary assumptions~\\eqref{hpAkdiscr}:\n\n\\Begin{proposition}\n\\label{Basic}\nUnder the restriction \\eqref{hptauz}, let \n\\Begin{equation}\n (n_0,n_1,\\dots,n_N), \\quad\n (s_0,s_1,\\dots,s_N), \\quad\n (i_0,i_1,\\dots,i_N)\n \\quad\\hbox{and}\\quad\n (h_0,h_1,\\dots,h_N) \n \\label{soluztau}\n\\End{equation} \nbe as in \\eqref{tuples} and solve the problem \\Pbltau\\ \nwhere one reads $\\tilde A$ and $\\tilde\\ka$ instead of $A$ and~$\\ka$, respectively.\nThen, \\eqref{soluztau} solve the discrete problem with the original $A$ and~$\\ka$ given in \\eqref{hpAk}.\n\\End{proposition}\n\n\\Begin{proof}\nAll the components $n_k$ are nonnegative.\nHowever, as $A$ and $\\ka$ are defined in the open interval $(0,+\\infty)$, \nwe have to reinforce this and show that each $n_k$ is strictly positive\nin order to give a meaning to $A(n_k)$ and~$\\ka(n_k)$.\nWe prove~that\n\\Begin{equation}\n n_* \\leq n_k \\leq n^* \n \\quad \\hbox{\\checkmmode{a.e.\\ in~$\\Omega$} \\quad for $k=0,\\dots,N$}.\n \\label{basic}\n\\End{equation}\nSince $\\tilde A(y)=A(y)$ and $\\tilde\\ka(y)=\\ka(y)$ for every $y\\in[n_*,n^*]$,\nthis also yields that $\\tilde A(n_k)=A(n_k)$ and $\\tilde\\ka(n_k)=\\ka(n_k)$ for $k=0,\\dots,N$, \ni.e., the thesis of the statement.\n\nThe proof of \\eqref{basic} is done in several steps.\nThe last of them needs $L^\\infty$ bounds for the other components of the solution,\nand these are proved as~well.\n\n\\step\nFirst upper bound\n\nWe assume $0\\leq k<N$ and apply Lemma~\\ref{Gianni}:\nindeed, \\eqref{primak} means that $n_{k+1}$ solves \\eqref{pblG} with\n\\Begin{equation}\n a = \\tilde\\ka(n_{k+1}) , \\quad\n b = \\frac 1\\tau + i_k - \\alpha + \\mu\n \\quad\\hbox{and}\\quad\n f = \\frac 1\\tau \\, n_k \\,,\n \\label{gianninkp}\n\\End{equation}\nso that, as $i_k$ is nonnegative, we can take $b_0 = \\frac 1\\tau - (\\alpha-\\mu)^+$,\nwhich is positive by \\eqref{hptauz}.\nAs $n_{k+1}$ is nonnegative, the lemma implies that\n\\Begin{equation}\n \\norman_{k+1}_\\infty\n \\leq b_0^{-1} \\norma f_\\infty\n = \\frac \\tau {1-\\tau(\\alpha-\\mu)^+} \\, \\norma{(1\/\\tau)n_k}_\\infty \n = \\frac 1 {1-\\tau(\\alpha-\\mu)^+} \\, \\norman_k_\\infty \\,.\n \\non\n\\End{equation}\nAs this holds for $0\\leq k<N$\nand $\\norma{n_0}_\\infty\\leq\\norman^0_\\infty$ by \\eqref{hpdatitau} and \\eqref{discrcauchy},\nan easy induction yields\n\\Begin{equation}\n \\norman_k_\\infty\n \\leq (1-\\tau(\\alpha-\\mu)^+)^{-k} \\norman^0_\\infty\n \\quad \\hbox{for $k=0,\\dots,N$}.\n \\label{forbasic}\n\\End{equation}\nOn the other hand, we have $\\ln(1-y)\\geq-2y$ for every $y\\in[0,1\/2]$,\nsince both sides vanish at $y=0$\nand the derivative of the left-hand side, namely $-(1-y)^{-1}$,\nis not smaller than $-2$, the derivative of the right-hand side, for such values of~$y$.\nSince $\\tau(\\alpha-\\mu)^+\\leq1\/2$ by~\\eqref{hptauz},\nit follows that\n\\Begin{equation}\n \\ln(1-\\tau(\\alpha-\\mu)^+)\n \\geq -2\\tau(\\alpha-\\mu)^+ \n \\non\n\\End{equation}\nand \\eqref{forbasic} implies\n\\Begin{equation}\n \\norman_k_\\infty\n \\leq (1-\\tau(\\alpha-\\mu)^+)^{-N} \\norman^0_\\infty\n \\leq e^{2N\\tau(\\alpha-\\mu)^+} \\norman^0_\\infty\n = n^* \n \\quad \\oldgianni{\\hbox{for $k=0,\\dots,N$.}}\n \\label{stimank}\n\\End{equation}\nTherefore, the second inequality of \\eqref{basic} is proved.\n\nNow, we prove analogous upper bounds for the other components of the solution\nby applying Lemma~\\ref{Gianni} once more, always with $a=\\tilde\\ka(n_{k+1})$.\n\n\\step\nFurther upper bounds\n\nWe start with $s_k$.\nEquation \\eqref{pblG} is solved by $s_{k+1}$ with\n\\Begin{equation}\n b = \\frac 1\\tau + \\tilde A(n_{k+1})i_k + \\tilde A(n_{k+1})(h_k-s_k) + \\mu\n \\quad\\hbox{and}\\quad \n f = \\frac 1\\tau s_k + \\alphan_{k+1} \\,.\n \\non\n\\End{equation}\nBy recalling that $\\tilde A$ is nonnegative and that $i_k\\geq0$ and $h_k\\geqs_k$ \\checkmmode{a.e.\\ in~$\\Omega$}, \nwe can take $b_0=(1\/\\tau)+\\mu$.\nSince $s_{k+1}$ is nonnegative, the lemma yields\n\\Begin{equation}\n \\normas_{k+1}_\\infty\n \\leq \\frac \\tau {1+\\tau\\mu} \\, \\norma{(1\/\\tau)s_k+\\alphan_{k+1}}_\\infty\n \\leq \\normas_k_\\infty + \\tau\\alpha\\norman_{k+1}_{\\oldgianni\\infty} \\,.\n \\non\n\\End{equation}\nOn account of \\oldgianni{\\eqref{stimank} 
and \\eqref{defnstar}}, we deduce that\n\\Begin{equation}\n \\normas_k_\\infty\n \\leq \\norma{s_0}_\\infty + k\\tau\\alphan^*\n \\leq s^* \n \\quad \\hbox{for $k=0,\\dots,N$}.\n \\label{sinfty}\n\\End{equation}\nFor $h_{k+1}$ we can take\n\\Begin{equation}\n b = \\frac 1\\tau + 1 + \\mu\n \\quad\\hbox{and}\\quad\n f = \\frac 1\\tau \\, h_k + \\alpha\\,n_{k+1} + s_{k+1} \\,.\n \\non\n\\End{equation}\nThen the lemma and the estimates for $n_k$ and $s_k$ already proved give\n\\Begin{equation}\n \\normah_{k+1}_\\infty \n \\leq \\frac 1 {(1\/\\tau)+1+\\mu} \\, \\norma{(1\/\\tau)h_k+\\alphan_{k+1}+s_{k+1}}_\\infty\n \\leq \\normah_k_\\infty + \\tau (\\alphan^*+s^*) \\,.\n \\non\n\\End{equation}\nHence\n\\Begin{equation}\n \\normah_k_\\infty\n \\leq \\norma{h_0}_\\infty + k\\tau (\\alphan^*+s^*)\n \\leq h^*\n \\quad \\hbox{for $k=0,\\dots,N$}.\n \\label{hinfty}\n\\End{equation}\nFinally, for $i_{k+1}$ we can take \n\\Begin{equation}\n b = \\frac 1\\tau + \\oldgiannin_{k+1} + 1 + \\mu\n \\quad\\hbox{and}\\quad\n f = \\frac 1\\tau \\, i_k + h_{k+1} - s_{k+1} \n \\non\n\\End{equation}\nand a quite similar calculation also using \\eqref{hinfty} yields\n\\Begin{equation}\n \\normai_k_\\infty \n \\leq i^*\n \\quad \\hbox{for $k=0,\\dots,N$}.\n \\label{iinfty}\n\\End{equation}\n\n\\step\nLower bound\n\nFinally, we prove the \\oldgianni{left} inequality of~\\eqref{basic}.\nWe fix $k$ with $0\\leq k<N$,\nassume that $n_k\\geq\\lambda_k$ \\checkmmode{a.e.\\ in~$\\Omega$}\\ for some constant $\\lambda_k>0$\nand we prove that $n_{k+1}\\geq\\lambda_{k+1}$ \\checkmmode{a.e.\\ in~$\\Omega$}\\ where\n\\Begin{equation}\n \\lambda_{k+1} := \\frac {\\lambda_k} {1+\\tau(i^*+(\\mu-\\alpha)^+)} \\,.\n \\non\n\\End{equation}\nAs already observed, \\eqref{pblG} is solved by $n_{k+1}$ with\n$a$, $b$ and $f$ given by \\eqref{gianninkp}.\nLet us estimate $f-\\lambda_{k+1} b$ from below.\nWe~have\n\\Begin{equation}\n f - \\lambda_{k+1} b\n = \\frac 1\\tau \\, n_k \n - \\frac {\\lambda_k} {1+\\tau(i^*+(\\mu-\\alpha)^+)} \\, \\Bigl( \\frac 1\\tau + i_k + \\oldgianni{\\mu-\\alpha} \\Bigr)\n \\geq \\frac 1\\tau \\, n_k - \\frac 1\\tau \\, \\lambda_k \n \\geq 0 \\,.\n \\non\n\\End{equation}\nThen, we can apply the last sentence of Lemma~\\ref{Gianni} and obtain that\n$n_{k+1}\\geq\\lambda_{k+1}$ \\checkmmode{a.e.\\ in~$\\Omega$}.\nSince $n_0=\\nz_\\tau\\geq\\infn^0$ \\checkmmode{a.e.\\ in~$\\Omega$}, we can take $\\lambda_0=\\infn^0$ and~have\n\\Begin{equation}\n n_k \\geq (\\infn^0) \\, \\bigl( 1+\\tau(i^*+(\\mu-\\alpha)^+) \\bigr)^{-k}\n \\quad \\checkmmode{a.e.\\ in~$\\Omega$} , \\quad \\hbox{for $k=0,\\dots,N$}.\n \\label{forlower}\n\\End{equation}\nSince $\\ln(1+y)\\leq y$ for every $y>-1$, we obtain\n\\Begin{equation}\n -k \\ln \\bigl( 1+\\tau(i^*+(\\mu-\\alpha)^+) \\bigr) \n \\geq -k \\tau(i^*+(\\mu-\\alpha)^+)\n \\geq -T(i^*+(\\mu-\\alpha)^+) \n \\non\n\\End{equation}\nwhence also\n\\Begin{equation}\n \\bigl( 1+\\tau(i^*+(\\mu-\\alpha)^+) \\bigr)^{-k}\n \\geq e^{-T(i^*+(\\mu-\\alpha)^+)} .\n \\non\n\\End{equation}\nHence, \\eqref{forlower} yields that $n_k\\geqn_*$ \\checkmmode{a.e.\\ in~$\\Omega$}\\ for every~$k$.\nThis concludes the proof.\n\\End{proof}\n\nAt this point, we establish two estimates whose proof is made very easy by the above $L^\\infty$-bounds.\nFor the second one it is convenient to rewrite the estimates in terms of the interpolating functions.\n\n\\Begin{notation}\\rm\n\\label{Constants}\nFor the sake of simplicity, in the rest of the section,\nwe use the same symbol $c$ for possibly different constants\nthat only depend on $\\ka$, $A$, $\\alpha$, $\\mu$, $T$, $\\Omega$ \nand the initial data appearing 
in~\\eqref{hpdati}.\nHence, the values of $c$ do not depend on~$\\tau$\nand might change from line to line and even within the same line.\n\\End{notation}\n\n\\step\nFirst a priori estimate\n\nWe rewrite \\accorpa{primak}{quartak} in the form\n\\Begin{eqnarray}\n && \\bigl( \\frac {n_{k+1}-n_k}\\tau , v \\bigr)\n + (\\ka(n_{k+1}) \\nablan_{k+1} , \\nabla v)\n + \\ka_* (n_{k+1} , v)\n = (g_{n,k} , v)\n \\label{primakbis}\n \\\\\n && \\bigl( \\frac {s_{k+1}-s_k}\\tau , v \\bigr)\n + \\bigl( \\ka(n_{k+1}) \\nablas_{k+1} , \\nabla v \\bigr)\n + \\ka_* (s_{k+1} , v)\n = (g_{s,k} , v)\n \\label{secondakbis}\n \\\\\n && \\bigl( \\frac {i_{k+1}-i_k}\\tau , v \\bigr)\n + \\bigl( \\ka(n_{k+1})\\nablai_{k+1} , \\nabla v \\bigr)\n + \\ka_* (i_{k+1} , v)\n = (g_{i,k} , v)\n \\label{terzakbis}\n \\\\\n && \\bigl( \\frac {h_{k+1}-h_k}\\tau , v \\bigr)\n + \\bigl( \\ka(n_{k+1})\\nablah_{k+1} , \\nabla v \\bigr)\n + \\ka_* (h_{k+1} , v)\n = (g_{h,k} , v)\n \\label{quartakbis}\n\\End{eqnarray}\nall for every $v\\in V$, where we have set\n\\Begin{eqnarray}\n && g_{n,k} := (\\alpha-\\mu-\\oldgiannii_k+\\ka_*) n_{k+1} \n \\non\n \\\\\n && g_{s,k} := \\alphan_{k+1} - A(n_{k+1})\\skpi_k - A(n_{k+1})s_{k+1}(h_k-s_k) + (\\ka_*-\\mu)s_{k+1}\n \\non\n \\\\\n && g_{i,k} := h_{k+1}-s_{k+1} + (\\ka_*-1-\\mu) i_{k+1} \\oldgianni{{} - n_{k+1} i_{k+1}}\n \\non\n \\\\\n && g_{h,k} := \\alphan_{k+1} + s_{k+1} + (\\ka_*-1-\\mu)h_{k+1} .\n \\non\n\\End{eqnarray}\nWe observe that \\eqref{basic}, \\accorpa{sinfty}{iinfty} and the continuity of $A$ imply that\n\\Begin{eqnarray}\n && \\normag_{n,k}\n + \\normag_{s,k}\n + \\normag_{i,k}\n + \\normag_{h,k}\n \\non\n \\\\\n && \\leq c \\, \\bigl(\\normag_{n,k}_\\infty\n + \\normag_{s,k}_\\infty\n + \\normag_{i,k}_\\infty\n + \\normag_{h,k}_\\infty \\bigr)\n \\leq c \\,.\n \\label{rhsbdd}\n\\End{eqnarray}\nNow, we test \\eqref{primakbis} by $\\taun_{k+1}$ and sum over $k$ from $0$ to $m-1$, \nwith an arbitrary positive integer $m\\leq N$.\nWe obtain\n\\Begin{equation}\n \\somma k0{m-1} (n_{k+1}-n_k,n_{k+1})\n + \\tau \\somma k0{m-1} \\bigl (\\ka(n_{k+1})\\nablan_{k+1},\\nablan_{k+1}) + \\ka_* \\norman_{k+1}^2 \\bigr)\n = \\tau \\somma k0{m-1} (g_{n,k} , n_{k+1}) .\n \\non\n\\End{equation}\nSince $\\ka\\geq\\ka_*$, by owing to \\eqref{elementare} and the Schwarz and Young inequality,\nwe deduce~that\n\\Begin{eqnarray}\n && \\frac 12 \\, \\norma{n_m}^2\n - \\frac 12 \\, \\norma{n_0}^2\n + \\frac 12 \\somma k0{m-1} \\norma{n_{k+1}-n_k}^2\n + \\ka_* \\tau \\somma k0{m-1} \\normaVn_{k+1}^2\n \\non\n \\\\\n && \\leq \\frac \\ka_* 2 \\, \\tau \\somma k0{m-1} \\norman_{k+1}^2\n + \\frac 1 {2\\ka_*} \\, \\tau \\somma k0{m-1} \\normag_{n,k}^2 \\,.\n \\non\n\\End{eqnarray}\nThen, it suffices to rearrange and account for the first \\oldgianni{condition in} \\eqref{stimadatitau} and \\eqref{rhsbdd} to~obtain\n\\Begin{equation}\n \\max_{k=0,\\dots,m} \\norman_k^2\n + \\tau \\somma k0N \\normaVn_k^2\n + \\somma k0{N-1} \\norma{n_{k+1}-n_k}^2\n \\leq c \\,.\n \\label{stiman}\n\\End{equation}\nBy treating \\accorpa{secondakbis}{quartakbis} in the same way, we also have\n\\Begin{eqnarray}\n && \\max_{k=0,\\dots,m} \\normas_k^2\n + \\tau \\somma k0N \\normaVs_k^2\n + \\somma k0{N-1} \\norma{s_{k+1}-s_k}^2\n \\leq c\n \\label{stimas}\n \\\\\n && \\max_{k=0,\\dots,m} \\normai_k^2\n + \\tau \\somma k0N \\normaVi_k^2\n + \\somma k0{N-1} \\norma{i_{k+1}-i_k}^2\n \\leq c\n \\label{stimai}\n \\\\\n && \\max_{k=0,\\dots,m} \\normah_k^2\n + \\tau \\somma k0N \\normaVh_k^2\n + \\somma k0{N-1} 
\\norma{h_{k+1}-h_k}^2\n \\leq c \\,.\n \\label{stimah}\n\\End{eqnarray}\n\nAt this point, we introduce the interpolating functions. \nNamely, we term\n\\Begin{equation}\n \\hat n_\\tau, \\hat s_\\tau,\\ \\hat i_\\tau,\\ \\hat h_\\tau,\\quad\n \\overline n_\\tau, \\overline s_\\tau,\\ \\overline i_\\tau,\\ \\overline h_\\tau\n \\quad\\hbox{and}\\quad\n \\underline n_\\tau, \\underline s_\\tau,\\ \\underline i_\\tau,\\ \\underline h_\\tau\n \\label{interpolating}\n\\End{equation}\nthe functions $\\hat z$, $\\overline z$ and $\\underline z$ given in Notation~\\ref{Interpol}\nwith $z=(n_0,\\dots,n_N)$, $z=(s_0,\\dots,s_N)$, $z=(i_0,\\dots,i_N)$ and $z=(h_0,\\dots,h_N)$, respectively.\nThen, we can owe to Proposition~\\ref{Propinterp} \nto rewrite the uniform bounds \\eqref{basic} and \\accorpa{sinfty}{iinfty}, which read\n\\Begin{eqnarray}\n && \\norma\\overline n_\\tau_{\\LQ\\infty}\n + \\norma\\overline s_\\tau_{\\LQ\\infty}\n + \\norma\\overline i_\\tau_{\\LQ\\infty}\n + \\norma\\overline h_\\tau_{\\LQ\\infty}\n \\leq c \n \\label{stimao}\n\\End{eqnarray}\nand the analogues for $\\underline n_\\tau$, $\\underline s_\\tau$, $\\underline i_\\tau$ and $\\underline h_\\tau$.\nNext, the estimates \\accorpa{stiman}{stimah} imply\n\\Begin{eqnarray}\n && \\norma\\overline n_\\tau_{\\L2V}\n + \\norma\\overline s_\\tau_{\\L2V}\n + \\norma\\overline i_\\tau_{\\L2V}\n + \\norma\\overline h_\\tau_{\\L2V}\n \\leq c\n \\label{stimeoL2V}\n \\\\\n && \\norma{\\overline n_\\tau-\\underline n_\\tau}_{\\L2H}\n + \\norma{\\overline s_\\tau-\\underline s_\\tau}_{\\L2H}\n \\non\n \\\\\n && \\quad {}\n + \\norma{\\overline i_\\tau-\\underline i_\\tau}_{\\L2H}\n + \\norma{\\overline h_\\tau-\\underline h_\\tau}_{\\L2H}\n \\leq c \\tau^{1\/2} .\n \\label{stimeo-u}\n\\End{eqnarray}\n\\oldgianni{Notice that \\eqref{stimeo-u} and \\eqref{diffLdue} provide the same estimate for $\\overline n_\\tau-\\hat n_\\tau$.}\nMoreover, \\oldgianni{the property} \\eqref{hzLdueZ} with $Z=V$, \nthe second \\oldgianni{inequality in} \\eqref{stimadatitau} and \\eqref{stimao} also yield\n\\Begin{equation}\n \\norma\\hat n_\\tau_{\\L2V}\n + \\norma\\hat s_\\tau_{\\L2V}\n + \\norma\\hat i_\\tau_{\\L2V}\n + \\norma\\hat h_\\tau_{\\L2V}\n \\leq c \\,.\n \\label{stimehL2V}\n\\End{equation}\nIn order to obtain an estimate for the time derivatives and let $\\tau$ tend to zero,\nwe write equations \\accorpa{primak}{quartak} in terms of the interpolating functions.\nWe have\n\\Begin{eqnarray}\n && (\\partial_t\\hat n_\\tau , v)\n + (\\underline i_\\tau\\overline n_\\tau , v)\n + (\\ka(\\overline n_\\tau)\\nabla\\overline n_\\tau , \\nabla v)\n = (\\alpha-\\mu) (\\overline n_\\tau , v)\n \\label{primatau}\n \\\\\n && (\\partial_t\\hat s_\\tau , v)\n + (A(\\overline n_\\tau)\\overline s_\\tau\\underline i_\\tau , v)\n + (A(\\overline n_\\tau)\\oldgianni\\overline s_\\tau(\\underline h_\\tau-\\underline s_\\tau) , v)\n + \\oldgianni{\\mu (\\overline s_\\tau , v)}\n \\qquad\n \\non\n \\\\\n && \\quad {}\n + (\\ka(\\overline n_\\tau)\\nabla\\overline s_\\tau , \\nabla v)\n = \\alpha (\\overline n_\\tau , v)\n \\label{secondatau}\n \\\\\n && (\\partial_t\\hat i_\\tau , v)\n + \\oldgianni{(\\overline n_\\tau\\overline i_\\tau , v)}\n + (1+\\mu)(\\overline i_\\tau , v)\n + (\\ka(\\overline n_\\tau)\\nabla\\overline i_\\tau , \\nabla v)\n = (\\overline h_\\tau-\\overline s_\\tau , v)\n \\label{tezatau}\n \\\\\n && (\\partial_t\\hat h_\\tau , v)\n + (1+\\mu)(\\overline h_\\tau , v)\n + (\\ka(\\overline n_\\tau)\\nabla\\overline h_\\tau , \\nabla v)\n = (\\alpha\\overline n_\\tau+\\overline s_\\tau , v)\n 
\\label{quartatau}\n\\End{eqnarray}\nall of them being satisfied \\oldgianni{for every $v\\in V$ \\checkmmode{a.e.\\ in~$(0,T)$}}.\nMoreover, requirements \\eqref{segnik} and the initial conditions read\n\\Begin{eqnarray}\n && \\overline n_\\tau \\geq 0 , \\quad\n \\overline h_\\tau \\geq \\overline s_\\tau \\geq 0\n \\quad\\hbox{and}\\quad\n \\overline i_\\tau \\geq 0\n \\quad \\checkmmode{a.e.\\ in~$Q$}\n \\label{segnio}\n \\\\\n && \\hat n_\\tau(0) = \\nz_\\tau \\,, \\quad\n \\hat s_\\tau(0) = \\sz_\\tau \\,, \\quad\n \\hat i_\\tau(0) = \\iz_\\tau \n \\quad\\hbox{and}\\quad\n \\hat h_\\tau(0) = \\hz_\\tau \\,.\n \\label{cauchytau}\n\\End{eqnarray}\n\\oldgianni{Finally}, the bounds given by \\eqref{basic} and \\accorpa{sinfty}{iinfty} imply\n\\Begin{equation}\n n_* \\leq \\overline n_\\tau \\leq n^* , \\quad\n \\overline s_\\tau \\leq s^* , \\quad\n \\overline i_\\tau \\leq i^*\n \\quad\\hbox{and}\\quad\n \\overline h_\\tau \\leq h^*\n \\quad \\checkmmode{a.e.\\ in~$Q$} .\n \\label{bounds}\n\\End{equation}\n\n\\step\nSecond a priori estimate\n\nWe take an arbitrary $v\\in\\L2V$, write \\eqref{primatau} at the time $t$ and test it by~$v(t)$.\nThen, we integrate over~$(0,T)$ and rearrange.\n\\oldgianni{Thanks} to the above estimates, we obtain\n\\Begin{equation}\n \\int_Q \\partial_t\\hat n_\\tau \\, v\n = - \\int_Q \\ka(\\overline n_\\tau) \\nabla\\overline n_\\tau \\cdot \\nabla v\n + \\int_Q (\\alpha-\\mu-\\underline i_\\tau) \\overline n_\\tau v\n \\leq c \\norma v_{\\L2V} \\,.\n \\non\n\\End{equation}\nBy analogously treating the other equations, we conclude that\n\\Begin{equation}\n \\norma{\\partial_t\\hat n_\\tau}_{\\L2V^*}\n + \\norma{\\partial_t\\hat s_\\tau}_{\\L2V^*}\n + \\norma{\\partial_t\\hat i_\\tau}_{\\L2V^*}\n + \\norma{\\partial_t\\hat h_\\tau}_{\\L2V^*}\n \\leq c \\,.\n \\label{stimederivate}\n\\End{equation}\n\nSince all this holds under the only restriction on $\\tau$ given by~\\eqref{hptauz}, \nwe are ready to let $\\tau$ tend to zero.\n\n\\step\nConvergence and conclusion\n\nThe estimates obtained before and the Aubin-Lions lemma (see, e.g., \\cite[Thm.~5.1, p.~58]{Lions})\nensure the existence of a quadruple $(n,s,i,h)$ such~that\n\\Begin{eqnarray}\n && \\hat n_\\tau \\to n \\,, \\quad\n \\hat s_\\tau \\to s \\,, \\quad\n \\hat i_\\tau \\to i \n \\quad\\hbox{and}\\quad\n \\hat h_\\tau \\to h \n \\non\n \\\\\n && \\quad \\hbox{weakly star in $\\H1V^*\\cap\\C0H\\cap\\L2V$}\n \\qquad\n \\non\n \\\\\n && \\quad \\hbox{strongly in $\\L2H$ and \\checkmmode{a.e.\\ in~$Q$}}\n \\label{convhat}\n\\End{eqnarray}\nas $\\tau\\searrow0$ (more precisely for a suitable sequence $\\tau_j\\searrow0$).\nIn particular, the values at~$0$ converge weakly in $H$ to the left-hand side s of \\eqref{cauchy} \nand the initial conditions \\eqref{cauchy} themselves are satisfied on account of~\\eqref{convdatitau}.\nMoreover, the differences $\\overline n_\\tau-\\underline n_\\tau$ and $\\overline n_\\tau-\\hat n_\\tau$ converge to zero strongly in $\\L2H$,\nand also \\checkmmode{a.e.\\ in~$Q$}\\ without loss of generality,\nso that $\\overline n_\\tau$ and $\\underline n_\\tau$ tend to $n$ strongly in $\\L2H$.\nSince the same holds for the other variables, \nall the inequalities \\oldgianni{in} \\eqref{segni} are satisfied too.\nMoreover, \\eqref{bounds} yield\n\\Begin{equation}\n n_* \\leq n \\leq n^* , \\quad\n s \\leq s^* , \\quad\n i \\leq i^*\n \\quad\\hbox{and}\\quad\n h \\leq h^*\n \\quad \\checkmmode{a.e.\\ in~$Q$} .\n \\non \n\\End{equation}\nBy combining with the point-wise 
convergence, we infer~that\n\\Begin{equation}\n \\hbox{$\\overline n_\\tau$, $\\overline s_\\tau$, $\\overline i_\\tau$ and $\\overline h_\\tau$\n converge to their limits strongly in $\\LQ p$ for every $p\\in[1,+\\infty)$}\n \\non\n\\End{equation}\nand the same holds for $\\underline n_\\tau$, $\\underline s_\\tau$, $\\underline i_\\tau$ and $\\underline h_\\tau$.\nWe have some consequences. \nFirst, the products like $\\underline i_\\tau\\overline n_\\tau$ in \\eqref{primatau} converge to the right products\n(in~this case the limit is~$in$)\nin~$\\LQ p$ for $p\\in[1,+\\infty)$.\nNext, $A(\\overline n_\\tau)$~converges to $A(n)$ in the same topology,\nsince it is bounded in $\\LQ\\infty$ and converges to $A(n)$ \\checkmmode{a.e.\\ in~$Q$},\nall this for $A$ is continuous.\nBy the same reason, $\\ka(\\overline n_\\tau)$ converges to $\\ka(n)$ strongly in~$\\LQ p$ for $p<+\\infty$.\nFrom this, we \\oldgianni{claim} that\n\\Begin{eqnarray}\n && \\ka(\\overline n_\\tau)\\nabla\\overline n_\\tau \\to \\ka(n)\\nabla n , \\quad\n \\ka(\\overline n_\\tau)\\nabla\\overline s_\\tau \\to \\ka(n)\\nabla s , \\quad\n \\ka(\\overline n_\\tau)\\nabla\\overline i_\\tau \\to \\ka(n)\\nabla i \n \\non\n \\\\\n && \\quad\\hbox{and}\\quad\n \\ka(\\overline n_\\tau)\\nabla\\overline h_\\tau \\to \\ka(n)\\nabla h \n \\quad \\hbox{weakly in $(\\L2H)^3$}.\n \\non\n\\End{eqnarray}\n\\oldgianni{In fact we} prove the first one, only, since the others are analogous.\nFor the product, we have weak convergence to $\\ka(n)\\nabla n$ in $(\\LQ q)^3$ for $q\\in[1,2)$.\nOn the other hand, $\\ka(\\overline n_\\tau)$ is bounded in $\\LQ\\infty$ and $\\nabla\\overline n_\\tau$ is bounded in $(\\L2H)^3$,\nso that the product is bounded in $(\\L2H)^3$ and thus has a weak limit in this topology.\nClearly, this weak limit has to be $\\ka(n)\\nabla n$.\n\nAll this allows us to let $\\tau$ tend to zero in \\accorpa{primatau}{quartatau}.\nWe consider the first one.\nWe write its integrated version, namely\n\\Begin{equation}\n \\int_Q \\bigl(\n \\partial_t\\hat n_\\tau \\, v\n + \\underline i_\\tau\\overline n_\\tau \\, v\n + \\ka(\\overline n_\\tau)\\nabla\\overline n_\\tau \\cdot \\nabla v\n \\bigr)\n = (\\alpha-\\mu) \\int_Q \\overline n_\\tau \\, v\n \\quad \\hbox{for every $v\\in\\L2V$}.\n \\non\n\\End{equation}\nThen, it is clear that we can let $\\tau$ tend to zero and obtain \\eqref{intprima}.\nAs the same argument works for the other equations, the proof is complete.\n\n\n\\section{Uniqueness}\n\\label{UNIQUENESS}\n\\setcounter{equation}{0}\n\nIn this section we give \\oldgianni{the proofs of Theorems \\ref{Uniqueness1} and~\\ref{Uniqueness2}.}\n\n\\subsection{Proof of Theorem \\ref{Uniqueness1}}\n\\label{UNIQUENESS1}\n\nWe assume that $A$ is locally Lip\\-schitz\\ continuous and that $\\ka$ is a positive constant.\nWe pick any two solutions $(n_j,s_j,i_j,h_j)$, $j=1,2$, \nand prove that they are the same.\nFirst, we make some observations.\nBy recalling that $n_j$ are bounded (as~well as the other components) and that $\\inf n_j>0$,\nwe fix an interval $[n_\\star,n^\\star]\\subset(0,+\\infty)$ that contains all the values of $n_1$ and $n_2$.\nThen, the restriction of $A$ to $[n_\\star,n^\\star]$ is Lip\\-schitz\\ continuous.\nAs for the estimates we are going to prove,\nwe reinforce the convention given in Notation~\\ref{Constants} for the constants\nby allowing the values of $c$ to depend on the fixed solutions (through their $L^\\infty$ norms), in addition.\n\nWe set for brevity $n:=n_1-n_2$, $s:=s_1-s_2$, $i:=i_1-i_2$ 
and $h:=h_1-h_2$.\nWe write \\eqref{prima} for both solutions, take the difference and test it by~$n$.\nThen, we integrate over~$(0,t)$ and adjust.\nBy also owing to the Young inequality, we~have\n\\Begin{eqnarray}\n && \\frac 12 \\, \\int_\\Omega |n(t)|^2\n + \\ka \\int_{Q_t} |\\nabla n|^2\n = (\\alpha-\\mu) \\int_{Q_t} |n|^2\n - \\int_{Q_t} \\oldgianni{n_1 i n}\n - \\int_{Q_t} i_2 n^2 \n \\non\n \\\\\n && \\leq c \\int_{Q_t} \\bigl( |n|^2 + |i|^2 \\bigr) \\,.\n \\label{uprima}\n\\End{eqnarray}\nBy proceeding with \\eqref{seconda} in the same way, we obtain\n\\Begin{eqnarray}\n && \\frac 12 \\, \\int_\\Omega |s(t)|^2\n + \\ka \\int_{Q_t} |\\nabla s|^2\n + \\mu \\int_{Q_t} |s|^2\n \\non\n \\\\\n && = \\alpha \\int_{Q_t} n s\n - \\int_{Q_t} \\bigl(\n (A(n_1)-A(n_2)) s_1 i_1\n + A(n_2) s \\, i_1\n + A(n_2) s_2 i\n \\bigr) s\n \\non\n \\\\\n && \\quad{}\n - \\int_{Q_t} \\bigl(\n (A(n_1)-A(n_2)) s_1 (s_1-h_1)\n + A(n_2) s (s_1-h_1)\n + A(n_2) s_2 (s-h)\n \\bigr) s\n \\qquad\n \\non\n \\\\\n && \\leq c \\int_{Q_t} \\bigl( |n|^2 + |s|^2 + |i|^2 + \\oldgianni{|h|^2}\\bigr) \\,.\n \\label{useconda}\n\\End{eqnarray}\nAnalogously, by dealing with \\eqref{terza} and \\eqref{quarta}, we have\n\\Begin{eqnarray}\n && \\frac 12 \\, \\int_\\Omega |i(t)|^2\n + \\ka \\int_{Q_t} |\\nabla i|^2\n + \\frac 12 \\, \\int_\\Omega |h(t)|^2\n + \\ka \\int_{Q_t} |\\nabla h|^2\n \\non\n \\\\\n && \\leq c \\int_{Q_t} \\bigl( |n|^2 + |s|^2 + |i|^2 + |h|^2 \\bigr) \\,.\n \\label{ualtre}\n\\End{eqnarray}\nAt this point, it suffices to add \\accorpa{uprima}{ualtre} to each other\nand apply the Gronwall lemma to conclude that $n=s=i=h=0$.\n\n\n\\subsection{Proof of Theorem \\ref{Uniqueness2}}\n\\label{UNIQUENESS2}\n\nIt is understood that the assumptions listed in the statement are in force.\nSince the component $n$ of every solution belongs to $\\Lx\\infty$ \nand is bounded away from zero (see \\Regsoluz),\nwe can assume, without loss of generality, that \n\\Begin{equation}\n \\ka_* \\leq \\ka(y) \\leq \\ka^*\n \\quad \\hbox{for some positive constants $\\ka_*$ and $\\ka^*$ and every $y>0$}\n \\label{kabounds}\n\\End{equation}\nand that $A$ is Lip\\-schitz\\ continuous, whenever we fix one or two solutions.\nThe bounds $\\ka_*$ and $\\ka^*$ and the Lip\\-schitz\\ constant of $A$ \ndepend on the solutions we fix every time, \nand the same holds for the Lip\\-schitz\\ constants of the functions we are going to introduce.\nWe~set\n\\Begin{equation}\n K(y) := \\int_0^y \\ka(z) \\, dz\n \\quad \\hbox{for $y\\in(0,+\\infty)$}\n \\label{defKbis}\n\\End{equation}\nand observe that, under condition \\eqref{kabounds}, it is a bijection from $(0,+\\infty)$ onto itself and that\nboth $K$ and $K^{-1}$ are Lip\\-schitz\\ continuous.\nNext, to every solution $(n,s,i,h)$ to problem \\Pbl\\ satisfying \\Regsoluz\\\nwe associate the function $u$ defined~by\n\\Begin{equation}\n u := K(n) \n \\label{defu}\n\\End{equation}\nand observe that the regularity and boundedness properties of~$u$,\nbut that of the time derivative, are the same as those of~$n$.\n\nOur project is the following:\nfirst we prove that every solution $(n,s,i,h)$ to problem \\Pbl\\ satisfying \\Regsoluz\\\nenjoys the regularity properties specified in~\\eqref{highreg};\nthen, we prove uniqueness on account of the regularity already proved.\nThe main difficulty is a regularity result for the component $n$ of the solution.\nThis needs some preliminary results.\n\n\\Begin{proposition}\n\\label{Regnu}\nThe component $n$ of every solution $(n,s,i,h)$ to problem \\Pbl\\ satisfying 
\\Regsoluz\\\nand the corresponding function $u$ verify\n\\Begin{eqnarray}\n && \\partial_t n \\in \\L2H\n \\quad\\hbox{and}\\quad\n \\partial_t u \\in \\L2H\n \\label{regdt}\n \\\\\n && \\div(\\ka(n)\\nabla n) \\in \\L2H \n \\quad\\hbox{and}\\quad\n \\non\n \\\\\n && \\quad \\partial_t n - \\div(\\ka(n)\\nabla n)\n = (\\alpha - \\mu - i) n\n \\quad \\checkmmode{a.e.\\ in~$Q$}\n \\quad \\hbox{with} \\quad \\hbox{$\\partial_\\nu n=0$ on $\\Gamma$}\n \\label{bvpn}\n \\\\\n && \\Delta u \\in \\L2H \\quad\\hbox{and}\\quad\n \\non\n \\\\\n && \\quad \\partial_t u - \\ka(n) \\Delta u\n = \\ka(n) (\\alpha - \\mu - i) n\n \\quad \\checkmmode{a.e.\\ in~$Q$}\n \\quad \\hbox{with} \\quad \\hbox{$\\partial_\\nu u=0$ on $\\Gamma$}.\n \\qquad\n \\label{bvpu}\n\\End{eqnarray}\n\\End{proposition}\n\n\\Begin{proof}\nWe fix a solution $(n,s,i,h)$ \nand suitably extend the component~$n$.\nWe also introduce an auxiliary function.\nWe define $\\tilde n$ and $f$ on $(-1,T)$ by setting\n\\Begin{eqnarray}\n && \\tilde n(t) := n(t)\n \\quad\\hbox{and}\\quad\n f(t) := (\\alpha-\\mu-i(t)) n(t)\n \\quad \\hbox{if $t\\in(0,T)$}\n \\non\n \\\\\n && \\tilde n(t) := n^0 \n \\quad\\hbox{and}\\quad\n f(t) := - \\div (\\ka(n^0)\\nablan^0)\n \\quad \\hbox{if $t\\in(-1,0)$}.\n \\non\n\\End{eqnarray}\nNotice that $\\div(\\ka(n^0)\\nablan^0)$ belongs to~$H$ by \\eqref{hpdatareg} \n(a~much weaker assumption would be sufficient for this).\nAlso notice that $\\tilde n$ is a continuous $H$-valued function,\nso that its derivative has no jump at $t=0$.\nTherefore, $\\tilde n$ and $f$ satisfy\n\\Begin{eqnarray}\n && \\tilde n \\in H^1(-1,T;V^*) \\cap L^2(-1,T;V) \n \\quad\\hbox{and}\\quad\n f \\in L^2(-1,T;H)\n \\non\n \\\\\n && \\< \\partial_t \\tilde n , v >\n + \\int_\\Omega \\ka(\\tilde n) \\nabla\\tilde n \\cdot \\nabla v\n = \\int_\\Omega f v\n \\quad \\hbox{for every $v\\in V$ a.e.\\ in $(-1,T)$}.\n \\label{proleqn}\n\\End{eqnarray}\nIndeed, for $t\\in(0,T)$ the above equation reduces to \\eqref{prima},\nwhile, for $t\\in(-1,0)$, both sides equal $\\int_\\Omega \\ka(n^0)\\nablan^0\\cdot\\nabla v$\non account of an integration by parts and of the zero normal derivative required in \\eqref{hpdatareg}.\nHowever, for simplicity, we write $n$ in place of $\\tilde n$ in the sequel.\nNow, we fix $\\tau>0$ small (namely, $\\tau<\\min\\{1,T\\}$). 
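\nThe aim of the computation below is to establish an estimate of the form\n\\Begin{equation}\n \\int_{-\\tau}^{T-\\tau} \\int_\\Omega \\Bigl| \\frac{u(t+\\tau) - u(t)} \\tau \\Bigr|^2 dt\n \\leq C\n \\quad \\hbox{with $C$ independent of $\\tau$},\n \\non\n\\End{equation}\nsince this readily yields that $\\partial_t u$ belongs to $\\L2H$.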
\nLater on, we let $\\tau$ tend to zero.\nWe~set\n\\Begin{equation}\n u(t) := K(n(t))\n \\quad\\hbox{and}\\quad\n U(t) := \\int_{-\\tau}^t u(y) \\, dy\n \\quad \\hbox{for $t\\in[-\\tau,T]$} \n \\non\n\\End{equation}\nby noting that $\\nabla u=\\ka(n)\\nabla n$ and $\\partial_t U=u$.\nThen, for $t\\in(-\\tau,T-\\tau)$, we integrate \\eqref{proleqn} on $(t,t+\\tau)$ with respect to time\nand test the resulting equality by $\\tau^{-2}(u(t+\\tau)-u(t))$.\nWe obtain for a.a.\\ $t\\in(-\\tau,T-\\tau)$\n\\Begin{eqnarray}\n && \\int_\\Omega \\frac{n(t+\\tau) - n(t)} \\tau \\, \\frac{u(t+\\tau) - u(t)} \\tau\n + \\int_\\Omega \\nabla \\, \\frac{U(t+\\tau) - U(t)} \\tau \\cdot \\nabla \\, \\frac{u(t+\\tau) - u(t)} \\tau\n \\non\n \\\\\n && = \\int_\\Omega \\Bigl( {\\textstyle\\frac 1\\tau \\int_t^{t+\\tau} f(t') \\, dt'} \\Bigr) \\, \\frac{u(t+\\tau) - u(t)} \\tau \\,.\n \\non\n\\End{eqnarray}\nWe treat each term of this equality, separately.\nSince $n=K^{-1}(u)$ and the derivative of $K^{-1}$ is bounded from below by~$1\/\\ka^*$, we~have\n\\Begin{equation}\n \\int_\\Omega \\frac{n(t+\\tau) - n(t)} \\tau \\, \\frac{u(t+\\tau) - u(t)} \\tau\n \\geq \\frac 1\\ka^* \\int_\\Omega \\Bigl| \\frac{u(t+\\tau) - u(t)} \\tau \\Bigr|^2 .\n \\non\n\\End{equation}\nAs for the second term, we recall that $u=\\partial_t U$, whence\n\\Begin{equation}\n \\int_\\Omega \\nabla \\, \\frac{U(t+\\tau) - U(t)} \\tau \\cdot \\nabla \\, \\frac{u(t+\\tau) - u(t)} \\tau\n = \\frac 12 \\, \\frac d{dt} \\int_\\Omega \\Bigl| \\nabla \\, \\frac{U(t+\\tau) - U(t)} \\tau \\Bigr|^2 .\n \\non\n\\End{equation}\nFinally, the Young inequality yields\n\\Begin{eqnarray}\n && \\int_\\Omega \\Bigl( {\\textstyle\\frac 1\\tau \\int_t^{t+\\tau} f(t') \\, dt'} \\Bigr) \\, \\frac{u(t+\\tau) - u(t)} \\tau\n \\non\n \\\\\n && \\leq \\frac 1 {2\\ka^*} \\int_\\Omega \\Bigl| \\frac{u(t+\\tau) - u(t)} \\tau \\Bigr|^2 \n + \\frac \\ka^* 2 \\int_\\Omega \\Bigl| {\\textstyle\\frac 1\\tau \\int_t^{t+\\tau} f(t') \\, dt'} \\Bigr|^2 .\n \\non\n\\End{eqnarray}\nBy collecting all this, rearranging and integrating over $(-\\tau,T-\\tau)$ we obtain\n\\Begin{eqnarray} \n && \\frac 1\\ka^* \\int_{-\\tau}^{T-\\tau} \\int_\\Omega \\Bigl| \\frac{u(t+\\tau) - u(t)} \\tau \\Bigr|^2 dt\n + \\int_\\Omega \\Bigl| \\nabla \\, \\frac{U(T) - U(T-\\tau)} \\tau \\Bigr|^2\n \\non\n \\\\\n && \\leq \\int_\\Omega \\Bigl| \\nabla \\, \\frac{U(0) - U(-\\tau)} \\tau \\Bigr|^2\n + \\ka^* \\int_{-\\tau}^{T-\\tau} \\int_\\Omega \\Bigl| {\\textstyle\\frac 1\\tau \\int_t^{t+\\tau} f(t') \\, dt'} \\Bigr|^2 dt \\,.\n \\non\n\\End{eqnarray}\nNow, we observe that the last term can be estimate uniformly with respect to $\\tau$\nby the norm of $f$ and $n^0$ in $\\L2H$ and~$\\Wx{2,\\infty}$, respectively,\nand that $U(0)-U(-\\tau)=\\tau K(n^0)$.\nTherefore, we conclude that $\\partial_t u\\in\\L2H$.\nAs $n=K^{-1}(u)$ and $K^{-1}$ is Lip\\-schitz\\ continuous, we infer that $\\partial_t n\\in\\L2H$,\nso that \\eqref{regdt} is proved.\nThis conclusions also imply that $\\partial_t u=\\ka(n)\\partial_t n$ \\checkmmode{a.e.\\ in~$Q$}.\nNext, we come back to \\eqref{proleqn} written on $(0,T)$ and recall that $f\\in\\LQ\\infty$.\nThus, \\eqref{bvpn} follows.\nFinally, we can rewrite \\eqref{proleqn} on $(0,T)$ this~way\n\\Begin{equation}\n \\int_\\Omega \\frac 1{\\ka(n)} \\, \\partial_t u \\, v \n + \\int_\\Omega \\nabla u \\cdot \\nabla v\n = \\int_\\Omega f v\n \\quad \\hbox{for every $v\\in V$ a.e.\\ in $(0,T)$}.\n \\non\n\\End{equation}\nThis implies that $\\Delta u\\in\\L2H$ 
and~that\n\\Begin{equation}\n \\frac 1{\\ka(n)} \\, \\partial_t u - \\Delta u = f\n \\quad \\checkmmode{a.e.\\ in~$Q$}\n \\quad\\hbox{and}\\quad\n \\partial_\\nu u = 0\n \\quad \\hbox{on $\\Gamma$}.\n \\non\n\\End{equation}\nThen \\eqref{bvpu} follows.\n\\End{proof}\n\n\\Begin{lemma}\n\\label{Unic}\nLet $a\\in\\CQ0$ be strictly positive and let $z$ satisfy\n\\Begin{eqnarray}\n && z \\in \\H1H \\cap \\L2W\n \\non\n \\\\\n && a \\partial_t z - \\Delta z = 0 \\quad \\checkmmode{a.e.\\ in~$Q$}\n \\quad\\hbox{and}\\quad \n z(0) = 0 . \n\\End{eqnarray}\nThen $z=0$.\n\\End{lemma}\n\n\\Begin{proof}\nBy multiplying the equation by $\\partial_t z$ and integrating over~$Q_t$, we obtain\n\\Begin{equation}\n \\int_{Q_t} a |\\partial_t z|^2\n + \\frac 12\\int_\\Omega |\\nabla z(t)|^2\n = 0\n \\quad \\hbox{for every $t\\in[0,T]$}.\n \\non\n\\End{equation}\nSince $a$ is strictly positive, we deduce that \nboth $\\partial_t z$ and $\\nabla z$ vanish \\checkmmode{a.e.\\ in~$Q$}, whence $z$ is a constant.\nAs $z(0)=0$, we conclude that $z=0$.\n\\End{proof}\n\n\\Begin{proposition}\n\\label{Highreg}\nEvery solution to problem \\Pbl\\ satisfying \\Regsoluz\\\nalso enjoys the regularity specified by~\\eqref{highreg}.\n\\End{proposition}\n\n\\Begin{proof}\nFix a solution $(n,s,i,h)$.\nFirst, we consider the component~$n$ and recall that\nit satisfies \\accorpa{regdt}{bvpn}, where the right-hand side\\ of the equation belongs to $\\LQ\\infty$.\nSince \\eqref{hpdatareg} implies $n^0\\in\\Cx0$, \nby applying \\cite[Thm.~1.3 and Rem.~1.1 of Chpt.~III]{DiB},\nwe deduce that $n\\in\\CQ0$, whence also $\\ka(n)\\in\\CQ0$.\nNow, for $p\\in[2,+\\infty)$, we consider the problem of finding $w$ satisfying\n\\Begin{eqnarray}\n && w \\in \\H1H \\cap \\L2W\n \\label{regw}\n \\\\\n && \\partial_t w - \\ka(n) \\Delta w\n = \\ka(n) (\\alpha - \\mu - i) n\n \\quad \\checkmmode{a.e.\\ in~$Q$}\n \\quad\\hbox{and}\\quad\n w(0) = K(n^0) \\,.\n \\qquad\n \\label{eqw}\n\\End{eqnarray}\nThanks to Proposition~\\ref{Regnu}, $u$~is a solution.\nOn the other hand, $\\ka(n)$ is continuous as just observed,\nthe right-hand side\\ of the equation is bounded and $K(n^0)$~belongs to~$\\Wx{2,\\infty}$ and satisfies $\\partial_\\nu K(n^0)=0$ on~$\\Gamma$\n(recall that $K$ is of class~$C^2$ with bounded derivatives by \\eqref{hpnonlin} and \\eqref{kabounds}).\nTherefore, we can apply \\cite[Thm.~2.1]{DHP} and ensure the existence \nof a unique $w\\in\\spazio W{1,p}{\\Lx p}\\cap\\spazio L p{\\Wx{2,p}}$ \nthat satisfies \\eqref{eqw} and the homogeneous Neumann boundary condition.\nSince $p\\geq2$, \\eqref{regw} holds as well \nand the assumptions of Lemma~\\ref{Unic} with $a=1\/\\ka(n)$ are fulfilled by $w-u$.\nWe conclude that $w=u$, so~that\n\\Begin{equation}\n u \\in \\spazio W{1,p}{\\Lx p} \\cap \\spazio L p{\\Wx{2,p}}.\n \\non\n\\End{equation}\nSince $p$ is arbitrary in $[2,+\\infty)$, $\\Omega$ is bounded, \n$n=K^{-1}(u)$ and $K^{-1}$ is of class~$C^2$ \nwith bounded derivatives, we deduce that \\eqref{highreg} holds for~$n$.\n\nNow, we consider the components $s$, $i$ and~$h$.\nWe observe that each of equations \\accorpa{seconda}{quarta} \nand the corresponding boundary and initial conditions have the form\n\\Begin{equation}\n \\partial_t w - \\div(\\ka(n)\\nabla w) = f\n \\quad \\hbox{with $\\partial_\\nu w=0$}\n \\quad\\hbox{and}\\quad\n w(0) = w_0\n \\label{eqdiv}\n\\End{equation}\nwhere $f\\in\\LQ\\infty$ and $w_0\\in\\Wx{2,\\infty}$ with $\\partial_\\nu w_0=0$.\nOn account of the already proved regularity of~$n$,\nwe can rewrite the 
equation in \\eqref{eqdiv} in the non-divergence form, namely\n\\Begin{equation}\n \\partial_t w - \\bigl( \\ka(n) \\Delta w + \\ka'(n) \\nabla n \\cdot \\nabla w) = f \n \\label{eqnondiv}\n\\End{equation} \nand apply \\cite[Thm.~2.1]{DHP} once more with $p\\geq2$\n(using the $4p$-summability of $\\nabla n$).\nThen the homogeneous Neumann problem for \\eqref{eqnondiv} \ncomplemented with the initial condition $w(0)=w_0$\nhas a unique solution $w\\in\\spazio W{1,p}{\\Lx p}\\cap\\spazio L p{\\Wx{2,p}}$.\nClearly, such a $w$ belongs to $\\H1H\\cap\\L2W$ and satisfies \\eqref{eqdiv}.\nSince \\eqref{eqdiv} has a unique solution in $\\H1H\\cap\\L2W$,\nwe deduce that $w\\in\\spazio W{1,p}{\\Lx p}\\cap\\spazio L p{\\Wx{2,p}}$\nwhenever $w$ belongs to $\\H1H\\cap\\L2W$ and satisfies \\eqref{eqdiv}.\nThis is the case for $s$, $i$ and~$h$.\nSince $p\\geq2$ is arbitrary and $\\Omega$ is bounded, \\eqref{highreg} is completely proved.\n\\End{proof}\n\n\\step\nConclusion of the proof of Theorem~\\ref{Uniqueness2}\n\nWe pick any two solutions $(n_j,s_j,i_j,h_j)$, $j=1,2$, \nand prove that they are the same.\nThanks to Proposition~\\ref{Highreg}, they satisfy \\eqref{highreg}.\nIn particular, the gradients of all the components belong to $\\L4\\Lx\\infty$\nsince $\\Wx{2,4}\\subset\\Wx{1,\\infty}$.\nFor simplicity, we use the rule of Notation~\\ref{Constants} concerning the constants:\nthe symbol $c$ stands for possible difference constants \nthat depend on the structure, the data, some norms of the solutions we have fixed\nand the constants appearing in \\eqref{kabounds} \n(which are chosen after fixing the solutions) \nand the consequent Lip\\-schitz\\ constants of the nonlinearities.\nAs in the proof of Theorem~\\ref{Uniqueness1}, we set for brevity \n$n:=n_1-n_2$, $s:=s_1-s_2$, $i:=i_1-i_2$ and $h:=h_1-h_2$,\nwrite equations \\accorpa{prima}{quarta} for both solutions and test the differences by\n$n$, $s$, $i$ and~$h$, respectively.\nAs for the first equation, we~have\n\\Begin{eqnarray}\n && \\frac 12 \\, \\int_\\Omega |n(t)|^2\n + \\int_{Q_t} \\ka(n_1) |\\nabla n|^2\n \\non\n \\\\\n && = \\int_{Q_t} (\\ka(n_1)-\\ka(n_2)) \\nabla n_2 \\cdot \\nabla n\n + (\\alpha-\\mu) \\int_{Q_t} |n|^2\n - \\int_{Q_t} n_1 i n\n - \\int_{Q_t} i_2 n^2 \\,.\n \\non\n\\End{eqnarray}\nWhile we can simply use the inequality $\\ka(n_1)\\geq\\ka_*$ on the left-hand side,\nthe true novelty is the first term on the right-hand side.\nWe treat it by owing to the H\\\"older\\ and Young inequalities this~way\n\\Begin{eqnarray}\n && \\int_{Q_t} (\\ka(n_1)-\\ka(n_2)) \\nabla n_2 \\cdot \\nabla n\n \\leq c \\int_0^t \\norma{n(t')} \\, \\norma{\\nabla n_2(t')}_\\infty \\, \\norma{\\nabla n(t')} \\, dt'\n \\non\n \\\\\n && \\leq \\ka_* \\int_{Q_t} |\\nabla n|^2\n + c \\int_0^t \\norma{\\nabla n_2(t')}_\\infty^2 \\, \\norma{n(t')}^2 \\, dt' .\n \\non \n\\End{eqnarray}\nThe other terms on the right-hand side\\ can be dealt with as in the above proof.\nBy combining and rearranging, we easily deduce that\n\\Begin{equation}\n \\int_\\Omega |n(t)|^2\n \\leq c \\int_0^t \\bigl( \\norma{\\nabla n_2(t')}_\\infty^2 +1 \\bigr) \\bigl( \\norma{n(t')}^2 + \\norma{i(t')}^2 \\bigr) \\, dt'\\,.\n \\label{okprima}\n\\End{equation}\nThe next equation \\eqref{seconda} can be treated in a similar way.\nArguing as we did to derive \\eqref{okprima} and~\\eqref{useconda},\nwe~have\n\\Begin{eqnarray}\n && \\frac 12 \\, \\int_\\Omega |s(t)|^2\n + \\int_{Q_t} \\ka(n_1) |\\nabla s|^2\n + \\mu \\int_{Q_t} |s|^2\n \\non\n \\\\\n && = \\int_{Q_t} 
(\\ka(n_1)-\\ka(n_2) \\nabla s_2 \\cdot \\nabla s\n \\non\n \\\\\n && \\quad {}\n + \\alpha \\int_{Q_t} n s\n - \\int_{Q_t} \\bigl(\n (A(n_1)-A(n_2)) s_1 i_1\n + A(n_2) s \\, i_1\n + A(n_2) s_2 i\n \\bigr) s\n \\non\n \\\\\n && \\quad{}\n - \\int_{Q_t} \\bigl(\n (A(n_1)-A(n_2)) s_1 (s_1-h_1)\n + A(n_2) s (s_1-h_1)\n + A(n_2) s_2 (s-h)\n \\bigr) s\n \\qquad\n \\non\n \\\\\n && \\leq \\ka_* \\int_{Q_t} |\\nabla s|^2\n + c \\int_0^t \\bigl( \\norma{\\nabla s_2(t')}_\\infty^2 +1 \\bigr)\n \\bigl(\n \\norma{n(t')}^2 + \\norma{s(t')}^2 + \\norma{i(t')}^2 + \\norma{h(t')}^2\n \\bigr) \\, dt' \\,.\n \\non\n\\End{eqnarray}\nMoreover, we can use the inequality $\\ka(n_1)\\geq\\ka_*$ on the left-hand side\\ and rearrange also in this case.\nBy treating equations \\accorpa{terza}{quarta} in the same way and summing up,\nwe conclude~that\n\\Begin{eqnarray}\n && \\int_\\Omega \\bigl( |n(t)|^2 + |s(t)|^2 + |i(t)|^2 + |h(t)|^2 \\bigr)\n \\non\n \\\\\n && \\leq \\int_0^t \\psi(t')\n \\bigl(\n \\norma{n(t')}^2 + \\norma{s(t')}^2 + \\norma{i(t')}^2 + \\norma{h(t')}^2\n \\bigr) \\, dt'\n \\quad \\hbox{for every $t\\in[0,T]$}\n \\non\n\\End{eqnarray}\nwith a function $\\psi\\in L^1(0,T)$.\nThus, the Gronwall lemma yields $n=s=i=h=0$,\nand the proof is complete.\n\n\n\\section*{Acknowledgments}\n\\pier{This research was supported by the Italian Ministry of Education, \nUniversity and Research~(MIUR): Dipartimenti di Eccellenza Program (2018--2022) \n-- Dept.~of Mathematics ``F.~Casorati'', University of Pavia. \nIn addition, {PC and ER gratefully mention} some other support \nfrom the MIUR-PRIN Grant 2020F3NCPX \n``Mathematics for industry 4.0 (Math4I4)'' and the GNAMPA (Gruppo Nazionale per l'Analisi Matematica, \nla Probabilit\\`a e le loro Applicazioni) of INdAM (Isti\\-tuto \nNazionale di Alta Matematica), while AR gratefully acknowledges the partial support of the MIUR-PRIN project XFAST-SIMS (no. 20173C478N).}\n\n\n\\vspace{3truemm}\n\n\\Begin{thebibliography}{10}\n\n\t\t\n\\bibitem{Albi} \n\\pcol{G. Albi, L. Pareschi, M. Zanella,\nControl with uncertain data of socially structured compartmental epidemic models,\n{\\it J. Math. Biol.} {\\bf 82} (2021) Paper No. 63, 41 pp.}\n\n\\bibitem{Barbu}\nV. Barbu,\n\\gianni{``Nonlinear Differential Equations of Monotone Types in Banach Spaces'',\nSpringer, \nNew York,\n2010.}\n\n\\bibitem{Bellomo}\n\\ale{N. Bellomo, R. Bingham, M.A.J. Chaplain, G. Dosi, G. Forni, D.A. Knopoff, J. Lowengrub, R. Twarock, M.E. Virgillito,\nA multiscale model of virus pandemic: heterogeneous interactive entities in a globally connected world,\n{\\it \\pcol{Math. Models Methods Appl. Sci.}} {\\bf 30} (2020) 1591-1651.}\n\n\\bibitem{Bellomo2}\n\\ale{N. Bellomo, K.J. Painter, Y. Tao, M. Winkler,\nOccurrence vs. absence of taxis-driven instabilities in a May-Nowak model for virus infection,\n{\\it SIAM J. Appl. Math.} \\pcol{{\\bf 79} (2019) 1990-2010.}}\n\n\\bibitem{Bellomo3}\n\\ale{N. Bellomo, N. Outada, J. Soler, Y. Tao, M. Winkler,\nChemotaxis and Cross-diffusion Models in Complex Environments: \nModels and Analytic Problems Towards a Multiscale Vision,\n{\\it \\pcol{Math. Models Methods Appl. Sci.}}, to appear.}\n\n\\bibitem{Bellomo4}\n\\ale{N. Bellomo, F. Brezzi, M.A.J. Chaplain,\nSpecial Issue on ``Mathematics Towards COVID19 and Pandemic'',\n{\\it \\pcol{Math. Models Methods Appl. Sci.}} {\\bf 31} (2021) Issue 12.}\n\n\\bibitem{Berestycki}\n\\ale{H. Berestycki, J.-M. Roquejoffre, L. Rossi,\nPropagation of epidemics along lines with fast diffusion,\n{\\it Bull. Math. 
Biol.} {\\bf 83} (2021) \\pcol{Paper No. 2, 34 pp.}}\n\n\\bibitem{Bertaglia}\n\\ale{G. Bertaglia, L. Pareschi,\nHyperbolic compartmental models for epidemic spread on networks with uncertain data: application to the emergence of COVID-19 in Italy,\n{\\it \\pcol{Math. Models Methods Appl. Sci.}} {\\bf 31} (2021) 2495-2531.}\n\n\\bibitem{Calleri} \n\\pcol{F. Calleri, G. Nastasi, V. Romano, \nContinuous-time stochastic processes for the spread of COVID-19 disease simulated \nvia a Monte Carlo approach and comparison with deterministic models,\n{\\it J. Math. Biol.} {\\bf 83} (2021) Paper No. 34, 26 pp.}\n\n\\bibitem{DHP}\nR. Denk, M. Hieber, J. Pr\\\"uss,\nOptimal $L^p$-$L^q$-estimates for parabolic boundary value problems with inhomogeneous data,\n{\\it Math. Z.} {\\bf 257} (2007) 193-224.\n\n\\bibitem{DiB}\nE. DiBenedetto,\n``Degenerate Parabolic Equations'',\nSpringer-Verlag, \nNew York, \n1993.\n\n\\bibitem{Gatto}\n\\ale{M. Gatto, E. Bertuzzo, L. Mari, S. Miccoli, L. Carraro, R. Casagrandi, A. Rinaldo,\nSpread and dynamics of the COVID-19 epidemic in Italy: effects of emergency containment measures,\n{\\it Proc. Nat. Acad. Sci.} {\\bf 117} (2020) 10484-10491.}\n\n\\bibitem{Giordano}\n\\ale{G. Giordano, F. Blanchini, R. Bruno, P. Colaneri, A. Di Filippo, A. Di Matteo, M. Colaneri,\nModelling the COVID-19 epidemic and implementation of population-wide interventions in Italy,\n{\\it Nat. Med.} {\\bf 26} (2020) 855-860.}\n\n\\bibitem{Grave}\n\\ale{M. Grave, A.L.G.A. Coutinho,\nAdaptive mesh refinement and coarsening for diffusion-reaction epidemiological models,\n{\\it Comput. Mech.} {\\bf 67} (2021) 1177-1199.}\n\n\\bibitem{Grave2}\n\\ale{M. Grave, A. Viguerie, G.F. Barros, A. Reali, A.L.G.A. Coutinho,\nAssessing the spatio-temporal spread of COVID-19 via compartmental models with diffusion in Italy, USA, and Brazil,\n{\\it Arch. Comput. Mech. Eng.} {\\bf 28} (2021) 4205-4223.}\n\n\\bibitem{Guglielmi}\n\\pcol{N. Guglielmi, E. Iacomini, A. Viguerie,\nDelay differential equations for the spatially resolved simulation of epidemics with specific application to COVID-19\n{\\it Math. Methods Appl. Sci}, to appear, DOI:~10.1002\/mma.8068}\n\n\\bibitem{Jha}\n\\ale{P.K. Jha, L. Cao, J.T. Oden, \nBayesian-based predictions of COVID-19 evolution in Texas using multispecies mixture-theoretic continuum models,\n{\\it Comput. Mech.} {\\bf 66} (2020) 1055-1068.}\n\n\\bibitem{Linka}\n\\ale{K. Linka, P. Rahman, A. Goriely, E. Kuhl,\nIs it safe to lift COVID-19 travel bans? The Newfoundland story,\n{\\it Comput. Mech.} {\\bf 66} (2020) 1081-1092.}\n\n\\bibitem{Lions}\nJ.-L.~Lions,\n``Quelques m\\'ethodes de r\\'esolution des probl\\`emes\naux limites non lin\\'eaires'',\nDunod; Gauthier-Villars, Paris, 1969.\n\n\\bibitem{Parolini}\n\\pcol{N. Parolini, L. Ded\\`e, P.F. Antonietti, G. Ardenghi, A. Manzoni, E. Miglio, A. Pugliese, M. Verani, A. Quarteroni, \nSUIHTER: a new mathematical model for COVID-19. Application to the analysis of the second epidemic outbreak in Italy,\n{\\it Proc. A.} {\\bf 477} (2021), Paper No. 20210027, 21 pp.}\n\n\\bibitem{Vig1}\nA. Viguerie, G. Lorenzo, F. Auricchio, D. Baroli,\nT.J.R. Hughes, A. Patton, A. Reali,\nTh.E. Yankeelov, A. Veneziani,\nSimulating the spread of COVID-19 via a spatially-resolved\nsusceptible-exposed-infected-recovered-deceased (SEIRD) model with heterogeneous diffusion,\n\\pier{{\\it Appl. Math. Lett.} {\\bf 111} (2021) Paper No. 106617, 9 pp.}\n\n\\bibitem{Vig2}\nA. Viguerie, A. Veneziani, G. Lorenzo, D. Baroli, N. Aretz-Nellesen,\nA. Patton, T.E. 
Yankeelov, A. Reali, T.J.R. Hughes, F. Auricchio,\n\\pier{Diffusion-reaction compartmental models formulated in a continuum mechanics framework: application to COVID-19, mathematical analysis, and numerical study,\n{\\it Comput. Mech.} {\\bf 66} (2020) 1131-1152.}\n\n\\bibitem{Wang}\n\\ale{Z. Wang, X. Zhang, G.H. Teichert, M. Carrasco-Teja, K. Garikipati,\nSystem inference for the spatio-temporal evolution of infectious diseases: Michigan in the time of COVID-19,\n{\\it Comput. Mech.} {\\bf 66} (2020) 1153-1176.}\n\n\\bibitem{Winkler}\n\\ale{M. Winkler,\nBoundedness in a chemotaxis-May-Nowak model for virus dynamics with mildly saturated chemotactic sensitivity\n{\\it Acta Appl. Math.} \\pcol{{\\bf 163} (2019) 1-17.}}\n\n\\bibitem{Zohdi}\n\\ale{T.I. Zohdi,\nAn agent-based computational framework for simulation of global pandemic and social response on planet X,\n{\\it Comput. Mech.} {\\bf 66} (2020) 1195-1209.}\n\n\\End{thebibliography}\n\n\\End{document}\n\n\n\\bibitem{Brezis}\nH. Brezis,\n``Op\\'erateurs maximaux monotones et semi-groupes de contractions\ndans les espaces de Hilbert'',\nNorth-Holland Math. Stud.\n{\\bf 5},\nNorth-Holland,\nAmsterdam,\n1973.\n\n\\bibitem{Simon}\nJ. Simon,\n{Compact sets in the space $L^p(0,T; B)$},\n{\\it Ann. Mat. Pura Appl.~(4)\\\/} \n{\\bf 146} (1987) 65-96.\n\n\n\\Begin{lemma}\n\\label{Unicn}\nAssume that $(n,s,i,h)$ is a solution to problem \\Pbl\\ satisfying \\Regsoluz.\nSet $f:=(\\alpha-\\mu-i)n$\nand consider the problem of finding $w$ such~that\n\\Begin{eqnarray}\n && w \\in \\H1V^* \\cap \\L2V \\cap \\LQ\\infty\n \\quad \\hbox{with} \\quad\n \\inf w > 0 \n \\label{regw}\n \\\\\n && \\< \\partial_t w , v >\n + \\int_\\Omega \\ka(w) \\nabla w \\cdot \\nabla v\n = \\int_\\Omega f v\n \\quad \\hbox{for every $v\\in V$ \\checkmmode{a.e.\\ in~$(0,T)$}}.\n \\label{eqw}\n\\End{eqnarray}\nThen, $n$ is its unique solution.\n\\End{lemma}\n\n\\Begin{proof}\nSince \\eqref{eqw} with $w=n$ coincides with \\eqref{prima},\n$w=n$ is a solution.\nNow, let $w$ be any solution: we prove that $w=n$.\nWe recall \\eqref{regw}, owe to \\eqref{kabounds}, whose bounds now also depend on~$w$,\nand term $L$ the Lip\\-schitz\\ constant of~$\\ka$.\nLet us introduce $\\sign_\\eps,\\modeps\\cdot:{\\mathbb{R}}\\to{\\mathbb{R}}$ by setting\n\\Begin{eqnarray}\n && \\sign_\\eps(y) := y\/\\eps \n \\quad \\hbox{if $|y|\\leq\\eps$}\n \\quad\\hbox{and}\\quad\n \\sign_\\eps(y) := y\/|y|\n \\quad \\hbox{otherwise}\n \\non\n \\\\\n && \\modeps y := \\int_0^y \\sign_\\eps(y') \\, dy'\n \\quad \\hbox{for every $y\\in{\\mathbb{R}}$}.\n \\non\n\\End{eqnarray}\nWe set $z:=w-n$ and choose $v=\\sign_\\eps(z)$ in the difference of \\eqref{eqw} \nwritten for $w$ and~$n$.\nBy integrating over $(0,t)$, we have\n\\Begin{equation}\n \\int_\\Omega \\modeps{z(t)}\n + \\int_{Q_t} \\ka(w) \\sign_\\eps'(z) |\\nabla z|^2\n = \\int_{Q_t} (\\ka(n)-\\ka(w)) \\nabla n \\cdot \\nabla z \\, \\sign_\\eps'(z)\n \\non\n\\End{equation}\nWe estimate the second integral on the left-hand side\\ from blow by owing to~\\eqref{kabounds}\nand observe that $|y|(\\sign_\\eps'(y))^{1\/2}\\leq\\eps^{1\/2}$ for every $y\\in{\\mathbb{R}}$.\nHence, we~have\n\\Begin{eqnarray}\n && \\int_\\Omega \\modeps{z(t)}\n + \\ka_* \\int_{Q_t} \\sign_\\eps'(z) |\\nabla z|^2\n \\non\n \\\\\n && \\leq L \\int_{Q_t} |z| (\\sign_\\eps'(z))^{1\/2} |\\nabla n| \\, |\\nabla z| (\\sign_\\eps'(z))^{1\/2}\n \\non\n \\\\\n && \\leq L\\eps^{1\/2} \\int_{Q_t} |\\nabla n| \\, |\\nabla z| (\\sign_\\eps'(z))^{1\/2}\n \\non\n \\\\\n && \\leq \\frac 
{L\\eps^{1\/2}} 2 \\int_{Q_t} \\sign_\\eps'(z) |\\nabla z|^2\n + \\frac {L\\eps^{1\/2}} 2 \\int_Q |\\nabla n|^2.\n \\non\n\\End{eqnarray}\nWe deduce that\n\\Begin{equation}\n \\int_\\Omega \\modeps{z(t)}\n + (\\ka_0 - L\\eps^{1\/2}\/2) \\int_{Q_t} \\sign_\\eps'(z) |\\nabla z|^2\n \\leq \\frac {L\\eps^{1\/2}} 2 \\int_Q |\\nabla z|^2.\n \\non\n\\End{equation}\nSince the second term on the left-hand side\\ is nonnegative if $\\eps<(2\\ka_*\/L)^2$,\nby letting $\\eps$ tend to zero we obtain $z=0$.\n\\End{proof}\n\n\nWe take any $p>1$ and $m>0$,\nset $v:=\\min\\graffe{u,m}$ and test \\eqref{pblG} by $v^{p-1}$.\nWith $\\delta:=b_0^{1\/p'}$ we obtain\n\\Begin{eqnarray}\n && b_0 \\norma v_p^p\n \\leq \\int_\\Omega a \\nabla u \\cdot \\nabla(v^{p-1})\n + \\int_\\Omega b u v^{p-1}\n = \\int_\\Omega f v^{p-1}\n \\non\n \\\\\n && \\leq \\norma f_p \\norma{v^{p-1}}_{p'}\n = \\delta^{-1} \\norma f_p \\, \\delta \\norma v_p^{p-1} \\,.\n \\non\n\\End{eqnarray}\nBy applying the Young inequality, we deduce that\n\\Begin{equation}\n b_0 \\norma v_p^p\n \\leq \\frac 1p \\, \\delta^{-p} \\norma f_p^p\n + \\frac 1{p'} \\, b_0 \\norma v_p^p \\,,\n \\quad \\hbox{i.e.,} \\quad\n b_0 \\norma v_p^p\n \\leq \\delta^{-p} \\norma f_p^p \\,.\n \\non\n\\End{equation}\nHence, we have\n\\Begin{equation}\n b_0^{1\/p} \\norma v_p\n \\leq \\delta^{-1} \\norma f_p\n = b_0^{-1\/p'} \\norma f_p \\,.\n \\non\n\\End{equation}\nBy letting $p$ tend to infinity we infer that $v$ is bounded and that\n\\Begin{equation}\n \\norma v_\\infty\n \\leq b_0^{-1} \\norma f_\\infty \\,,\n \\quad \\hbox{i.e.,} \\quad\n \\min\\graffe{u,m}\n \\leq b_0^{-1} \\norma f_\\infty \n \\quad \\checkmmode{a.e.\\ in~$\\Omega$}\n\\End{equation}\nand we conclude by letting $m$ tend to infinity. \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nInterest in the application of loop quantum gravity technique to model\nsystems has significantly increased in the recent years. Most\ntreatments have dealt with homogeneous space-times (see the \nrecent reviews of Ashtekar and Bojowald\\cite{lqc}). \nMore recently extensions to the exterior\\cite{exterior} and \ninterior\\cite{interior} Schwarzschild space-times\nwere carried out, and also with some limitations to Gowdy \nmodels\\cite{Gowdy}. In all these studies, as is customary in mini or\nmidi-superspace treatments, gauge fixings are conducted in order to\nexploit the simplifications inherent in the symmetries present in the\nmodels. On the other hand, in the full theory, one is interested in\ndiffeomorphism invariance. It is therefore of interest to study in the\ncontext of the symmetry reduced models what happens to any remnants of\ndiffeomorphism invariance that are left after the gauge fixing. In\nthis paper we would like to discuss the issue of diffeomorphism\ninvariance within the context of the treatment of the exterior\nspherically symmetric space-times. We will show that the remaining\ndiffeomorphism invariance is successfully recovered in the\nsemi-classical limit of the quantum theory but that there are\nlimitations imposed by the used of holonomic variables in the\nquantization. In spite of this, the quantum theory has the same\ndegrees of freedom as if one used metric variables and the \nsolutions of the semiclassical ``polymerized'' theory are uniquely\ndetermined by the mass, as in ordinary general relativity. 
This \nappears in contrast to treatments of the interior\\cite{interior} \nof the Schwarzschild\nspace-time and may shed light on the treatment of the\ncomplete space-time where the issue of uniqueness is still not \nsettled\\cite{complete}.\n\n\nThe organization of this paper is as follows. In the next section\nwe briefly review the loop quantum gravity treatment of the exterior\nof the Schwarzschild space-time. In section III we discuss the issue\nof diffeomorphism invariance. We end with a discussion.\n\n\\section{Spherically symmetric space-times in loop quantum gravity}\n\nWe briefly review here the treatment of the exterior in loop quantum\ngravity. More details can be found in our previous paper\\cite{exterior}.\n\n\nOne assumes that the topology of the spatial manifold is of the form\n$\\Sigma=R^+\\times S^2$. We will choose a radial coordinate $x$ and\nstudy the theory in the range $[0,\\infty]$. We will later assume that\nthere is a horizon at $x=0$, with appropriate boundary conditions as\nwe discuss below.\nThe invariant connection can be written \nas,\n\\begin{eqnarray}\n A &=& A_x(x) \\Lambda_3 dx + \n\\left(A_1(x) \\Lambda_1+ A_2(x) \\Lambda_2\\right)d\\theta\\\\&& +\n\\left(\\left(A_1(x) \\Lambda_2- A_2(x) \\Lambda_1\\right)\\sin \\theta +\n\\Lambda_3 \\cos \\theta\\right) d\\varphi,\\nonumber\n\\end{eqnarray}\nwhere $A_x, A_1$ and $A_2$ are real arbitrary functions on $R^+$,\nthe $\\Lambda_I$ are generators of $su(2)$, for instance $\\Lambda_I = \n-i\\sigma_I\/2$ where $\\sigma_I$ are the Pauli matrices or\nrigid rotations thereof. The invariant triad takes the form,\n\\begin{eqnarray}\n E &=& E^x(x) \\Lambda_3 \\sin \\theta {\\partial \\over \\partial x} + \n\\left(E^1(x) \\Lambda_1 + E^2(x) \\Lambda_2\\right) \\sin \\theta {\\partial \\over \n\\partial \\theta} \\nonumber\\\\&&\n+\n\\left(E^1(x) \\Lambda_2 - E^2(x) \\Lambda_1\\right) {\\partial \\over \n\\partial \\varphi},\n\\end{eqnarray}\nwhere again, $E^x, E^1$ and $E^2$ are functions on $R^+$. \n\nAs discussed in our recent paper\\cite{exterior} and \noriginally by Bojowald and Swiderski\\cite{boswi}, \nit is best to make several changes of variables to simplify\nthings and improve asymptotic behaviors. It is also useful to gauge\nfix the diffeomorphism constraint to simplify the model as much as\npossible. It would be too lengthy and not particularly useful to go\nthrough all the steps here. It suffices to notice that one is left\nwith one pair of canonical variables $E^\\varphi$ and ${A}_\\varphi$ (in\nour recent paper\\cite{exterior} called $\\bar{A}_\\varphi$), and that they are\nrelated to the traditional canonical variables in spherical symmetry\n$ds^2=\\Lambda^2 dx^2+R^2 d\\Omega^2$ by $\\Lambda=E^\\varphi\/(x+a)$ and\n$P_\\Lambda= -(x+a)A_\\varphi\/(2\\gamma)$ where $\\gamma$ is the Immirzi\nparameter and $P_\\Lambda$ is the momentum canonically conjugate to\n$\\Lambda$. The gauge fixing chosen is such that $R=(x+a)$ where $a$\nis, at the moment, a dynamical variable function of $t$. We will also\nchoose $A_\\varphi$ to be independent of $t$. The variable $x$ ranges\nfrom zero to infinity. At zero we will impose isolated horizon\nboundary conditions, i.e. $x=0$ will be the horizon, whereas\n$x=\\infty$ corresponds to $i^0$. An asymptotic analysis of terms at\nspatial infinity shows that $a$ ends up being a constant related to\nthe mass of the space-time. 
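In this gauge, the relations above give the spatial part of the line element as\n\\begin{equation}\nds^2 = \\frac{(E^\\varphi)^2}{(x+a)^2}\\, dx^2 + (x+a)^2 d\\Omega^2,\n\\end{equation}\nso the single canonical pair $(A_\\varphi,E^\\varphi)$ encodes all of the remaining spatial geometry.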
Again we refer the reader to\nour recent paper\\cite{exterior} for details.\n\n\nIn terms of these\nvariables the Hamiltonian constraint reads,\n\\begin{eqnarray}\\label{hamilclass}\n H&=&-{E^\\varphi \\over (x+a) \\gamma^2}\\left({A^2_\\varphi (x+a)\\over 8}\\right)'\n-{E^\\varphi \\over 2 (x+a)}\\\\&&+ {3 (x+a) \\over 2 E^\\varphi} \n+ (x+a)^2 \\left({1 \\over E^\\varphi}\\right)'=0.\\nonumber\n\\end{eqnarray}\nand since the variables are gauge invariant there is no Gauss law.\nThe Hamiltonian has a\nnon-trivial Poisson bracket with itself, proportional to a Hamiltonian with\nstructure functions. This makes the treatment of the constraint at a \nquantum level problematic since it has the usual ``problem of dynamics''\n(see Giesel and Thiemann\\cite{thiemanngiesel} for a good discussion) . \nTo avoid this in a first approach, it is worthwhile noticing that through a\nsimple rescaling, the Hamiltonian constraint can be made Abelian,\njust multiplying by $\\frac{2(x+a)}{E^\\varphi}$ and grouping terms as\n\\begin{equation}\\label{abelianized}\nH= \\left(\\frac{(x+a)^3}{(E^\\varphi)^2}\\right)'-1 -\\frac{1}{4 \\gamma^2}\n\\left((x+a) A_\\varphi^2\\right)'=0,\n\\end{equation}\nyields and Abelian constraint. Since the constraint is a total\nderivative, it can immediately be integrated to yield,\n\\begin{equation}\n\\int H dx = C =\n\\left(\\frac{(x+a)^3}{(E^\\varphi)^2}\\right)-x -\\frac{1}{4 \\gamma^2}\n\\left((x+a) A_\\varphi^2\\right),\n\\end{equation}\nwith $C$ a constant of integration. Recalling that at $x=0$ the\nisolated horizon boundary conditions imply $1\/E^\\varphi=0$ and\n$A_\\varphi=0$ one gets that the constant of integration $C$ vanishes.\nThis in particular implies that at infinity, $a=2M$, imposing the\nappropriate boundary conditions there, $E^\\varphi=x+3M$, \n$A_\\varphi=0$.\n\nTo promote the constraint to a quantum operator, one needs to discretize\nthe radial direction and then apply techniques at each point akin to those\nof loop quantum cosmology.\nOne wishes to write the discretization in terms of classical quantities\nthat are straightforward to represent in the quantum theory. Here one\nhas to make choices, since there are infinitely many ways of\ndiscretizing a classical expression. In particular, we will notice\nthat there exists, for this model, a way of discretizing the\nconstraint in such a way that it remains first class (more precisely,\nAbelian) upon discretization. This is unusual, and we do not expect\nsuch a behavior in more general models.\n\nWe now proceed to discretize this expression and to ``polymerize'' it,\nthat is, to cast it in terms of quantities that are easily representable\nby holonomies,\n\\begin{eqnarray}\n H^\\rho_m &=& \\frac{1}{\\epsilon}\\left[\n\\left(\\frac{(x_m+2M)^3 \\epsilon^2}{(E^\\varphi_m)^2}-\n\\frac{(x_{m-1}+2M)^3\\epsilon^2}{(E^\\varphi_{m-1})^2}\\right)\n-\\epsilon \\right.\\nonumber\\\\\n&&-\\frac{1}{4 \\gamma^2 \\rho^2}\n\\left(\n(x_m+2M)\\sin^2\\left(\\rho A_{\\varphi,m}\\right)\\right.\\nonumber\\\\\n&&\\left.\\left.-(x_{m-1}+2M)\\sin^2\\left(\\rho A_{\\varphi,m-1}\\right)\n\\right)\n\\right],\n\\end{eqnarray}\nexpression that recovers (\\ref{abelianized}) in the limit $\\epsilon\\to\n0$, $\\rho \\to 0$. In the above expression $x_m$ are the positions of\nthe lattice points and $\\epsilon$ is the separation of two points in a\nfiducial metric. Although it is not necessary, for simplicity we\nassume $\\epsilon$ is a constant. The parameter $\\rho$ arises in the\n``polymerization'', i.e. 
in replacing $A_{\\varphi,m}$ by $\\sin(\\rho\nA_{\\varphi,j})\/\\rho$. Whereas the parameter $\\epsilon$ is introduced\njust as a calculational device and can be taken $\\epsilon\\to 0$ in the \nend, the parameter $\\rho$ is expected in loop quantum gravity to have\na fundamental minimum value related to the quantum of area. \nThe above expression is immediately Abelian since it\ncan be written as the difference of two terms, one dependent on the\nvariables at $m$ and the other at $m-1$. Therefore each term has\nautomatically vanishing Poisson brackets with itself and with the\nother.\n\n To implement the constraints as quantum operators as one does in the\n Dirac procedure, it is convenient to solve the constraint for the\n $E^\\varphi_m$,\n\\begin{equation}\\label{convenient}\n E^\\varphi_m = \\pm \\frac{(x_m+2M)\\epsilon}{\n\\sqrt{1-\\frac{2M}{x_m+2M}+\\frac{1}{4\\gamma^2\\rho^2}\n\\sin^2\\left( \\rho A_{\\varphi,m}\\right)}},\n\\end{equation}\nand this relation can be immediately implemented as an operatorial\nrelation and find the states that satisfy it. It should be noted that\nthis relation can be implemented for other gauges as well in a\nstraightforward manner. The states are given by,\n\\begin{equation}\\label{109}\n \\Psi[A_{\\varphi,m},\\tau,M] =C(\\tau,M) \\exp\\left(\n\\pm \\frac{i}{\\ell_{\\rm Planck}^2}\\sum_m\nf[A_{\\varphi,m}]\\right), \n\\end{equation}\nwhere $C(\\tau,M)$ is a function of the variables at the boundary\n$\\tau$ and $M$, which has to solve the constraint at the boundary, as\nwe shall soon see. $\\tau$ is the proper time at infinity, that for\ninstance determines the position of the spatial hypersurfaces of\nvanishing extrinsic curvature (usual Schwarzschild slicings). The\nfunctional $f$ has the form,\n\\begin{eqnarray}\n&& f[A_{\\varphi,m}]=\n\\frac{1}{4\\gamma^2\\rho^2\\left(1-\\frac{2M}{x_m+2M}\\right)}(x_m+2M)\n\\nonumber \\epsilon\\\\&&\n\\left[F\\left(\\sin( \\rho A_{\\varphi,m}),\n\\frac{i}{4\\gamma^2\\rho^2\\left(1-\\frac{2M}{x_m+2M}\\right)}\\right)\\right.\\\\\n&& \\left.+ 2 F\\left(1,\n\\frac{i}{4\\gamma^2\\rho^2\\left(1-\\frac{2M}{x_m+2M}\\right)}\\right)\n{\\rm sgn}\\left(\\sin(\\rho A_{\\varphi,m})\\right)\\right] \\nonumber\n\\end{eqnarray}\nwith $F(\\phi,m)\\equiv \\int_0^\\phi (1-m^2\\sin^2 t)^{-1\/2}dt$ the Jacobi\nElliptic function of the first kind. Notice that the continuum limit\nof this expression for the state is immediate, i.e. the sum in $m$ \nbecomes an integral.\n\nWe now need to impose the constraints on the boundary, in particular\n$p^\\tau = -M$ (in the limit $N\\to\\infty$). Quantum mechanically\n$\\hat{p}^\\tau = -i \\ell_{\\rm Planck}^2 {\\partial\/\\partial \\tau}$ and\ntherefore,\n\\begin{equation}\n C(\\tau, M) = C_0(M) \\exp\\left(-\\frac{i M \\tau}{\\ell_{\\rm Planck}^2}\\right)\n\\end{equation}\nand $C_0(M)$ is an arbitrary function. This is analogous to the\nquantization that Kucha\\v{r} found where one had wavefunctions that\nonly depended on the mass. We have therefore completely solved the\ntheory. \n\n\n\n\n\\section{Diffeomorphism invariance of the model}\n\n\nWe start by pointing out that the quantization is straightforward,\nsince the only remaining canonical variables are $M$ and $\\tau$. These\nvariables have no dynamics. \nOne can immediately introduce an eigenbasis of the mass operator,\nlabeled by eigenvalues $m$, \n$\\hat{M} \\phi(m)=m \\phi(m)$ and the equations of motion at the boundary\nimply that the $\\phi(m)$ do not evolve. 
This completes the quantization.\n\nSince we have isolated the true degree of freedom of the model and\nquantized it, there are no remnants left of the diffeomorphism\ninvariance of space-time in any manifest way. To reconstruct\ndiffeomorphism invariance in an explicit form it is useful to\nintroduce evolving constants\\cite{evolving}. For instance, given that\nthe mass of the space-time can be written as a function of the\ncanonical variables $M=M(E^\\varphi,\\hat{A}_\\varphi)$, one can\nconstruct an evolving constant associated with the triad as\n$E^\\varphi_{\\rm Evolv}=E^\\varphi(M,A_\\varphi^{(0)})$ where\n$\\hat{A}_\\varphi^{(0)}$ is a parameter, as given by equation\n(\\ref{convenient}). Explicitly,\n\\begin{equation}\\label{polymerized}\nE^\\varphi_{\\rm Evolv} = \\pm \\frac{(x+2M)}{\n\\sqrt{1-\\frac{2M}{x+2M}+\\frac{1}{4\\gamma^2\\rho^2}\n\\sin^2\\left(\\rho A_{\\varphi}^{(0)}\\right)}}.\n\\end{equation}\n The quantity is such that if\none chooses ${A}_\\varphi^{(0)}={A}_\\varphi$ one recovers the\ndynamical variable $E^\\varphi$. The evolving constant is a Dirac\nobservable of the theory and therefore can be realized as an operator\nacting on the physical space of the theory. Notice that the choice \n$A_{\\varphi}^{(0)}=0$ corresponds to the ordinary form of the Schwarzschild\nmetric in Schwarzschild coordinates.\n\nThe four dimensional metric of the model can be written in terms of \n$E^\\varphi_{\\rm Evolv}$ and the parameter $A_\\varphi^{(0)}$,\nby determining the lapse and shift using the gauge fixing condition and\nsetting to zero the time derivatives of the variables.\nTherefore the components of the four dimensional metric can also be\nviewed as evolving constants. The explicit expressions\nare,\n\\begin{eqnarray}\ng_{00}^{\\rm Evolv}(M,A_\\varphi^{(0)})\n&=&-\\frac{x^2}{(E^\\varphi_{\\rm Evolv})^2}\n+\\frac{\\sin^2(\\rho A_\\varphi^{(0)})}{4\\rho^2\\gamma^2}\\\\\ng_{0x}^{\\rm Evolv}(M,A_\\varphi^{(0)})\n&=&\\frac{E^\\varphi_{\\rm Evolv} \\sin(\\rho A_\\varphi^{(0)})}{2\\rho \\gamma x}\\\\\ng_{xx}^{\\rm Evolv}(M,A_\\varphi^{(0)})\n&=&\\frac{(E^\\varphi_{\\rm Evolv})^2}{x^2}.\n\\end{eqnarray}\nIt is worthwhile pointing out that all of the above expressions are\nreadily promoted to quantum operators acting on the physical Hilbert\nspace simply substituting $M$ by $\\hat{M}$.\n\nThe above results hold in the quantum polymerized theory. It is worthwhile\ncomparing them with the results in classical general relativity. The\nexpressions for the components of the metric in\ntraditional general relativity are,\n\\begin{eqnarray}\ng_{00}&=&-\\frac{x^2}{(E^\\varphi)^2}+\\frac{A_\\varphi^2}{4\\gamma^2}\\\\\ng_{0x}&=&\\frac{E^\\varphi A_\\varphi}{2\\gamma x}\\\\\ng_{xx}&=&\\frac{(E^\\varphi)^2}{x^2}.\n\\end{eqnarray}\n\nIt is instructive to substitute the explicit expression of the triad,\n\\begin{equation}\nE^\\varphi = \\pm \\frac{(x+2M)}{\n\\sqrt{1-\\frac{2M}{x+2M}+\\frac{A^2_{\\varphi}}{4\\gamma^2}}}\n\\end{equation}\nDifferent choices of gauge correspond to different choices of $A_\\varphi$\nand these translate themselves in different coordinate choices for the \nfour-metric. 
\nThe explicit form of the four dimensional metric therefore is,\n\\begin{eqnarray}\ng_{00}&=& -1+\\frac{2M}{x+2M}\\label{21}\\\\\ng_{0x}&=&\\frac{A_\\varphi}{2\\gamma\\sqrt{1-\\frac{2M}{x+2M}+\\frac{A_\\varphi^2}{4\\gamma^2}}}\\\\\ng_{xx}&=& \\frac{1}{1-\\frac{2M}{x+2M}+\\frac{A_\\varphi^2}{4\\gamma^2}}.\n\\label{23}\n\\end{eqnarray}\n\nSince spatial diffeomorphisms have been gauge fixed the only\ndiffeomorphisms left are space-time ones, which modify the value of\n$g_{xx}$ and $g_{0x}$. If one starts in a gauge where\n$A_\\varphi=0$ with coordinates $t,x$, one can go to an arbitrary gauge\n$A_\\varphi(x)$ by choosing $x'=x$ and $t'=t-u(x)$ with\n\\begin{equation}\nu(x)= \\int_x^\\infty dx \n\\frac{A_\\varphi(x)}\n{2\\gamma x \\left(1-\\frac{2M}{x+2M}+\\frac{A_\\varphi^2}{4\\gamma^2}\\right)}.\n\\label{change}\n\\end{equation}\n\nLet us now compare with the quantum theory. In this example\nthings are so simple that we could actually talk about the full\nquantum theory itself, it would just correspond to replace the mass by\na quantum operator in the following expressions. Since the use of a\n``polymerized'' classical theory to capture the semiclassical\nbehaviors of the quantum theory is a technique used in a variety of\ncontexts, we frame the discussion in it. The expression for $g_{00}$\nis unchanged. The expressions for $g_{0x}$ and $g_{xx}$ become,\n\\begin{eqnarray}\ng_{00}&=& -1+\\frac{2M}{x+2M} \\label{25}\\\\\ng_{0x}&=&\\frac{\\sin(\\rho A_\\varphi)}{2\\rho\\gamma\\sqrt{1-\\frac{2M}{x+2M}+\\frac{\\sin\\left(\\rho A_\\varphi\\right)^2}{4\\rho^2\\gamma^2}}}\\\\\ng_{xx}&=& \\frac{1}{1-\\frac{2M}{x+2M}+\\frac{\\sin\\left(\\rho A_\\varphi\\right)^2}{4\\rho^2 \\gamma^2}}.\\label{27}\n\\end{eqnarray}\nWe therefore see that these expressions are particular cases of the\nones we found in the non-polymerized theory, in the sense that for every\nchoice of $A_\\varphi$ for the polymerized theory one can find a choice in\nclassical general relativity that leads to the same metric. The converse,\nhowever, is not true. The gauge transformations\nof the polymerized theory therefore correspond to diffeomorphisms,\njust like in classical general relativity. But not all of the\ndiffeomorphisms available in classical general relativity, at least\nfor finite values of $\\rho$, appear in the polymerized theory. It is\nclear that if one chooses a small value of $\\rho$ as suggested from\nfull loop quantum gravity, where it is associated with the quantum of\narea which is related to Planck's length, ``most'' diffeomorphisms\nwill be allowed in the polymerized theory, but there will be a subset\nthat is not. The reason for this limitation is that one does not\nexpect diffeomorphism invariance to allow us to ``blow up'' regions\nthat are smaller than Planck size to macroscopic sizes. That would\nimply that somehow we can probe space-time at sub-Planckian\nlengths. This does not appear reasonable. We can see that this is\nprecisely what happens in this example. Which diffeomorphisms are\nbeing excluded? Comparing (\\ref{21}-\\ref{23}) to (\\ref{25}-\\ref{27})\nwe see that when $A_\\varphi$ is large, this corresponds to $g_{xx}\\to\n0$ and $g_{0x}\\to 1$ in classical general relativity, whereas in the\npolymerized theory $g_{xx}$ reaches a minimum value. Let us recall\nthat we are talking about space-time diffeomorphisms here, that is,\nchanges of the space-time foliation (the radial coordinate is\nfixed). That means one is excluding foliations where radial distances\nbecome very small, i.e. 
foliations of large values of the extrinsic\ncurvature.\n\n\n\n\n\\section{Discussion}\n\nThe quantization of the exterior Schwarzschild space-time in loop\nquantum gravity can be carried out completely and it isolates the same\ntrue degrees of freedom as the quantization carried out by Kucha\\v{r}\nin terms of the traditional variables. One has that the only degree of\nfreedom is the mass of the space-time. Wavefunctions are functions of\nthe mass that do not evolve. In spite of this similarity, if one tries\nto reconstruct space-time diffeomorphisms in terms of evolving\nconstants that are Dirac observables on the physical Hilbert space,\none notices effects due to the ``polymerization'' introduced by the\nloop variables. In particular one notes that only a subset of\nspace-time diffeomorphisms get implemented. This corresponds physically\nto the fact that one cannot probe distances of sub-Planckian nature.\nThis provides a simple example in a controlled situation \nof behaviors that are widely believed\nto hold in the full theory.\n\nIt should be noted that in this paper we have taken the parameter\n$\\rho$ in the polymerized theory to be a constant. Current treatments\nin cosmology suggest that an improved dynamics may be achieved with\n$\\rho$ that depends on the dynamical variables\\cite{aspasi}. The\ndetails of the conclusions about the permissible diffeomorphisms will\nchange if one makes such a choice, though we expect the generic\nfeatures to remain the same.\n\nThe issue of how diffeomorphisms get implemented in the polymerized\ntheory becomes quite relevant when one considers the full Kruskal-like\nextension of the Schwarzschild space-time\\cite{complete}. There, at\nthe moment, there exists knowledge of a family of solutions. It is not\nclear if this family is unique or even if different members of the\nfamily correspond to different space-times. This raises the issue of\nthe existence of a Birkhoff theorem in loop quantum gravity. In the\nexterior case we have shown that the only quantum solutions can be\nsuperpositions of space-times with different mass, in spite of the\nfact that one does not implement in the semi-classical theory all the\ndiffeomorphisms present in the classical theory. In the interior case\nit is known that the solutions of the ``polymerized'' semi-classical\ntheory may depend on an extra parameter in addition to the mass. The\nquestion is still open if an analysis of diffeomorphism symmetry like\nwe carried out in this paper can yield a Birkhoff-like theorem in the\ncase of the complete space-time.\n\n\n\\section{Acknowledgements}\n\nWe wish to thank Abhay Ashtekar for discussions and comments on the\nmanuscript. 
This work was supported in part by grant NSF-PHY-0650715,\nfunds of the Hearne Institute for Theoretical Physics, FQXi, CCT-LSU\nand Pedeciba (Uruguay).\n\n \n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe textbook paradigm of quantum error correction (QEC) focuses on the case of \\emph{perfect} error correction \\cite{EM96, BDSW96}, where the code $\\mathcal{C}$ and the noise are such that there exists a recovery operation that completely removes the effects of the noise on the information stored in the code.\nMathematically, this idea is captured by a set of conditions for perfect error correction~\\cite{KnillLaflamme} that must be satisfied by the code as well as the noise process.\n\nThat such perfect QEC conditions can be satisfied tends to be special rather than generic.\nThe prototypical example is that of independent noise acting on a few physical qubits, and one finds codes that satisfy the perfect QEC conditions assuming that no more than $t$ of the qubits have errors.\nIn such a scenario, what is taken as the noise process $\\mathcal{E}$ in the QEC conditions is the part of the noise that describes $t$ or fewer errors, while the full physical noise process $\\mathcal{E}_0$ contains terms describing more than $t$ errors, albeit with a lower probability of occurrence.\nA code that satisfies the QEC conditions for $\\mathcal{E}$ will thus only satisfy the conditions \\emph{approximately} for the full noise process $\\mathcal{E}_0$.\nFurthermore, in practice, it is unrealistic to expect complete characterization of the noise process.\nThus, a code designed to satisfy the perfect QEC conditions for the \\emph{expected} noise process will typically only satisfy those conditions approximately for the true noise process.\nThis motivates the idea of \\emph{approximate} quantum error correction (AQEC), where the recovery operation removes most, but not necessarily all, the effects of noise on the information stored in the code.\n\nRecent studies on AQEC, using analytical \\cite{Leung,BK,Tyson,BenyOreshkov1,aqecPRA,Renes} and numerical~\\cite{Yamamoto,Reimpell,Kosut08,Fletcher07b} approaches, have discovered examples of approximate codes that allow for recovery of stored information with fidelity comparable to that of perfect QEC codes, while making use of fewer physical resources. These results suggest that the requirement for perfect recovery may be too stringent for certain tasks and approximate QEC may be more natural and practical.\n\nIn~\\cite{aqecPRA}, we demonstrated a universal, near-optimal recovery map---the \\emph{transpose channel}~\\cite{BK,petzbook}---for AQEC codes with a subspace structure, wherein information is stored in an entire subspace of the Hilbert space of the physical quantum system. Optimality was defined in terms of the worst-case recovery fidelity over all states in the code. Our analytical approach was a departure from earlier work relying on exhaustive numerical search for the optimal recovery map, with optimality based on entanglement fidelity~\\cite{Reimpell,FletcherThesis,Kosut08}. 
We obtained quantitative bounds showing the efficacy of the transpose channel as a universal and analytical recovery operation that works well, regardless of the noise process or the code used.\nThis allowed for complete characterization of approximate subspace codes, in terms of necessary and sufficient conditions for approximate correctability, and provided an easy route for constructing approximate subspace codes.\n\nIn this article, we extend our approach based on the transpose channel to the more general case of AQEC codes with a \\emph{subsystem} structure, where information is stored only in a subsystem of the code subspace.\nA subsystem code (sometimes referred to as an \\emph{operator} QEC code) has a bipartite tensor-product structure, where one subsystem $A$ (the \\emph{correctable subsystem}) is correctable under action of the noise, while the other subsystem $B$ (the \\emph{noisy subsystem}) can be disturbed by the noise beyond repair~\\cite{OQECC, OQEClong, NielsenPoulin07}.\nThe information to be protected against noise is stored only in subsystem $A$.\nSubspace codes can be viewed as special cases of subsystem codes with a trivial noisy subsystem.\nWhile this generalization does not lead to new families of codes, the alternate perspective does sometimes lead to more efficient decoding procedures~\\cite{Poulin05stabilizer,bacon06oqec}, and hence to better fault-tolerant schemes and improved bounds on the accuracy threshold~\\cite{aliferis07}.\nStarting with the Bacon-Shor codes~\\cite{bacon06oqec}---a family of subsystem codes arising from Shor's 9-qubit code---several examples of perfectly correctable stabilizer subsystem codes have been constructed~\\cite{aly06subsystemcodes}.\nHere, we address the general question of characterizing approximate subsystem codes, and explore the extent to which the transpose channel is useful towards understanding approximate codes.\n\nAfter a preliminary section setting down basic definitions and notations, we begin by proving a set of perfect QEC conditions (Section \\ref{sec:perfectOQEC}) that is completely equivalent to the standard well-known QEC conditions. This alternate set of conditions clarifies the role of the transpose channel in perfect QEC. Furthermore, it serves as a natural starting point for perturbation to a set of sufficient conditions for approximate QEC (Section~\\ref{sec:sufficiency}). We then proceed, in Section \\ref{sec:necessary}, to show the near-optimality of the transpose channel recovery map for AQEC for four classes of codes and noise processes. These four classes provide evidence towards our conjecture that the transpose channel is near-optimal for arbitrary subsystem codes and noise, which, if true, would establish a simple, analytical, and universal framework for the study of approximate codes. 
We conclude with a few suggestions for future work.\n\n\n\\section{Basic definitions}\\label{sec:definition}\n\nWe consider a decomposition of the Hilbert space of our quantum system,\n\\begin{equation}\\label{eq:decompH}\n\\mathcal{H}=\\mathcal{H}_A\\otimes \\mathcal{H}_B+\\mathcal{K}.\n\\end{equation}\nSuppose we wish to store information in the $\\mathcal{H}_A$ factor.\n$\\mathcal{H}_{AB}\\equiv\\mathcal{H}_A\\otimes \\mathcal{H}_B$ is the Hilbert space of a composite system comprising two subsystems $A$ and $B$ of dimensions $d_A$ and $d_B$ respectively.\nWe denote the projector onto $\\mathcal{H}_{AB}$ as $P$.\n$P$ can also be written as a tensor-product: $P=P_A\\otimes P_B$, where $P_{A(B)}$ is the projector onto $\\mathcal{H}_{A(B)}$.\nIn principle, subsystems $A$ and $B$ may only correspond to mathematical tensor-product factors in the decomposition, rather than ``natural\" separate physical degrees of freedom of the quantum system.\nIn practice, one might prefer to work with $A$ and $B$ that are natural degrees of freedom for easy experimental accessibility.\nAlso, it is often helpful to use a decomposition of $\\mathcal{H}$ that is not arbitrarily invented by the experimenter, but induced by the structure of the noise afflicting the quantum system, so as to identify a subsystem that best ensures survival of the stored information.\n\nInformation is stored as a choice between states of subsystem $A$.\nThe state on subsystem $B$ can be arbitrary and carries no information. More concretely, we make use of a \\emph{code} $\\mathcal{C}$, comprising all product states on $AB$,\n\\begin{equation}\\label{eq:code}\n\\mathcal{C}\\equiv\\{\\rho=\\rho_A\\otimes\\rho_B,~\\forall\\rho_A\\in\\mathcal{S}(\\mathcal{H}_A),\\rho_B\\in\\mathcal{S}(\\mathcal{H}_B)\\},\n\\end{equation}\nwhere $\\mathcal{S}(\\mathcal{H}_{A(B)})$ denotes the set of all states (density operators) on subsystem $A(B)$.\nThe information is stored only in subsystem $A$ in that two states $\\rho_A\\otimes \\tau_B$ and $\\rho_A\\otimes \\sigma_B$ differing only in the state of $B$ correspond to the same encoded information.\n\nWe wish to examine the longevity of the information stored in subsystem $A$ in the presence of noise.\nWe describe the noise by a quantum channel acting on $AB$, that is, a completely positive (CP), trace-preserving (TP) map $\\mathcal{E}:\\mathcal{B}(\\mathcal{H}_{AB})\\longrightarrow \\mathcal{B}(\\mathcal{P}_\\mathcal{E})$.\nHere, $\\mathcal{B}(\\mathcal{V})$ refers to the set of all bounded operators on a vector space $\\mathcal{V}$. 
$\\mathcal{P}_\\mathcal{E}$ is the support of $\\mathcal{E}(\\mathcal{B}(\\mathcal{H}_{AB}))$, or equivalently, the support of $\\mathcal{E}(P)$.\n$\\mathcal{E}$ can be specified by a set of Kraus operators $\\{E_i\\}_{i=1}^N$, so that $\\mathcal{E}$ acts as\n\\begin{equation}\n\\mathcal{E}(\\rho)=\\sum_{i=1}^NE_i\\rho E_i^\\dagger.\n\\end{equation}\nThat $\\mathcal{E}$ is TP translates into the statement $\\sum_{i=1}^NE_i^\\dagger E_i=P$.\nThe Kraus representation of a CPTP channel is non-unique: If $\\{E_i\\}$ is a Kraus representation of $\\mathcal{E}$, then $\\{F_j\\equiv \\sum_i u_{ij}E_i\\}$, for unitary $(u_{ij})$, is a Kraus representation of the same channel.\nA recovery operation $\\mathcal{R}:\\mathcal{B}(\\mathcal{P}_\\mathcal{E})\\longrightarrow \\mathcal{B}(\\mathcal{H}_{AB})$ performed\nafter each application of the noise $\\mathcal{E}$, to attempt to reverse the effects of the noise, is also described as a CPTP map.\n\nSince information is stored in subsystem $A$ only, we are concerned only with how well the noise preserves the information initially stored in $A$, while any state on $B$ can be distorted beyond repair by the noise.\nHeuristically, we say that a code $\\mathcal{C}$ is \\emph{approximately correctable} under noise $\\mathcal{E}$ if and only if there exists a CPTP recovery map $\\mathcal{R}$ such that\n\\begin{equation}\n\\text{tr}_B[(\\mathcal{R}\\circ\\mathcal{E})(\\rho)]\\simeq\\text{tr}_B(\\rho)\\quad\\forall\\rho\\in\\mathcal{C},\n\\end{equation}\nwhere $\\text{tr}_B(\\cdot)$ denotes the partial trace over subsystem $B$.\n\nThis heuristic notion is formalized by quantifying the deviation of the recovered state from the initial state in terms of the fidelity between the two states.\nThe fidelity between two states $\\rho$ and $\\sigma$ is $F(\\rho,\\sigma)\\equiv \\text{tr}\\sqrt{\\rho^{1\/2}\\sigma\\rho^{1\/2}}$. For the case of $\\rho$ being a pure state $\\psi\\equiv |\\psi\\rangle\\langle\\psi|$, $F$ can be written as\n\\begin{equation}\nF(|\\psi\\rangle,\\sigma)\\equiv \\sqrt{\\langle \\psi|\\sigma|\\psi\\rangle}.\n\\end{equation}\nWe define the \\emph{fidelity loss for state $\\rho$}, $\\eta_\\mathcal{R}\\{\\rho\\}$, under noise $\\mathcal{E}$ and recovery $\\mathcal{R}$, as the deviation from 1 of the square of the fidelity between the initial state and the recovered state, that is,\n\\begin{equation}\n\\eta_\\mathcal{R}\\{\\rho\\}\\equiv 1-F^2{\\big(\\text{tr}_B(\\rho),\\text{tr}_B[(\\mathcal{R}\\circ\\mathcal{E})(\\rho)]\\big)}.\n\\end{equation}\nThe performance of a recovery $\\mathcal{R}$ on a code $\\mathcal{C}$ is then characterized by the \\emph{fidelity loss for $\\mathcal{C}$} defined as\n\\begin{equation}\\label{eq:fidelityloss}\n\\eta_\\mathcal{R}\\{\\mathcal{C}\\}\\equiv \\max_{\\rho\\in\\mathcal{C}}\\eta_\\mathcal{R}\\{\\rho\\}.\n\\end{equation}\nHow well $\\mathcal{R}$ recovers the information initially stored in subsystem $A$ is hence gauged by the \\emph{worst-case fidelity} (over all states in the code) between the initial and recovered states. Because the fidelity is jointly concave in its arguments, the worst-case fidelity is always attained on a pure state on $AB$. The maximization in Eq.~\\eqref{eq:fidelityloss} can thus be restricted to pure states on $AB$ only. 
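To make the concavity argument explicit: writing $\\rho=\\rho_A\\otimes\\rho_B$ with $\\rho_A=\\sum_k p_k|\\psi_k\\rangle\\langle\\psi_k|$ and $\\rho_B=\\sum_l q_l|\\phi_l\\rangle\\langle\\phi_l|$, the linearity of $\\text{tr}_B\\circ\\mathcal{R}\\circ\\mathcal{E}$ and the joint concavity of the fidelity give\n\\begin{equation}\nF{\\big(\\text{tr}_B(\\rho),\\text{tr}_B[(\\mathcal{R}\\circ\\mathcal{E})(\\rho)]\\big)}\\geq \\sum_{k,l} p_k q_l\\, F{\\big(|\\psi_k\\rangle,\\text{tr}_B[(\\mathcal{R}\\circ\\mathcal{E})(|\\psi_k\\rangle\\langle\\psi_k|\\otimes|\\phi_l\\rangle\\langle\\phi_l|)]\\big)},\n\\end{equation}\nso the fidelity for any $\\rho\\in\\mathcal{C}$ is bounded from below by that of a pure product state $|\\psi_A,\\phi_B\\rangle$, and the worst case is attained on such a state.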
Often, when the meaning is clear from the context, we will drop the argument from $\\eta_\\mathcal{R}\\{\\mathcal{C}\\}$ and simply write $\\eta_\\mathcal{R}$.\n\nLet $\\cR_\\text{op}$ be the recovery map with the smallest fidelity loss among all possible recovery maps for code $\\mathcal{C}$, that is,\n\\begin{equation}\n\\eta_\\text{op}\\{\\mathcal{C}\\}\\equiv\\eta_{\\cR_\\text{op}}\\{\\mathcal{C}\\}=\\min_\\mathcal{R}\\eta_\\mathcal{R}\\{\\mathcal{C}\\}.\n\\end{equation}\nWe refer to $\\cR_\\text{op}$ as the \\emph{optimal recovery}, and $\\eta_\\text{op}$ as the \\emph{optimal fidelity loss}.\nAs is clear from the notation, whether or not a recovery map is optimal for a given noise process depends on the code in question.\n\nA code $\\mathcal{C}$ with $\\eta_\\text{op}=0$ under noise $\\mathcal{E}$ is said to be \\emph{perfectly correctable} on $A$ under $\\mathcal{E}$.\nIn general, we say that a code is \\emph{$\\epsilon$-correctable} on $A$ under noise $\\mathcal{E}$ if $\\epsilon\\geq\\eta_\\text{op}$, which means that it is possible to recover the information stored in $A$ with a fidelity no smaller than $\\sqrt{1-\\epsilon}$.\nApproximate \\emph{noiseless subsystems} are included within this framework by considering codes for which the identity map (the ``do nothing\" operation) is sufficient as an approximate recovery for the code.\n\nCentral to our analysis is a recovery map built from the noise channel and code, known as the \\emph{transpose channel} (see previous uses of this channel in Refs.~\\cite{petz2003,haydenpetz,BK,IPSPaper,IPSPaper2}).\nThe transpose channel corresponding to a channel $\\mathcal{E}$ and code $\\mathcal{C}$, denoted as $\\cR_P$, is defined in a manifestly representation-invariant way as\n\\begin{equation}\n\\cR_P\\equiv \\mathcal{P}_\\mathcal{C}\\circ\\mathcal{E}^\\dagger\\circ\\mathcal{N}.\n\\end{equation}\nHere, $\\mathcal{E}^\\dagger$ is the adjoint of $\\mathcal{E}$, that is, the channel with Kraus operators $\\{E_i^\\dagger\\}_{i=1}^N$ if $\\mathcal{E}$ has Kraus operators $\\{E_i\\}_{i=1}^N$. $\\mathcal{N}$ is a normalization map $\\mathcal{N}(\\cdot)\\equiv \\mathcal{E}(P)^{-1\/2}(\\cdot)\\mathcal{E}(P)^{-1\/2}$ (the inverse is taken on the support of $\\mathcal{E}(P)$). $\\mathcal{P}_\\mathcal{C}$ is the projection onto the support of $\\mathcal{C}$, $\\mathcal{P}_\\mathcal{C}(\\cdot)=P(\\cdot)P$. One can write $\\cR_P$ explicitly in terms of its Kraus operators $\\{R_i^P\\}_{i=1}^N$, where\n\\begin{equation}\nR_{i}^{P} \\equiv PE_{i}^\\dagger \\mathcal{E}(P)^{-1\/2}.\n\\end{equation}\n$\\cR_P$ is trace-preserving (TP) on $\\mathcal{P}_\\mathcal{E}$. The fidelity loss obtained using the transpose channel as the recovery is denoted by $\\eta_P$.\n\n\n\\section{Approximate QEC: sufficient conditions}\\label{sec:AQECSuff}\n\n\\subsection{Perfect QEC conditions}\\label{sec:perfectOQEC}\n\nWe begin with the case of perfect error correction, where, there exists a recovery map such that the fidelity of any state on $A$ after noise and recovery attains the maximal value of 1. Necessary and sufficient algebraic conditions for the existence of a perfectly correctable code for a given channel $\\mathcal{E}$ are expressed by the following theorem:\n\\begin{theorem}\\label{thm:equivcond}\nConsider a CPTP noise process $\\mathcal{E}$ with Kraus representation $\\{E_i\\}$ acting on $AB$, and a code $C$ on $\\mathcal{H}_{AB}$ as defined in Eq.~\\eqref{eq:code}. 
$\\mathcal{C}$ is perfectly correctable on $A$ under $\\mathcal{E}$ if and only if\n\\begin{equation}\\label{cond1}\nPE_i^\\dagger \\mathcal{E}(P)^{-1\/2}E_jP=P_A\\otimes B_{ij},\n\\end{equation}\nfor all $i,j$, and $B_{ij}\\in\\mathcal{B}(\\mathcal{H}_B)$.\n\\end{theorem}\n\\noindent The special case of Theorem~\\ref{thm:equivcond} for subspace codes appeared in \\cite{aqecPRA}, and the proof of this generalization (provided in Appendix \\ref{app:perfectCond}) follows a similar logic.\n\nAlgebraic conditions for perfect error correction for subsystem codes were originally discovered in \\cite{NielsenPoulin07, OQECC, OQEClong}, generalizing the well-known perfect QEC conditions for subspace codes \\cite{KnillLaflamme}.\nCompared to the original QEC conditions, our conditions given above differ only in the appearance of the $\\mathcal{E}(P)^{-1\/2}$ factor on the left-side of Eq.~\\eqref{cond1}. However, this alternate form of the conditions offers better intuition on the correctability of codes.\nObserve that the expression on the left-side of Eq.~\\eqref{cond1} is a Kraus operator $R_i^PE_j$ of the channel $\\cR_P\\circ\\mathcal{E}$, from which we can immediately conclude that $\\text{tr}_B\\{(\\cR_P\\circ\\mathcal{E})(\\rho)\\}=\\text{tr}_B(\\rho)$ for any $\\rho\\in\\mathcal{C}$, as is required for perfect correctability on $A$.\nTheorem \\ref{thm:equivcond} can thus be viewed as demonstrating correctability of codes by explicitly giving the recovery map---the transpose channel $\\cR_P$---needed to perfectly recover the state on subsystem $A$ after the action of the channel $\\mathcal{E}$.\n\n\n\n\\subsection{Sufficient AQEC conditions}\\label{sec:sufficiency}\n\nThe form of the QEC conditions given in Eq.~\\eqref{cond1} is particularly well-suited for perturbation to approximate QEC, as was previously pointed out for the special case of subspace codes in \\cite{aqecPRA}.\nTheorem \\ref{thm:equivcond} states that $\\cR_P\\circ\\mathcal{E}$ acts as the identity channel on subsystem $A$.\nPerturbing Eq.~\\eqref{cond1}, by adding to the right-side a small correction to $P_A\\otimes B_{ij}$, modifies this to the statement that $\\cR_P\\circ\\mathcal{E}$ acts \\emph{nearly} as the identity channel on subsystem $A$.\nThis provides a natural route to sufficient conditions for approximate subsystem codes:\nIf the perturbation to Eq.~\\eqref{cond1} is small enough, the code is $\\epsilon$-correctable on $A$ with small $\\epsilon$. What remains is to relate quantitatively the size of the perturbation to $\\epsilon$, which is the content of the following theorem:\n\n\\begin{theorem}\\label{thm:suff}\nConsider a CPTP noise channel $\\mathcal{E}$ with Kraus representation $\\{E_{i}\\}$ and a code $\\mathcal{C}$ on $\\mathcal{H}_{AB}$ as defined in Eq.~\\eqref{eq:code}. Suppose\n\\begin{equation}\\label{eq:AOQECcond}\nPE_{i}^\\dagger \\mathcal{E}(P)^{-1\/2} E_{j}P = P_{A}\\otimes B_{ij} + \\Delta_{ij},\n\\end{equation}\nfor all $i,j$, $B_{ij} \\in\\mathcal{B}(\\mathcal{H}_B)$, and $\\Delta_{ij}\\in\\mathcal{B}(\\mathcal{H}_{AB})$. 
Then, $\\mathcal{C}$ is $\\epsilon$-correctable on $A$ under $\\mathcal{E}$ for $\\epsilon\\geq\\eta_P$, where\n\\begin{align}\\label{eq:etaP}\n\\eta_P&\\equiv \\max_{|\\psi_A,\\phi_B\\rangle}\\langle \\phi_B|\\sum_{ij}{\\left[\\langle\\psi_A|\\Delta_{ij}^\\dagger\\Delta_{ij}|\\psi_A\\rangle\\right.}\\\\\n&\\hspace{2.65cm}{\\left.-\\langle\\psi_A|\\Delta_{ij}^\\dagger|\\psi_A\\rangle\\langle\\psi_A|\\Delta_{ij}|\\psi_A\\rangle\\right]}|\\phi_B\\rangle.\\nonumber\n\\end{align}\n\\end{theorem}\n\n\\begin{proof}\nThe TP condition on $\\cR_P\\circ\\mathcal{E}$ gives the relation $P=\\sum_{ij}[P_A\\otimes B_{ij}^\\dagger B_{ij}+\\Delta_{ij}^\\dagger\\Delta_{ij}+(P_A\\otimes B_{ij}^\\dagger)\\Delta_{ij}+\\Delta_{ij}^\\dagger(P_A\\otimes B_{ij})]$. Using this, direct computation gives\n\\begin{align}\n&\\quad F^2{\\left[|\\psi_A\\rangle,(\\text{tr}_B\\circ\\cR_P\\circ\\mathcal{E})(|\\psi_A,\\phi_B\\rangle\\langle\\psi_A,\\phi_B|)\\right]}\\\\\n&=1-\\langle\\psi_A,\\phi_B|\\sum_{ij}\\Delta_{ij}^\\dagger{\\left(P_A-|\\psi_A\\rangle\\langle\\psi_A|\\right)}\\otimes P_B\\Delta_{ij}|\\psi_A,\\phi_B\\rangle.\\nonumber\n\\end{align}\nThis yields the expression for $\\eta_P$ in Eq.~\\eqref{eq:etaP} upon recalling that the worst-case fidelity is attained on a pure state on $AB$.\n\\end{proof}\n\nWhile the bound $\\epsilon\\geq \\eta_P$ in Theorem~\\ref{thm:suff} is tight, the maximization over all pure product states on $AB$ in the expression for $\\eta_P$ may not be easy to evaluate.\nInstead, we can relax the bound and obtain a simpler (but weaker) sufficient condition,\n\\begin{corollary}\\label{cor:suff}\n$\\mathcal{C}$ is $\\epsilon$-correctable on $A$ under $\\mathcal{E}$ if\n\\begin{equation}\n\\epsilon\\geq \\Big\\Vert\\sum_{ij}\\Delta_{ij}^\\dagger\\Delta_{ij}\\Big\\Vert,\n\\end{equation}\nwhere $\\Vert\\cdot\\Vert$ is the operator norm.\n\\end{corollary}\n\n\\begin{proof}\nObserve that, for any pure product state $|\\psi_A,\\phi_B\\rangle$, the expression in Eq.~\\eqref{eq:etaP} to be maximized is bounded from above by $\\langle\\psi_A,\\phi_B|\\sum_{ij}\\Delta_{ij}^\\dagger\\Delta_{ij}|\\psi_A,\\phi_B\\rangle\\leq \\big\\Vert\\sum_{ij}\\Delta_{ij}^\\dagger\\Delta_{ij}\\big\\Vert$. 
This gives $\\eta_P\\leq \\big\\Vert\\sum_{ij}\\Delta_{ij}^\\dagger\\Delta_{ij}\\big\\Vert$, which immediately yields the corollary statement.\n\\end{proof}\n\\noindent Corollary \\ref{cor:suff} gives an easily checkable sufficient condition, which may be more useful than Theorem \\ref{thm:suff} in the search for approximate subsystem codes.\n\n\n\n\\section{Towards necessary AQEC conditions}\\label{sec:necessary}\n\nThe previous section discusses sufficient conditions for the existence of approximate QEC codes.\nThe next natural question to ask is: what about necessary conditions?\nFor the special case of subspace codes, we obtained necessary conditions by deriving a near-optimality bound for the transpose channel recovery \\cite{aqecPRA}.\nThe near-optimality result led to the conclusion that every approximately correctable subspace code must also be well-corrected by the transpose channel.\nThis relation to the transpose channel gave rise to necessary conditions for the existence of approximate subspace codes of a form similar to the sufficient conditions.\n\nExtending the near-optimality bound to arbitrary subsystem codes proved difficult.\nNevertheless, as is described in this section, we can show near-optimality of the transpose channel for restricted classes of subsystem codes and noise processes.\nMore specifically, we consider four scenarios: (A) subspace codes, with trivial subsystem $B$ (a review of results from \\cite{aqecPRA}); (B) code states with the maximally mixed state on $B$; (C) subsystem $B$ is perfectly correctable; and (D) the noise $\\mathcal{E}$ destroys information on $B$.\nIn each scenario, the transpose channel works nearly as well as the optimal recovery operation, which leads to necessary conditions on the noise process as well as the code, \\emph{provided} the restrictions are satisfied\n\nIn the broader picture of arbitrary subsystem codes and noise processes, we believe that the transpose channel still works well whenever the code is approximately correctable.\nAfter all, in the case of perfect QEC, the transpose channel is \\emph{the} recovery operation for perfect recovery.\nHowever, the general near-optimality of the transpose channel is only a conjecture at this point.\nHere, we seek only to provide evidence towards the conjecture, and leave the proof (or disproof) to future work.\n\n\n\\subsection{Trivial subsystem $B$: subspace codes}\\label{sec:subspace}\n\nIn \\cite{aqecPRA}, the transpose channel was shown to be near-optimal for subspace codes $\\mathcal{C}$, that is, its fidelity loss for code $\\mathcal{C}$ under noise $\\mathcal{E}$ is close to the optimal fidelity loss.\nFor completeness, we repeat here the quantitative statement of the near-optimality of the transpose channel, adapted to the language suited for this paper:\n\\begin{theorem}[Corollary 4 of \\cite{aqecPRA}]\\label{thm:subspace}\nConsider a subspace code $\\mathcal{C}$ ($B$ is trivial), with $d_A$ denoting the dimension of $\\mathcal{H}_A$, and optimal fidelity loss $\\eta_\\text{op}$ under CPTP noise channel $\\mathcal{E}$. 
The fidelity loss $\\eta_P$ for the transpose channel satisfies\n\\begin{equation}\\label{eq:ineqEta}\n\\eta_\\text{op}\\leq \\eta_P\\leq\\eta_\\text{op} f(\\eta_\\text{op};d_A),\n\\end{equation}\nwhere $f(\\eta;d)$ is the function\n\\begin{equation}\\label{eq:f}\nf(\\eta;d)\\equiv \\frac{(d+1)-\\eta}{1+(d-1)\\eta}=(d+1)+O(\\eta).\n\\end{equation}\n\\end{theorem}\n\\noindent The left inequality $\\eta_\\text{op}\\leq \\eta_P$ of Eq.~\\eqref{eq:ineqEta} is true simply by definition of $\\eta_\\text{op}$. The proof of the right inequality $ \\eta_P\\leq\\eta_\\text{op} f(\\eta_\\text{op};d_A)$ requires the following inequality (derived in \\cite{aqecPRA}) which holds for any pure state $\\psi_A\\equiv |\\psi_A\\rangle\\langle\\psi_A|$ in a subspace code $\\mathcal{C}$,\n\\begin{equation}\\label{eq:ineqEta2}\n1-\\eta_\\text{op}\\{\\psi_A\\}\\leq \\sqrt{[1+(d_A-1)\\eta_\\text{op}\\{\\mathcal{C}\\}][1-\\eta_P\\{\\psi_A\\}]}.\n\\end{equation}\nInverting Eq.~\\eqref{eq:ineqEta2} and recalling the definitions of $\\eta_\\text{op}$ and $\\eta_P$ as the maximization of $\\eta_{(\\cdot)}\\{\\psi_A\\}$ over all states in the code yields the right inequality of Eq.~\\eqref{eq:ineqEta}.\n\nEquation~\\eqref{eq:ineqEta} implies that an approximately correctable subspace code must necessarily be such that the fidelity loss for the transpose channel is small.\nA small fidelity loss for the transpose channel in turn requires that $\\mathcal{E}$ has Kraus operators that satisfy Eq.~\\eqref{eq:AOQECcond} with $\\Delta_{ij}$ small.\nEquation.~\\eqref{eq:AOQECcond} with $\\Delta_{ij}$ small is thus not only sufficient (as shown in Sec.~\\ref{sec:sufficiency}), but also necessary for subspace codes.\n\nAn obvious extension of the current case to subsystem codes is one where $\\mathcal{E}$ is a product channel, that is, $\\mathcal{E}(\\rho_A\\otimes\\rho_B)=\\mathcal{F}_A(\\rho_A)\\otimes \\mathcal{F}_B(\\rho_B)$, for CPTP (on their respective domains) channels $\\mathcal{F}_A$ and $\\mathcal{F}_B$.\nFor such an $\\mathcal{E}$, the transpose channel is also a product channel, namely, the product of the respective transpose channels of $\\mathcal{F}_A$ and $\\mathcal{F}_B$.\nSince there is no flow of information between $A$ and $B$, whether subsystem $A$ is correctable relies only on the properties of $\\mathcal{F}_A$.\nWe can thus treat this case as if we have a subspace code on $A$ under noise $\\mathcal{F}_A$, for which the transpose channel is immediately near-optimal from Theorem~\\ref{thm:subspace}.\n\n\n\\subsection{Maximally mixed state on subsystem $B$}\\label{sec:maxmixed}\n\nConsider the subset of code states where $B$ is in the maximally mixed state,\n\\begin{equation}\\label{eq:C0}\n\\mathcal{C}_0\\equiv\\left\\{\\rho_A\\otimes \\frac{P_B}{d_B},\\quad\\forall\\rho_A\\in\\mathcal{S}(\\mathcal{H}_A)\\right\\}\\subset\\mathcal{C}.\n\\end{equation}\nFor states in $\\mathcal{C}_0$, the action of the noise channel $\\mathcal{E}$ can be written as\n\\begin{equation}\n\\mathcal{E}{\\left(\\rho_A\\otimes \\frac{P_B}{d_B}\\right)}=\\sum_{is}\\bar E_{is}\\rho_A \\bar E_{is}^\\dagger\\equiv \\bar\\mathcal{E}_A(\\rho_A).\n\\end{equation}\n$\\bar\\mathcal{E}_A$ is a CPTP channel on $A$ with Kraus operators $\\{\\bar E_{is}\\equiv (1\/\\sqrt{d_B})E_i|s_B\\rangle\\}$, where $\\{|s_B\\rangle\\}_{s=1}^{d_B}$ is an orthonormal basis for $\\mathcal{H}_B$.\nLet us, for a moment, forget about subsystem $B$ and ask about correctability of $\\mathcal{C}_0$---now viewed as a subspace code on $A$---under the noise 
$\\bar\\mathcal{E}_A$.\nTheorem~\\ref{thm:subspace} applies and ensures that the transpose channel corresponding to noise $\\bar\\mathcal{E}_A$ and code $\\mathcal{C}_{0}$, denoted as $\\mathcal{R}_{A,P}$, has fidelity loss close to that of the optimal recovery $\\mathcal{R}_{A,\\textrm{op}}$,\n\\begin{equation}\\label{eq:maxmixed}\n\\eta_P\\{\\mathcal{C}_0\\}\\leq \\eta_\\text{op}\\{\\mathcal{C}_0\\} f(\\eta_\\text{op}\\{\\mathcal{C}_0\\};d_A),\n\\end{equation}\nfor $f(\\eta;d)$ defined in Eq.~\\eqref{eq:f}.\n\nSuch a code $\\mathcal{C}_0$ is of practical relevance whenever one lacks control over subsystem $B$. Full control over subsystem $A$ alone is sufficient to guarantee preparation of a product code state, while rapid and complete decoherence (for example) causes the state on $B$ to quickly approach a random state well-described by the maximally mixed state. Equation \\eqref{eq:maxmixed} reassures us that, in this case, the transpose channel still works well as a recovery map.\n\nWe are, however, more interested in the performance of the transpose channel on the original subsystem code $\\mathcal{C}$ where the state on $B$ is unrestricted. After all, the freedom to choose the state of $B$, without incurring adverse effects on the information-carrying capability of the code, is the essence of a subsystem code.\nObserve that, for any state in $\\mathcal{C}_0$, the actions of $(\\text{tr}_B\\circ\\cR_P\\circ\\mathcal{E})$ (as usual, $\\cR_P$ is the transpose channel for $\\mathcal{E}$ on code $\\mathcal{C}$) and $(\\mathcal{R}_{A,P}\\circ\\bar\\mathcal{E}_A)$ are identical.\nThe optimal recovery for $\\mathcal{C}_0$, however, need not be the same map as the optimal recovery for $\\mathcal{C}$---the optimal recovery for $\\mathcal{C}$ has to work well for \\emph{all} states in $\\mathcal{C}$, not just those in $\\mathcal{C}_0$.\nHowever, since $\\mathcal{C}_0\\subset\\mathcal{C}$, we have the following inequality,\n\\begin{equation}\\label{eq:C0C}\n\\eta_\\text{op}\\{\\mathcal{C}_0\\}\\leq \\eta_\\text{op}\\{\\mathcal{C}\\}.\n\\end{equation}\nFurthermore, since $\\eta f(\\eta;d)$ is a monotonically increasing function of $\\eta$, we can combine \\eqref{eq:maxmixed} and \\eqref{eq:C0C} to obtain the following corollary,\n\\begin{corollary}\\label{cor:maxmixed}\nFor the subset of code states $\\mathcal{C}_0 \\subset \\mathcal{C}$,\n\\begin{equation}\n\\eta_P\\{\\mathcal{C}_0\\}\\leq \\eta_\\text{op}\\{\\mathcal{C}\\}f(\\eta_\\text{op}\\{\\mathcal{C}\\};d_A).\\label{eq:maxmixed2}\n\\end{equation}\n\\end{corollary}\n\\noindent This says that for the code states where $B$ is in the maximally mixed state---which can be viewed as the ``average state\" for the full degree of freedom described by $B$---the transpose channel works nearly as well as the optimal recovery operation for $\\mathcal{C}$.\n\nAs an aside, we note that a ``state-dependent\" transpose channel is near-optimal for codes where $B$ is always prepared in some \\emph{known} state.\nCode $\\mathcal{C}_0$ is a special case of this, but now $B$ can be in a fixed state other than the maximally mixed state.\nFor example, the rapid decoherence process of subsystem $B$ may have a fixed point that is not the maximally mixed state (for instance, the ground state, if the noise is dissipative), so that any initial preparation of the $B$ state quickly relaxes into this fixed state.\nSuch codes should properly be viewed as (isomorphic to) subspace codes on subsystem $A$.\nSince the identity of the state on $B$ is known, the optimal recovery 
map for this code must make use of this knowledge.\nLikewise, the associated transpose channel, in order to work well, must also depend on the fixed state on $B$.\nUsing similar techniques as above, one can show\n\\begin{equation}\n\\eta_{P_{\\phi_B}}\\{\\mathcal{C}_{\\phi_B}\\}\\leq \\eta_\\text{op}\\{\\mathcal{C}_{\\phi_B}\\} f(\\eta_\\text{op}\\{\\mathcal{C}_{\\phi_B}\\};d_A),\n\\end{equation}\nwhere $\\mathcal{C}_{\\phi_B}$ is the set of code states with $\\phi_B$ as the fixed state on $B$, and $\\eta_{P_{\\phi_B}}$ refers to the fidelity loss of the state-dependent transpose channel with Kraus operators $ \\{ (P_A \\otimes \\sqrt{\\phi_{B}})E_{i}^\\dagger \\left[\\mathcal{E}(P_A\\otimes\\phi_B)\\right]^{-1\/2}\\}$. This is similar to previously known results from Ref.~\\cite{BK}, derived in the context of entanglement fidelity for reversing dynamics on a given input state.\n\n\n\\subsection{$B$ is perfectly correctable}\\label{sec:Bcorr}\nSuppose subsystem $B$ is in fact perfectly correctable, but we choose to use subsystem $A$ to store the information.\nA simple and often-encountered example is where the noise on $B$ is describable by Kraus operators that are products of Pauli operators.\nMore generally, any noise process that satisfies the perfect QEC conditions for $B$ falls under our current considerations.\nDespite the perfect correctability on $B$, one might still choose to store information in $A$, for example, when $B$ is experimentally inaccessible or uncontrollable, or if $A$ is a much larger system with greater storage capacity than $B$.\nThe transpose channel is again near-optimal in this case.\n\nWe demonstrate this near-optimality by first showing that, for $B$ perfectly correctable, the fidelity for a pure initial state on subsystem $A$ from using the transpose channel as recovery is independent of the initial state of subsystem $B$.\n\\begin{lemma}\\label{lem}\nIf subsystem $B$ is perfectly correctable under noise $\\mathcal{E}$, then $F{\\left[|\\psi\\rangle_A,(\\text{tr}_B\\circ\\cR_P\\circ\\mathcal{E})(\\psi_A\\otimes\\rho_B)\\right]}$, where $\\psi_A\\equiv |\\psi_A\\rangle\\langle\\psi_A|$, is independent of $\\rho_B$.\n\\end{lemma}\n\\begin{proof}\n$B$ perfectly correctable under noise $\\mathcal{E}$ implies that the perfect QEC conditions (Eq.~\\eqref{cond1} of Theorem~\\ref{thm:equivcond} with the roles of $A$ and $B$ interchanged) hold: There exists operators $A_{ij}$ on $A$ for all $i,j$ such that $PE_i^\\dagger \\mathcal{E}(P)^{-1\/2}E_j P=A_{ij}\\otimes P_B$. From this, we have $F^2\\left[|\\psi_A\\rangle,\\left(\\text{tr}_B\\circ\\cR_P\\circ\\mathcal{E}\\right)(\\psi_A\\otimes\\rho_B)\\right]=\\sum_{ij}|\\langle\\psi_A|A_{ij}|\\psi_A\\rangle|^2$,\nwhich is independent of $\\rho_B$.\n\\end{proof}\n\nLemma~\\ref{lem} implies the following sequence of relations:\n\\begin{align}\n\\eta_P\\{\\mathcal{C}\\}=\\max_{\\rho\\in\\mathcal{C}}\\eta_P\\{\\rho\\}&=\\max_{\\rho=\\psi_A\\otimes\\rho_B}\\eta_P\\{\\rho\\}\\nonumber\\\\\n&=\\max_{\\psi_A}\\eta_P\\{\\psi_A\\otimes P_B\/d_B\\}\\nonumber\\\\\n&=\\eta_P\\{\\mathcal{C}_0\\}\\nonumber\\\\\n&\\leq \\eta_\\text{op}\\{\\mathcal{C}\\}f(\\eta_\\text{op}\\{\\mathcal{C}\\};d_A).\\label{eq:Bperf}\n\\end{align}\nThe second equality in the first line of Eq.~\\eqref{eq:Bperf} follows from the concavity of the fidelity, with $\\psi_A$ denoting a pure state. 
The second line makes use of Lemma~\\ref{lem}, and the last inequality is just Eq.~\\eqref{eq:maxmixed2}.\nEquation~\\eqref{eq:Bperf} gives exactly the right inequality in Eq.~\\eqref{eq:ineqEta} applied to the current scenario, from which we draw the conclusion that the transpose channel is near-optimal on $A$ under channel $\\mathcal{E}$ when $B$ is perfectly correctable.\n\n\\subsection{$\\mathcal{E}$ destroys distinguishability on $B$}\\label{sec:arbitrarystate}\nSuppose the noise process $\\mathcal{E}$ satisfies the following condition:\n\\begin{condition}\\label{condTr}\nFor CPTP $\\mathcal{E}$, suppose there exists $\\delta\\geq 0$ such that\n\\begin{equation}\n\\left\\Vert \\mathcal{E}(\\rho_A\\otimes \\rho_B)-\\mathcal{E}{\\left(\\rho_A\\otimes \\frac{P_B}{d_B}\\right)}\\right\\Vert_\\text{tr}\\leq \\delta{\\left\\Vert\\rho_B-\\frac{P_B}{d_B}\\right\\Vert}_\\text{tr}\n\\end{equation}\nfor all states $\\rho_A \\in \\mathcal{S}(\\mathcal{H}_A)$ and $\\rho_B \\in \\mathcal{S}(\\mathcal{H}_B)$. $\\Vert O\\Vert_\\text{tr}$ denotes the trace norm of $O$ given by $\\text{tr}|O|$.\n\\end{condition}\n\\noindent If $\\delta\\ll 1$, any two states on $\\mathcal{H}_B$ become, after the action of $\\mathcal{E}$, close to each other and nearly indistinguishable, as quantified by the trace norm used in Condition~\\ref{condTr}.\nA simple example is a product channel $\\mathcal{E}=\\mathcal{E}_A\\otimes \\mathcal{E}_B$ where $\\mathcal{E}_B$ maps all states on $B$ to some fixed state $\\tau_B$. In this case, $\\delta$ can be chosen to be zero (this is made explicit at the end of this subsection).\nFor convenience of the subsequent analysis, Condition~\\ref{condTr} compares the output of $\\mathcal{E}$ for an arbitrary state on $B$ with its output for the maximally mixed state $P_B\/d_B$; one is free to choose a different reference state on $B$ if desired.\n\nFor channels and codes satisfying Condition~\\ref{condTr}, the transpose channel also works well as a recovery channel, as encapsulated in the following corollary:\n\\begin{corollary}\\label{thm:AbStateFidelity}\nGiven that Condition~\\ref{condTr} is satisfied, for a subsystem code $\\mathcal{C}$,\n\\begin{align}\n\\eta_P&\\leq (d_A+1)\\eta_\\text{op}+3\\delta+O(\\delta^2,\\eta_\\text{op}^2,\\eta_\\text{op}\\delta).\n\\end{align}\n\\end{corollary}\n\\noindent The proof of this corollary is detailed in Appendix~\\ref{app}. The idea behind the proof is to first show that the transpose channel works well as a recovery for the information stored in $A$ when $B$ is initially in the maximally mixed state. Since Condition~\\ref{condTr} says that $\\mathcal{E}$ brings code states with different states on $B$ close together, if the transpose channel works well as a recovery when $B$ is initially in the maximally mixed state, it will also work well when $B$ is initially in a different state.\n\nCorollary~\\ref{thm:AbStateFidelity}, like similar statements before, tells us that the fidelity loss for the transpose channel is not much worse than that of the optimal recovery. The additional fidelity loss incurred by using the simpler transpose channel rather than the optimal recovery is governed by $d_A$ as well as by the parameter $\\delta$, which characterizes how badly $\\mathcal{E}$ destroys distinguishability between states on subsystem $B$.
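\n\nAs a small addition not found in the original argument, the product-channel example above can be checked explicitly. If $\\mathcal{E}=\\mathcal{E}_A\\otimes\\mathcal{E}_B$ with $\\mathcal{E}_B(\\rho_B)=\\tau_B$ for every input $\\rho_B$, then\n\\begin{equation*}\n\\mathcal{E}(\\rho_A\\otimes\\rho_B)=\\mathcal{E}_A(\\rho_A)\\otimes\\tau_B=\\mathcal{E}{\\left(\\rho_A\\otimes\\frac{P_B}{d_B}\\right)}\n\\quad\\Longrightarrow\\quad\n\\left\\Vert \\mathcal{E}(\\rho_A\\otimes\\rho_B)-\\mathcal{E}{\\left(\\rho_A\\otimes\\frac{P_B}{d_B}\\right)}\\right\\Vert_\\text{tr}=0,\n\\end{equation*}\nso the left-hand side of the inequality in Condition~\\ref{condTr} vanishes for all $\\rho_A$ and $\\rho_B$, and the condition indeed holds with $\\delta=0$.\n\n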
\\section{Conclusion}\\label{sec:concl}\n\nWe studied the role of the transpose channel in approximate quantum error correction.\nWe first obtained a set of conditions for perfect subsystem error correction that explicitly involves the transpose channel. This completes, in a particularly intuitive way, our understanding of why certain channels admit perfectly correctable codes. Our perfect QEC conditions naturally lead to sufficient conditions for approximate QEC, where the resilience to noise of the information stored in the code is quantified in a simple way. We also demonstrated that the transpose channel works nearly as well as any other recovery channel for four different scenarios of codes and noise. In all these cases, the near-optimality of the transpose channel relies only on $d_A$, the dimension of the information-carrying subsystem $A$, and not on $d_B$, the dimension of the noisy subsystem that carries no information.\n\nUsing our transpose channel approach to derive necessary conditions for approximate QEC for general subsystem codes will provide the final missing link in our unifying and analytical framework for understanding approximate quantum error correction.\nEven disproving our conjecture that the transpose channel is a universally good recovery operation for approximate codes will be a useful step forward.\nIn that case, the question becomes whether a different recovery map can serve as a universal recovery.\n\nAnother possible extension is to consider codes that include not just product states on $AB$ in $\\mathcal{C}$ (as we have done), but also correlated states.\nOnce there is correlation between $A$ and $B$, it is, of course, no longer clear where the information initially resides.\nIf one has complete control over the preparation of the initial code states, it would be simpler to make use of the subsystem structure and confine the information to only one subsystem.\nPractically, however, experimental restrictions may result in an initial (possibly small) correlation between $A$ and $B$, leading to a different notion of ``approximateness'' or imperfection in the code.\nSuch a situation was previously studied in \\cite{Shabani05} in the context of perfectly noiseless subsystems that require no careful initialization.\nOne can ask similar questions for approximate subsystem codes.\n\nA separate future direction is to perform the transpose channel recovery on experimental implementations of approximate codes. The transpose channel, like any CPTP map, can be implemented physically using operations on an extended Hilbert space. The more pertinent and fruitful question, however, will be to discover simple and efficient ways of implementing the transpose channel on a specific physical system of our choice.\n\n\\section{Acknowledgments}\nP.M. would like to thank David Poulin, John Preskill and Todd Brun for useful discussions. 
H.K.N is supported by the National Research Foundation and the Ministry of Education, Singapore.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter*{Certificate}\n\\vspace{-33pt}\n\\par This is to certify that the thesis titled \\textit{An investigation on the nonclassical\nand quantum phase properties of a family of engineered quantum states}, submitted by \\textit{Priya Malpani (P14EN001)} to the Indian Institute of Technology Jodhpur for the award of the degree of \\textit{Doctor of Philosophy}, is a bonafide record of the research work done by her under my supervision. To the best of my knowledge, the contents of this report, in full or in parts, have not been submitted to any other Institute or University for the award of any degree or diploma.\n\\vspace{22pt}\n\\begin{center}\n\\flushright \n\\normalfont\\sffamily\\itshape{Dr. V. Narayanan} \\\\\nPh.D.Thesis Supervisor\n\\end{center}\n\n\\begin{center}\n\\chapter*{Acronyms}\n\\end{center}\n\n\\vspace{-33pt}\n~\n\\begin{center}\n\t~%\n\t\\begin{tabular}{ll}\n\t\tBS & Binomial state\\tabularnewline\n\t\tDFS & Displaced Fock state\\tabularnewline\n\t\tECS & Even coherent state\\tabularnewline\n\t\tHOA & Higher-order antibunching\\tabularnewline\n\t\tHOSPS & Higher-order sub-Poissionian photon statistics\\tabularnewline\n\t\tHOS & Higher-order squeezing\\tabularnewline\n\t\tKS & Kerr state\\tabularnewline\n\t\tLE & Linear entropy\\tabularnewline\n\t\tPADFS & Photon added displaced Fock state\\tabularnewline\n\t\tPSDFS & Photon subtracted displaced Fock state\\tabularnewline\n\t\tPABS & Photon added binomial state\\tabularnewline\n\t\tPAECS & Photon added even coherent state\\tabularnewline\n\t\tPAKS & Photon added Kerr state\\tabularnewline\n\t\tPASDFS & Photon added then subtracted displaced Fock state\\tabularnewline\n\t\tVFBS & Vacuum filtered binomial state \\tabularnewline\n\t\tVFECS & Vacuum filtered even coherent state\\tabularnewline\n\t\tVFKS & Vacuum filtered Kerr state\\tabularnewline\n\t\\end{tabular}\n\t\\par\\end{center}\n\n\n\n\\chapter*{Declaration}\n\\vspace{-33pt}\n\\par I hereby declare that the work presented in this thesis entitled \\textit{An investigation on the nonclassical\nand quantum phase properties of a family of engineered quantum states} submitted to the Indian Institute of Technology Jodhpur in partial fulfillment of the requirements for the award of the degree of Doctor of Philosophy, is a bonafide record of the research work carried out under the supervision of Dr. V. Narayanan. The contents of this thesis in full or in parts, have not been submitted to, and will not be submitted by me to, any other Institute or University in India or abroad for the award of any degree or diploma.\n\\vspace{22pt}\n\\begin{center}\n\\flushright \n\\normalfont\\sffamily\\itshape{Priya Malpani} \\\\\nP14EN001\n\\end{center}\n\n\\chapter*{Acknowledgment}\n\\addcontentsline{toc}{chapter}{Acknowledgment}\n\t\n\\small\nIt is my great pleasure to express my gratitude to the person (Prof.\nAnirban Pathak) without whom my Ph.D. journey was impossible. He has\nnot only guided me in my thesis but also inspired my life with his\nvaluable suggestions and magical words \\textquotedblleft Tension mat\nlo..sab ho jayega..\\textquotedblright ., I was in the dark with no\nhope then Pathak Sir was candle of hope. Thank you for returning to\nme faith in myself. 
Nothing better than these words can express my\nfeelings \\textquotedblleft koi mujhko yun mila hai..jaise banjaare\nko ghar..\\textquotedblright .\n\nSpecial Thanks to my supervisor Dr. V. Narayanan. Thank you sir for\nall the support and being one of the best human being in research\nfield. I am grateful to Dr. Subhashish Banerjee for his helpful discussion\nand motivation. I am thankful to ample number of people involved in\nmy research activity. I am deeply indebtied to Dr. Kishore Thapliyal\n(Google Scholar- apko jitna bhi thanku bolu bahut kam hi hoga!! Thankyou\nSir for ignoring my silly mistakes, your continuous support, motivation\nand everlasting inspiration) and Dr. Nasir Alam (Thank you Sir for\nall valuable help), who had always supported me and I feel blessed\nhaving seniors like you both. I acknowledge my colleagues, Sanjoy\nChatterjee (Dada, the best lab-mate), Javid Naikoo (Best Advisor \\dots without whom the journey of Ph.D. was next to impossible), Khushboo Dixit,\nVandana Dahiya (Cutest friend-Jatni\\dots yaara teri yari ko maine\nto khuda mana.. thanku for all the support, without your help survival\nin IITJ was not possible), Shilpa Pandey (Best roommate), Mitali Sisodia\n(thank you for listening my endless drama), Swarn Rajpoot (naam hi\nkafi hai), Satish Sangwan (Tau- the best buddy), Vishwadeepak Kumar\n(Rockstar), Ashwin Saxena, and other research scholars, teaching and\nnon-teaching staff in IITJ department, who had become part of my life\nduring this period of time. Now, I would like to thank everyone who\nhad played a prominent role in my life. Special thanks to my parents\nfor their unconditional love and support, without your support it\nwas not possible for me to reach this level. I would like to thank\nmy grandpa (Baba). I will thank my bua (Sheelam Bua) for her continuous\nmoral support. A big thanks to my lovely Sadi, Vaibhav (no words can\nexpress my thank you to you bro. Thank you for handling all my moods\nwith extreme patience), Meghai (Friendster), Doll (my soul sister),\nLabbu (blessed to have you in my life bro), Yashivangi (strong pillers\nof my life) and a big thanks to someone who has completely changed\nmy way of thinking. I would also like to acknowledge the financial\nhelps I received in different phases of my research work from IITJ,\nJIIT and MHRD without which this work would not have been possible. \n\\small\n\t\n\t\\vspace{22pt}\n\t\\begin{center}\n\t\t\\flushright \n\t\t\\normalfont\\sffamily\\itshape{Priya Malpani.}\n\t\\end{center}\n\n\\chapter*{Abstract}\nThe main focus of this thesis is to study the nonclassical\nand phase properties of a family of engineered quantum states, most\nof which show various nonclassical features. The beauty of these states\nis that these states can be used to establish quantum supremacy. Earlier,\na considerable amount of works has been reported on various types\nof quantum states and their nonclassical properties. Here, complementing\nthe earlier works, the effect of non-Gaussianity inducing operators\non the nonclassical and phase properties of displaced Fock states\nhave been studied. This thesis includes 6 chapters. In Chapter \\ref{cha:Introduction1},\nmotivation behind performing the present work is stated explicitly,\nalso the basic concepts of quantum optics are discussed with a specific\nattention on the witnesses and measures of nonclassicality. 
In Chapter \\ref{cha:PADFS-PSDFS}, nonclassical properties of photon added and subtracted displaced Fock states have been studied using various witnesses of lower- and higher-order nonclassicality which are introduced in Chapter 1. In Chapter \\ref{cha:phase}, we have continued our investigation on photon added and subtracted displaced Fock states (and their limiting cases). In this chapter, quantum phase properties of these states are investigated from a number of perspectives, and it is shown that the quantum phase properties are dependent on the quantum state engineering operations performed. In Chapter \\ref{cha:PASDFS}, we have continued our investigation on the impact of non-Gaussianity inducing operators on the nonclassical and phase properties of the displaced Fock states.\nIn Chapter \\ref{cha:QSE-1}, we have performed a comparison between two processes that are used in quantum state engineering to induce nonclassical features. Finally, this thesis is concluded in Chapter \\ref{cha:Conclusions-and-Scope}, where we have summarized the findings of this thesis and have also described the scope of future work.\n\\addcontentsline{toc}{chapter}{Abstract}\n\n\t\\vspace{500cm}\n\t\\begin{center}\n\t\\bfseries{\\itshape{}}\n\t\\end{center}\n\t{\\bfseries{\\itshape{Dedicated to my parents for their unconditional love and support.}}}\n\t\\tableofcontents\n\t\\listoffigures\n\t\\listoftables\n\n\\newpage\n\n\\mainmatter\n\n\\pagenumbering{arabic}\\setcounter{page}{1}\n\n\\chapter{Introduction\\textsc{\\label{cha:Introduction1}}}\n\n\\section{Introduction}\n\nAs the title of the thesis suggests, in this thesis, we aim to study the nonclassical and phase properties of a family of engineered quantum states. Before we introduce such states and properties, it would be apt to lucidly introduce the notion of nonclassical and engineered quantum states. By a nonclassical state we refer to a quantum state having no classical analogue. Such states are characterized by negative values of the Glauber-Sudarshan $P$-function or by a $P$-function more singular than the Dirac delta function \\cite{sudarshan1963equivalence,glauber1963coherent}, and are witnessed by various operational criteria (to be described in Section \\ref{subsec:Witnesses-of-nonclassicality}). To visualize the relevance of nonclassical states we may first note that quantum supremacy refers to the ability to perform a task using quantum resources in such a manner that either the task itself cannot be performed using classical resources or the speed\/efficiency achieved using quantum resources cannot be achieved in the classical world \\cite{grover1997quantum}.\nA recent experiment performed by Google aimed at establishing quantum supremacy has drawn much public attention \\cite{courtland2017google}.\nThe relevance of the present study lies in the fact that to establish quantum supremacy or to perform a fundamental test of quantum mechanics, we would require a state having some features that would not be present in any classical state. As we have already mentioned, such a state having no classical analogue is referred to as a nonclassical state.\nFrequently used examples of nonclassical states include squeezed, antibunched, entangled, steered, and Bell nonlocal states. The relevance of the states having nonclassical features has already been established in the various domains of physics. 
For example, we may mention, teleportation\nof coherent states \\cite{furusawa1998unconditional}, continuous variable\nquantum cryptography \\cite{hillery2000quantum}, quantum radar \\cite{lanzagorta2011quantum},\nand many more. Further, we may note that the art of generating and\nmanipulating quantum states as per need is referred to as the ``quantum\nstate engineering'' \\cite{dakna1998quantum,sperling2014quantum,vogel1993quantum,miranowicz2004dissipation,dell2006multiphoton}.\nParticularly interesting examples of such engineered quantum states\nare Fock state, photon added\/subtracted coherent state \\cite{agarwal1991nonclassical},\ndisplaced Fock state (DFS) which is also referred to as generalized\ncoherent state and displaced number state \\cite{satyanarayana1985generalized,wunsche1991displaced,ziesel2013experimental,zavatta2004quantum,de1990properties,malpani2019lower},\nphoton added DFS (PADFS) \\cite{malpani2019lower}, and photon subtracted\nDFS (PSDFS) \\cite{malpani2019lower}. In what follows, we will state\nthe relevance of such engineered quantum states in the implementation\nof different tasks exploiting their nonclassical and phase properties. The relatively new area of research on quantum state engineering\nhas drawn much attention of the scientific community because of its\nsuccess in experimentally producing various quantum states \\cite{zavatta2004quantum,torres2003preparation,rauschenbeutel2000step,gao2010experimental,lu2007experimental}\nhaving nonclassical properties and applications in realizing quantum\ninformation processing tasks, like quantum key distribution \\cite{bennett1984quantum}\nand quantum teleportation \\cite{brassard1998teleportation,chen2015bidirectional}.\nEngineered quantum states, such as cat states, Fock state and superposition\nof Fock states, are known to play a crucial role in performing fundamental\ntests of quantum mechanics and in establishing quantum supremacy in\nthe context of quantum computation and communication (\\cite{kues2017chip}\nand references therein).\n\nAs mentioned in the previous paragraph, with the advent of quantum\nstate engineering \\cite{vogel1993quantum,sperling2014quantum,miranowicz2004dissipation,marchiolli2004engineering}\nand quantum information processing (\\cite{pathak2013elements} and\nreferences therein), the study of nonclassical properties of engineered\nquantum states have become a very important field. This is so because\nonly the presence of nonclassical features in a quantum state can\nprovide quantum supremacy \\cite{grover1997quantum}. In the recent past, various techniques for quantum state engineering\nhave been developed \\cite{vogel1993quantum,sperling2014quantum,miranowicz2004dissipation,agarwal1991nonclassical,lee2010quantum,marchiolli2004engineering}.\nIf we restrict ourselves to optics, these techniques are primarily\nbased on the clever use of beam splitters, detectors, and measurements\nwith post selection, etc. Such techniques are useful in creating holes\nin the photon number distribution \\cite{escher2004controlled} and\nin generating finite dimensional quantum states \\cite{miranowicz2004dissipation},\nboth of which are nonclassical \\cite{pathak2018classical}. 
The above\nsaid techniques are also useful in realizing non-Gaussianity inducing\noperations, like photon addition and subtraction \\cite{zavatta2004quantum,podoshvedov2014extraction}.\nMotivated by the above, in this thesis, we aim to study the nonclassical\nproperties of a set of engineered quantum states such as photon added,\nphoton subtracted, and photon added then subtracted displaced Fock\nstates which can be produced by using the above mentioned techniques.\nIn the present thesis, we also wish to investigate the phase properties\nof the above mentioned engineered quantum states for the reasons explained\nbelow.\n\nThe impossibility of writing a Hermitian operator for quantum phase\nis a longstanding problem (see \\cite{perinova1998phase,carruthers1968phase,lynch1987phase}\nfor review). Early efforts of Dirac \\cite{dirac1927quantum} to introduce\na Hermitian quantum phase operator were not successful, but led to\nmany interesting proposals \\cite{susskind1964quantum,pegg1989phase,barnett1986phase}.\nSpecifically, Susskind-Glogower \\cite{susskind1964quantum}, Pegg-Barnett\n\\cite{pegg1988unitary,pegg1989phase,barnett1990quantum}, and Barnett-Pegg\n\\cite{barnett1986phase} formalisms played very important role in\nthe studies of phase properties and the phase fluctuation \\cite{imry1971relevance}.\nThereafter, phase properties of various quantum states have been reported\nusing these formalisms \\cite{sanders1986bc,gerry1987phase,yao1987phase,carruthers1968phase,vaccaro1989phase,pathak2000phase,alam2016quantum,alam2017quantum,verma2009reduction}.\nOther approaches have also been used for the study of the phase properties.\nFor example, quantum phase distribution is defined using phase states\n\\cite{agarwal1992classical}, while Wigner \\cite{garraway1992quantum}\nand $Q$ \\cite{leonhardt1993phase,leonhardt1995canonical} phase distributions\nare obtained by integrating over radial parameter of the corresponding\nquasidistribution function. In experiments, the phase measurement\nis performed by averaging the field amplitudes of the $Q$ function\n\\cite{noh1991measurement,noh1992operational}; Pegg-Barnett and Wigner\nphase distributions are also reported with the help of reconstructed\ndensity matrix \\cite{smithey1993complete}. Further, quantum phase\ndistribution under the effect of the environment was also studied\nin the past leading to phase diffusion \\cite{banerjee2007phase,banerjee2007phaseQND,abdel2010anabiosis,banerjee2018open}.\nA measure of phase fluctuation named phase dispersion using quantum\nphase distribution has also been proposed in the past \\cite{perinova1998phase,banerjee2007phase}.\nRecently, quantum phase fluctuation \\cite{zheng1992fluctuation} and\nPancharatnam phase \\cite{mendas1993pancharatnam} have been studied\nfor DFS. The quantum phase fluctuation in parametric down-conversion\n\\cite{gantsog1991quantum} and its revival \\cite{gantsog1992collapses}\nare also reported. Experiments on phase super-resolution without using\nentanglement \\cite{resch2007time} and role of photon subtraction\nin concentration of phase information \\cite{usuga2010noise} are also\nperformed. Optimal phase estimation \\cite{sanders1995optimal} using\ndifferent quantum states \\cite{higgins2007entanglement} (including\nNOON and other entangled states and unentangled single-photon states)\nhas long been the focus of quantum metrology \\cite{giovannetti2006quantum,giovannetti2011advances}. 
Nonclassicality measure based on the shortening of the regular distribution\ndefined on phase difference interval broadbands due to nonclassicality\nis also proposed in the recent past \\cite{perina2019quasidistribution,thapliyal2019quasidistribution}.\nIn brief, quantum phase properties are of intense interest of the\ncommunity since long (see \\cite{pathak2002quantum,perinova1998phase}\nand references therein), and the interest in it has been further enlightened\nin the recent past as many new applications of quantum phase distribution\nand quantum phase fluctuation have been realized.\n\nTo be specific, this work is also motivated by the fact that recently\nseveral applications of nonclassical states and quantum phase properties\nhave been reported. Specifically, squeezed states have played an important\nrole in {{} the studies related to phase diffusion} \\cite{banerjee2007phase,banerjee2007phaseQND},\nthe detection of gravitational waves in LIGO experiments \\cite{abbasi2013thermal,abbott2016gw151226,abbott2016observation}.\nThe rising demand for a single photon source can be fulfilled by an\nantibunched light source \\cite{yuan2002electrically}. {The study\nof quantum correlations is important both from the perspective of\npure and mixed states} \\cite{chakrabarty2010study,dhar2013controllable,banerjee2010dynamics,banerjee2010entanglement}.\nEntangled states are found to be useful in both secure \\cite{ekert1991quantum}\nand insecure \\cite{bennett1992communication,bennett1993teleporting}\nquantum communication schemes. Stronger quantum correlation present\nin the steerable states are used to ensure the security against all\nthe side-channel attacks on devices used in one-side (i.e., either\npreparation or detector side) for quantum cryptography \\cite{branciard2012one}.\nQuantum supremacy in computation is established due to quantum algorithms\nfor unsorted database search \\cite{grover1997quantum}, factorization\nand discrete logarithm problems \\cite{shor1999polynomial}, and machine\nlearning \\cite{biamonte2017quantum} using essentially nonclassical\nstates. 
We may further stress on the recently reported applications\nof quantum phase distribution and quantum phase fluctuation by noting\nthat these have applications in quantum random number generation \\cite{xu2012ultrafast,raffaelli2018soi},\ncryptanalysis of squeezed state based continuous variable quantum\ncryptography \\cite{horak2004role}, generation of solitons in a Bose-Einstein\ncondensate \\cite{denschlag2000generating}, storage and retrieval\nof information from Rydberg atom \\cite{ahn2000information}, in phase\nencoding quantum cryptography \\cite{gisin2002quantum}, phase imaging\nof cells and tissues for biomedical application \\cite{park2018quantitative};\nas well as have importance in determining the value of transition\ntemperature for superconductors \\cite{emery1995importance}.\n\nNow to achieve the above advantages of the nonclassical states, we\nneed to produce these states via the schemes of quantum state engineering.\nFor the same, there are some distinct theoretical tools, like quantum\nscissoring \\cite{miranowicz2001quantum}, hole-burning \\cite{escher2004controlled,gerry2002hole,malpani2019filter}\nor filtering out a particular Fock state from the photon number distribution\n\\cite{Meher2018}, applying non-Gaussianity inducing operations \\cite{agarwal2013quantum}.\nHowever, these distinct mechanisms are experimentally realized primarily\nby appropriately using beam splitters, mirrors, and single photon\ndetectors or single photon counting module. Without going into finer\ndetails of the optical realization of quantum state engineering tools,\nwe may note that these tools can be used to generate various nonclassical\nstates, e.g., DFS \\cite{de1990properties}, PADFS \\cite{malpani2019lower},\nPSDFS \\cite{malpani2019lower}, photon added squeezed coherent state\n\\cite{thapliyal2017comparison}, photon subtracted squeezed coherent\nstate \\cite{thapliyal2017comparison}, number state filtered coherent\nstate \\cite{Meher2018}. Some of these states, like photon added coherent\nstate, have already been realized experimentally \\cite{zavatta2004quantum}.\n\nMany of the above mentioned engineered quantum states have already\nbeen studied in detail. Primarily, three types of investigations have\nbeen performed on the engineered quantum states- (i) study of various\nnonclassical features of these states (and their variation with the\nstate parameters) as reflected through different witnesses of nonclassicality.\nInitially, such studies were restricted to the lower-order nonclassical\nfeatures. In the recent past, various higher-order nonclassical features\nhave been predicted theoretically \\cite{alam2018higher,alam2018higher1,pathak2006control,pathak2010recent,verma2008higher,thapliyal2017comparison}\nand confirmed experimentally (\\cite{hamar2014non,perina2017higher}\nand references therein) in quantum states generated in nonlinear optical\nprocesses. (ii) Phase properties of the nonclassical states have been\nstudied \\cite{el2000phase} by computing quantum\nphase fluctuations, phase dispersion, phase distribution functions,\netc., under various formalisms, like Susskind and Glogower \\cite{susskind1964quantum},\nPegg-Barnett \\cite{pegg1989phase} and Barnett-Pegg \\cite{barnett1986phase}\nformalisms. (iii) Various applications of the engineered quantum states\nhave been designed. 
Some of them have already been mentioned.\n\nMotivated by the above observations, in this thesis, we would like\nto perform an investigation on nonclassical and phase properties of\na particularly interesting set of engineered quantum states which\nwould have the flavor of the first two facets of the studies mentioned\nabove. Applications of the engineered quantum states will also be\ndiscussed briefly, but will not be investigated in detail. To begin with we would like to briefly describe the physical and\nmathematical concepts used in this thesis, and that will be the focus\nof the rest of this chapter.\n\nThe rest of this chapter is organized as follows. In Section \\ref{sec:Quantum-theory-of-radiation},\nwe will briefly discuss quantum theory of radiation and introduce\nannihilation, creation, and number operators as well as Fock and coherent\nstates. In Section \\ref{sec:Quantum-states-of-4} this will be followed\nby an introduction to a set of quantum states which will be studied\nin this thesis. After this, the notion of nonclassicality will be\nintroduced mathematically in Section \\ref{sec:The-notion-of nonclassicality},\nand a set of operational criteria for observing nonclassical properties\nwill be introduced in Section \\ref{sec:Nonclassical-states:-witnesses}.\nSubsequently, the parameters used for the study of quantum phase properties\nwill be introduced in Section \\ref{sec:Analytic-tools-forphase}.\nThese witnesses of nonclassicality and the parameters for the study\nof phase properties will be used in the subsequent chapters, to investigate\nthe nonclassical and phase properties of the quantum states discussed\nin Section \\ref{sec:Quantum-states-of-4}. Finally, the structure\nof the rest of the thesis will be provided in Section \\ref{sec:Structure-of-the thesis}. \n\n\\section{Quantum theory of radiation field\\label{sec:Quantum-theory-of-radiation}}\n\nHistorically, quantum physics started with the ideas related to quanta\nof radiation. To be precise, Planck's work on black body radiation\n\\cite{planck1901law} and Einstein's explanation of photoelectric\neffect \\cite{einstein1905tragheit} involved a notion of quantized\nradiation field. In Planck's work, light was considered to be emitted\nfrom and absorbed by a black body in quanta; and\nin Einstein's work, it was also considered that the radiation field\npropagates from one point to another as quanta. These initial works\ncontributed a lot in the development of quantum mechanics, but after\nthe introduction of quantum mechanics, in 1920s, in the initial days,\nmost of the attention was given to the quantization of matter. A quantum\ntheory of radiation was introduced by Dirac in 1927 \\cite{dirac1927quantum}.\nIn what follows, we will describe it briefly as this would form the\nbackbone of the present thesis.\n\n\\subsection{Creation and annihilation operator}\n\nMaxwell gave a classical description of electromagnetic field. But\nhere the objective is to study light and its properties apart from Maxwell's equations. 
So, to begin with, it would be reasonable to write Maxwell's equations in free space:\n\n\\begin{equation}\n\\nabla\\,.\\,E=0,\\label{eq:Maxwell1}\n\\end{equation}\n\n\\begin{equation}\n\\nabla\\,.\\,B=0,\\label{Maxwell2}\n\\end{equation}\n\n\\begin{equation}\n\\nabla\\times E=-\\frac{\\partial B}{\\partial t},\\label{Maxwell3}\n\\end{equation}\nand\n\n\\begin{equation}\n\\nabla\\times B=\\frac{1}{c^{2}}\\frac{\\partial E}{\\partial t},\\label{Maxwell4}\n\\end{equation}\nwhere $c=\\frac{1}{\\sqrt{\\mu_{0}\\epsilon_{0}}}$ is the speed of light in vacuum. Using the above set of equations, one can show that the electric and magnetic fields satisfy wave equations, such as\n\n\\begin{equation}\n\\nabla^{2}E-\\frac{1}{c^{2}}\\frac{\\partial^{2}E}{\\partial t^{2}}=0.\\label{wave-equantion}\n\\end{equation}\nThe quantization of the radiation field can be carried out by considering a cavity of length $L$ with a linearly polarized electric field propagating in the $z$ direction. Because of the linearity of the wave equation (\\ref{wave-equantion}), we can write the electric field as a linear combination of all the normal modes,\n\n\\begin{equation}\nE_{x}\\left(z,t\\right)=\\sum_{n}A_{n}q_{n}\\left(t\\right){\\rm sin}\\left(k_{n}z\\right),\\label{eq:electric}\n\\end{equation}\nwhere $q_{n}$ is the amplitude of the $n$th normal mode, $k_{n}=\\frac{n\\pi}{L}$, $V$ is the volume of the resonator, $A_{n}=\\left(\\frac{2m_{n}\\nu_{n}^{2}}{\\epsilon_{0}V}\\right)^{1\/2}$ with $\\nu_{n}=ck_{n}$, and $m_{n}$ is a constant (in units of mass). With the help of this, we can draw an analogy between the radiation field and a mechanical oscillator. In analogy to Eq. (\\ref{eq:electric}), the corresponding magnetic field in the cavity can be written as\n\n\\begin{equation}\nB_{y}\\left(z,t\\right)=\\sum_{n}A_{n}\\left(\\frac{\\dot{q_{n}}}{c^{2}k_{n}}\\right){\\rm cos}\\left(k_{n}z\\right).\\label{eq:magnetic}\n\\end{equation}\nSo, the total energy of the field can be written as a classical Hamiltonian\n\n\\begin{equation}\nH=\\frac{1}{2}\\int_{V}d\\tau\\left(\\epsilon_{0}E_{x}^{2}+\\frac{1}{\\mu_{0}}B_{y}^{2}\\right),\\label{eq:hamiltonian}\n\\end{equation}\nwhich evaluates to\n\\begin{equation}\nH=\\frac{1}{2}\\sum_{n}\\left(m_{n}\\nu_{n}^{2}q_{n}^{2}+m_{n}\\dot{q}_{n}^{2}\\right)=\\frac{1}{2}\\sum_{n}\\left(m_{n}\\nu_{n}^{2}q_{n}^{2}+\\frac{p_{n}^{2}}{m_{n}}\\right),\\label{eq:hamiltonian2}\n\\end{equation}\nwhere $p_{n}=m_{n}\\dot{q_{n}}$. Substituting the position and momentum variables by the corresponding operators, we obtain the quantum mechanical Hamiltonian. The position ($q_{n}$) and momentum ($p_{n}$) operators follow the commutation relations\n\n\\[\n\\left[q_{n},p_{m}\\right]=\\iota\\hbar\\delta_{nm},\\qquad\\left[q_{n},q_{m}\\right]=\\left[p_{n},p_{m}\\right]=0,\n\\]\nwhere $\\hbar$ is the reduced Planck's constant. 
Using these one may define a new set of operators which can be analytically written as\n\n\\begin{equation}\n\\hat{a_{n}}{\\rm exp\\left[-\\iota\\nu_{n}t\\right]=\\frac{1}{\\sqrt{2\\hbar\\text{\\ensuremath{m_{n}}}\\nu_{n}}}}\\left(\\text{\\ensuremath{m_{n}}}\\nu_{n}q_{n}+\\iota p_{n}\\right)\\label{annihilation}\n\\end{equation}\nand \n\\begin{equation}\n\\hat{a}_{n}^{\\dagger}{\\rm exp\\left[\\iota\\nu_{n}t\\right]=\\frac{1}{\\sqrt{2\\hbar\\text{\\ensuremath{m_{n}}}\\nu_{n}}}}\\left(\\text{\\ensuremath{m_{n}}}\\nu_{n}q_{n}-\\iota p_{n}\\right).\\label{eq:creation}\n\\end{equation}\nThus, the Hamiltonian can be written as\n\n\\begin{equation}\nH=\\sum_{n}\\hbar\\nu_{n}\\left(\\hat{a_{n}}^{\\dagger}\\hat{a_{n}}+\\frac{1}{2}\\right),\\label{eq:hamiltonian2-1}\n\\end{equation}\nand the commutation relations\n\n\\[\n\\left[\\hat{a}_{n},\\hat{a}_{m}^{\\dagger}\\right]=\\delta_{nm},\\qquad\\left[\\hat{a}_{n},\\hat{a}_{m}\\right]=\\left[\\hat{a}_{n}^{\\dagger},\\hat{a}_{m}^{\\dagger}\\right]=0,\n\\]\nwith the corresponding electric and magnetic fields, as given by Eq. 1.1.27 in~\\cite{scully1997quantum},\n\n\\[\nE\\left(\\overrightarrow{r},t\\right)=\\sum_{k}\\hat{\\epsilon_{k}}\\xi_{k}\\hat{a}_{k}{\\rm exp}\\left[-\\iota\\nu_{k}t+\\iota k.\\overrightarrow{r}\\right]+{\\rm H.c}.\n\\]\nand\n\n\\[\nB\\left(\\overrightarrow{r},t\\right)=\\sum_{k}\\frac{k\\times\\hat{\\epsilon_{k}}}{\\nu_{k}}\\xi_{k}\\hat{a}_{k}{\\rm exp}\\left[-\\iota\\nu_{k}t+\\iota k.\\overrightarrow{r}\\right]+{\\rm H.c}.,\n\\]\nwhere $\\xi_{k}=\\left(\\frac{\\hbar\\nu_{k}}{\\epsilon_{0}V}\\right)^{1\/2}$ is a constant, and $\\hat{\\epsilon_{k}}$ is a unit polarization vector with the wave vector $k$.\n\nThe above analysis shows that a single-mode field is identical to a harmonic oscillator. So, in the domain of quantum optics, the harmonic oscillator plays an important role.\n\nNotice that the quantum treatment of electromagnetic radiation hinges on the annihilation $\\hat{a}$ (which depletes a photon) and creation $\\hat{a}^{\\dagger}$ (which creates a photon) operators. The annihilation operator $\\hat{a}$ depletes one quantum of energy and thus lowers the system from the harmonic oscillator level $\\left|n\\right\\rangle $ to $\\left|n-1\\right\\rangle $, given by\n\n\\begin{equation}\n\\hat{a}\\left|n\\right\\rangle =\\sqrt{n}\\left|n-1\\right\\rangle .\\label{eq:annihilationop}\n\\end{equation}\nHere, $\\left|n\\right\\rangle $ is called a Fock or number state. Further, an application of the annihilation operator on the vacuum leads to $0$, i.e., $\\hat{a}\\left|0\\right\\rangle =0$. The creation operator $\\hat{a}^{\\dagger}$ creates one quantum of energy by raising the state from $\\left|n\\right\\rangle $ to $\\left|n+1\\right\\rangle $. Therefore, the action of the creation operator on a number state can be represented as \n\\begin{equation}\n\\hat{a}^{\\dagger}\\left|n\\right\\rangle =\\sqrt{n+1}\\left|n+1\\right\\rangle .\\label{eq:creationop}\n\\end{equation}\nIf the creation operator is applied to the vacuum it creates a photon, so these operators enable one to write a Fock state ($\\left|n\\right\\rangle $) in terms of the vacuum state as \n\\[\n\\left|n\\right\\rangle =\\frac{\\left(\\hat{a}^{\\dagger}\\right)^{n}}{\\sqrt{n!}}\\left|0\\right\\rangle .\n\\]\nIn the above, we have seen that the annihilation and creation operators are the important field operators required for the quantum description of radiation. 
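\n\nAs a small illustration (our addition, not part of the thesis), the action of these ladder operators can be checked numerically by representing $\\hat{a}$ and $\\hat{a}^{\\dagger}$ as matrices on a truncated Fock space; the truncation dimension below is an arbitrary choice, and the commutator $[\\hat{a},\\hat{a}^{\\dagger}]=1$ necessarily fails in the last retained Fock level because of the truncation.\n\\begin{verbatim}\nimport numpy as np\n\nN = 12                                      # Fock-space truncation (arbitrary)\na = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator\nadag = a.conj().T                           # creation operator\n\nfock3 = np.eye(N)[3]                        # the Fock state |3>\nprint(np.allclose(a @ fock3, np.sqrt(3) * np.eye(N)[2]))     # a|3> = sqrt(3)|2>\nprint(np.allclose(adag @ fock3, np.sqrt(4) * np.eye(N)[4]))  # a^dag|3> = sqrt(4)|4>\n\ncomm = a @ adag - adag @ a                  # identity, except in the last level\nprint(np.allclose(comm[:N-1, :N-1], np.eye(N-1)))\n\\end{verbatim}\n\n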
These operators can induce nonclassicality and non-Gaussianity when applied on classical states \\cite{zavatta2004quantum,agarwal2013quantum}.\nIn the present thesis, we study the role of these non-Gaussianity inducing operations in controlling the nonclassicality of quantum states which are often already nonclassical. For instance, an enhancement of squeezing in a nonclassical state does not ensure an advantage with respect to its use as a single-photon source, and vice-versa. In the following subsection, we will introduce a set of other operators which can be expressed in terms of the annihilation and creation operators and which play a crucial role in our understanding of the quantum states of the radiation field.\n\n\\subsection{Some more quantum operators of relevance }\n\n\\label{SMQO}\n\nSo far we have introduced some non-unitary operations (operations $\\hat{O}$ which are not norm preserving and do not satisfy $\\hat{O}^{\\dagger}=\\hat{O}^{-1}$, where $\\hat{O}^{\\dagger}$ and $\\hat{O}^{-1}$ are the Hermitian conjugate and inverse of $\\hat{O}$, respectively), namely photon addition and subtraction. We now aim to introduce some more unitary operations important in the domain of quantum state engineering in general, and in this thesis in particular. To begin with, let us describe the displacement operator.\n\n\\subsubsection{Displacement operator}\n\nThe displacement operator is a unitary operator. The mathematical form of the displacement operator is given as\n\n\\begin{equation}\n\\hat{D}(\\alpha)=\\exp\\left(\\alpha\\hat{a}^{\\dagger}-\\alpha^{\\star}\\hat{a}\\right).\\label{eq:Displacement}\n\\end{equation}\nThis operator can be used as a tool to generate a coherent state from the vacuum. Specifically, a coherent state $\\left|\\alpha\\right\\rangle $ is defined as $\\left|\\alpha\\right\\rangle =\\hat{D}(\\alpha)\\left|0\\right\\rangle .$\n\n\\subsubsection{Squeezing operator}\n\n\\label{SqOpt}\n\nThe squeezing operator for a single mode of the electromagnetic field is \n\\begin{equation}\n\\hat{S}(z)=\\exp\\left(\\frac{1}{2}\\left(z^{\\star}\\hat{a}^{2}-z\\hat{a}^{\\dagger2}\\right)\\right).\\label{eq:squeezed}\n\\end{equation}\nIn the domain of quantum optics, the description of light is given by two quadratures, namely the phase quadrature $(X_{1})$ and the amplitude quadrature $(X_{2})$, obtained from\n\\[\n\\hat{X_{\\theta}}=\\frac{1}{\\sqrt{2}}\\left(\\iota \\hat{a}^{\\dagger}\\exp\\left[\\iota\\theta\\right]-\\iota \\hat{a}\\exp\\left[-\\iota\\theta\\right]\\right).\n\\]\nThe uncertainties of these two quadratures obey the relation $\\Delta X_{1}\\Delta X_{2}\\geq 1\/2$, where $\\Delta X_{1}$ $\\left(\\Delta X_{2}\\right)$ is the uncertainty in the measured values of the quadrature $\\hat{X_{1}}=\\hat{X_{\\theta}}\\left(\\theta=0\\right)$ $\\left(\\hat{X_{2}}=\\hat{X_{\\theta}}\\left(\\theta=\\frac{\\pi}{2}\\right)\\right)$. Specifically, $\\Delta X_{i}=\\sqrt{\\left\\langle X_{i}^{2}\\right\\rangle -\\left\\langle X_{i}\\right\\rangle ^{2}}$, where $\\left\\langle \\hat{A}\\right\\rangle =\\left\\langle \\psi\\left|\\hat{A}\\right|\\psi\\right\\rangle $ is the expectation value of the operator $\\hat{A}$ with respect to the quantum state $\\left|\\psi\\right\\rangle $. A coherent state has equal uncertainty in both quadratures, so it is represented by a circle in the phase-space picture (shown in Fig. \\ref{fig:coh-sq}); a numerical check of these quadrature uncertainties for a coherent state and a squeezed vacuum state is sketched below. 
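\n\nThe following minimal numerical sketch (our addition, not part of the thesis) evaluates these quadrature uncertainties on a truncated Fock space for a coherent state $\\hat{D}(\\alpha)\\left|0\\right\\rangle $ and a squeezed vacuum state $\\hat{S}(z)\\left|0\\right\\rangle $; the values of $\\alpha$, $z$ and the truncation dimension are arbitrary choices. For the coherent state both uncertainties come out close to $1\/\\sqrt{2}$, while for the squeezed vacuum one of them drops below $1\/\\sqrt{2}$ at the cost of the other.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\nN = 40                                      # Fock-space truncation (arbitrary)\na = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator\nadag = a.conj().T\n\n# X1 = X_theta at theta = 0, X2 = X_theta at theta = pi/2 (definition above)\nX1 = 1j * (adag - a) / np.sqrt(2)\nX2 = -(a + adag) / np.sqrt(2)\n\ndef uncertainty(op, psi):\n    mean = np.real(psi.conj() @ op @ psi)\n    return np.sqrt(np.real(psi.conj() @ op @ op @ psi) - mean**2)\n\nalpha, z = 1.2, 0.6                         # arbitrary displacement and squeezing\nvac = np.eye(N, dtype=complex)[0]\ncoh = expm(alpha * adag - np.conj(alpha) * a) @ vac              # D(alpha)|0>\nsq = expm(0.5 * (np.conj(z) * a @ a - z * adag @ adag)) @ vac    # S(z)|0>\n\nfor label, psi in [('coherent', coh), ('squeezed vacuum', sq)]:\n    print(label, uncertainty(X1, psi), uncertainty(X2, psi))\n\\end{verbatim}\n\n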
The least value of the variance over $\\theta$ is studied as the principal squeezing. With the advent of nonlinear optics, a very special branch of optics, the uncertainty in one of the quadratures can be reduced at the cost of an increase in the uncertainty of the other quadrature, which means that the circle can be squeezed.\n\n\\subsection{Eigen states of the field operators}\n\nHere, we will discuss eigen states of some of the operators we have introduced. The eigenvalue equation can be defined as $\\hat{A}\\lambda=a\\lambda$ with eigen operator $\\hat{A}$, eigenvalue $a$, and eigen function $\\lambda$. For example, the Schrodinger equation $H\\psi_{i}=E_{i}\\psi_{i}$ has the Hamiltonian $H$ as the eigen operator, with eigen functions $\\psi_{i}$ and the allowed energy levels $E_{i}$ as eigenvalues.\n\n\\subsubsection{Fock state: Eigen state of the number operator}\n\nIn quantum optics, i.e., in the quantized picture of light, the photon number state is known as the number state. The single-mode photon number states are known as Fock states, and the ground state is defined as the vacuum state. As the number states form a complete orthonormal basis, any quantum state can be written in terms of these basis states. The method of representing a quantum state as a superposition of number states is known as the number state representation. Now using Eqs. (\\ref{eq:annihilationop}) and (\\ref{eq:creationop}), we can introduce an operator \n\\[\n\\hat{N}=\\hat{a}^{\\dagger}\\hat{a},\n\\]\nwhich satisfies the following eigen value equation \n\\[\n\\hat{N}\\left|n\\right\\rangle =n\\left|n\\right\\rangle .\n\\]\nClearly, Fock states are the eigen states of the number operator and, consistent with what has already been said, a Fock state $\\left|n\\right\\rangle $ represents an $n$-photon state.\n\n\\subsubsection{Coherent state: Eigen state of the annihilation operator \\label{subsec:Coherent-state}}\n\nThe coherent state \\cite{fox2006quantum} is considered as a state of the quantized electromagnetic field which shows classical behaviour (specifically, the behaviour closest to that of classical states). According to Erwin Schrodinger, it is a minimum uncertainty state, having the same uncertainty in position and momentum \\cite{schrodinger1926stetige}. According to Glauber, any of the three mathematical definitions described below can define the coherent state:\n\n(i) Eigen vectors of the annihilation operator, $\\hat{a}\\left|\\alpha\\right\\rangle =\\alpha\\left|\\alpha\\right\\rangle $, $\\alpha$ being a complex number.\n\n(ii) Quantum states having minimum uncertainty $\\Delta X_{1}=\\Delta X_{2}=1\/\\sqrt{2}$, with $X_{1}$ and $X_{2}$ the quadrature operators defined above.\n\n(iii) States realized by the application of the displacement operator $D(\\alpha)$ on the vacuum state. Thus, it is also known as the displaced vacuum state and can be expressed as \n\\[\n\\left|\\alpha\\right\\rangle =D\\left(\\alpha\\right)\\left|0\\right\\rangle .\n\\]\nIn the Fock basis, it is expressed as an infinite superposition of Fock states as \n\\begin{equation}\n\\left|\\alpha\\right\\rangle =\\exp\\left[-\\frac{\\mid\\alpha\\mid^{2}}{2}\\right]\\sum\\limits _{n=0}^{\\infty}\\frac{\\alpha^{n}}{\\sqrt{n!}}|n\\rangle,\\label{eq:CS}\n\\end{equation}\nwhere $\\alpha$ is a complex number. The experimental realization of states very close to this ideal coherent state became possible only after the successful development of the laser; a small numerical check of the expansion (\\ref{eq:CS}) is sketched below. 
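\n\nThe following minimal numerical sketch (our addition, not part of the thesis) builds the coherent state from Eq. (\\ref{eq:CS}) on a truncated Fock space and checks that it is an approximate eigenstate of $\\hat{a}$ with eigenvalue $\\alpha$ and that its mean photon number equals $\\mid\\alpha\\mid^{2}$; the value of $\\alpha$ and the truncation dimension are arbitrary choices.\n\\begin{verbatim}\nimport numpy as np\nfrom math import factorial\n\nN = 30                                      # Fock-space truncation (arbitrary)\nalpha = 0.8 + 0.3j                          # arbitrary coherent amplitude\n\n# coefficients of |alpha> in the Fock basis, Eq. (eq:CS), truncated at N levels\ncoeffs = [alpha**n / np.sqrt(factorial(n)) for n in range(N)]\ncoh = np.exp(-abs(alpha)**2 / 2) * np.array(coeffs, dtype=complex)\n\na = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator\nn_op = a.conj().T @ a                       # number operator\n\nprint(np.linalg.norm(coh))                           # ~1 (truncation error is tiny)\nprint(np.linalg.norm(a @ coh - alpha * coh))         # ~0, i.e. a|alpha> = alpha|alpha>\nprint(np.real(coh.conj() @ n_op @ coh), abs(alpha)**2)   # <n> matches |alpha|^2 = 0.73\n\\end{verbatim}\n\n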
Finally, one can easily see that $\\hat{a}\\left|\\alpha\\right\\rangle =\\alpha\\left|\\alpha\\right\\rangle $\nimplies $\\left\\langle \\alpha\\right|\\hat{a}^{\\dagger}=\\left\\langle \\alpha\\right|\\alpha^{\\star}$\nand consequently $\\left\\langle \\alpha\\right|\\hat{a}^{\\dagger}\\hat{a}\\left|\\alpha\\right\\rangle =\\left\\langle \\alpha\\right|\\hat{N}\\left|\\alpha\\right\\rangle =N=\\left|\\alpha\\right|^{2}$\nor average photon number in a coherent state is $\\left|\\alpha\\right|^{2}$.\n\n\\begin{figure}\n\\begin{centering}\n\\begin{tabular}{c}\n\\includegraphics[width=110mm]{coh-sq.jpg}\\tabularnewline\n\\tabularnewline\n\\end{tabular}\n\\par\\end{centering}\n\\caption{\\label{fig:coh-sq} Phase picture for coherent state and squeezed\nstate. }\n\\end{figure}\n\n\\section{Quantum states of our interest\\label{sec:Quantum-states-of-4}}\n\nIn this section, we provide basic mathematical details of the set\nof engineered quantum states studied in the present thesis.\n\n\\subsection{Displaced Fock state}\n\nDisplaced Fock state \\cite{satyanarayana1985generalized} are formed\nby applying displacement operator on Fock state and thus a DFS is\ndefined as \n\\[\n\\left|\\phi\\right\\rangle =D\\left(\\alpha\\right)\\left|n\\right\\rangle .\n\\]\nAnalytically it is given as \n\\begin{equation}\n|\\phi(n,\\alpha)\\rangle=\\frac{1}{\\sqrt{n!}}\\sum_{p=0}^{n}{n \\choose p}(-\\alpha^{\\star})^{(n-p)}\\exp\\left(-\\frac{\\mid\\alpha\\mid^{2}}{2}\\right)\\sum_{m=0}^{\\infty}\\frac{\\alpha^{m}}{m!}\\sqrt{(m+p)!}|m+p\\rangle.\\label{eq:GCS}\n\\end{equation}\nVarious nonclassical properties of DFS are reported in literature\n\\cite{de1990properties,el2000phase,lvovsky2002synthesis,mendas1993pancharatnam}.\n\n\\subsection{Photon added and photon subtracted displaced Fock state\\label{subsec:Photon-added-and sub}}\n\nUsing DFS, we can define a $u$ photon added DFS (i.e., a PADFS) as\n\\begin{eqnarray}\n|\\psi_{+}(u,n,\\alpha)\\rangle & = & N_{+}\\hat{a}^{\\dagger u}|\\phi(n,\\alpha)\\rangle=\\frac{N_{+}}{\\sqrt{n!}}\\sum_{p=0}^{n}{n \\choose p}(-\\alpha^{\\star})^{(n-p)}\\exp\\left(-\\frac{\\mid\\alpha\\mid^{2}}{2}\\right)\\sum_{m=0}^{\\infty}\\frac{\\alpha^{m}}{m!}\\nonumber \\\\\n & \\times & \\sqrt{(m+p+u)!}|m+p+u\\rangle.\\label{eq:PADFS}\n\\end{eqnarray}\nSimilarly, a $v$ photon subtracted DFS (i.e., a PSDFS) can be expressed\nas \n\\begin{eqnarray}\n|\\psi_{-}(v,n,\\alpha)\\rangle & = & N_{-}\\hat{a}^{v}|\\phi(n,\\alpha)\\rangle=\\frac{N_{-}}{\\sqrt{n!}}\\sum_{p=0}^{n}{n \\choose p}(-\\alpha^{\\star})^{(n-p)}\\exp\\left(-\\frac{\\mid\\alpha\\mid^{2}}{2}\\right)\\sum_{m=0}^{\\infty}\\frac{\\alpha^{m}}{m!}\\nonumber \\\\\n & \\times & \\dfrac{(m+p)!}{\\sqrt{(m+p-v)!}}|m+p-v\\rangle,\\label{eq:PSDFS}\n\\end{eqnarray}\nwhere $m$ and $p$ are the real integers. 
Here, {\\small{}\n\\begin{eqnarray}\nN_{+}=\\left[\\frac{1}{n!}\\sum_{p,p'=0}^{n}{n \\choose p}{n \\choose p'}(-\\alpha^{\\star})^{(n-p)}(-\\alpha)^{(n-p')}\\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\sum_{m=0}^{\\infty}\\frac{\\alpha^{m}(\\alpha^{\\star})^{m+p-p'}(m+p+u)!}{m!(m+p-p')!}\\right]^{-\\frac{1}{2}}\\label{eq:norP}\n\\end{eqnarray}\nand \n\\begin{equation}\nN_{-}=\\left[\\frac{1}{n!}\\sum_{p,p'=0}^{n}{n \\choose p}{n \\choose p'}\\left(\\alpha^{\\star}\\right)^{(n-p)}\\left(-\\alpha\\right){}^{(n-p')}\\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\sum_{m=0}^{\\infty}\\frac{\\alpha^{m}\\left(\\alpha^{\\star}\\right){}^{m+p-p'}(m+p)!}{m!(m+p-p')!(m+p-v)!}\\right]^{-\\frac{1}{2}}.\\label{eq:norS}\n\\end{equation}\n}are the normalization constants, and the subscripts $+$ and $-$ correspond to photon addition and subtraction. Thus, $|\\psi_{+}(u,n,\\alpha)\\rangle$ and $|\\psi_{-}(v,n,\\alpha)\\rangle$ represent the $u$ photon added DFS and the $v$ photon subtracted DFS, respectively, for the DFS which has been produced by displacing the Fock state $|n\\rangle$ by a displacement operator $D(\\alpha)$ characterized by the complex parameter $\\alpha.$ Clearly, the addition and the subtraction of photons on the DFS can be mathematically viewed as the application of the creation and annihilation operators from the left on Eq. (\\ref{eq:GCS}). Here, it may be noted that different well-known states can be obtained as special cases of these two states. For example, using the notation introduced above to define PADFS and PSDFS, we can describe a coherent state $|\\alpha\\rangle$ as $|\\alpha\\rangle=|\\psi_{+}(0,0,\\alpha)\\rangle=|\\psi_{-}(0,0,\\alpha)\\rangle$; naturally, the coherent state can be viewed as a special case of both PADFS and PSDFS. Similarly, we can describe a single photon added coherent state as $|\\psi\\rangle_{+1}=|\\psi_{+}(1,0,\\alpha)\\rangle$, a Fock state as $|n\\rangle=|\\psi_{+}(0,n,0)\\rangle=|\\psi_{-}(0,n,0)\\rangle$ and a DFS as $|\\psi\\rangle_{{\\rm DFS}}=|\\psi_{+}(0,n,\\alpha)\\rangle=|\\psi_{-}(0,n,\\alpha)\\rangle.$\n\n\\subsection{Photon added then subtracted displaced Fock state \\label{subsec:PASDFS}}\n\nA PASDFS can be obtained by sequentially applying an appropriate number of creation (photon addition) and then annihilation (photon subtraction) operators on a DFS. 
The analytical expression for PASDFS (specifically, a $k$ photon added and then $q$ photon subtracted DFS) in the Fock basis can be shown to be\n\n\\begin{eqnarray}\n|\\psi(k,q,n,\\alpha)\\rangle & = & N\\hat{a}^{q}\\hat{a}^{\\dagger k}|\\phi(n,\\alpha)\\rangle=\\frac{N}{\\sqrt{n!}}\\sum_{p=0}^{n}{n \\choose p}(-\\alpha^{\\star})^{(n-p)}\\exp\\left(-\\frac{\\mid\\alpha\\mid^{2}}{2}\\right)\\nonumber \\\\\n & \\times & \\sum_{m=0}^{\\infty}\\frac{\\alpha^{m}\\left(m+p+k\\right)!}{m!\\sqrt{(m+p+k-q)!}}|m+p+k-q\\rangle,\\label{eq:PADFS-1}\n\\end{eqnarray}\nwhere {\\small{}\n\\[\nN=\\left[\\frac{1}{n!}\\sum_{p,p'=0}^{n}{n \\choose p}{n \\choose p'}(-\\alpha^{\\star})^{(n-p)}(-\\alpha)^{(n-p')}\\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\right]^{-\\frac{1}{2}}\n\\]\n}is the normalization factor.\n\n\\subsection{Even coherent state and states generated by holeburning on it}\n\nAn even coherent state can be defined as the superposition of two coherent states having opposite phases ($|\\phi(\\alpha)\\rangle\\propto|\\alpha\\rangle+|-\\alpha\\rangle$). The analytical expression for ECS in the number basis can be written as\n\\begin{equation}\n\\begin{array}{lcl}\n|\\phi(\\alpha)\\rangle & = & \\frac{\\,\\exp\\left[-\\frac{\\mid\\alpha\\mid^{2}}{2}\\right]}{\\sqrt{2\\left(1+\\exp\\left[-2\\mid\\alpha\\mid^{2}\\right]\\right)}}\\sum\\limits _{n=0}^{\\infty}\\frac{\\alpha^{n}}{\\sqrt{n!}}\\left(1+\\left(-1\\right)^{n}\\right)|n\\rangle.\\end{array}\\label{eq:ECS}\n\\end{equation}\nThe parameter $\\alpha=|\\alpha|\\exp(i\\theta)$ in Eq. (\\ref{eq:ECS}) is complex in general, and $\\theta$ is the phase angle in the complex plane. Various schemes to generate ECS are reported in \\cite{brune1992manipulation,ourjoumtsev2007generation,gerry1993non}. The nonclassical properties (witnessed through the antibunching and squeezing criteria, $Q$ function, Wigner function, photon number distribution, etc.) of ECS have been studied in the recent past \\cite{gerry1993non}.\n\n\\subsubsection{Vacuum filtered even coherent state}\n\nAs mentioned above, experimentally, an ECS or a cat state can be generated in various ways, and the same can be further engineered to produce a hole at vacuum in its photon number distribution. Specifically, filtration of the vacuum will burn a hole at $n=0$ and produce VFECS, which can be described in the Fock basis as \n\\begin{equation}\n\\begin{array}{lcl}\n|\\phi_{1}(\\alpha)\\rangle & = & N_{{\\rm VFECS}}\\sum\\limits _{n=0,\\,n\\neq0}^{\\infty}\\frac{\\alpha^{n}}{\\sqrt{n!}}\\left(1+\\left(-1\\right)^{n}\\right)|n\\rangle,\\end{array}\\label{eq:VFECS}\n\\end{equation}\nwhere \n\\begin{equation}\n\\begin{array}{lcl}\nN_{{\\rm VFECS}} & = & \\{4\\left({\\rm cosh}\\left(\\mid\\alpha\\mid^{2}\\right)-1\\right)\\}^{-1\/2}\\end{array}\n\\end{equation}\nis the normalization constant. For simplicity, we may expand Eq. (\\ref{eq:VFECS}) as a superposition of a standard ECS and a vacuum state as follows\n\\begin{equation}\n\\begin{array}{lcl}\n|\\phi_{1}(\\alpha)\\rangle & = & N_{{\\rm VFECS}}\\left(\\sum\\limits _{n=0}^{\\infty}\\frac{\\alpha^{n}}{\\sqrt{n!}}\\left(1+\\left(-1\\right)^{n}\\right)|n\\rangle-2|0\\rangle\\right).\\end{array}\\label{eq:VFECS-EXPANDED}\n\\end{equation}\nIn what follows, Eq.
(\\ref{eq:VFECS-EXPANDED}) will be used to explore\nvarious nonclassical features that may exist in VFECS.\n\n\\subsubsection{Photon added even coherent state}\n\nOne can define a single photon added ECS as \n\\begin{equation}\n|\\phi_{2}(\\alpha)\\rangle=N_{{\\rm PAECS}}\\hat{a}^{\\dagger}|\\phi(\\alpha)\\rangle=N_{{\\rm PAECS}}\\sum\\limits _{n=0}^{\\infty}\\frac{\\alpha^{n}}{\\sqrt{n!}}\\left(1+\\left(-1\\right)^{n}\\right)\\sqrt{n+1}|n+1\\rangle,\\label{eq:PAECS}\n\\end{equation}\nwhere \n\\begin{equation}\n\\begin{array}{lcl}\nN_{{\\rm PAECS}} & = & \\{{\\rm cosh}\\left(\\mid\\alpha\\mid^{2}\\right)+\\mid\\alpha\\mid^{2}{\\rm sinh}\\left(\\mid\\alpha\\mid^{2}\\right)\\}^{-1\/2}\/2\\end{array}\n\\end{equation}\nis the normalization constant for PAECS.\n\n\\subsection{Binomial state and the states generated by holeburning on it}\n\nBinomial state is a finite superposition of Fock states having binomial\nphoton number distribution. It is quite similar to the coherent state\nwhich is the linear combination of Fock states having the Poissonian\nphoton number distribution \\cite{stoler1985binomial}. BS can be defined\nas \n\\begin{equation}\n\\begin{array}{lcl}\n|p,M\\rangle & = & \\sum\\limits _{n=0}^{M}\\left[\\frac{M!}{n!(M-n)!}p^{n}\\left(1-p\\right)^{M-n}\\right]^{1\/2}|n\\rangle.\\end{array}\\label{eq:BS}\n\\end{equation}\nThe binomial coefficient describes the presence of $n$ photons with\nprobability $p$ in $M$ number of ways. Recently, the study of nonclassical\nproperties of BS, specifically, antibunching, squeezing, HOSPS \\cite{verma2008higher,verma2010generalized,bazrafkan2004tomography},\netc., have been studied very extensively. However, no effort has yet\nbeen made to study the nonclassical properties of VFBS and PABS.\n\n\\subsubsection{Vacuum filtered binomial state}\n\nThe vacuum filtration of BS can be obtained by simply eliminating\nvacuum state from the BS as \n\\begin{equation}\n\\begin{array}{lcl}\n|p,M\\rangle_{1} & = & N_{{\\rm VFBS}}\\sum\\limits _{n=0}^{M}\\left[\\frac{M!}{n!(M-n)!}p^{n}\\left(1-p\\right)^{M-n}\\right]^{1\/2}|n\\rangle-N_{VFBS}\\left[\\left(1-p\\right)^{M}\\right]^{1\/2}|0\\rangle,\\end{array}\\label{eq:VFBS}\n\\end{equation}\nwhere \n\\begin{equation}\n\\begin{array}{lcl}\nN_{{\\rm VFBS}} & = & \\{1-\\left(1-p\\right)^{M}\\}^{-1\/2}\\end{array}\n\\end{equation}\nis the normalization constant for the VFBS.\n\n\\subsubsection{Photon added binomial state}\n\nA hole at $n=0$ at a BS can also be introduced by the addition of\na single photon on the BS. A few steps of computation yield the desired\nexpression for PABS as \n\\begin{equation}\n\\begin{array}{lcl}\n|p,M\\rangle_{2} & =N_{{\\rm PABS}} & \\sum\\limits _{n=0}^{M}\\left[\\frac{M!(n+1)!}{\\left(n!\\right)^{2}(M-n)!}p^{n}\\left(1-p\\right)^{M-n}\\right]^{1\/2}|n+1\\rangle,\\end{array}\\label{eq:PABS}\n\\end{equation}\nwhere \n\\begin{equation}\n\\begin{array}{lcl}\nN_{{\\rm PABS}} & = & \\left(1+Mp\\right)^{-1\/2}\\end{array}\n\\end{equation}\nis the normalization constant for single photon added BS.\n\n\\subsection{Kerr state and the states generated by holeburning on it}\n\nA KS can be obtained when electromagnetic field\nin a coherent state interacts with nonlinear medium with Kerr type\nnonlinearity \\cite{gerry1994statistical}. This interaction generates\nphase shifts which depend on the intensity. 
The Hamiltonian involved\nin this process is given as \n\\begin{equation}\nH=\\hbar\\omega\\hat{a}^{\\dagger}\\hat{a}+\\hbar\\chi\\left(\\hat{a}^{\\dagger}\\right)^{2}\\left(\\hat{a}\\right)^{2},\n\\end{equation}\nwhere $\\chi$ depends on the third-order susceptibility of Kerr medium. Explicit contribution of $H$ is ${\\rm exp}\\left[-i\\chi n(n-1)\\right]$. Thus, the compact analytic form\nof the KS in the Fock basis can be given as \n\\begin{equation}\n\\begin{array}{lcl}\n|\\psi_{K}\\left(n\\right)\\rangle & = & \\sum\\limits _{n=0}^{\\infty}\\frac{\\alpha^{n}}{\\sqrt{n!}}\\exp\\left(-\\frac{\\mid\\alpha\\mid^{2}}{2}\\right)\\exp\\left(-\\iota\\chi n\\left(n-1\\right)\\right)|n\\rangle.\\end{array}\\label{eq:KS}\n\\end{equation}\n\n\n\\subsubsection{Vacuum filtered Kerr state}\n\nSimilarly, a VFKS, which can be obtained using the same quantum state\nengineering process that leads to VFECS and VFBS, is given by \n\\begin{equation}\n\\begin{array}{lcl}\n|\\psi_{K}\\left(n\\right)\\rangle_{1} & = & N_{{\\rm VFKS}}\\left[\\sum\\limits _{n=0}^{\\infty}\\frac{\\alpha^{n}}{\\sqrt{n!}}\\exp\\left(-\\iota\\chi n\\left(n-1\\right)\\right)|n\\rangle-|0\\rangle\\right]\\end{array},\\label{eq:VFKS-expanded}\n\\end{equation}\nwhere \n\\begin{equation}\n\\begin{array}{lcl}\nN_{{\\rm VFKS}} & = & {{\\left(\\exp\\left[\\mid\\alpha\\mid^{2}\\right]-1\\right)}^{-1\/2}}\\end{array}\\label{eq:NVFKS-expanded}\n\\end{equation}\nis the normalization constant for the VFKS.\n\n\\subsubsection{Photon added Kerr state}\n\nAn addition of a photon to Kerr state would yield PAKS which can be\nexpanded in Fock basis as \n\\begin{equation}\n\\begin{array}{lcl}\n|\\psi_{K}\\left(n\\right)\\rangle_{2} & =N_{{\\rm PAKS}} & \\sum\\limits _{n=0}^{\\infty}\\frac{\\alpha^{n}}{\\sqrt{n!}}\\exp\\left(-\\iota\\chi n\\left(n-1\\right)\\right)\\sqrt{\\left(n+1\\right)}|n+1\\rangle,\\end{array}\\label{eq:PAKS}\n\\end{equation}\nwhere \n\\begin{equation}\n\\begin{array}{lcl}\nN_{{\\rm PAKS}} & = & {\\left(\\exp\\left[\\mid\\alpha\\mid^{2}\\right]\\left(1+\\mid\\alpha\\mid^{2}\\right)\\right)^{-1\/2}}\\end{array}\\label{eq:NPAKS}\n\\end{equation}\nis the normalization constant for the PAKS.\n\n\\section{The notion of nonclassical states \\label{sec:The-notion-of nonclassicality}}\n\nQuantum states which do not have any classical analogue have been\nreferred to as nonclassical states \\cite{agarwal2013quantum}. In\nother words, states having their $P$-distribution more singular than\ndelta function or having negative values are referred to as nonclassical\nstates \\cite{dodonov2003classicality}. This idea was possible only\nwhen Glauber and Sudarshan published papers in 1963 \\cite{sudarshan1963equivalence,glauber1963photon,glauber1963coherent}.\nSudarshan found a mathematical form to represent any state in the\ncoherent basis, mathematically given as \n\\[\n\\rho=\\int P\\left(\\alpha\\right)\\left|\\alpha\\right\\rangle \\left\\langle \\alpha\\right|d^{2}\\alpha,\n\\]\nwhere $P\\,(\\alpha)$ is known as Glauber-Sudarshan $P$-function,\nwhich follows normalization condition as $\\int dP(\\alpha)=1$, but it may have negative values. Thus, it is defined as quasidistribution\nfunction or quasiprobability distribution. When\n$P\\,(\\alpha)$ attains a positive probability density function, immediately\nit indicates that the state is classical. This leads to the definition\nof nonclassicality. 
If an arbitrary quantum state cannot be represented as a mixture of coherent states, it is known as a nonclassical state. These nonclassical states play an essential role in establishing quantum supremacy, for instance, in quantum information processing and quantum communication. Although the $P$-function cannot be reconstructed for an arbitrary state, it has been of major interest as it provides an important signature of nonclassicality. The negativity (positivity or non-negativity) of the $P$-function essentially establishes the nonclassical (classical) behavior of the state under consideration. The experimental difficulty associated with the measurement of the $P$-function in its reconstruction has led to various feasible substitutes as nonclassicality witnesses. Here, we list some of those nonclassicality witnesses. These witnesses can be viewed as operational criteria of nonclassicality.\n\n\\section{Nonclassical states: witnesses and measures \\label{sec:Nonclassical-states:-witnesses}}\n\nUsing nonclassical states, the essence of the quantum theory of light can be understood. There are various tools for the characterization of nonclassical states, and in this section, some of the tools used to characterize such states are described. Historically, the first such approach aimed at checking the deviation from Poissonian photon statistics; another evaluates the volume of the negative part of the quasiprobability distribution in phase space; and so on. An infinite set of moments-based criteria, which as a whole is equivalent to the $P$-function as a witness of nonclassicality, is available in the literature \\cite{shchukin2005nonclassical}. Any subset of this infinite set may detect nonclassicality or fail to do so. Examples of these witnesses are lower- and higher-order antibunching, sub-Poissonian photon statistics, and squeezing, as well as the Mandel $Q_{M}$ parameter, Klyshko's, Vogel's, and Agarwal-Tara's criteria, the $Q$ function, etc. To quantify the amount of nonclassicality, a number of measures have been proposed, like linear entropy, Wigner volume, concurrence, and many more. A brief description of these criteria is given here.\n\n\\subsection{Witnesses of nonclassicality\\label{subsec:Witnesses-of-nonclassicality}}\n\n\\subsubsection{Lower- and higher-order antibunching}\n\nIn this section, we study lower- and higher-order antibunching. To do so, we use the following criterion of $(l-1)$th order antibunching (\\cite{pathak2006control} and references therein) in terms of the nonclassicality witness $d(l-1)$:\n\\begin{equation}\nd(l-1)=\\langle\\hat{a}^{\\dagger l}\\hat{a}^{l}\\rangle-\\langle\\hat{a}^{\\dagger}\\hat{a}\\rangle^{l}<0.\\label{eq:HOA-1}\n\\end{equation}\nThis nonclassical feature characterizes the suitability of the quantum state to be used as a single photon source, as negative values of the $d(l-1)$ parameter show that the probability of photons arriving bunched is smaller than that of them arriving independently. The signature of lower-order antibunching is obtained as a special case of Eq. (\\ref{eq:HOA-1}) for $l=2$, while for $l\\geq3$, negative values of $d(l-1)$ correspond to higher-order antibunching of $(l-1)$th order. Figure \\ref{fig:HBT} illustrates the scheme for studying antibunching experimentally (corresponding to $l=2$); for higher values of $l$, more beamsplitters and APDs are required.
On these cascaded beamsplitters, the signal is mixed with vacuum and higher-order correlations are measured~\\cite{avenhaus2010accessing}.\n\n\\begin{figure}\n\\centering{}%\n\\begin{tabular}{c}\n\\includegraphics[width=100mm]{HBT.jpg}\\tabularnewline\n\\tabularnewline\n\\end{tabular}\\caption{Hanbury Brown and Twiss setup. Here, APD stands for avalanche\nphotodiode.}\n\\label{fig:HBT} \n\\end{figure}\n\n\\subsubsection{Lower- and higher-order sub-Poissonian photon statistics}\n\nThe lower-order counterparts of antibunching and sub-Poissonian photon statistics are closely associated, as the presence of the latter ensures the possibility of observing the former (see \\cite{thapliyal2014higher,thapliyal2017comparison} for a detailed discussion). However, these two nonclassical features were shown to be independent phenomena in the past (\\cite{thapliyal2014higher,thapliyal2017comparison} and references therein). The higher-order counterpart of sub-Poissonian photon statistics can be introduced as\n\n\\begin{equation}\n\\begin{array}{lcccc}\n\\mathcal{D}_{h}(l-1) & = & \\sum\\limits _{e=0}^{l}\\sum\\limits _{f=1}^{e}S_{2}(e,\\,f)\\,^{l}C_{e}\\,\\left(-1\\right)^{e}d(f-1)\\langle N\\rangle^{l-e} & < & 0,\\end{array}\\label{eq:hosps22-1}\n\\end{equation}\nwhere $S_{2}(e,\\,f)$ is the Stirling number of the second kind, and $\\,^{l}C_{e}$ is the usual binomial coefficient.\n\n\\subsubsection{Higher-order squeezing}\n\nAs mentioned beforehand, the squeezing of a quadrature is defined in terms of the variance in the measured values of the quadrature (say, position or momentum) falling below the corresponding value for the coherent state, i.e., a minimum uncertainty state. The higher-order counterpart of squeezing is studied in two ways, namely Hong-Mandel and Hillery-type squeezing \\cite{hong1985higher,hong1985generation,hillery1987amplitude}. Specifically, the idea of higher-order squeezing originated from the pioneering work of Hong and Mandel \\cite{hong1985higher,hong1985generation}, who generalized the lower-order squeezing using the higher-order moments of the field quadrature. According to the Hong-Mandel criterion, the $l$th order squeezing can be observed if the $l$th moment (for even values of $l>2$) of a field quadrature operator is less than the corresponding coherent state value. The condition of Hong-Mandel type higher-order squeezing is given as follows \\cite{hong1985higher,hong1985generation}\n\n\\begin{equation}\nS(l)=\\frac{\\langle(\\Delta X)^{l}\\rangle-\\left(\\frac{1}{2}\\right)_{\\frac{l}{2}}}{\\left(\\frac{1}{2}\\right)_{\\frac{l}{2}}}<0,\\label{eq:Hong-Def}\n\\end{equation}\nwhere $S(l)$ denotes the higher-order squeezing witness, $\\Delta X$ is the quadrature (i.e., $\\Delta X_{1}$) as defined in Section \\ref{SMQO}, and $(x)_{l}$ is the conventional Pochhammer symbol. The inequality in Eq.
(\\ref{eq:Hong-Def})\ncan also be rewritten as \n\\begin{equation}\n\\begin{array}{lcl}\n\\langle(\\Delta X)^{l}\\rangle & < & \\left(\\frac{1}{2}\\right)_{\\frac{l}{2}}=\\frac{1}{2^{\\frac{l}{2}}}(l-1)!!\\end{array}\\label{eq:Hong-def2-2}\n\\end{equation}\nwith \n\\begin{equation}\n\\langle\\left(\\text{\\ensuremath{\\Delta}}\\text{X}\\right)^{l}\\rangle=\\sum\\limits _{r=0}^{l}\\sum\\limits _{i=0}^{\\frac{r}{2}}\\sum\\limits _{k=0}^{r-2i}\\left(-1\\right)^{r}\\frac{1}{2^{\\frac{l}{2}}}\\left(2i-1\\right)!^{2i}C_{k}{}^{l}C_{r}{}^{r}C_{2i}\\langle\\hat{a}^{\\dagger}+\\hat{a}\\rangle^{l-r}\\langle\\hat{a}^{\\dagger k}\\hat{a}^{r-2i-k}\\rangle.\\label{eq:cond2.1-1}\n\\end{equation}\n\n\\subsubsection{Klyshko's criterion}\n\nThis criterion is relatively simple since, to calculate this witness of nonclassicality, only three consecutive photon-number probabilities are required rather than all the terms. Negative values of $B(m)$ are a signature of nonclassicality present in the state. Klyshko introduced this criterion \\cite{klyshko1996observable} to investigate the nonclassical property using only three successive photon-number probabilities. In terms of the photon-number probability $p_{m}=\\langle m|\\rho|m\\rangle$ of the state with density matrix $\\rho$, Klyshko's criterion in the form of an inequality can be written as \n\\begin{equation}\nB(m)=(m+2)p_{m}p_{m+2}-(m+1)\\left(p_{m+1}\\right)^{2}<0.\\label{eq:Klyshko-1}\n\\end{equation}\n\n\\subsubsection{Vogel's criterion}\n\nMoments-based nonclassicality criteria were later cast into Vogel's nonclassicality criterion \\cite{shchukin2005nonclassical} in terms of a matrix of moments as\n\\begin{equation}\nv =\\left[\\begin{array}{ccc}\n1 & \\langle\\hat{a}\\rangle & \\langle\\hat{a}^{\\dagger}\\rangle\\\\\n\\langle\\hat{a}^{\\dagger}\\rangle & \\langle\\hat{a}^{\\dagger}\\hat{a}\\rangle & \\langle\\hat{a}^{\\dagger2}\\rangle\\\\\n\\langle\\hat{a}\\rangle & \\langle\\hat{a}^{2}\\rangle & \\langle\\hat{a}^{\\dagger}\\hat{a}\\rangle\n\\end{array}\\right].\\label{eq:vogel}\n\\end{equation}\nThe negative value of the determinant $dv$ of the matrix $v$ in Eq.
(\\ref{eq:vogel}) is signature of nonclassicality.\n\n\\subsubsection{Agarwal-Tara's criterion}\n\nThere were certain quantum states having negative $P$-function yet\nshowing no squeezing and sub-Poissonian behavior, to witness the nonclassicality\nresiding in those particular types of states Agarwal and Tara \\cite{agarwal1992nonclassical}\nintroduced this criterion which is again a moments based criterion.\nThis can be written in a matrix form and expressed as \n\\begin{equation}\nA_{3}=\\dfrac{\\det m^{(3)}}{\\det\\mu^{(3)}-\\det m^{(3)}}<0,\\label{eq:Agarwal-1}\n\\end{equation}\nwhere \n\\[\nm^{(3)}=\\begin{bmatrix}1 & m_{1} & m_{2}\\\\\nm_{1} & m_{2} & m_{3}\\\\\nm_{2} & m_{3} & m_{4}\n\\end{bmatrix}\n\\]\nand \n\\[\n\\mu^{(3)}=\\begin{bmatrix}1 & \\mu_{1} & \\mu_{2}\\\\\n\\mu_{1} & \\mu_{2} & \\mu_{3}\\\\\n\\mu_{2} & \\mu_{3} & \\mu_{4}\n\\end{bmatrix}.\n\\]\nThe matrix elements are defined as $m_{i}=\\langle\\hat{a}^{\\dagger i}\\hat{a}^{i}\\rangle$\nand $\\mu_{j}=(\\langle\\hat{a}^{\\dagger}\\hat{a}\\rangle)^{j}=(m_{1})^{j}$.\n\n\\subsubsection{Mandel $Q_{M}$ parameter}\n\nThe Mandel $Q_{M}$ parameter \\cite{mandel1979sub} illustrates the\nnonclassicality through photon number distribution in a quantum state.\nThe Mandel $Q_{M}$ parameter is defined as \n\\begin{equation}\nQ_{M}=\\frac{\\langle(\\hat{a}^{\\dagger}\\hat{a})^{2}\\rangle-\\langle\\hat{a}^{\\dagger}\\hat{a}\\rangle^{2}-\\langle\\hat{a}^{\\dagger}\\hat{a}\\rangle}{\\langle\\hat{a}^{\\dagger}\\hat{a}\\rangle}.\\label{eq:MandelQ}\n\\end{equation}\nThe negative values of $Q_{M}$ parameter essentially indicate the\nnegativity for $P$-function and so it gives a witness for nonclassicality.\nFor the Poissonian statistics it becomes 0, while for the sub-Poissonian\n(super-Poissonian) photon statistics it has negative (positive) values.\n\n\\subsection{Other quasiprobability distributions}\n\nInability to give a phase space description of quantum mechanics is\nexploited in terms of quasiprobability distributions (\\cite{thapliyal2015quasiprobability}\nand references therein). Later, it was found that they are useful\nas witnesses of nonclassicality. These real and normalized quasiprobability\ndistributions allow to calculate the expectation value of an operator\nas any classical probability distribution. One such quasiprobability\ndistributions is $Q$ function \\cite{husimi1940some}, and zeros of\nthis function are signature of nonclassicality. Another example is\nWigner \\cite{wigner1932quantum} function whose negative values corresponds\nto the nonclassicality.\n\n\\subsubsection{$Q$ function }\n\n$Q$ function \\cite{husimi1940some} is defined as \n\\begin{equation}\nQ=\\dfrac{1}{\\pi}\\langle\\beta|\\rho|\\beta\\rangle,\\label{eq:Q-function-1}\n\\end{equation}\nwhere $|\\beta\\rangle$ is the coherent state (\\ref{eq:CS}).\n\n\\subsubsection{Wigner function }\n\nAnother quasiprobability distribution is Wigner function formulated\nby Wigner in 1932 \\cite{wigner1932quantum} in the early stage of\nquantum mechanics, the motive was to connect the wavefunction approach\nto a probability distribution in phase space. Negativity of Wigner\nfunction represents the nonclassicality present in an arbitrary quantum\nstate. Also the ability to reconstruct the Wigner function experimentally\nmakes this approach more impactful than any other approach. 
Specifically,\nWigner function obtained through optical tomography can be used to\nobtain other quasidistributions, however, Wigner function is stronger\nwitness of nonclassicality than $Q$ function while is not singular\nlike $P$-function. Mathematically, it is expressed as \n\\begin{equation}\nW\\left(\\gamma,\\gamma^{\\star}\\right)=A\\exp\\left[-2\\left|\\gamma\\right|^{2}\\right]\\int d^{2}\\lambda\\langle-\\lambda|\\rho|\\lambda\\rangle\\exp\\left[-2\\left(\\gamma^{\\star}\\lambda-\\gamma\\lambda^{\\star}\\right)\\right].\\label{eq:wigner-def}\n\\end{equation}\nThe zeros of $Q$ function while the negativity of $P$-function and\nWigner function correspond to the nonclassical behavior of any arbitrary\nquantum state. It is worth stressing here that only\n$P$-function is both necessary and sufficient criterion of nonclassicality,\nwhile rest of the quasidistribution functions are only sufficient.\n\n\\subsection{Measures of nonclassicality \\label{subsec:Measures-of-nonclassicality}}\n\nIn the above section, we have seen that there exist numerous criteria\nof nonclassicality. However, most of these criteria only witness the\nnonclassicality. They do not provide any quantification of the nonclassicality. Except $P$-function and infinite set of vogel's criteria, all other\ncriteria are sufficient but not necessary. However,\nmany efforts have been made for the quantification of nonclassicality, e.g., in 1987, a distance-based measure of nonclassicality was introduced\nby Hillery \\cite{hillery1987nonclassical}. A trace norm based measure \\cite{mari2011directly} was introduced\nby Mari et al., for the set of all states having the positive Wigner\nfunction. In 1991, Lee gave a measure of nonclassicality known as\nnonclassical depth \\cite{lee1991measure}. However, in this work,\nwe will not study these measures. There are\ncertain measures those can be exploited in terms of entanglement,\nlike linear entropy \\cite{wei2003maximal}, which\nwe will use for our calculations and the same is described below.\n\n\\subsubsection{Linear entropy }\n\nIn 2005, a measure of nonclassicality was proposed as entanglement\npotential, which is the amount of entanglement in two output ports\nof a beam splitter with the quantum state $\\rho_{in}$ and vacuum\n$|0\\rangle\\langle0|$ sent through two input ports \\cite{asboth2005computable}.\nThe amount of entanglement quantifies the amount of nonclassicality\nin the input quantum state as classical state can not generate entanglement\nin the output. The post beam splitter state can be obtained as $\\rho_{out}=U\\left(\\rho_{in}\\otimes|0\\rangle\\langle0|\\right)U^{\\dagger}$\nwith $U=\\exp[-iH\\theta]$, where $H=(\\hat{a}^{\\dagger}\\hat{b}+\\hat{a}\\hat{b}^{\\dagger})\/2$,\nand $\\hat{a}^{\\dagger}\\,(\\hat{a})$, $\\hat{b}^{\\dagger}\\,(\\hat{b})$\nare the creation (annihilation) operators of the input modes. For\nexample, considering quantum state ($|\\psi\\rangle=\\sum\\limits _{n=0}^{\\infty}c_{n}|n\\rangle$)\nand a vacuum state $|0\\rangle$ as input states, we can write the\nanalytic expression of the two-mode output state as \n\\begin{equation}\n|\\phi\\rangle=U\\left(|\\psi\\rangle\\otimes|0\\rangle\\right)\\equiv U|\\psi,0\\rangle=\\sum_{n=0}^{\\infty}\\,\\frac{c_{n}}{2^{n\/2}}\\sum_{j=0}^{n}\\sqrt{^{n}C_{j}}\\,\\,|j,\\,n-j\\rangle.\\label{eq:inout_psi}\n\\end{equation}\nWe can measure the amount of entanglement in the output state to quantify\nthe amount of input nonclassicality in $|\\psi\\rangle$. 
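\nAs a consistency check of this construction, the output state of Eq. (\\ref{eq:inout_psi}) and the linear entropy introduced below can be evaluated numerically. The following minimal Python sketch is only an illustration under stated assumptions (an arbitrarily chosen Fock-space truncation and test inputs); it yields a vanishing entanglement potential for a coherent state and a linear entropy of $0.5$ for a single-photon Fock state.\n\\begin{verbatim}\nimport numpy as np\nfrom math import comb, factorial\n\nN = 25                                  # Fock-space truncation (assumption)\n\ndef entanglement_potential(c):\n    # Two-mode output of a 50:50 beam splitter with input (|psi>, |0>),\n    # following Eq. (inout_psi): phi[j, n-j] = c_n * sqrt(nCj) / 2**(n/2)\n    phi = np.zeros((N, N), dtype=complex)\n    for n in range(N):\n        for j in range(n + 1):\n            phi[j, n - j] += c[n] * comb(n, j) ** 0.5 / 2 ** (n / 2)\n    rho_b = phi.T @ phi.conj()          # reduced density matrix of one output mode\n    return 1.0 - np.trace(rho_b @ rho_b).real   # linear entropy, Eq. (le)\n\nalpha = 1.0                             # hypothetical coherent amplitude\ncoh = np.array([np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / factorial(n) ** 0.5\n                for n in range(N)], dtype=complex)\nfock1 = np.zeros(N, dtype=complex)\nfock1[1] = 1.0                          # single-photon Fock state |1>\n\nprint(entanglement_potential(coh))      # ~0: a classical input generates no entanglement\nprint(entanglement_potential(fock1))    # 0.5 for the single-photon input\n\\end{verbatim}\n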
Here, we use\nlinear entropy of single mode subsystem (obtained after tracing over\nthe other subsystem) as entanglement potential. The linear entropy\nfor an arbitrary bipartite state $\\rho_{AB}$ is defined as \\cite{wei2003maximal}\n\\begin{equation}\n\\mathcal{L}=1-{\\rm Tr}\\left(\\rho_{B}^{2}\\right),\\label{eq:le}\n\\end{equation}\nwhere $\\rho_{B}$ is obtained by tracing over subsystem $A$.\n\n\\section{Analytic tools for the study of phase properties of nonclassical\nstates \\label{sec:Analytic-tools-forphase}}\n\nIn this section, we aim to introduce the parameters that are used\nto study phase properties of a given quantum state under consideration\nin this section.\n\n\\subsection{Phase distribution function}\n\nA distribution function allows us to calculate expectation values\nof an operator analogous to that from the corresponding density matrix.\nPhase distribution function for a given density operator \\cite{banerjee2007phase,agarwal1992classical}\ncan be defined as \n\\begin{equation}\nP_{\\theta}=\\frac{1}{2\\pi}\\langle\\theta|\\varrho|\\theta\\rangle,\\label{eq:Phase-Distridution-1}\n\\end{equation}\nwhere the phase state $|\\theta\\rangle$, complementary to the number\nstate $|n\\rangle$, is defined \\cite{agarwal1992classical} as \n\\begin{equation}\n|\\theta\\rangle=\\sum_{n=0}^{\\infty}e^{\\iota n\\theta}|n\\rangle.\\label{eq:phase-1}\n\\end{equation}\n\n\n\\subsection{Phase dispersion}\n\nA known application of phase distribution function (\\ref{eq:Phase-Distridution-1})\nis that it can be used to quantify the quantum phase fluctuation.\nAlthough the variance is also used occasionally as a measure of phase\nfluctuation, it has a drawback that it depends on the origin of phase\nintegration \\cite{banerjee2007phase}. A measure of phase fluctuation,\nfree from this problem, is phase dispersion \\cite{perinova1998phase}\ndefined as \n\\begin{equation}\nD=1-\\left|\\intop_{-\\pi}^{\\pi}d\\theta\\exp\\left[-\\iota\\theta\\right]P_{\\theta}\\right|^{2}.\\label{eq:Dispersion-1}\n\\end{equation}\n\n\n\\subsection{Angular Q function}\n\nAnalogous to the phase distribution $P_{\\theta}$, phase distributions\nare also defined as radius integrated quasidistribution functions\nwhich are used as the witnesses for quantumness \\cite{thapliyal2015quasiprobability}.\nOne such phase distribution function based on the angular part of\nthe $Q$ function is studied in \\cite{leonhardt1993phase,leonhardt1995canonical}.\nSpecifically, the angular $Q$ function is defined as \n\\begin{equation}\nQ_{\\theta_{1}}=\\intop_{0}^{\\infty}Q\\left(\\beta,\\beta^{\\star}\\right)\\left|\\beta\\right|d\\left|\\beta\\right|,\\label{eq:ang-Qf-1}\n\\end{equation}\nwhere the $Q$ function \\cite{husimi1940some} is defined in Eq. (\\ref{eq:Q-function-1}).\n\n\\subsection{Phase fluctuation}\n\nIn attempts to get rid of the limitations of the Hermitian phase operator\nof Dirac \\cite{dirac1927quantum}, Louisell \\cite{louisell1963amplitude}\nfirst mentioned that bare phase operator should be replaced by periodic\nfunctions. 
As a consequence, sine $(\\mathcal{\\hat{S}})$ and cosine\n$(\\hat{\\mathcal{C}})$ operators appeared, explicit forms of these\noperators were given by Susskind and Glogower \\cite{susskind1964quantum},\nand further modified by Barnett and Pegg \\cite{barnett1986phase}\nas \n\\begin{equation}\n\\mathcal{\\hat{S}}=\\frac{\\hat{a}-\\hat{a}^{\\dagger}}{2\\iota\\left(\\bar{N}+\\frac{1}{2}\\right)^{\\frac{1}{2}}}\\label{eq:fluctuation1-1}\n\\end{equation}\nand \n\\begin{equation}\n\\hat{\\mathcal{C}}=\\frac{\\hat{a}+\\hat{a}^{\\dagger}}{2\\left(\\bar{N}+\\frac{1}{2}\\right)^{\\frac{1}{2}}}.\\label{eq:fluctuation2-1}\n\\end{equation}\nHere, $\\bar{N}$ is the average number of photons in the measured\nfield, and here we refrain our discussion to Barnett and Pegg sine\nand cosine operators \\cite{barnett1986phase}. Carruthers and Nieto\n\\cite{carruthers1968phase} have introduced three quantum phase fluctuation\nparameters in terms of sine and cosine operators \n\\begin{equation}\nU=\\left(\\Delta N\\right)^{2}\\left[\\left(\\Delta\\mathcal{S}\\right)^{2}+\\left(\\Delta\\mathcal{C}\\right)^{2}\\right]\/\\left[\\langle\\mathcal{\\hat{S}}\\rangle^{2}+\\langle\\hat{\\mathcal{C}}\\rangle^{2}\\right],\\label{eq:fluctuation3-1}\n\\end{equation}\n\\begin{equation}\nS=\\left(\\Delta N\\right)^{2}\\left(\\Delta\\mathcal{S}\\right)^{2},\\label{eq:fluctuation4-1}\n\\end{equation}\nand \n\\begin{equation}\nQ=S\/\\langle\\hat{\\mathcal{C}}\\rangle^{2}.\\label{eq:fluctuation5-1}\n\\end{equation}\n These three phase fluctuation parameters $U$, $S$\nand $Q$ show phase properties of PADFS and PSDFS, while $U$ parameter\nis shown relevant as a witness of nonclassicality (antibunching).\n\n\\subsection{Quantum phase estimation parameter}\n\nQuantum phase estimation is performed by sending the input state through\na Mach-Zehnder interferometer and applying the phase to be determined\n($\\phi$) on one of the arms of the interferometer. To study the phase\nestimation using Mach-Zehnder interferometer, angular momentum operators\n\\cite{sanders1995optimal,demkowicz2015quantum}, defined as \n\\begin{equation}\n\\hat{J_{x}}=\\frac{1}{2}\\left(\\hat{a}^{\\dagger}\\hat{b}+\\hat{b}^{\\dagger}\\hat{a}\\right),\\label{eq:ang-mom1-1}\n\\end{equation}\n\\begin{equation}\n\\hat{J_{y}}=\\frac{\\iota}{2}\\left(\\hat{b}^{\\dagger}\\hat{a}-\\hat{a}^{\\dagger}\\hat{b}\\right),\\label{eq:ang-mom2-1}\n\\end{equation}\nand \n\\begin{equation}\n\\hat{J_{z}}=\\frac{1}{2}\\left(\\hat{a}^{\\dagger}\\hat{a}-\\hat{b}^{\\dagger}\\hat{b}\\right),\\label{eq:ang-mom3-1}\n\\end{equation}\nare used. Here, $\\hat{a}$ and $\\hat{b}$ are the annihilation operators\nfor the modes corresponding to two input ports of the Mach-Zehnder\ninterferometer. 
The average value of the $\\hat{J_{z}}$ operator in the output of the Mach-Zehnder interferometer, which is one-half of the difference of photon numbers in the two output ports (\\ref{eq:ang-mom3-1}), can be written as \n\\begin{equation}\n\\langle\\hat{J_{z}}\\rangle=\\cos\\text{\\ensuremath{\\phi}}\\langle\\hat{J_{z}}\\rangle_{in}-\\sin\\text{\\ensuremath{\\phi}}\\langle\\hat{J_{x}}\\rangle_{in}.\n\\end{equation}\nTherefore, the variance in the measured value of the operator $\\hat{J_{z}}$ can be computed as \n\\begin{equation}\n\\left(\\Delta J_{z}\\right)^{2}=\\cos^{2}\\phi\\left(\\Delta{J_{z}}\\right)_{in}^{2}+\\sin^{2}\\phi\\left(\\Delta{J_{x}}\\right)_{in}^{2}-2\\sin\\text{\\ensuremath{\\phi\\,\\cos\\phi\\,}cov}\\left(\\hat{J_{x}},\\hat{J_{z}}\\right)_{in},\n\\end{equation}\nwhere the covariance of the two observables is defined as \n\\begin{equation}\n\\text{{\\rm cov}}\\left(\\hat{J_{x}},\\hat{J_{z}}\\right)=\\frac{1}{2}\\langle\\hat{J_{x}}\\hat{J_{z}}+\\hat{J_{z}}\\hat{J_{x}}\\rangle-\\langle\\hat{J_{x}}\\rangle\\langle\\hat{J_{z}}\\rangle.\n\\end{equation}\nThis allows us to quantify the precision in phase estimation \\cite{demkowicz2015quantum} as \n\\begin{equation}\n\\Delta\\phi=\\frac{\\Delta{J_{z}}}{\\left|\\frac{d\\langle\\hat{J_{z}}\\rangle}{d\\phi}\\right|}.\\label{eq:PE-1}\n\\end{equation}\nBefore we proceed further and conclude this chapter by noting the structure of the rest of the thesis, it would be apt to note that there exist various methods of quantum state engineering (some of which have already been mentioned), and that photon addition, subtraction, filtration, hole punching, etc., can be viewed as examples of quantum state engineering processes. In the rest of the thesis, these processes will be studied in detail.\n\n\\section{Structure of the rest of the thesis\\label{sec:Structure-of-the thesis}}\n\nThis thesis has 6 chapters. The next 4 chapters are focused on the study of the nonclassical and phase properties of the engineered quantum states, and the last chapter is dedicated to the conclusion. These chapters, and thus the rest of this thesis, are organized as follows.\n\nIn Chapter \\ref{cha:Introduction1}, in Section \\ref{sec:Quantum-states-of-4}, the quantum states of our interest (i.e., PADFS and PSDFS) have been introduced in detail. In Chapter \\ref{cha:PADFS-PSDFS}, in Section \\ref{sec:Nonclassicality-witnesses}, the analytical expressions of various witnesses of nonclassicality are reported. Further, the existence of various lower- and higher-order nonclassical features in PADFS and PSDFS is shown through a set of plots. Finally, we conclude in Section \\ref{sec:Conclusions}.\n\nIn Chapter \\ref{cha:phase}, in Section \\ref{sec:phase-witnesses}, we investigate the phase properties of PADFS and PSDFS from a number of perspectives. Finally, the chapter is concluded in Section \\ref{sec:Conclusions-1}.\n\nIn Chapter \\ref{cha:PASDFS}, we describe the quantum state of interest (i.e., PASDFS) in the Fock basis and calculate the analytic expressions for the higher-order moments of the relevant field operators for this state. In Section \\ref{sec:Nonclassicality-witnesses-2}, we investigate the possibilities of witnessing various nonclassical features in PASDFS and its limiting cases by using a set of moments-based criteria for nonclassicality. Variations of the nonclassical features (witnessed through different criteria) with various physical parameters are also discussed here. In Section \\ref{sec:Phase-properties-of}, the phase properties of PASDFS are studied.
The $Q$ function for PASDFS is obtained in Section \\ref{sec:Qfn}. Finally, we conclude in Section \\ref{sec:Conclusions-5}.\n\nIn Chapter \\ref{cha:QSE-1}, in Section \\ref{sec:Quantum-states-of-1}, we have introduced the quantum states of our interest, which include ECS, BS, KS, VFECS, VFBS, VFKS, PAECS, PABS, and PAKS. In Section \\ref{sec:Nonclassicality-witnesses-1}, we have investigated the nonclassical properties of these states using various witnesses of lower- and higher-order nonclassicality as well as a measure of nonclassicality. Specifically, in this section, we have compared the nonclassicality features found in the vacuum filtered and single photon added versions of the states of our interest using the witnesses of higher-order antibunching (HOA), higher-order squeezing (HOS), and higher-order sub-Poissonian photon statistics (HOSPS). Finally, in Section \\ref{sec:Conclusion}, the results are analyzed, and the chapter is concluded.\n\nFinally, the thesis is concluded in Chapter \\ref{cha:Conclusions-and-Scope}, where we have summarized the findings reported in Chapters \\ref{cha:PADFS-PSDFS}-\\ref{cha:QSE-1} and have emphasized the main conclusions of the present thesis. We have also discussed the scope for future work.\n\n\n\\chapter{Lower- and higher-order nonclassical properties of photon added and\nsubtracted displaced Fock state\\textsc{\\label{cha:PADFS-PSDFS}}}\n\nIn this chapter, which is based on \\cite{malpani2019lower}, we aim to study the nonclassical properties of the PADFS and PSDFS (which are already introduced in Section \\ref{subsec:Photon-added-and sub}) using the witnesses of nonclassicality introduced in Section \\ref{subsec:Witnesses-of-nonclassicality}.\n\n\n\\section{Introduction\\label{sec:Introduction-chap2} }\n\nAs we have mentioned in Chapter \\ref{cha:Introduction1}, with the advent of quantum state engineering \\cite{vogel1993quantum,sperling2014quantum,miranowicz2004dissipation,marchiolli2004engineering} and quantum information processing (\\cite{pathak2013elements} and references therein), the study of the nonclassical properties of engineered quantum states has become a very important field. Quantum state engineering is helpful in realizing non-Gaussianity inducing operations, like photon addition and subtraction \\cite{zavatta2004quantum,podoshvedov2014extraction}. Keeping this in mind, in what follows in this chapter, we aim to study the nonclassical properties of a set of engineered quantum states (both photon added and subtracted) which can be produced by using the above mentioned techniques.\n\n\\begin{figure}[h]\n\\begin{centering}\n\\begin{tabular}{c}\n\\includegraphics[width=100mm]{DFS.pdf}\\tabularnewline\n\\end{tabular}\n\\par\\end{centering}\n\\caption{\\label{DFS} A schematic diagram for the generation of PSDFS (in (a)\nand (b)) and PADFS (in (c) and (d)). In (a) and (c) ((b) and (d)),\nsingle-mode (two-mode) squeezed vacuum is used for the generation of the\ndesired state. Here, NLC corresponds to a nonlinear crystal, and D1 and\nD2 are photon number resolving detectors.}\n\\end{figure}\nIt is already mentioned in Chapter 1 that a state having a negative $P$-function is referred to as a nonclassical state. Such a state cannot be expressed as a mixture of coherent states and does not possess a classical analogue.
In contrast to these states, coherent states are classical, but neither their finite dimensional versions \\cite{miranowicz2004dissipation,alam2017higher} nor their generalized versions are classical \\cite{satyanarayana1985generalized,thapliyal2016tomograms,thapliyal2015quasiprobability,banerjee2007phase}. Here, we would like to focus on photon added and subtracted versions of a particular type of generalized coherent state, which is also referred to as the displaced Fock state (DFS). To be precise, a state of the form $|\\psi\\rangle=D(\\alpha)|n\\rangle$, where $D(\\alpha)$ is the displacement operator and $|n\\rangle$ is a Fock state, is referred to as a generalized coherent state (see Section \\ref{sec:Quantum-states-of-4} of Chapter \\ref{cha:Introduction1}), as this state reduces to a coherent state in the limit $n=0$. However, from the structure of the state it seems more appropriate to call this state the DFS, and this seems to be the nomenclature usually adopted in the literature \\cite{keil2011classical,wunsche1991displaced,blavzek1994high,moya1995generation}. In some other works, it is referred to as a displaced number state \\cite{ziesel2013experimental,de2006nonlinear,dodonov2005decoherence}, but all these names are equivalent; and in what follows, we will refer to it as DFS. This is an extremely interesting quantum state for various reasons. Specifically, its relevance in various areas of quantum optics is known. For example, in the context of cavity QED, it constitutes the eigenstates of the Jaynes-Cummings system with coherently driven atoms \\cite{alsing1992dynamic}. Naturally, various lower-order nonclassical properties and a set of other quantum features of DFS have already been studied. Specifically, quasiprobability distributions of DFS were studied in \\cite{wunsche1991displaced}, phase fluctuations of DFS were investigated in \\cite{zheng1992fluctuation}, decoherence of superpositions of DFS was discussed in \\cite{dodonov2005decoherence}, the $Q$ function, Wigner function, and probability distribution of DFS were studied in \\cite{de1990properties}, and the Pancharatnam phase of DFS has been studied in \\cite{mendas1993pancharatnam}. Further, in the context of non-optical DFS, various possibilities of generating DFS from Fock states by using a general driven time-dependent oscillator have been discussed in \\cite{lo1991generating}; and in the trapped-ion system, quantum interference effects have been studied for the superposition of DFS \\cite{marchiolli2004engineering}. Thus, DFS seems to be a well studied quantum state, but it may be noted that little effort has yet been made to study the higher-order nonclassical properties of DFS.
This is a relevant observation as, in the recent past, it has been observed that higher-order nonclassicality has various applications \\cite{hillery1999quantum,banerjee2017quantum,sharma2017quantumauction,thapliyal2017protocols}, and that it can be used to detect the existence of weak nonclassical characters \\cite{thapliyal2014higher,thapliyal2014nonclassical,verma2010generalized,thapliyal2017comparison,alam2016nonclassical,alam2015approximate,thapliyal2017nonclassicality,prakash2006higher,prakash2010detection,das2018lower}. Further, various higher-order nonclassical features have been experimentally detected \\cite{allevi2012measuring,allevi2012high,avenhaus2010accessing,perina2017higher}. However, we do not want to restrict ourselves to DFS; rather, we wish to focus on the lower- and higher-order nonclassical properties of a set of even more general states, namely the photon added DFS (PADFS) and the photon subtracted DFS (PSDFS). The general nature of these states can be visualized easily, as in the special case in which no photon is added (subtracted), PADFS (PSDFS) would reduce to DFS. Further, for $n=0$, PADFS would reduce to the photon added coherent state, which has been widely studied (\\cite{agarwal1991nonclassical,verma2008higher,thapliyal2017comparison} and references therein) and experimentally realized \\cite{zavatta2004quantum,zavatta2005single}. Here it is worth noting that DFS has also been generated experimentally by superposing a Fock state with a coherent state on a beam splitter \\cite{lvovsky2002synthesis}. Further, an alternative method for the generation of DFS has been proposed by Oliveira et al. \\cite{de2005alternative}. From the fact that the photon added coherent state and DFS have already been generated experimentally, and the fact that photon added states can be prepared via conditional measurement on a beam splitter, it appears that PADFS and PSDFS can also be built in the lab. In fact, inspired by these experiments, we have proposed a schematic diagram for generating the PADFS and PSDFS in Figure \\ref{DFS} using single-mode and two-mode squeezed vacuum states. Specifically, using three (two) highly transmitting beam splitters, a conditional measurement of single photons at both detectors D1 and D2 in Figure \\ref{DFS} (a) (Figure \\ref{DFS} (b)) would result in a single photon subtracted DFS as output from a single-mode (two-mode) squeezed vacuum state. Similarly, to generate PADFS, the conditional subtraction of a photon is replaced by photon addition, using a nonlinear crystal and heralding on a successful measurement of a single photon in one output mode to ensure the generation of a single photon added DFS. This fact, their general nature, and the fact that the nonclassical properties of PADFS and PSDFS have not yet been accorded sufficient attention have motivated us to perform this study.\n\nMotivated by the above facts, in what follows, we investigate the possibilities of observing lower- and higher-order sub-Poissonian photon statistics, antibunching, and squeezing in PADFS and PSDFS. We have also studied the nonclassical properties of these states through a set of other witnesses of nonclassicality, e.g., zeros of the $Q$ function, the Mandel $Q_{M}$ parameter, Klyshko's criterion, and Agarwal-Tara's criterion.
These witnesses of nonclassicality successfully establish that both PADFS and PSDFS (along with most of the states to which these two states reduce at different limits) are highly nonclassical. Thus, making use of the analytical expressions of the moments of creation and annihilation operators, discussed below, facilitates an analytical understanding of most of these nonclassicality witnesses.\n\n\\section{Higher-order moment for PADFS and PSDFS \\label{sec:Quantum-states-of}}\n\nWe have already mentioned that we are interested in PADFS and PSDFS. In what follows, we will see that various experimentally measurable nonclassicality witnesses can be expressed in terms of the moments of annihilation and creation operators \\cite{allevi2012measuring,allevi2012high,avenhaus2010accessing,perina2017higher,miranowicz2010testing}. To utilize those witnesses to identify the signatures of nonclassicality, we will compute an analytic expression for the most general moment, $\\langle\\hat{a}^{\\dagger q}\\hat{a}^{r}\\rangle$, with $q$ and $r$ being non-negative integers. This is the most general moment in the sense that any other moment can be obtained as a special case of it. For example, if we need $\\langle\\hat{a}^{2}\\rangle,$ we just require $q=0$ and $r=2$. Thus, an analytic expression for $\\langle\\hat{a}^{\\dagger q}\\hat{a}^{r}\\rangle$ would essentially help us to obtain an analytic expression for any moments-based witness of nonclassicality. Further, the analytic expressions of the moments obtained for PADFS and PSDFS would also help us to obtain the nonclassical features of the set of states obtained in the limiting cases, like the Fock state, DFS, and photon added coherent state. Keeping this in mind, we have computed $\\langle\\psi_{+}(u,n,\\alpha)|\\hat{a}^{\\dagger q}\\hat{a}^{r}|\\psi_{+}(u,n,\\alpha)\\rangle$ and $\\langle\\psi_{-}(v,n,\\alpha)|\\hat{a}^{\\dagger q}\\hat{a}^{r}|\\psi_{-}(v,n,\\alpha)\\rangle$ using Eqs. (\\ref{eq:PADFS}) and (\\ref{eq:PSDFS}), and provide the final analytic expressions of these moments without going into the mathematical details to maintain the flow of the chapter. The obtained expressions for the above mentioned moments for PADFS and PSDFS are {\\small{}\n\\begin{eqnarray}\\label{eq:PA-expectation}\n\\langle\\hat{a}^{\\dagger q}\\hat{a}^{r}\\rangle_{{\\rm PADFS}} & = & \\langle\\psi_{+}(u,n,\\alpha)|\\hat{a}^{\\dagger q}\\hat{a}^{r}|\\psi_{+}(u,n,\\alpha)\\rangle\\nonumber \\\\\n & = & \\frac{N_{+}^{2}}{n!}\\sum\\limits _{p,p'=0}^{n}{n \\choose p}{n \\choose p'}(-\\alpha^{\\star})^{(n-p)}(-\\alpha)^{(n-p')}\\\\\n & \\times & \\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\sum\\limits _{m=0}^{\\infty}\\frac{\\alpha^{m}(\\alpha^{\\star})^{m+p-p'-r+q}(m+p+u)!(m+p+u-r+q)!}{m!(m+p-p'-r+q)!(m+p+u-r)!},\\nonumber \n\\end{eqnarray}\n}and {\\small{}\n\\begin{eqnarray}\\label{eq:PS-expectation}\n\\langle\\hat{a}^{\\dagger q}\\hat{a}^{r}\\rangle_{{\\rm PSDFS}} & = & \\langle\\psi_{-}(v,n,\\alpha)|\\hat{a}^{\\dagger q}\\hat{a}^{r}|\\psi_{-}(v,n,\\alpha)\\rangle\\nonumber \\\\\n & = & \\frac{N_{-}^{2}}{n!}\\sum\\limits _{p,p'=0}^{n}{n \\choose p}{n \\choose p'}(-\\alpha^{\\star})^{(n-p)}(-\\alpha)^{(n-p')}\\nonumber \\\\\n & \\times & \\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\sum\\limits _{m=0}^{\\infty}\\frac{\\alpha^{m}(\\alpha^{*})^{m+p-p'-r+q}(m+p)!(m+p-r+q)!}{m!(m+p-p'-r+q)!(m+p-v-r)!},\n\\end{eqnarray}\n}respectively. The values of the normalization constants for PADFS and PSDFS are already given in Eqs.
(\\ref{eq:norP}) and (\\ref{eq:norS}), respectively. In the following section, we shall investigate the possibilities of observing various types of lower- and higher-order nonclassical features in PADFS and PSDFS by using Eqs. (\\ref{eq:PA-expectation}) and (\\ref{eq:PS-expectation}).\n\n\\section{Nonclassical features of PADFS and PSDFS \\label{sec:Nonclassicality-witnesses}}\n\nThe moments of the annihilation and creation operators for PADFS and PSDFS obtained in the previous section enable us to study the nonclassical properties of these states using a set of moments-based criteria of nonclassicality \\cite{miranowicz2010testing,naikoo2018probing}. In the recent past, an infinite set of these moments-based criteria has been shown to be equivalent to the $P$-function-based criterion, i.e., it becomes both necessary and sufficient \\cite{richter2002nonclassicality,shchukin2005nonclassical}. However, in this section, we will use a subset of this infinite set as witnesses of nonclassicality to investigate various nonclassical properties of the PADFS and PSDFS. Specifically, nonclassicality will be witnessed through the Mandel $Q_{M}$ parameter, the criteria of lower- and higher-order antibunching, Agarwal-Tara's criterion, Klyshko's criterion, the criteria of higher-order sub-Poissonian photon statistics, zeros of the $Q$ function, etc. As all these criteria have already been introduced in Section \\ref{subsec:Witnesses-of-nonclassicality}, here we may directly discuss the plots and results.\n\n\\subsection{Mandel $Q_{M}$ Parameter}\n\nNegativity of this parameter indicates nonclassicality, and it can be calculated using Eqs. (\\ref{eq:PA-expectation}) and (\\ref{eq:PS-expectation}). In Figure \\ref{Mandel-Q-parameter}, the dependence of $Q_{M}$ on the state parameter $\\alpha$ and the non-Gaussianity inducing parameters (i.e., the photon addition, subtraction, and Fock parameters, as they can induce non-Gaussianity in a quantum state) is shown. Specifically, the variation of the $Q_{M}$ parameter for PADFS and PSDFS is shown with the state parameter $\\alpha$, where the effect of the number of photons added\/subtracted and of the initial Fock state is also established. For $\\alpha=0$, the PADFS with an arbitrary number of added photons has $Q_{M}$ parameter $-1$, which can be attributed to the fact that the final state, which reduces to a Fock state ($|1\\rangle$ being chosen to be displaced in this case), is the most nonclassical state (cf. Figure \\ref{Mandel-Q-parameter} (a)). With an increase in the number of photons added to the DFS, the depth of the nonclassicality witness $Q_{M}$ increases. However, the witness of nonclassicality becomes less negative for higher values of the displacement parameter. In contrast to photon addition, with the subtraction of photons from the DFS the $Q_{M}$ parameter becomes almost zero for smaller values of the displacement parameter $\\alpha$, as shown in Figure \\ref{Mandel-Q-parameter} (c). This behavior can be attributed to the fact that photon subtraction from $D\\left(\\alpha\\right)|1\\rangle$ for small values of $\\alpha$ will most likely yield the vacuum state. Also, with an increase in the displacement parameter, the witness of nonclassicality becomes more negative, as photon subtraction becomes more effective for a higher average number of photons in the DFS. However, for larger values of the displacement parameter the nonclassicality disappears, analogously to the PADFS.
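\nThese trends can also be cross-checked by brute force: the states themselves can be constructed numerically in a truncated Fock basis and the Mandel $Q_{M}$ parameter of Eq. (\\ref{eq:MandelQ}) evaluated directly. The following minimal Python sketch is only an illustrative consistency check of Eqs. (\\ref{eq:PA-expectation}) and (\\ref{eq:PS-expectation}); the truncation, the values of $\\alpha$, and the choice $u=v=n=1$ are arbitrary assumptions made here for illustration.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\nN = 60                                     # Fock-space truncation (assumption)\na = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator\nad = a.conj().T                            # creation operator\n\ndef padfs(u, n, alpha):\n    # |psi_+(u,n,alpha)> ~ a^dag^u D(alpha)|n>, normalized numerically\n    D = expm(alpha * ad - np.conj(alpha) * a)\n    psi = np.linalg.matrix_power(ad, u) @ D[:, n]\n    return psi / np.linalg.norm(psi)\n\ndef psdfs(v, n, alpha):\n    # |psi_-(v,n,alpha)> ~ a^v D(alpha)|n>, normalized numerically\n    D = expm(alpha * ad - np.conj(alpha) * a)\n    psi = np.linalg.matrix_power(a, v) @ D[:, n]\n    return psi / np.linalg.norm(psi)\n\ndef mandel_q(psi):\n    num = np.arange(N)\n    p = np.abs(psi) ** 2                   # photon-number distribution\n    mean = np.sum(num * p)\n    var = np.sum(num ** 2 * p) - mean ** 2\n    return var / mean - 1.0                # Eq. (MandelQ)\n\nfor alpha in (0.5, 1.0, 2.0):              # hypothetical displacement values\n    print(alpha, mandel_q(padfs(1, 1, alpha)), mandel_q(psdfs(1, 1, alpha)))\n\\end{verbatim}\n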
For large values of $\\alpha$, the displacement parameter dominates the behavior of the state, and thus the state behaves analogously to a coherent state.\n\nAs Fock states are known to be nonclassical, and photon addition and subtraction are established as nonclassicality inducing operations, it would be worth comparing the effect of these two independent factors responsible for the observed nonclassical features in the present case. To perform this study, we have shown the variation of the single photon added (subtracted) DFS with different initial Fock states in Figure \\ref{Mandel-Q-parameter} (b) (Figure \\ref{Mandel-Q-parameter} (d)). Specifically, the nonclassicality present in PADFS decays faster for the higher values of the Fock states with increasing displacement parameter (cf. Figure \\ref{Mandel-Q-parameter} (b)). However, such a nature was not present in the PSDFS shown in Figure \\ref{Mandel-Q-parameter} (d). Note that the variation of the $Q_{M}$ parameter with $\\alpha$ starts from 0 ($-1$) iff $u\\leq n$ $\\left(u>n\\right)$. For instance, if $u=n=1$, i.e., corresponding to the state $\\hat{a}D\\left(\\alpha\\right)|1\\rangle$, the nonclassicality witness is zero for $\\alpha=0$ as the state reduces to vacuum. Therefore, the present study reveals that photon addition is a stronger factor for the nonclassicality present in the state when compared to the initial Fock state chosen to be displaced, whereas photon subtraction is a preferred choice for large values of the displacement parameter, in contrast to higher values of the Fock state to be displaced with small $\\alpha$. Between photon addition and subtraction, addition is a preferred choice for smaller values of the displacement parameter, while the choice between addition and subtraction becomes immaterial for large $\\alpha$.\n\n\\begin{figure}\n\\centering %\n\\begin{tabular}{cc}\n\\includegraphics[width=60mm]{PAGCS-mandel-diffphoton.pdf} & \\includegraphics[width=60mm]{PAGCS-mandel-diffnp.pdf} \\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{PSGCS-mandel-diffphoton.pdf} & \\includegraphics[width=60mm]{PSGCS-mandel-diffnp.pdf} \\tabularnewline\n(c) & (d) \\tabularnewline\n\\end{tabular}\\caption{\\label{Mandel-Q-parameter} Variation of the Mandel $Q_{M}$ parameter\nfor PADFS (in (a) and (b)) and PSDFS (in (c) and (d)) is shown with\nthe displacement parameter $\\alpha$. In (a) and (c), the number\nof photons added\/subtracted (i.e., $u$ or $v$) is changed for the\nsame initial Fock state $|1\\rangle$. Different initial Fock states\n$|n\\rangle$ are chosen to be displaced in (b) and (d) for single\nphoton addition\/subtraction. The blue curve corresponds\nto vacuum for $\\alpha=0$ and thus starts from $0$, unlike the rest of\nthe states, which are Fock states ($n\\protect\\neq0)$ in the limiting\ncase. Therefore, nonclassicality first increases with increasing $\\alpha$\nbefore decreasing, as in the rest of the cases.}\n\\end{figure}\n\n\\subsection{Lower- and higher-order antibunching}\n\nThe nonclassicality reflected by the lower-order antibunching criterion obtained here is the same as that reflected by the Mandel $Q_{M}$ parameter $\\left(Q_{M}=\\frac{d\\left(1\\right)}{\\left\\langle \\hat{a}^{\\dagger}\\hat{a}\\right\\rangle }\\right)$ illustrated in Figure \\ref{Mandel-Q-parameter}. Therefore, we will rather discuss here the possibility of observing higher-order antibunching in the quantum states of our interest using Eqs. (\\ref{eq:PA-expectation}) and (\\ref{eq:PS-expectation}) in Eq. (\\ref{eq:HOA-1}).
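\nSince $\\langle\\hat{a}^{\\dagger l}\\hat{a}^{l}\\rangle$ is a factorial moment of the photon-number distribution, i.e., $\\langle\\hat{a}^{\\dagger l}\\hat{a}^{l}\\rangle=\\sum_{m\\geq l}\\frac{m!}{(m-l)!}p_{m}$, the witness $d(l-1)$ can also be evaluated directly from $p_{m}$. The following minimal Python sketch illustrates this for the single photon added coherent state (the limiting case $n=0$, $u=1$ of PADFS); the truncation and the value of $\\alpha$ are arbitrary choices made only for illustration.\n\\begin{verbatim}\nimport numpy as np\nfrom math import factorial\n\nN = 50                                     # Fock-space truncation (assumption)\nalpha = 1.0                                # hypothetical displacement\n\n# single photon added coherent state, i.e., PADFS with u=1, n=0 (unnormalized)\nc = np.zeros(N)\nfor m in range(N - 1):\n    c[m + 1] = alpha ** m / factorial(m) ** 0.5 * (m + 1) ** 0.5\nc /= np.linalg.norm(c)\np = c ** 2                                 # photon-number distribution p_m\n\ndef fact_moment(p, l):\n    # <a^dag^l a^l> = sum_m m!/(m-l)! p_m (falling factorial moment)\n    return sum(factorial(m) / factorial(m - l) * p[m] for m in range(l, N))\n\nfor l in (2, 3, 4):\n    d = fact_moment(p, l) - fact_moment(p, 1) ** l   # Eq. (HOA-1)\n    print(l - 1, d)\n\\end{verbatim}\n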
Specifically, the depth of the nonclassicality witness can be observed to increase with order for both PADFS and PSDFS, as depicted in Figure \\ref{HOA} (a) and (d). This fact is consistent with the earlier observations (\\cite{thapliyal2014higher,thapliyal2014nonclassical,thapliyal2017comparison,thapliyal2017nonclassicality,alam2017lower} and references therein) that higher-order nonclassicality criteria are useful in detecting weaker nonclassicality. On top of that, higher-order antibunching can be observed for larger values of the displacement parameter $\\alpha$, where lower-order antibunching is not present. The presence of higher-order nonclassicality in the absence of its lower-order counterpart establishes the relevance of the present study.\n\nThe depth of the nonclassicality parameter ($d\\left(l-1\\right)$) was observed to decrease with an increase in the number of photons subtracted from DFS for small values of $\\alpha$ in Figure \\ref{Mandel-Q-parameter} (c). A similar nature is observed in Figure \\ref{HOA} (e), which shows that for higher values of the displacement parameter, the depth of the higher-order nonclassicality witness increases with the number of subtracted photons. Therefore, not only the depth of nonclassicality but also the range of the displacement parameter over which higher-order antibunching is present increases with photon addition\/subtraction (cf. Figure \\ref{HOA} (b) and (e)). With an increase in the value of the Fock state parameter $n$, the depth of the higher-order nonclassicality witness increases (decreases) for smaller (larger) values of the displacement parameter in both PADFS and PSDFS, as shown in Figure \\ref{HOA} (c) and (f), respectively. Thus, we have observed that the range of $\\alpha$ over which nonclassicality is present increases (decreases) with photon addition\/subtraction (the Fock state) in DFS.\n\n\\begin{figure}\n\\centering{} %\n\\begin{tabular}{ccc}\n\\includegraphics[width=50mm]{HOA-pagcs.pdf} & \\includegraphics[width=50mm]{HOA-pagcs-addphoton.pdf} & \\tabularnewline\n(a) & (b) & \\tabularnewline\n\\includegraphics[width=50mm]{HOA-pagcs-addphoton-diff-displaced.pdf} & \\includegraphics[width=50mm]{HOA-psgcs.pdf} & \\tabularnewline\n & & \\tabularnewline\n(c) & (d) & \\tabularnewline\n\\includegraphics[width=50mm]{HOA-psgcs-subphoton.pdf} & \\includegraphics[width=50mm]{HOA-psgcs-subphoton-diff-displaced.pdf} & \\tabularnewline\n(e) & (f) & \\tabularnewline\n\\end{tabular}\\caption{\\label{HOA} The presence of higher-order antibunching is shown as\na function of $\\alpha$ for PADFS (in (a)-(c)) and PSDFS (in (d)-(f)).\nSpecifically, (a) and (d) illustrate a comparison between lower- and\nhigher-order antibunching. It should be noted that some of the curves\nare multiplied by a scaling factor in order to present them in one\nfigure. Figures (b) and (e) show the effect of photon addition\/subtraction,\nand (c) and (f) establish the effect of the Fock state chosen to be displaced\nin PADFS and PSDFS, respectively. Here, without loss of generality, we have used the notation $\\psi_{+}(u, n, l-1)$ ($\\psi_{-}(v, n, l-1)$) for nonclassicality in the photon added (subtracted) scenarios, and will follow this notation in subsequent figures of this chapter.}\n\\end{figure}\n\n\\subsection{Higher-order sub-Poissonian photon statistics}\n\nThe computed moments also allow us to study higher-order sub-Poissonian photon statistics, by using Eqs. (\\ref{eq:PA-expectation}) and (\\ref{eq:PS-expectation}) in Eq. (\\ref{eq:hosps22-1}).
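\nFor completeness, a minimal Python sketch showing how Eq. (\\ref{eq:hosps22-1}) can be evaluated from a photon-number distribution is given below; the Fock-space truncation and the test state (a Fock state) are arbitrary assumptions made purely for illustration and are not the states analyzed in this chapter.\n\\begin{verbatim}\nimport numpy as np\nfrom math import comb, factorial\n\ndef stirling2(n, k):\n    # Stirling number of the second kind via the standard recurrence\n    if n == k:\n        return 1\n    if k == 0 or k > n:\n        return 0\n    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)\n\ndef fact_moment(p, l):\n    # <a^dag^l a^l> from the photon-number distribution p_m\n    return sum(factorial(m) / factorial(m - l) * p[m] for m in range(l, len(p)))\n\ndef d_witness(p, l):\n    # d(l-1) of Eq. (HOA-1)\n    return fact_moment(p, l) - fact_moment(p, 1) ** l\n\ndef hosps(p, l):\n    # D_h(l-1) of Eq. (hosps22-1)\n    nbar = fact_moment(p, 1)\n    total = 0.0\n    for e in range(l + 1):\n        for f in range(1, e + 1):\n            total += (stirling2(e, f) * comb(l, e) * (-1) ** e\n                      * d_witness(p, f) * nbar ** (l - e))\n    return total\n\np = np.zeros(30)\np[3] = 1.0                                 # hypothetical test state: Fock state |3>\nfor l in (2, 3, 4, 5):\n    print(l - 1, hosps(p, l))\n\\end{verbatim}\n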
The presence of higher-order sub-Poissonian photon statistics (as can be seen in\nFigure \\ref{HOSPS} (a) and (d) for PADFS and PSDFS, respectively) depends on the\norder of nonclassicality, unlike higher-order antibunching, which is observed for all orders.\nSpecifically, this nonclassical feature was observed only for odd orders,\nwhich is consistent with some of the earlier observations \\cite{thapliyal2017comparison},\nwhere nonclassicality in those cases could be induced due to squeezing.\nAlong the same line, we expect to observe nonclassicality in such cases with an\nappropriate use of squeezing as a quantum resource, which will be discussed\nelsewhere. In the case of photon addition\/subtraction in DFS, a behavior\nanalogous to that observed for higher-order antibunching is found,\ni.e., the depth of nonclassicality increases with photon addition,\nwhile it decreases (increases) for small (large) values of $\\alpha$\n(cf. Figure \\ref{HOSPS} (b) and (e)). Similar to the previous case,\nnonclassicality can be observed for larger values of the displacement\nparameter with photon addition\/subtraction, while an increase\nin the value of the Fock parameter has the opposite effect.\n\n\\begin{figure}\n\\centering{} %\n\\begin{tabular}{ccc}\n\\includegraphics[width=60mm]{HOSPS-PAGCS-diffl.pdf} & \\includegraphics[width=60mm]{HOSPS-PAGCS-diffphoton.pdf} & \\tabularnewline\n(a) & (b) & \\tabularnewline\n\\includegraphics[width=60mm]{HOSPS-PAGCS-diffnp.pdf} & \\includegraphics[width=60mm]{HOSPS-PSGCS-varyl.pdf} & \\tabularnewline\n(c) & (d) & \\tabularnewline\n\\includegraphics[width=60mm]{HOSPS-Psgcs-sub-diffphoton.pdf} & \\includegraphics[width=60mm]{HOSPS-Psgcs-sub-diffnp.pdf} & \\tabularnewline\n(e) & (f) & \\tabularnewline\n\\end{tabular}\\caption{\\label{HOSPS} Dependence of higher-order sub-Poissonian photon statistics\non $\\alpha$ for PADFS (in (a)-(c)) and PSDFS ((d)-(f)) is illustrated\nhere. Specifically, (a) and (d) show the increase in the depth of the nonclassicality\nwitness with order, (b) and (e) depict the effect of photon addition\nand subtraction, respectively, and (c) and (f) establish the effect\nof the choice of Fock state to be displaced in PADFS and PSDFS, respectively. }\n\\end{figure}\n\n\\subsection{Higher-order squeezing}\n\nThe analytical expressions of the nonclassicality witness of the Hong-Mandel\ntype higher-order squeezing criterion for PADFS and PSDFS can be obtained\nwith the help of Eqs. (\\ref{eq:PA-expectation}), (\\ref{eq:PS-expectation}),\nand (\\ref{eq:Hong-Def})-(\\ref{eq:cond2.1-1}). We have investigated\nhigher-order squeezing and depict the results in Figure \\ref{fig:HOS},\nassuming $\\alpha$ to be real. Incidentally, we could not establish\nthe presence of higher-order squeezing in PADFS (Figure \\ref{fig:HOS} (a)-(c)).\nHowever, the depth of the higher-order squeezing\nwitness increases with order for small values of $\\alpha$, as shown\nin Figure \\ref{fig:HOS} (a), while for higher values of the displacement\nparameter, higher-order squeezing disappears much more quickly (cf. Figure\n\\ref{fig:HOS} (d)). With an increase in the number of photons subtracted,\nthe presence of this nonclassical feature can be maintained for\nhigher values of the displacement parameter as well (cf. Figure \\ref{fig:HOS}\n(e)). In general, photon subtraction is the preferred operation for nonclassicality\nenhancement as far as this nonclassical feature is concerned. 
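\n\nAs a rough numerical cross-check of such statements (and not a substitute for the analytic witness obtained from Eqs. (\\ref{eq:Hong-Def})-(\\ref{eq:cond2.1-1})), one may compare the $l$th central moment of a quadrature for the state of interest with the same moment evaluated for the vacuum, since coherent states share the vacuum value of all central quadrature moments. The sketch below assumes QuTiP, an even order $l$, a truncated Fock space, and illustrative parameter values.\n\\begin{verbatim}
import numpy as np
from qutip import basis, displace, destroy, expect

N = 60
a = destroy(N)

def psdfs(v, n, alpha):
    # v-photon-subtracted displaced Fock state in a truncated Fock space
    return (a ** v * displace(N, alpha) * basis(N, n)).unit()

def central_quad_moment(state, l, theta=0.0):
    # l-th central moment of the quadrature X = (a e^{-i theta} + adag e^{i theta})/2
    X = (a * np.exp(-1j * theta) + a.dag() * np.exp(1j * theta)) / 2.0
    dX = X - expect(X, state)
    return expect(dX ** l, state).real

l = 4                                    # an even order, as required here
state = psdfs(1, 1, 0.5)                 # illustrative parameters
vacuum_value = central_quad_moment(basis(N, 0), l)
print(central_quad_moment(state, l) - vacuum_value)   # negative => l-th order squeezing
\\end{verbatim}\n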
The\nchoice of the initial Fock state is also observed to be relevant as\nthe depth of squeezing parameter can be seen increasing with value\nof the Fock parameter for PSDFS in the small displacement parameter\nregion (shown in Figure \\ref{fig:HOS} (f)), where this nonclassical\nbehavior is also shown to succumb to the higher values of Fock and\ndisplacement parameters. Unlike the other nonclassicalities discussed\nso far, the observed squeezing also depends on phase $\\theta$ of\nthe displacement parameter $\\alpha=|\\alpha|\\exp\\left(i\\theta\\right)$\ndue to the second last term in Eq. (\\ref{eq:cond2.1-1}). We failed\nto observe this nonclassicality behavior in PADFS even by controlling\nthe value of the phase parameter (also shown in Figure \\ref{fig:HOS-diff-phase}\n(a)). For PSDFS, the squeezing disappears for some particular values\nof the phase parameter, while the observed squeezing is maximum for\n$\\theta=n\\pi$ (see Figure \\ref{fig:HOS-diff-phase} (b)). It thus\nestablishes the phase parameter of the displacement operator as one\nmore controlling factor for nonclassicality in these engineered quantum\nstates.\n\n\\begin{figure}\n\\begin{centering}\n\\begin{tabular}{ccc}\n\\includegraphics[width=45mm]{HOS-pagcs-difforder.pdf} & \\includegraphics[width=45mm]{HOS-pagcs-diffphoton.pdf} & \\includegraphics[width=45mm]{HOS-pagcs-diffn.pdf} \\tabularnewline\n(a) & (b) & (c) \\tabularnewline\n\\includegraphics[width=45mm]{HOS-psgcs-difforder.pdf} & \\includegraphics[width=45mm]{HOS-psgcs-diffphoton.pdf} & \\includegraphics[width=45mm]{HOS-psgcs-diff-n.pdf} \\tabularnewline\n(d) & (e) & (f) \\tabularnewline\n\\end{tabular}\n\\par\\end{centering}\n\\caption{\\label{fig:HOS} Illustration of the higher-order squeezing using\nHong-Mandel criterion as the function of displacement parameter. In\n(a) and (d), dependence of the observed nonclassicality on different\norders ($l$) is shown for PADFS and PSDFS, respectively; while in\n(b) and (e), the effect of variation in the number of photon added\/subtracted\nis shown in case of PADFS and PSDFS, respectively. In (c) and (f),\nthe variation due to change in the initial Fock state chosen to be\ndisplaced is shown for PADFS and PSDFS, respectively. }\n\\end{figure}\n\\begin{figure}\n\\begin{centering}\n\\begin{tabular}{cc}\n\\includegraphics[width=60mm]{HOS-pagcs-diffphase.pdf} & \\includegraphics[width=60mm]{HOS-psgcs-diffphase.pdf} \\tabularnewline\n(a) & (b) \\tabularnewline\n\\end{tabular}\n\\par\\end{centering}\n\\caption{\\label{fig:HOS-diff-phase} Hong-Mandel type higher-order squeezing\nfor PADFS and PSDFS is shown dependent on the phase of the displacement\nparameter $\\alpha=|\\alpha|\\exp\\left(i\\theta\\right)$ in (a) and (b),\nrespectively.}\n\\end{figure}\n\n\\subsection{$Q$ function}\n\nUsing Eqs. 
(\\ref{eq:PADFS}) and (\\ref{eq:PSDFS}) in (\\ref{eq:Q-function-1}),\nwe obtain the analytic expressions for the Husimi $Q$ function for\nPADFS and PSDFS as\n\n\\begin{equation}\n\\begin{array}{lcl}\nQ_{+} & = & \\frac{N_{+}^{2}}{\\pi}\\frac{\\exp\\left[-\\mid\\beta\\mid^{2}\\right]}{n!}\\sum\\limits _{p,p'=0}^{n}{n \\choose p}{n \\choose p'}(-\\alpha^{\\star})^{(n-p)}(-\\alpha)^{(n-p')}\\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\\\\n & \\times & \\sum\\limits _{m,m^{\\prime}=0}^{\\infty}\\frac{\\alpha^{m}(\\alpha^{\\star})^{m^{\\prime}}\\beta^{(m^{\\prime}+p'+u)}(\\beta^{\\star})^{(m+p+u)}}{m!m^{\\prime}!}\n\\end{array}\\label{eq:Q-PADFS}\n\\end{equation}\nand \n\\begin{eqnarray}\n\\begin{array}{lcl}\nQ_{-} & = & \\frac{N_{-}^{2}}{\\pi}\\frac{\\exp\\left[-\\mid\\beta\\mid^{2}\\right]}{n!}\\sum\\limits _{p,p'=0}^{n}{n \\choose p}{n \\choose p'}(-\\alpha^{\\star})^{(n-p)}(-\\alpha)^{(n-p')}\\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\\\\n & \\times & \\sum\\limits _{m,m^{\\prime}=0}^{\\infty}\\frac{\\alpha^{m}(\\alpha^{\\star})^{m^{\\prime}}\\beta^{(m^{\\prime}+p'-v)}(\\beta^{\\star})^{(m+p-v)}(m+p)!(m^{\\prime}+p')!}{m!m^{\\prime}!(m+p-v)!(m^{\\prime}+p'-v)!},\n\\end{array}\\label{eq:Q-PSDFS}\n\\end{eqnarray}\nrespectively. Through this quasiprobability distribution, i.e., the zeros of the\n$Q$ function, we could not observe any nonclassical features beyond those already\nreflected by the moments-based nonclassicality criteria. We have shown the\n$Q$ function in Figure \\ref{fig:Q-function}, where the effect of\nphoton addition\/subtraction and of the value of the Fock parameter on the\nphase space distribution is shown. Specifically, it is observed that\nthe value of the Fock parameter affects the quasidistribution function\nmore than the photon addition\/subtraction does.\n\n\\begin{figure}\n\\begin{centering}\n\\begin{tabular}{cc}\n\\includegraphics[width=100mm]{Q-fun.jpg} \n\\end{tabular}\n\\par\\end{centering}\n\\caption{\\label{fig:Q-function} Contour plots of the $Q$ function for (a)\nsingle photon added displaced Fock $|1\\rangle$ state, (b) two photon\nadded displaced Fock $|1\\rangle$ state, (c) single photon added displaced\nFock $|2\\rangle$ state, (d) single photon subtracted displaced Fock\n$|1\\rangle$ state, (e) two photon subtracted displaced\nFock $|1\\rangle$ state, (f) single photon subtracted displaced Fock\n$|2\\rangle$ state. In all cases, $\\alpha=\\sqrt{2}\\exp\\left(\\frac{i\\pi}{4}\\right)$\nis chosen. }\n\\end{figure}\n\n\\subsection{Agarwal-Tara's criterion}\n\nThe analytic expression of the $A_{3}$ parameter defined in Eq. (\\ref{eq:Agarwal-1})\ncan be obtained for PADFS and PSDFS using Eqs. (\\ref{eq:PA-expectation})\nand (\\ref{eq:PS-expectation}). The nonclassical properties of\nPADFS and PSDFS using Agarwal-Tara's criterion are investigated, and\nthe corresponding results are depicted in Figure \\ref{fig:A3}, which\nshows highly nonclassical behavior of the states generated by engineering.\nSpecifically, the negative part of the curves, which is bounded from below by\n-1, ensures the existence of nonclassicality. From Figure \\ref{fig:A3},\nit is clear that $A_{3}$ is 0 (-1) for the displacement parameter\n$\\alpha=0$, because then DFS, PADFS, and PSDFS reduce to a Fock state,\nand $A_{3}=0$ (-1) for the Fock state parameter $n=0,\\,1$ $\\left(n>1\\right)$.\nNonclassicality reflected through the $A_{3}$ parameter increases (decreases)\nwith photon addition (subtraction) (shown in Figure \\ref{fig:A3}\n(a) and (c)). 
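\n\nA brute-force numerical sketch of this criterion is given below; it assumes the usual matrix-of-moments construction behind Eq. (\\ref{eq:Agarwal-1}), with $m_{k}=\\langle\\hat{a}^{\\dagger k}\\hat{a}^{k}\\rangle$ and $\\mu_{k}=\\langle(\\hat{a}^{\\dagger}\\hat{a})^{k}\\rangle$, QuTiP for the state construction, and illustrative parameter values; values of $A_{3}$ in $[-1,0)$ witness nonclassicality.\n\\begin{verbatim}
import numpy as np
from qutip import basis, displace, create, destroy, expect

N = 60
a = destroy(N)

def padfs(u, n, alpha):
    return (create(N) ** u * displace(N, alpha) * basis(N, n)).unit()

def agarwal_tara_A3(state):
    # m_k = <adag^k a^k>, mu_k = <(adag a)^k>, k = 1..4 (m_0 = mu_0 = 1)
    m = [1.0] + [expect(a.dag() ** k * a ** k, state).real for k in (1, 2, 3, 4)]
    mu = [1.0] + [expect((a.dag() * a) ** k, state).real for k in (1, 2, 3, 4)]
    M = np.array([[m[0], m[1], m[2]], [m[1], m[2], m[3]], [m[2], m[3], m[4]]])
    MU = np.array([[mu[0], mu[1], mu[2]], [mu[1], mu[2], mu[3]], [mu[2], mu[3], mu[4]]])
    return np.linalg.det(M) / (np.linalg.det(MU) - np.linalg.det(M))

for alpha in (0.5, 1.0, 2.0):                 # illustrative values
    print(alpha, agarwal_tara_A3(padfs(1, 1, alpha)))
\\end{verbatim}\n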
In contrast, the Fock parameter has a completely opposite\neffect that it leads to decrease (increase) in the observed nonclassicality\nfor PADFS (PSDFS), which can be seen in Figure \\ref{fig:A3} (b) and\n(d). However, for larger values of displacement parameter, the depth\nof nonclassicality illustrated through this parameter can again be\nseen to increase (cf. Figure \\ref{fig:A3} (b)).\n\n\\begin{figure}\n\\centering{} %\n\\begin{tabular}{cc}\n\\includegraphics[width=60mm]{PAGCS-Agarwal-diffphoton.pdf} & \\includegraphics[width=60mm]{PAGCS-Agarwal-diffnp.pdf} \\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{PSGCS-Agarwal-diffphoton.pdf} & \\includegraphics[width=60mm]{PSGCS-Agarwal-diffnp.pdf} \\tabularnewline\n(c) & (d) \\tabularnewline\n\\end{tabular}\\caption{\\label{fig:A3} Variation of Agarwal-Tara's parameter with $\\alpha$\nfor PADFS and PSDFS is shown in (a)-(b) and (c)-(d), respectively.\nSpecifically, the effect of photon addition\/subtraction (in (a) and\n(c)) and the choice of Fock state ((b) and (d)) on the presence of\nnonclassicality in PADFS and PSDFS is illustrated.}\n\\end{figure}\n\n\\subsection{Klyshko's Criterion}\n\nThe analytic expression for the $m$th photon-number distribution\nfor PADFS and PSDFS can be calculated (using $q=r=1$) from Eqs. (\\ref{eq:PA-expectation})\nand (\\ref{eq:PS-expectation}), respectively.\n\n\\begin{figure}\n\\centering %\n\\begin{tabular}{cc}\n\\includegraphics[width=60mm]{Klyshko-photoadd.pdf} & \\includegraphics[width=60mm]{Klyshko-photoadddiffnp.pdf} \\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{Klyshko-photosub.pdf} & \\includegraphics[width=60mm]{Klyshko-photosub-diffnp.pdf}\\tabularnewline\n(c) & (d) \\tabularnewline\n\\end{tabular}\\caption{\\label{fig:Klyshko} Illustration of the Klyshko's criterion. Variation\nof $B(m)$ with respect to $m$ (a) and (c) for different values of\nthe number of the photon additon\/subtraction for PADFS and PSDFS,\nrespectively; (b) and (d) for different values of the number of the\nFock state parameter for PADFS and PSDFS, respectively. Here, we have\nchosen $\\alpha=1$ in all cases because almost all\nof the other criteria detected nonclassicality for this choice of\n$\\alpha$ so does the Klyshko's criterion.}\n\\end{figure}\nThe advantage of the Klyshko's criterion over any other existing moments\nbased criteria is that a very small amount of information is required.\nSpecifically, probability of only three successive photon numbers\nis sufficient to investigate the nonclassical property. The negative\nvalues of $B(m)$ serve as the witness of nonclassicality. Klyshko's\ncriterion in Eq. (\\ref{eq:Klyshko-1}) is derived analytically and\nthe corresponding nonclassical properties for both PADFS and PSDFS\nare investigated (cf. Figure \\ref{fig:Klyshko}). Specifically, the\nnegative values of $B(m)$ are observed for different values of $m$\nin case of photon addition and subtraction (cf. Figure \\ref{fig:Klyshko}\n(a) and (c)) being the signature of nonclassicality induced via independent\noperations. Additionally, one can also visualize that due to the photon\naddition (subtraction) the negative peaks in the values of $B(m)$\nshift to higher (lower) photon number regime. 
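\n\nFor completeness, a minimal numerical sketch of this criterion is given below; it assumes the standard form $B(m)=(m+2)\\,p_{m}\\,p_{m+2}-(m+1)\\,p_{m+1}^{2}$ for the photon-number distribution $p_{m}$ of the truncated state, QuTiP for the state construction, and illustrative parameters.\n\\begin{verbatim}
import numpy as np
from qutip import basis, displace, create, destroy

N = 60

def padfs(u, n, alpha):
    return (create(N) ** u * displace(N, alpha) * basis(N, n)).unit()

def psdfs(v, n, alpha):
    return (destroy(N) ** v * displace(N, alpha) * basis(N, n)).unit()

def klyshko_B(state, m_max=10):
    p = np.abs(state.full().ravel()) ** 2     # photon-number distribution p_m
    return [(m + 2) * p[m] * p[m + 2] - (m + 1) * p[m + 1] ** 2 for m in range(m_max)]

print(np.round(klyshko_B(padfs(1, 1, 1.0)), 6))   # negative entries witness nonclassicality
print(np.round(klyshko_B(psdfs(1, 1, 1.0)), 6))
\\end{verbatim}\n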
A similar observation is obtained for different values of the Fock state parameter for\nPADFS and PSDFS, where the negative values of the witness of nonclassicality\nshift towards the higher photon number regime and become more\nnegative; the corresponding results are shown in Figure \\ref{fig:Klyshko}\n(b) and (d), respectively. This further establishes the relevance\nof operations such as photon addition and subtraction, as well as of starting with\nFock states, in inducing nonclassicality in the engineered quantum\nstates.\n\n\\section{Conclusions \\label{sec:Conclusions}}\n\nThe only Fock state that does not show any nonclassical feature is\nthe vacuum state \\cite{miranowicz2015statistical}, and the displacement\noperator preserves its classicality. All the rest of the Fock states\nare maximally nonclassical, and they are shown to remain nonclassical\neven after application of a displacement operator. Here, we set ourselves\na task: What happens when the displacement operator applied to a Fock\nstate is followed by the addition or subtraction of photon(s)? Independently,\nphoton addition and subtraction are already established as nonclassicality\ninducing operations in the case of the displaced vacuum state (i.e., the coherent\nstate). In this chapter, we have established that photon addition\/subtraction\nis not only a nonclassicality inducing operation, but it can also enhance\nthe nonclassicality present in the DFS. It is expected that these operations\nwould increase the amount of nonclassicality (more precisely, the\ndepth of the nonclassicality witnessing parameter) present in other\nnonclassical states, too. There is one more advantage of studying\nthe nonclassical features of PADFS and PSDFS. These states can be\nreduced to a class of quantum states, most of which are already experimentally\nrealized and found useful in several applications. Inspired by the\navailable experimental results, we have also proposed optical designs\nfor the generation of PADFS and PSDFS from the squeezed vacuum state.\n\nTo analyze the nonclassical features of the engineered final states,\ni.e., PADFS and PSDFS, we have used a set of moments-based criteria,\nnamely the Mandel $Q_{M}$ parameter, Agarwal-Tara's $A_{3}$ parameter, and the\ncriteria for higher-order antibunching, sub-Poissonian photon statistics,\nand squeezing. In addition, the nonclassical features have been investigated\nthrough Klyshko's criterion and a quasiprobability distribution--the $Q$\nfunction. The states of interest are found to show a large variety\nof nonclassical features, as all the nonclassicality witnesses used\nhere (except the $Q$ function) are found to detect nonclassicality.\nThey show that the states are useful as squeezed and antibunched states,\nas well as for the generation of entangled states.\n\nThis study has revealed that the amount of nonclassicality in PADFS\nand PSDFS can be controlled by the Fock state parameter, the displacement\nparameter, and the number of photons added or subtracted. In general,\nthe amount of nonclassicality with respect to the witnesses used here\nis found to increase with the number of photons added\/subtracted,\nwhile smaller values of the Fock state and displacement parameters are\nobserved to be preferable for the presence of various nonclassical\nfeatures. 
On some occasions, nonclassicality has also been observed\nto increase with the Fock state parameter, while larger values of the\ndisplacement parameter always affect the nonclassicality adversely.\nMost of the nonclassicality criteria used here, being moments-based\ncriteria, could not demonstrate the effect of the phase of the\ndisplacement parameter. Here, the higher-order squeezing witness and the $Q$\nfunction are found to depend on the phase of the displacement\nparameter. However, only the higher-order squeezing criterion was able\nto detect nonclassicality, and it thus established that this phase parameter\ncan also be used to control the amount of nonclassicality.\n\nFurther, in the past, it has been established that higher-order nonclassicality\ncriteria have an advantage in detecting weaker nonclassicality. We\nhave also shown that the depth of the nonclassicality witness increases\nwith the order of nonclassicality, thus providing an advantage in the experimental\ncharacterization of the observed nonclassical behavior. \n\\chapter{Quantum phase properties of photon added and subtracted displaced\nFock states\\textsc{{\\label{cha:phase}}}}\n\nIn this chapter, the motive is to observe the phase properties of\nPADFS and PSDFS. The main findings of this chapter\nare published in \\cite{malpani2019quantum}.\n\n\\section{{Introduction\\label{sec:Introduction-chap3}}}\n\nIn the previous chapter, the nonclassical properties of PADFS and\nPSDFS were studied. Here, our specific interest is to study the phase\nproperties of PADFS and PSDFS and their limiting cases. In the recent\npast, the nonclassical properties of this set of engineered quantum\nstates, many of which have been experimentally generated \\cite{lvovsky2001quantum,lvovsky2002synthesis,zavatta2004quantum,zavatta2005single,zavatta2008subtracting},\nwere the focus of various studies (see \\cite{malpani2019lower} and references\ntherein). In Section \\ref{subsec:Photon-added-and sub}, we have already\nexpressed PADFS and PSDFS as superpositions of Fock states. Further,\nin Section \\ref{sec:Analytic-tools-forphase} we have described the\nparameters used for the study of phase properties of a quantum state.\nIn that context we have already mentioned several applications of\nquantum phase distribution and quantum phase fluctuation.\n\nTo stress the recently reported applications of quantum phase distribution\nand quantum phase fluctuation, we note that these have applications\nin quantum random number generation \\cite{xu2012ultrafast,raffaelli2018soi},\ncryptanalysis of squeezed state based continuous variable quantum\ncryptography \\cite{horak2004role}, generation of solitons in a Bose-Einstein\ncondensate \\cite{denschlag2000generating}, phase encoding quantum\ncryptography \\cite{gisin2002quantum}, and phase imaging of cells and\ntissues for biomedical applications \\cite{park2018quantitative}, as\nwell as importance in determining the value of the transition temperature\nof superconductors \\cite{emery1995importance}. Keeping these applications\nand the general nature of the engineered quantum states PADFS and PSDFS\nin mind, in what follows, we aim to study the phase distribution, $Q$\nphase, phase fluctuation measures, phase dispersion, and quantum phase\nestimation using the concerned states and the states obtained in the\nlimiting cases. 
As PADFS and PSDFS are already described, we may begin\nthis study by describing limiting cases of these states as our states\nof interest.\n\nWe have already mentioned that our focus would be on PADFS and PSDFS.\nDue to the general form of PADFS and PSDFS, a large number of states\ncan be obtained in the limiting cases. Some of the important limiting\ncases of PADFS and PSDFS in the present notation are summarized in\nTable \\ref{tab:state}. This table clearly establishes that the applicability\nof the results obtained in the present study is not restricted to\nPADFS and PSDFS; rather an investigation of the phase properties of\nPADFS and PSDFS would also reveal phase properties of many other quantum\nstates of particular interest.\n\n\\begin{table}\n\\begin{centering}\n\\begin{tabular}{c>{\\centering}p{3cm}c>{\\centering}p{3cm}}\n\\hline \nReduction of state & Name of the state & Reduction of state & Name of the state\\tabularnewline\n\\hline \n\\hline \n$|\\psi_{+}(u,n,\\alpha)\\rangle$ & $u$-PADFS & $|\\psi_{-}(v,n,\\alpha)\\rangle$ & $v$-PSDFS\\tabularnewline\n\\textbar$\\psi_{+}(0,n,\\alpha)\\rangle${} & DFS & \\textbar$\\psi_{-}(0,n,\\alpha)\\rangle${} & DFS\\tabularnewline\n$|\\psi_{+}(0,0,\\alpha)\\rangle$ & Coherent state & $|\\psi_{-}(0,0,\\alpha)\\rangle$ & Coherent state\\tabularnewline\n\\textbar$\\psi_{+}(0,n,0)\\rangle${} & Fock state & \\textbar$\\psi_{-}(0,n,0)\\rangle${} & Fock state\\tabularnewline\n\\textbar$\\psi_{+}(u,0,\\alpha)\\rangle${} & $u$-Photon added coherent state & \\textbar$\\psi_{-}(v,0,\\alpha)\\rangle${} & $v$-Photon subtracted coherent state \\tabularnewline\n\\hline \n\\end{tabular}\n\\par\\end{centering}\n\\caption{\\label{tab:state}Various states that can be obtained as the limiting\ncases of the PADFS and PSDFS.}\n\\end{table}\n\n\\section{Quantum phase distribution and other phase properties \\label{sec:Quantum-phase-parameters}}\n\nQuantum phase operator $\\hat{\\phi}$ was introduced by Dirac based\non his assumption that the annihilation operator $\\hat{a}$ can be\nfactored out into a Hermitian function $f(\\hat{N})$ of the number\noperator $\\hat{N}=\\hat{a}^{\\dagger}\\hat{a}$ and a unitary operator\n$\\hat{U}$ \\cite{dirac1927quantum} as \n\\begin{equation}\n\\hat{a}=\\hat{U}\\,f\\left(\\hat{N}\\right),\\label{eq:Dirac_pahse}\n\\end{equation}\nwhere \n\\begin{equation}\n\\hat{U}=e^{\\iota\\hat{\\phi}}.\\label{eq:phase-operator}\n\\end{equation}\nHowever, there was a problem with the Dirac formalism of phase operator\nas it failed to provide a meaning to the corresponding uncertainty\nrelation. Specifically, in the Dirac formalism, the creation ($\\hat{a}^{\\dagger}$)\nand annihilation ($\\hat{a}$) operators satisfy the bosonic commutation\nrelation, $\\left[\\hat{a},\\,\\hat{a}^{\\dagger}\\right]=1$, iff $\\left[\\hat{N},\\,\\hat{\\phi}\\right]=\\iota$,\nwhich leads to the number phase uncertainty relation $\\Delta N\\,\\Delta\\phi\\geq1$.\nTherefore, in order to satisfy the bosonic commutation relation under\nDirac formalism, the phase uncertainty should be greater than 2$\\pi$\nfor $\\Delta N$ \\textless{} $\\frac{1}{2\\pi}$ which lacks a physical\ndescription. Subsequently, Louisell \\cite{louisell1963amplitude}\nproposed some periodic phase based method, which was followed by Susskind\nand Glogower formalism based on Sine and Cosine operators \\cite{susskind1964quantum}.\nAn important contribution to this problem is the Barnett-Pegg formalism\n\\cite{barnett1986phase} which is used in this thesis. 
In what follows,\nwe will also briefly introduce notions, such as quantum phase distribution,\nangular $Q$ phase function, phase fluctuation parameters, phase dispersion,\nquantum phase estimation to study the phase properties of the quantum\nstates of our interest.\n\n\n\\section{Phase properties of PADFS and PSDFS \\label{sec:phase-witnesses}}\n\nThe description of the states of our interest given in the previous\nsection can be used to study different phase properties and quantify\nphase fluctuation in the set of quantum states listed in Table \\ref{tab:state}.\nSpecifically, with the help of the quantum states defined in Eqs.\n(\\ref{eq:PADFS})-(\\ref{eq:PSDFS}), we have obtained the analytic\nexpressions of phase distribution and other phase parameters defined\nin Section \\ref{sec:Quantum-phase-parameters}.\n\n\\subsection{Phase distribution function}\n\nFrom the definition of the phase distribution (\\ref{eq:Phase-Distridution-1}),\nit can be observed that for a Fock state, $P_{\\theta}=\\frac{1}{2\\pi}$,\nimplying it has a uniform distribution of phase. Interestingly, the\nstates of our interest, PADFS and PSDFS, are obtained by displacing\nthe Fock state followed by photon addition\/subtraction. Therefore,\nwe will study here what is the effect of application of displacement\noperator on a uniformly phase distributed (Fock) state and how subsequent\nphoton addition\/subtraction further alters the phase distribution.\nUsing phase distribution function, the information regarding uncertainty\nin phase and phase fluctuation can also be obtained. To begin with,\nwe compute the analytic expressions of $P_{\\theta}$ for the PADFS\nand PSDFS, using Eq. (\\ref{eq:Phase-Distridution-1}) as \n\\begin{eqnarray}\n\\begin{array}{lcl}\nP_{\\theta}\\left(u,n\\right) & = & \\frac{1}{2\\pi}\\dfrac{\\left|N_{+}\\right|^{2}}{n!}\\sum\\limits _{p,p^{\\prime}=0}^{n}{n \\choose p}{n \\choose p^{\\prime}}\\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\left|\\alpha\\right|^{2n-p-p^{\\prime}}\\\\\n & \\times & \\sum\\limits _{m,m^{\\prime}=0}^{\\infty}\\frac{(-\\left|\\alpha\\right|)^{m+m^{\\prime}}\\sqrt{(m+p+u)!(m^{\\prime}+p^{\\prime}+u)!}}{m!m^{\\prime}!}\\exp[\\iota\\left(\\theta-\\theta_{2}\\right)(m^{\\prime}+p^{\\prime}-m-p)],\n\\end{array}\\label{eq:PA-phase}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n\\begin{array}{ccc}\nP_{\\theta}\\left(v,n\\right) & = & \\frac{1}{2\\pi}\\dfrac{\\left|N_{-}\\right|^{2}}{n!}\\sum\\limits _{p,p^{\\prime}=0}^{n}{n \\choose p}{n \\choose p^{\\prime}}\\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\left|\\alpha\\right|^{2n-p-p^{\\prime}}\\\\\n & \\times & \\sum\\limits _{m,m^{\\prime}=0}^{\\infty}\\frac{(-\\left|\\alpha\\right|)^{m+m^{\\prime}}(m+p)!(m^{\\prime}+p^{\\prime})!}{m!m^{\\prime}!\\sqrt{(m+p-v)!(m^{\\prime}+p^{\\prime}-v)!}}\\exp[\\iota\\left(\\theta-\\theta_{2}\\right)(m^{\\prime}+p^{\\prime}-m-p)],\n\\end{array}\\label{eq:PS-phase}\n\\end{eqnarray}\nrespectively. Here, $\\theta_2$ is the phase associated with the displacement parameter $\\alpha$ ($ \\alpha = |\\alpha|e^{\\iota \\theta_2}$). Since the obtained expressions in Eqs. (\\ref{eq:PA-phase})\nand (\\ref{eq:PS-phase}) are complex in nature, we depict numerical\n(graphical) analysis of the obtained results in Figs. \\ref{fig:Phase-Distribution-Function}\nand \\ref{fig:Phase-Distribution-Function-1} for PADFS and PSDFS,\nrespectively. 
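\n\nRather than evaluating the double sums in Eqs. (\\ref{eq:PA-phase}) and (\\ref{eq:PS-phase}) directly, such plots can also be reproduced by brute force from the Fock-space amplitudes of the state. The sketch below assumes the construction $P_{\\theta}=\\frac{1}{2\\pi}\\left|\\sum_{n}c_{n}e^{-\\iota n\\theta}\\right|^{2}$ (up to the sign convention of the exponent), a truncated Fock space in QuTiP, and illustrative parameters.\n\\begin{verbatim}
import numpy as np
from qutip import basis, displace, create

N = 80

def padfs(u, n, alpha):
    return (create(N) ** u * displace(N, alpha) * basis(N, n)).unit()

def phase_distribution(state, thetas):
    # P_theta = |sum_n c_n exp(-i n theta)|^2 / (2 pi), with c_n the Fock amplitudes
    c = state.full().ravel()
    ns = np.arange(len(c))
    amps = np.array([np.sum(c * np.exp(-1j * ns * th)) for th in thetas])
    return np.abs(amps) ** 2 / (2.0 * np.pi)

thetas = np.linspace(-np.pi, np.pi, 721)
P = phase_distribution(padfs(1, 1, 1.0), thetas)      # theta_2 = 0 here
print(np.sum(P) * (thetas[1] - thetas[0]))            # normalization check, ~1
\\end{verbatim}\n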
Specifically, in Figure \\ref{fig:Phase-Distribution-Function}\n(a), we have shown the variation of phase distribution with phase\nparameter $\\theta$ for different number of photon added in the displaced\nsingle photon Fock state ($D\\left(\\alpha\\right)\\left|1\\right\\rangle $)\nfor $\\theta_{2}=0$. A uniform phase distribution for Fock state (with\na constant value of $\\frac{1}{2\\pi}$) is found to transform to one\nthat decreases for higher values of phase and possess a dip in the\nphase distribution for $\\theta=0$, which can be thought of as an\napproach to the Fock state. In fact, in case of classical states,\n$P_{\\theta}$ has a peak at zero phase difference $\\theta-\\theta_{2}$,\nand therefore, this contrasting behavior can be viewed as signature\nof quantumness of DFS. However, with the increase in the number of\nphotons added to the DFS, the phase distribution of the PADFS is observed\nto become narrower. In fact, a similar behavior with increase in the\nmean photon number of coherent state was observed previously \\cite{agarwal1992classical}.\nIt is imperative to state that $P_{\\theta}$ in case of higher number\nof photon added to DFS has similar but narrower distribution than\nthat of coherent state. In contrast, with increase in the Fock parameter,\nthe phase distribution is observed to become broader (cf. Figure \\ref{fig:Phase-Distribution-Function}\n(b)). Thus, the increase in the number of photons added and the increase\nin Fock parameter have opposite effects on the phase distribution.\nThe same is also illustrated through the polar plots in Figure \\ref{fig:Phase-Distribution-Function}\n(c)-(d), which not only reestablish the same fact, but also illustrate\nthe dependence of $P_{\\theta}$ on the phase of the displacement parameter.\nSpecifically, the obtained phase distribution remains symmetric along\nthe value of phase $\\theta_{2}$ (i.e., $P_{\\theta}$ is observed\nto have a mirror symmetry along $\\theta=\\theta_{2}$) of the displacement\nparameter. The phase distribution of Fock state is shown by a black\ncircle in the polar plot.\n\n\\begin{figure}\n\\centering{} \\centering{} %\n\\begin{tabular}{cc}\n\\includegraphics[width=60mm]{phasedis-pagcs.pdf} & \\includegraphics[width=60mm]{distriphase-diffnp-pagcs1.pdf} \\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{phasedis-polar-pagcs.pdf} & \\includegraphics[width=60mm]{distripolarphase-diffnp-pagcs1.pdf}\\tabularnewline\n(c) & (d) \\tabularnewline\n\\end{tabular}\\caption{\\label{fig:Phase-Distribution-Function} Variation of phase distribution\nfunction with phase parameter for PADFS with displacement parameter\n$\\left|\\alpha\\right|=1$ for different values of photon addition ((a)\nand (c)) and Fock parameters ((b) and (d)). The phase distribution\nis shown using both two-dimensional ((a) and (b) with $\\theta_{2}=0$)\nand polar ((c) and (d)) plots. In (c) and (d), $\\theta_{2}=\\frac{n\\pi}{2}$\nwith integer $n\\in\\left[0,3\\right]$, and the legends are same as\nin (a) and (b), respectively. }\n\\end{figure}\nInstead of photon addition, if we subtract photons from the DFS, a\nsimilar effect on the phase distribution to that of photon addition\nis observed. 
Further, a comparison between photon addition and subtraction\non the phase distribution establishes that a single photon subtraction\nhas a prominent impact on phase distribution when compared to that\nof single photon addition, i.e., the distribution can be observed\nto be narrower than that of coherent state in most of the cases for\n$u=v$. For instance, single photon added (subtracted) DFS is broader\n(narrower) than corresponding coherent state. Similarly, with the\nincrease in the value of Fock parameter, we can observe more changes\non PSDFS than what was observed in PADFS, i.e., the phase distribution\nbroadens more with Fock parameter for PSDFS. Note that $P_{\\theta}$\nhas a peak at $\\theta=\\theta_{2}$ only for photon addition $u>n$,\nwhile in case of photon subtraction it can be observed for $v\\geq n$.\nWith the increase in the amplitude of displacement parameter ($\\left|\\alpha\\right|$)\ninitially the phase distribution becomes narrower, which is further\nsupported by both addition and subtraction of photons, but it becomes\nbroader again for very high $\\left|\\alpha\\right|$ (figure is not\nshown here). \n\\begin{figure}\n\\centering{} %\n\\begin{tabular}{cc}\n\\includegraphics[width=60mm]{phasedis-psgcs.pdf} & \\includegraphics[width=60mm]{distriphase-diffnp-psgcs.pdf} \\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{phasedis-polar-psgcs.pdf} & \\includegraphics[width=60mm]{distripolarphase-diffnp-psgcs.pdf}\\tabularnewline\n(c) & (d) \\tabularnewline\n\\end{tabular}\\caption{\\label{fig:Phase-Distribution-Function-1} Variation of phase distribution\nfunction with phase parameter for PSDFS with displacement parameter\n$\\left|\\alpha\\right|=1$ for different values of photon subtraction\n((a) and (c)) and Fock parameters ((b) and (d)). The phase distribution\nis shown using both two-dimensional ((a) and (b) with $\\theta_{2}=0$)\nand polar ((c) and (d)) plots. In (c) and (d), $\\theta_{2}=\\frac{n\\pi}{2}$\nwith integer $n\\in\\left[0,3\\right]$, and the legends are same as\nin (a) and (b), respectively.}\n\\end{figure}\n\n\\subsection{Angular $Q$ function of PADFS and PSDFS}\n\nThe relevance of the $Q$ function as witness of nonclassicality \\cite{thapliyal2015quasiprobability}\nand in state tomography \\cite{thapliyal2016tomograms} is well studied.\nOn top of that, non-Gaussianity of the PADFS and PSDFS using $Q$\nfunction was recently reported by us \\cite{malpani2019lower}. We\nfurther discuss a phase distribution based on $Q$ function using\nEq. (\\ref{eq:ang-Qf-1}). In this particular case, we have obtained\nthe angular $Q$ function from the $Q$ functions of the PADFS and\nPSDFS reported as Eqs. (15)-(16) in \\cite{malpani2019lower}. Specifically,\nwe have shown the effect of photon addition on the DFS ($D\\left(\\alpha\\right)\\left|1\\right\\rangle $)\nfor a specific value of the displacement parameter in Figure \\ref{fig:Angular Q function}\n(a) for angular $Q$ function. One can clearly see that the polar\nplots show an increase in the peak (located at $\\theta_{1}=\\theta_{2}$)\nof the distribution with photon addition. 
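\n\nA corresponding numerical sketch for the angular $Q$ function is given below; it assumes that $Q_{\\theta_{1}}$ is obtained from the Husimi $Q$ function by a radial integration, $Q_{\\theta_{1}}=\\int_{0}^{\\infty}Q\\left(\\left|\\beta\\right|e^{\\iota\\theta_{1}}\\right)\\left|\\beta\\right|\\,d\\left|\\beta\\right|$ (our reading of Eq. (\\ref{eq:ang-Qf-1})), together with QuTiP, a finite radial cutoff, and illustrative parameters.\n\\begin{verbatim}
import numpy as np
from qutip import basis, displace, create, coherent

N = 80

def padfs(u, n, alpha):
    return (create(N) ** u * displace(N, alpha) * basis(N, n)).unit()

def husimi_Q(state, beta):
    # Q(beta) = |<beta|psi>|^2 / pi
    return abs(coherent(N, beta).overlap(state)) ** 2 / np.pi

def angular_Q(state, theta1, r_max=6.0, nr=300):
    # radial integration of the Husimi Q function along the direction theta1
    r = np.linspace(0.0, r_max, nr)
    vals = np.array([husimi_Q(state, ri * np.exp(1j * theta1)) for ri in r])
    return np.sum(vals * r) * (r[1] - r[0])

state = padfs(1, 1, 1.0)
thetas = np.linspace(-np.pi, np.pi, 121)
Q_ang = np.array([angular_Q(state, th) for th in thetas])
print(np.sum(Q_ang) * (thetas[1] - thetas[0]))     # normalization check, ~1
\\end{verbatim}\n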
Further, one can compare\nthe behavior of $Q_{\\theta_{1}}$ with $P_{\\theta}$ in Figure \\ref{fig:Phase-Distribution-Function}\nand observe that they behave quite differently (as reported in \\cite{agarwal1992classical}\nfor the coherent states), other than increase in the peak of the distribution.\nSpecifically, $P_{\\theta}$ has a peak at $\\theta=\\theta_{2}$ only\nfor $u>n$, while $Q_{\\theta_{1}}$ is always peaked at the phase\nof the displacement parameter which also becomes a line of symmetry.\nInterestingly, the effect of increase in the Fock parameter of PADFS\non $Q_{\\theta_{1}}$is similar but less prominent in comparison to\nphoton addition. This is in quite contrast of that observed for $P_{\\theta}$\n(in Figs. \\ref{fig:Phase-Distribution-Function} and \\ref{fig:Angular Q function}\n(b)). In case of PSDFS, both photon subtraction and Fock parameter\nhave completely different effects on $Q_{\\theta_{1}}$ (cf. Figure\n\\ref{fig:Angular Q function} (c)-(d)) which is also in contrast to\nthat on corresponding $P_{\\theta}$ (shown in Figure \\ref{fig:Phase-Distribution-Function-1}).\nSpecifically, with increase in photon subtraction the angular $Q$\nfunction becomes narrower peaked at $\\theta=\\theta_{2}$, but for\nlarger number of photon subtraction the peak value decreases quickly.\nHowever, with increasing Fock parameter (cf. Figure \\ref{fig:Angular Q function}\n(d)), $Q_{\\theta_{1}}$ behaves much like photon addition on DFS (shown\nin Figure \\ref{fig:Angular Q function} (a)). The observed behavior\nshows the relevance of studying both these phase distributions due\nto their independent characteristics. \n\\begin{figure}\n\\centering{} %\n\\begin{tabular}{cc}\n\\centering{}\\includegraphics[width=60mm]{Qafun-diffphotoadd.pdf} & \\includegraphics[width=60mm]{Qafun-diff-fock-photoadd.pdf}\\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{Qafun-photosub-12.pdf} & \\includegraphics[width=60mm]{Qafun-photo-diff-focksub.pdf}\\tabularnewline\n(c) & (d) \\tabularnewline\n\\end{tabular}\\caption{\\label{fig:Angular Q function} The polar plots for angular $Q$ function\nfor PADFS (in (a) and (b)) and PSDFS (in (c) and (d)) for displacement\nparameter $\\left|\\alpha\\right|{\\rm =}1$ and $\\theta_{2}=\\frac{n\\pi}{2}$\nwith integer $n\\in\\left[0,3\\right]$ for different values of photon\naddition\/subtraction and Fock parameters. In (a) and (c), for $n=1$,\nthe smooth (blue), dashed (red), dot-dashed (magenta), and dotted\n(brown) lines correspond to photon addition\/subtraction 0, 1, 2, and\n3, respectively. In (b) and (d), for the single photon added\/subtracted\ndisplaced Fock state, the smooth (blue), dashed (red), dot-dashed\n(magenta), and dotted (brown) lines correspond to Fock parameter 1,\n2, 3, and 4, respectively. }\n\\end{figure}\n\n\\subsection{Quantum phase fluctuation of PADFS and PSDFS}\n\nNote that Carruthers and Nieto \\cite{carruthers1968phase} had introduced\nthese parameters in terms of Susskind and Glogower operators \\cite{susskind1964quantum};\nhere we use them in Barnett-Pegg formalism to remain consistent with\n\\cite{gupta2007reduction}, where $U$ parameter is shown relevant\nas a witness of nonclassicality \\cite{gupta2007reduction}. Specifically,\n$U$ is 0.5 for coherent state, and reduction of $U$ parameter below\nthe value for coherent state can be interpreted as the presence of\nnonclassical behavior \\cite{gupta2007reduction}. 
In what follows,\nwe will study quantum phase fluctuations for PADFS and PSDFS by computing\nanalytic expressions of $U,\\,S$ and $Q$ parameters in Barnett-Pegg\nformalism, with a specific focus on the possibility of witnessing\nnonclassical properties of these states via the reduction of $U$\nparameter below the coherent state limit. Carruthers and Nieto \\cite{carruthers1968phase}\nintroduced three parameters to study quantum phase fluctuation (\\ref{eq:fluctuation3-1})-(\\ref{eq:fluctuation5-1}).\nIt was only in the recent past that Gupta and Pathak provided a physical\nmeaning to one of these parameters by establishing its relation with\nantibunching and sub-Poissonian photon statistics \\cite{gupta2007reduction}.\nThus, the quantum phase fluctuation studied here using three parameters\nwill also be used to witness the nonclassical nature of the quantum\nstates under consideration. Here, the effect of photon addition\/subtraction\nand displacement parameters on these fluctuation parameters is also\nstudied (shown in Fig. \\ref{fig:phase fluctuation}). Specifically,\nFigure \\ref{fig:phase fluctuation} (a)-(c) show variation of the\nthree parameters of quantum phase fluctuation for different values\nof the number of photons added in the displaced Fock state ($D\\left(\\alpha\\right)\\left|1\\right\\rangle $)\nwith displacement parameter $\\left|\\alpha\\right|$. It may be clearly\nobserved that two of the quantum phase fluctuation parameters, namely\n$U\\left(u,n\\right)$ and $Q\\left(u,n\\right)$ decrease with the value\nof displacement parameter, while $S\\left(u,n\\right)$ increases with\n$\\left|\\alpha\\right|$. Interestingly, the photon addition and increase\nin the displacement parameter exhibit the same effect on all three\nquantum phase fluctuation parameters for PADFS, while for higher values\nof displacement parameter $S\\left(u,n\\right)$ show completely opposite\neffect of photon addition. In contrast, $U\\left(v,n\\right)$ for $v$\nsubtracted photons from $D\\left(\\alpha\\right)\\left|1\\right\\rangle $\nis found to increase (decrease) with photon subtraction while decrease\n(increase) with the displacement parameter for small (large) value\nof $\\left|\\alpha\\right|$ (cf. Figure \\ref{fig:phase fluctuation}\n(d)). On the other hand, parameter $S\\left(v,n\\right)$ is also observed\nto increase (decrease) with $\\left|\\alpha\\right|$ ($v$) as shown\nin Figure \\ref{fig:phase fluctuation} (e). The third parameter $Q\\left(v,n\\right)$\nshows slightly complex behavior for PSDFS with both $\\left|\\alpha\\right|$\nand $v$ (cf. Figure \\ref{fig:phase fluctuation} (f)) as it behaves\nanalogous to PADFS for each subtracted photon for both small and large\nvalues of the displacement parameter (when it increases with $\\left|\\alpha\\right|$),\nbut for intermediate values the behavior is found to be completely\nopposite.\n\nAs mentioned previously, $U\\left(i,n\\right)\\,\\forall i\\in\\left\\{ u,v\\right\\} $\nhas a physical significance as a witness of antibunching for values\nof this parameter less than $\\frac{1}{2}$, Figure \\ref{fig:phase fluctuation}\n(a) and (d) can be used to perform similar studies for PADFS and PSDFS,\nrespectively. In case of PADFS, we can observe this relevant parameter\nto become less than $\\frac{1}{2}$, and thus to illustrate the presence\nof antibunching, only at higher values of the displacement parameter\nand photon added to the displaced Fock state. 
In contrast, PSDFS shows the presence of this nonclassical feature in all cases.\nThus, the occurrence of antibunching in PADFS and PSDFS is established here through this\nphase fluctuation parameter. Interestingly, a similar dependence of\nantibunching in PADFS and PSDFS has recently been reported by us \\cite{malpani2019lower}\nusing a different criterion, namely Eq. (\\ref{eq:HOA-1}). Further, one can observe from the expression of $U$ in\nEq. (\\ref{eq:fluctuation3-1}) that it is expected to be independent\nof the phase of the displacement parameter, which can also be understood\nfrom the use of this parameter as a witness for an intensity moments\nbased nonclassical feature. In contrast, $S$ and $Q$ in Eqs. (\\ref{eq:fluctuation4-1})-(\\ref{eq:fluctuation5-1})\nshow a dependence on the phase of the displacement parameter. Here, we have\nnot discussed the effect of the Fock parameter in detail, but in the case\nof photon addition, $u$ and $n$ have the same (opposite) effect on the\n$S$ ($U$ and $Q$) parameter(s). The Fock parameter always shows the\nopposite effect to photon subtraction on all three phase fluctuation\nparameters, and thus the nonclassicality revealed by $U$ can be enhanced\nwith the Fock parameter. The relevance of the Fock parameter can also be visualized\nby observing the fact that the single photon subtracted coherent state\nhas $U=0.5$ (which is consistent with the value zero of the antibunching\nwitness reported in \\cite{thapliyal2017comparison}). Thus, in this\ncase, the origin of the induced antibunching can be attributed to\nthe non-zero value of the Fock parameter. \n\\begin{figure}\n\\begin{centering}\n\\begin{tabular}{ccc}\n\\includegraphics[width=60mm]{phaseUpagcs.pdf} & \\includegraphics[width=60mm]{phaseSpagcs.pdf} & \\tabularnewline\n(a) & (b) & \\tabularnewline\n\\includegraphics[width=60mm]{phaseQpagcs.pdf} & \\includegraphics[width=60mm]{phaseUpsgcs.pdf} & \\tabularnewline\n(c) & (d) & \\tabularnewline\n\\includegraphics[width=60mm]{phaseSpsgcs.pdf} & \\includegraphics[width=60mm]{phaseQpsgcs.pdf} & \\tabularnewline\n(e) & (f) & \\tabularnewline\n\\end{tabular}\n\\par\\end{centering}\n\\caption{\\label{fig:phase fluctuation} Variation of the three phase fluctuation\nparameters introduced by Carruthers and Nieto with the displacement\nparameter for $\\theta_{2}=0$. The values of photon addition ($u$) and\nsubtraction ($v$) are given in the legends, with Fock parameter $n=1$.\nParameter $U\\left(i,n\\right)\\,\\forall i\\in\\left\\{ u,v\\right\\} $ also\nillustrates antibunching in the states for values less than $\\frac{1}{2}$. }\n\\end{figure}\n\n\\subsection{Phase Dispersion}\n\nHere, it is worth stressing that both the Carruthers-Nieto parameters\nand the phase dispersion $D$ correspond to phase fluctuation. Our primary\nfocus is to study phase fluctuation and further to check the correlation\nbetween these measures of phase fluctuation. Thus, it is interesting\nto study phase fluctuation from these two perspectives. We compute a\nmeasure of quantum phase fluctuation based on the quantum phase distribution,\nthe phase dispersion (\\ref{eq:Dispersion-1}), for both PADFS and\nPSDFS to perform a comparative study between them. Specifically, the\nmaximum value of the dispersion is 1, which corresponds to a uniform\nphase distribution, i.e., $P_{\\theta}=\\frac{1}{2\\pi}$. Both PADFS\nand PSDFS show a uniform distribution for the displacement parameter\n$\\alpha=0$ (cf. Figure \\ref{fig:Phase-Dispersion}). This result is justified,\nas both states reduce to the Fock state in this case. 
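\n\nA brute-force numerical sketch of this quantity is given below; it assumes the usual circular-moment form $D=1-\\left|\\int_{-\\pi}^{\\pi}e^{\\iota\\theta}P_{\\theta}\\,d\\theta\\right|^{2}$, which is consistent with the stated maximum value of 1 for a uniform distribution, and uses QuTiP with illustrative parameters.\n\\begin{verbatim}
import numpy as np
from qutip import basis, displace, create, destroy

N = 80

def padfs(u, n, alpha):
    return (create(N) ** u * displace(N, alpha) * basis(N, n)).unit()

def psdfs(v, n, alpha):
    return (destroy(N) ** v * displace(N, alpha) * basis(N, n)).unit()

def phase_distribution(state, thetas):
    c = state.full().ravel()
    ns = np.arange(len(c))
    amps = np.array([np.sum(c * np.exp(-1j * ns * th)) for th in thetas])
    return np.abs(amps) ** 2 / (2.0 * np.pi)

def dispersion(state, ntheta=2001):
    # D = 1 - |first circular moment of P_theta|^2; D = 1 for a uniform distribution
    thetas = np.linspace(-np.pi, np.pi, ntheta)
    P = phase_distribution(state, thetas)
    dth = thetas[1] - thetas[0]
    return 1.0 - abs(np.sum(np.exp(1j * thetas) * P) * dth) ** 2

for alpha in (0.0, 0.5, 1.0, 2.0):        # D should start at ~1 for alpha = 0
    print(alpha, dispersion(padfs(1, 1, alpha)), dispersion(psdfs(1, 1, alpha)))
\\end{verbatim}\n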
However, with an increase in the value of the displacement parameter, the quantum phase\ndispersion is found to decrease. This may be attributed to the number-phase\ncomplementarity \\cite{banerjee2010complementarity,srikanth2009complementarity,srikanth2010complementarity},\nwhich leads to smaller phase fluctuation with increasing variance\nin the number operator at higher values of the displacement parameter.\nThus, with an increase in the average photon number obtained by increasing\nthe displacement parameter, the phase dispersion decreases for both PADFS\nand PSDFS. Addition of photons to the DFS leads to a decrease in the value\nof the phase dispersion, while subtraction of photons has a more complex\neffect on the phase dispersion (cf. Figure \\ref{fig:Phase-Dispersion}\n(a) and (c)). Specifically, for smaller values of the displacement\nparameter ($\\left|\\alpha\\right|<1$), the phase dispersion parameter\nbehaves differently for $v\\leq n$ and $v>n$. This can be attributed\nto the sub-Poissonian photon statistics for $v\\leq n$ with $\\left|\\alpha\\right|<1$\nas well as to the small value of the average photon number (Figure \\ref{fig:phase fluctuation}\n(d)). However, at higher values of the displacement parameter,\n$D$ for the PSDFS behaves in a manner analogous to that for the PADFS. Interestingly,\nan increase in the Fock parameter shows a similar effect on PADFS and PSDFS\nin Figure \\ref{fig:Phase-Dispersion} (b) and (d), respectively. \n\\begin{figure}\n\\centering{} %\n\\begin{tabular}{cc}\n\\includegraphics[width=60mm]{diffusion-pagcs.pdf} & \\includegraphics[width=60mm]{diffusion-pagcs-difffock.pdf}\\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{diffusion-psgcs.pdf} & \\includegraphics[width=60mm]{diffusion-psgcs-difffock.pdf}\\tabularnewline\n(c) & (d) \\tabularnewline\n\\end{tabular}\\caption{\\label{fig:Phase-Dispersion} Variation of the phase dispersion for PADFS\n(in (a) and (b)) and PSDFS (in (c) and (d)) with the displacement parameter\nfor an arbitrary $\\theta_{2}$. The dependence is shown for different numbers of\nphotons added\/subtracted with the initial Fock state $\\left|1\\right\\rangle $\n(in (a) and (c)), and for different values of the Fock parameter for the\nsingle photon added\/subtracted state (in (b) and (d)).}\n\\end{figure}\n\n\\subsection{Phase sensing uncertainty for PADFS and PSDFS}\n\nWe finally discuss quantum phase estimation using Eq. (\\ref{eq:PE-1}),\nassuming the two-mode input state in the Mach-Zehnder interferometer\nto be $|\\psi_{i}(j,n,\\alpha)\\rangle\\otimes|0\\rangle$. The expressions\nfor the variance of the difference in the photon numbers in the two\noutput modes of the Mach-Zehnder interferometer for input PADFS and\nPSDFS, and the rest of the parameters required to study phase sensing,\nare reported in the Appendix.\n\nThe obtained expressions allow us to study the optimum choice of state\nparameters for quantum phase estimation using PADFS and PSDFS. The\nvariation of these parameters is shown in Figure \\ref{fig:Phase sensing uncertainity}.\nSpecifically, we have shown that PSDFS is preferable over the coherent\nstate for phase estimation (cf. Figure \\ref{fig:Phase sensing uncertainity}\n(b)). However, with an increase in the number of photons subtracted, this phase\nuncertainty parameter is found to increase, although it remains less\nthan the corresponding coherent state value. In contrast, with photon\naddition, an advantage in phase estimation can be attained, as the reduction\nof the phase uncertainty parameter allows one to perform a more precise\nmeasurement. 
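\n\nThe qualitative trends described here can also be checked numerically without the closed-form expressions of the Appendix. The sketch below propagates $|\\psi\\rangle\\otimes|0\\rangle$ through an idealized Mach-Zehnder interferometer in QuTiP and applies the error-propagation estimate $\\Delta\\phi=\\Delta O\/\\left|\\partial\\langle O\\rangle\/\\partial\\phi\\right|$, with $O$ the photon-number difference at the output; the beam-splitter and phase conventions, the truncation, and the parameter values are our own assumptions rather than those of the Appendix.\n\\begin{verbatim}
import numpy as np
from qutip import basis, displace, create, destroy, tensor, qeye, expect

N = 25                                       # per-mode truncation (kept small)
a = tensor(destroy(N), qeye(N))              # mode carrying the probe state
b = tensor(qeye(N), destroy(N))              # second input mode, fed with vacuum
BS = (1j * np.pi / 4 * (a.dag() * b + a * b.dag())).expm()   # 50:50 beam splitter

def padfs(u, n, alpha):
    return (create(N) ** u * displace(N, alpha) * basis(N, n)).unit()

def output_state(psi_in, phi):
    phase = (1j * phi * a.dag() * a).expm()  # phase shift in one arm
    return BS * phase * BS * tensor(psi_in, basis(N, 0))

def delta_phi(psi_in, phi, eps=1e-4):
    O = b.dag() * b - a.dag() * a            # photon-number difference at the output
    mean = lambda x: expect(O, output_state(psi_in, x))
    out = output_state(psi_in, phi)
    var = expect(O ** 2, out) - expect(O, out) ** 2
    slope = (mean(phi + eps) - mean(phi - eps)) / (2 * eps)
    return np.sqrt(var) / abs(slope)

print(delta_phi(padfs(1, 1, 0.1), np.pi / 3))    # illustrative parameters
\\end{verbatim}\n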
This advantage can be enhanced further by choosing large\nvalues of photon addition and Fock parameter (cf. Figure \\ref{fig:Phase sensing uncertainity}\n(a) and (c)). In a similar sense, appropriate choice of Fock parameter\nwould also be advantageous in phase estimation with PSDFS as it decreases\nthe phase uncertainty parameter, but still PADFS remains preferable\nover PSDFS. This can further be controlled by an increase in $\\left|\\alpha\\right|$\nwhich decreases (increases) phase uncertainty parameter for PADFS\n(PSDFS). \n\\begin{figure}\n\\centering{} %\n\\begin{tabular}{cc}\n\\includegraphics[width=60mm]{phaseresolution-pagcs.pdf} & \\includegraphics[width=60mm]{phaseresolution-psgcs.pdf}\\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{phaseresolution-pagcs-diff-np.pdf} & \\includegraphics[width=60mm]{phaseresolution-psgcs-diff-Fock.pdf} \\tabularnewline\n(c) & (d) \\tabularnewline\n\\end{tabular}\\caption{\\label{fig:Phase sensing uncertainity}Phase sensing uncertainty for\n(a) PADFS and (b) PSDFS as a function of phase to be estimated $\\phi$\nfor different number of photon addition\/subtraction with $n=1$. The\ndependence for (c) PADFS and (d) PSDFS is also shown for different\nvalues of Fock parameters with $u=1$ and $v=1$, respectively. In\nall cases, we have chosen $\\alpha=0.1$.}\n\\end{figure}\n\n\\section{Conclusions \\label{sec:Conclusions-1}}\n\nA set of engineered quantum states can be obtained as the limiting\ncases from the PADFS and PSDFS, e.g., DFS, coherent state, photon\nadded\/subtracted coherent state, and Fock state. Specifically, PADFS\/PSDFS\nare obtained by application of unitary (displacement) and non-unitary\n(addition and subtraction of photons) operations on Fock state. In\nview of the fact that the Fock states have uniform phase distribution,\nthe set of unitary and non-unitary quantum state engineering operations\nare expected to affect the phase properties of the generated state.\nTherefore, here we have calculated quantum phase distribution, which\nfurther helped in quantifying phase fluctuation as phase dispersion.\nWe have also computed the phase distribution as the angular $Q$ function.\nWe have further studied phase fluctuation using three Carruthers and\nNieto parameters, and have used one of them to reveal the existence\nof antibunching in the quantum states of our interest.\n\nBoth the phase distribution and angular $Q$ functions are found to\nbe symmetric along the value of the phase of the displacement parameter.\nThe phase distribution is observed to become narrow and peak(s) to\nincrease with the amplitude of the displacement parameter ($\\left|\\alpha\\right|$),\nwhich further becomes broader for higher values of $\\left|\\alpha\\right|$.\nFurther, photon addition\/subtraction and Fock parameters are observed\nto have opposite effects on phase distribution, i.e., distribution\nfunction becomes narrower (broader) with photon addition\/subtraction\n(Fock parameter). Among photon addition and subtraction operations,\nsubtracting a photon alters the phase properties more than that of\nphoton addition. Specifically, at the small values of the displacement\nparameter ($\\left|\\alpha\\right|<1$), the phase properties of PSDFS\nfor $v\\leq n$ and $v>n$ behave differently. This can be attributed\nto the fact that for $v\\leq n$ with $\\left|\\alpha\\right|<1$, the\naverage photon number becomes very small. 
Further, the peak of the phase distribution remains at the phase of the displacement\nparameter only when the number of photons added\/subtracted is more than\nthe Fock parameter. However, in case the number of photons subtracted\n(added) is the same as the Fock parameter, the peak of the phase distribution\nis observed (not observed) at the phase of the displacement parameter.\nThe angular $Q$ function can be observed to show a similar dependence\non the various parameters, but the peak of this distribution always remains located\nat the value of the phase of the displacement parameter. The three phase\nfluctuation parameters introduced by Carruthers and Nieto \\cite{carruthers1968phase}\nreflect the phase properties of PADFS and PSDFS, while one of them, the $U$\nparameter, also reveals antibunching in both PADFS and PSDFS. In this\ncase, the role of the Fock parameter as an antibunching inducing operation\nin PSDFS is also discussed. The phase dispersion quantifying phase fluctuation\nremains unity for the Fock state, reflecting its uniform distribution, and\ncan be observed to decrease with increasing displacement parameter.\nThis may be attributed to the number-phase complementarity, as the\nhigher values of the variance with increasing displacement parameter lead\nto smaller phase fluctuation. The Fock parameter and photon addition\/subtraction\nshow opposite effects on the phase dispersion, as it increases (decreases)\nwith $n$ ($u\/v$).\n\nFinally, we have also discussed the advantage of the PADFS and PSDFS\nin quantum phase estimation and obtained the set of optimized parameters\nin the PADFS\/PSDFS. Both photon addition and the Fock parameter decrease\nthe uncertainty in phase estimation, while photon subtraction, though it\nperforms better than the coherent state, is not as advantageous as $u$\nor $n$. In \\cite{ou1997fundamental}, it was established that the signal-to-noise\nratio is significant only when the phase shift to be measured is of the\nsame order as the multiplicative inverse of the average photon number.\nTherefore, in the case of PADFS this limitation of quantum measurement\nis expected to play an important role. Thus, we have shown here that\nstate engineering tools can be used efficiently to control the phase\nproperties of the designed quantum states for suitable applications.\nThe study performed in this chapter can be extended to other such\noperations, like squeezing, or photon addition followed by subtraction\nor vice versa. \n\\chapter{Impact of photon addition and subtraction on nonclassical and phase\nproperties of a displaced Fock state \\textsc{\\label{cha:PASDFS}}}\n\nIn this chapter, we aim to study the nonclassical and phase properties\nof a PASDFS. The work done in this chapter is published\nin~\\cite{malpani2020impact}.\n\n\\section{Introduction \\label{sec:Intro-chap-4}}\n\nIn Chapter 2, we have already studied the nonclassical properties of PADFS\nand PSDFS. In Chapter 3, we have investigated the phase properties of\nthe same set of states. In both chapters, we have obtained various interesting\nobservations, such as the fact that photon addition and subtraction\nenhance the nonclassical properties of the non-Gaussian DFS. Motivated by\nthese, here we aim to study both the nonclassical and phase properties\nof a more general quantum state. To be specific, in this chapter,\nwe aim to study the nonclassical (both lower- and higher-order) and\nphase properties of a PASDFS. 
The reason behind selecting this particular state lies in the fact that it is a general\nstate in the sense that, in the limiting cases, it reduces to different quantum states\nhaving known applications in continuous variable quantum cryptography\n(this point will be further elaborated in the next section).\n\nAs it appears from the above discussion, this investigation has two\nfacets. Firstly, we wish to study the nonclassical features of PASDFS\nusing Klyshko's \\cite{klyshko1996observable}, Agarwal-Tara's \\cite{agarwal1992nonclassical},\nand Vogel's \\cite{shchukin2005nonclassical} criteria, as well as the criteria for lower- and higher-order\nantibunching \\cite{pathak2006control}, squeezing \\cite{hillery1987amplitude,hong1985generation,hong1985higher},\nand higher-order sub-Poissonian photon statistics (HOSPS) \\cite{zou1990photon}.\nWe subsequently study the phase properties of PASDFS by computing the\nphase distribution function \\cite{agarwal1996complementarity,beck1993experimental},\nthe phase fluctuation parameters \\cite{carruthers1968phase,barnett1986phase},\nand the phase dispersion \\cite{perinova1998phase}. A detailed analysis\nof the obtained results will also be performed to reveal their usefulness.\n\n\\section{Moments of the field operators for the quantum states of our interest}\n\n\\label{sec:Quantum-states-of-3}\n\nAs mentioned in the previous section, this work is focused on PASDFS.\nA PASDFS as a Fock superposition state has already been expressed\nin Eq. (\\ref{eq:PADFS-1}). To study the nonclassical and phase properties\nof this state using the nonclassicality witnesses introduced in\nSection \\ref{sec:Nonclassicality-witnesses} and the phase parameters introduced\nin Section \\ref{sec:Analytic-tools-forphase}, we require analytic\nexpressions for the moments of the field operators. A bit of computation\nyields the expression for the higher-order moments of the annihilation and\ncreation operators as \n\\begin{eqnarray}\\label{eq:PA-expepectation-1}\n\\langle\\hat{a}^{\\dagger t}\\hat{a}^{j}\\rangle & = & \\langle\\psi(k,q,n,\\alpha)|\\hat{a}^{\\dagger t}\\hat{a}^{j}|\\psi(k,q,n,\\alpha)\\rangle\\nonumber \\\\\n & = & \\frac{N^{2}}{n!}\\sum\\limits _{p,p'=0}^{n}{n \\choose p}{n \\choose p'}(-\\alpha^{\\star})^{(n-p)}(-\\alpha)^{(n-p')}\\\\\n & \\times & \\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\sum\\limits _{m=0}^{\\infty}\\frac{\\alpha^{m}(\\alpha^{\\star})^{m+p-p'-j+t}(m+p+k)!(m+p+k-j+t)!}{m!(m+p-p'-j+t)!(m+p+k-q-j)!}.\\nonumber \n\\end{eqnarray}\nFor different values of $t$ and $j$, moments of any order can be\nobtained, and the same may be used to investigate the nonclassical\nproperties of PASDFS and its limiting cases by using various moments-based\ncriteria of nonclassicality. This will be performed in the following\nsection, but before proceeding, it would be apt to briefly state our\nmotivation behind the selection of this particular state for the present\nstudy (i.e., why do we find this state interesting?).\n\nDue to the difficulty in realizing single photon on demand sources,\nthe unconditional security promised by various QKD schemes, like BB84\n\\cite{bennett1984quantum} and B92 \\cite{bennett1992quantumBBM},\ndoes not remain unconditional in practical situations. This is\nwhere continuous variable QKD (CVQKD) becomes relevant, as such schemes do\nnot require single photon sources. Special cases of PASDFS have already\nbeen found useful in the realization of CVQKD. 
For example, protocols\nfor CVQKD have been proposed using photon added coherent state ($k=1,\\,q=0,\\,n=0$)\n\\cite{pinheiro2013quantum,wang2014quantum}, photon added then subtracted\ncoherent states ($k=1,\\,q=1,\\,n=0$) \\cite{borelli2016quantum,srikara2019continuous},\nand coherent state ($k=0,\\,q=0,\\,n=0$) \\cite{grosshans2002continuous,hirano2017implementation,huang2016long,ma2018continuous}.\nFurther, boson sampling with displaced single photon Fock states and\nsingle photon added coherent state \\cite{seshadreesan2015boson} has\nbeen reported, and an $m$ photon added coherent state ($k=m,\\,q=0,\\,n=0$)\nhas been used for quantum teleportation \\cite{pinheiro2013quantum}.\nApart from these schemes of CVQKD, which can be realized by using\nPASDFS or its limiting cases, the fact that the photon addition and\/or\nsubtraction operation from a classical or nonclassical state can be\nperformed experimentally using the existing technology \\cite{parigi2007probing,zavatta2004quantum}\nhas enhanced the importance of PASDFS.\n\n\\section{Nonclassicality witnesses and the nonclassical features of PASDFS\nwitnessed through those criteria \\label{sec:Nonclassicality-witnesses-2}}\n\nThe negative value of the Glauber-Sudarshan $P$-function characterizes\nnonclassicality of an arbitrary state \\cite{glauber1963coherent,sudarshan1963equivalence}.\nAs $P$-function is not directly measurable in experiments, many witnesses\nof nonclassicality have been proposed, such as, negative values of\nWigner function \\cite{wigner1932quantum,kenfack2004negativity}, zeroes\nof $Q$ function \\cite{husimi1940some,lutkenhaus1995nonclassical},\nseveral moments-based criteria \\cite{miranowicz2010testing,naikoo2018probing}.\nAn infinite set of such moments-based criteria of nonclassicality\nis equivalent to $P$-function in terms of necessary and sufficient\nconditions to detect nonclassicality \\cite{richter2002nonclassicality}.\nHere, we would discuss some of these moments-based criteria of nonclassicality\nand $Q$ function (in Section \\ref{sec:Qfn}) to study nonclassical\nproperties of the state of our interest.\n\n\\subsection{Lower- and higher-order antibunching}\n\nThe relevance of photon addition, photon subtraction, Fock, and displacement\nparameters in the nonclassical properties of the class of PASDFSs\nis studied here rigorously. Specifically, using Eq. (\\ref{eq:PA-expepectation-1})\nwith the criterion of antibunching (\\ref{eq:HOA-1}) we can study\nthe possibilities of observing lower- and higher-order antibunching\nin the quantum states of PASDFS class, where the class of PASDFSs\nrefers to all the states that can be reduced from state (\\ref{eq:PADFS-1})\nin the limiting cases. The outcome of such a study is illustrated\nin Figure \\ref{fig:HOSPS-1}. It is observed that the depth of lower-\nand higher-order nonclassicality witnesses can be increased by increasing\nthe value of the displacement parameter, but large values of $\\alpha$\ndeteriorate the observed nonclassicality (cf. Figure \\ref{fig:HOSPS-1}\n(a)-(b)). The nonclassicality for higher-values of displacement parameter\n$\\alpha$ can be induced by subtracting photons at the cost of reduction\nin the depth of nonclassicality witnessed for smaller $\\alpha$, as\nshown in Figure \\ref{fig:HOSPS-1} (a). However, photon addition is\nalways more advantageous than subtraction. 
Therefore, both addition\nand subtraction of photons illustrate these collective effects by\nshowing nonclassicality for even higher values of $\alpha$ at the\ncost of that observed for the small values of the displacement parameter.\nThe Fock parameter has a completely opposite effect to that of photon subtraction,\nas it shows an advantage (disadvantage) for small (large) values\nof the displacement parameter. Figure \ref{fig:HOSPS-1} (b) shows the benefit\nof studying higher-order nonclassicality, as the depth of the corresponding\nwitness of nonclassicality can be observed to increase with the order.\nThe higher-order nonclassicality criterion is also able to detect\nnonclassicality for certain values of the displacement parameter for which\nthe corresponding lower-order criterion failed to do so.\n\n\begin{figure}\n\begin{centering}\n\begin{tabular}{cc}\n\includegraphics[width=60mm]{HOA-sameorder-11-22-33.pdf} & \includegraphics[width=60mm]{HOA-difforder-11.pdf}\tabularnewline\n(a) & (b) \tabularnewline\n\includegraphics[width=60mm]{HOsps-sameorder-11-22-33.pdf} & \includegraphics[width=60mm]{HOsps-difforder-11.pdf}\tabularnewline\n(c) & (d) \tabularnewline\n\end{tabular}\n\par\end{centering}\n\caption{\label{fig:HOSPS-1} For PASDFS, the lower- and higher-order antibunching\nis given as a function of the displacement parameter $\alpha$: (a) lower-order antibunching for different values\nof the parameters of the state and (b) higher-order\nantibunching for a particular state. HOSPS for PASDFS for different\nvalues of (c) state parameters and (d) the order of nonclassicality.}\n\end{figure}\n\n\subsection{Higher-order sub-Poissonian photon statistics}\n\nThe variation of the HOSPS nonclassicality witness for the class of PASDFSs obtained\nby different nonclassicality inducing operations shows the same effect\nas that of the antibunching witness for all the odd orders of HOSPS,\nas depicted in Figure \ref{fig:HOSPS-1} (c). However, this nonclassical\nfeature disappears for even orders of HOSPS (cf. Figure \ref{fig:HOSPS-1}\n(d)). In the case of the odd orders of HOSPS, though the depth of the nonclassicality\nwitness increases with the order, the higher-order criterion is found\nto fail to detect nonclassicality for certain values of $\alpha$\nfor which the corresponding HOSPS criterion of smaller order showed\nthe nonclassicality.\n\n\subsection{Lower- and higher-order squeezing}\n\nOut of all the nonclassicality inducing operations used in PASDFS,\nonly photon subtraction is a squeezing inducing operation, as shown in\nFigure \ref{fig:HOS-2}, which is consistent with some of our recent\nobservations \cite{malpani2019lower}. With photon addition, higher-order\nsqueezing can be induced for large values of the modulus of the displacement\nparameter at the cost of the squeezing observed for small $\left|\alpha\right|$,\nas long as the number of photons subtracted is more than the value\nof the Fock parameter. 
As far as higher-order squeezing is concerned,\nthe observed nonclassicality disappears for large values of real displacement\nparameter with increase in the depth of the nonclassicality witness.\nSqueezing being a phase dependent nonclassical feature depends on\nthe phase $\\theta$ of the displacement parameter $\\alpha=\\left|\\alpha\\right|\\exp[\\iota\\theta]$\n(shown in Figure \\ref{fig:HOS-2} (c) for lower-order squeezing).\n\n\\begin{figure}\n\\begin{centering}\n\\begin{tabular}{cc}\n\\includegraphics[width=60mm]{HOs-sameorder-11-22-33.pdf} & \\includegraphics[width=60mm]{HOs-difforder-11.pdf}\\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{HOs-theta.pdf} & \\tabularnewline\n(c) & \\tabularnewline\n\\end{tabular}\n\\par\\end{centering}\n\\caption{\\label{fig:HOS-2} Dependence of the Hong-Mandel-type higher-order\nsqueezing witness on displacement parameter for (a) different state\nparameters and (b) order of squeezing. (c) Lower-order squeezing as\na function of phase parameter of the state, i.e., phase $\\theta$\nof displacement parameter $\\alpha=1.\\exp[\\iota\\theta].$}\n\\end{figure}\n\\begin{figure}\n\\begin{centering}\n\\begin{tabular}{c}\n\\includegraphics[width=80mm]{klyshko-alphaless11-22-33.pdf}\\tabularnewline\n(a) \\tabularnewline\n\\includegraphics[width=80mm]{klyshko-alphamore11-22-33.pdf} \\tabularnewline\n(b) \\tabularnewline\n\\end{tabular}\n\\par\\end{centering}\n\\caption{\\label{fig:Klyshko-1} Illustration of Klyshko's\nparameter $B\\left(z\\right)$ with respect to the photon number $z$ for different values of state parameters with\n(a) $\\alpha=0.5$ and (b) $\\alpha=1$.}\n\\end{figure}\n\n\\subsection{Klyshko's Criterion}\n\nFor PASDFS $p_{z}$ can be obtained from Eq. (\\ref{eq:PA-expepectation-1}).\nNonclassicality reflected through Klyshko's criterion can be controlled\nby all the state engineering operations used here as shown in Figure\n\\ref{fig:Klyshko-1}. The depth of this nonclassicality witness increases\nat higher values of photon numbers $z$ due to increase in photon\naddition and\/or Fock parameter. In contrast, depth of witness increases\nat smaller photon numbers $z$ due to photon subtraction. The Klyshko's\nnonclassicality witness is positive for some photon numbers only if\n$k+n>q$. Additionally, with increase in displacement parameter the\ndepth of nonclassicality witness decreases, and the weight of the\ndistribution of witness shift to higher values of $z$.\n\n\\subsection{Agarwal-Tara's criterion}\n\nThis nonclassicality witness is able to detect nonclassicality in\nall the quantum states in the class of PASDFSs (cf. Figure \\ref{Vogel's criteria}\n(a)). Note that for $|\\psi\\left(1,2,1,\\alpha\\right)\\rangle$ with\nsmall $\\alpha$, $A_{3}$ parameter is close to zero, which is due\nto very high probability for zero photon states.\n\n\\begin{figure}\n\\begin{tabular}{c}\n\\includegraphics[width=80mm]{agarwal-11-22-33.pdf}\\tabularnewline\n(a) \\tabularnewline\n\\includegraphics[width=80mm]{Vogel-11-22-33.pdf}\\tabularnewline\n(b) \\tabularnewline\n\\end{tabular}\\caption{Nonclassicality reflected through the negative values of (a) Agarwal-Tara's\nand (b) Vogel's criteria as a function of $\\alpha$ or different state\nparameters.}\n\\label{Vogel's criteria} \n\\end{figure}\n\n\\subsection{Vogel's criterion}\n\nThe negative value of the determinant $dv$ of matrix $v$ in Eq.\n(\\ref{eq:vogel}) is signature of nonclassicality. 
The Fock parameter\nhas an adverse effect on the nonclassicality in PASDFS detected by this\ncriterion. This adverse effect can be compensated by photon subtraction\nand can be further controlled by photon addition (as shown in Figure\n\ref{Vogel's criteria} (b)). Notice that the nonclassical behavior\nillustrated by Agarwal-Tara's (Vogel's) criterion is related to the higher-order\nantibunching (squeezing) criterion. However, the nonclassicality witness\nof Vogel's criterion is a phase independent property, unlike squeezing.\n\n\section{Phase properties of PASDFS\label{sec:Phase-properties-of}}\n\nThe nonclassicality inducing operations are also expected to impact\nthe phase properties of a quantum state \cite{banerjee2007phase}.\nRecently, we have reported an extensive study on the role that such\nquantum state engineering tools can play in application oriented studies\non quantum phase \cite{malpani2019quantum}. Specifically, the relevance\nin quantum phase estimation, phase fluctuation, and phase distribution\nwas discussed, which can play an important role in quantum metrology\n\cite{giovannetti2011advances}. Here, we briefly discuss some of\nthe phase properties of the class of PASDFSs.\n\n\subsection{Phase distribution function}\n\nThe analytical expression for the phase distribution function for PASDFS\ncan be computed as\n\n\begin{eqnarray}\n\begin{array}{lcl}\nP(\theta) & = & \frac{1}{2\pi}\dfrac{N^{2}}{n!}\sum\limits _{p,p'=0}^{n}{n \choose p}{n \choose p'}(-\alpha^{\star})^{(n-p)}(-\alpha)^{(n-p')}\exp\left[-\mid\alpha\mid^{2}\right]\\\n & \times & \sum\limits _{m,m^{\prime}=0}^{\infty}\frac{\alpha^{m}(\alpha^{\star})^{m^{\prime}}(m+p+k)!(m^{\prime}+p'+k)!}{m!m^{\prime}!\sqrt{\left(m+p+k-q\right)!\left(m^{\prime}+p'+k-q\right)!}}\exp[\iota\theta(m^{\prime}+p'-m-p)].\n\end{array}\label{eq:PA-phase-1}\n\end{eqnarray}\n\nPhoton subtraction can be observed to be a more effective tool to\nalter the phase properties of PASDFS than photon addition, as shown in\nFigure \ref{fig:Phase distribution function}. 
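\n\nAs an aside, Eq. (\ref{eq:PA-phase-1}) can be evaluated numerically by truncating\nthe sums over $m$ and $m^{\prime}$. A minimal Python sketch is given below; it is\npurely illustrative: the parameter values are hypothetical, terms with negative\nfactorial arguments are assumed to vanish, and the overall constant (which contains\n$N^{2}$) is fixed by normalizing the distribution over $\theta\in[-\pi,\pi]$, which\nis equivalent because the exact $P(\theta)$ integrates to unity.\n\begin{verbatim}\n# Illustrative sketch (not from the thesis): truncated evaluation of the\n# phase distribution of Eq. (eq:PA-phase-1) on a grid of theta values.\nfrom math import comb, factorial, sqrt, exp, pi\nimport cmath\n\ndef unnormalized_P(theta, k, q, n, alpha, m_max=25):\n    ac = alpha.conjugate()\n    total = 0j\n    for p in range(n + 1):\n        for pp in range(n + 1):\n            pref = (comb(n, p) * comb(n, pp)\n                    * (-ac) ** (n - p) * (-alpha) ** (n - pp))\n            s = 0j\n            for m in range(m_max):\n                for mp in range(m_max):\n                    f1, f2 = m + p + k - q, mp + pp + k - q\n                    if f1 < 0 or f2 < 0:     # such terms are assumed to vanish\n                        continue\n                    coeff = (factorial(m + p + k) * factorial(mp + pp + k)\n                             \/ (factorial(m) * factorial(mp)\n                                * sqrt(factorial(f1) * factorial(f2))))\n                    s += (alpha ** m * ac ** mp * coeff\n                          * cmath.exp(1j * theta * (mp + pp - m - p)))\n            total += pref * s\n    return (exp(-abs(alpha) ** 2) \/ (2 * pi * factorial(n)) * total).real\n\nk, q, n, alpha = 2, 1, 1, 1.0 + 0j           # hypothetical parameters\nthetas = [-pi + i * 2 * pi \/ 180 for i in range(181)]\nvals = [unnormalized_P(t, k, q, n, alpha) for t in thetas]\nnorm = sum(vals) * (2 * pi \/ 180)            # absorbs N^2 and all constants\nP = [v \/ norm for v in vals]                 # normalized phase distribution\n\end{verbatim}\nSuch a direct evaluation can be used to reproduce plots like those in\nFigure \ref{fig:Phase distribution function}. 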
Interestingly, photon\naddition shows a similar, though less prominent, behavior to photon\nsubtraction, whereas the Fock parameter has the opposite effect.\n\n\begin{figure}\n\begin{centering}\n\begin{tabular}{ccc}\n\includegraphics[width=40mm]{phase-distri-21-22-23.pdf} & \includegraphics[width=40mm]{phase-distri-12-22-32np2.pdf} & \includegraphics[width=40mm]{phase-distri-11-22-33np2.pdf}\tabularnewline\n(a) & (b) & (c) \tabularnewline\n\end{tabular}\n\par\end{centering}\n\caption{\label{fig:Phase distribution function}Polar plot of the phase distribution\nfunction for PASDFS $|\psi\left(k,q,n,\alpha\right)\rangle$ with\nrespect to variation in the displacement parameter for (a) $n=1,\,k=2$\nand $q=1,$ 2, and 3 represented by the smooth (cyan), dashed (magenta),\nand dot-dashed (purple) lines, respectively; (b) $n=2,\,q=2$ and\n$k=1,$ 2, and 3 illustrated by the smooth (cyan), dashed (magenta),\nand dot-dashed (purple) lines, respectively; and (c) $n=1$ with $k=q=1,$\n2, and 3 shown by the smooth (cyan), dashed (magenta), and dot-dashed\n(purple) lines, respectively.}\n\end{figure}\n\begin{figure}\n\begin{centering}\n\begin{tabular}{c}\n\includegraphics[width=60mm]{U-11-22-33.pdf} \tabularnewline\n\tabularnewline\n\end{tabular}\n\par\end{centering}\n\caption{\label{fig:Phase fluctuation} Variation of\nthe phase fluctuation parameter with the displacement parameter\nfor various state parameters in PASDFS.}\n\end{figure}\n\n\subsection{Phase Fluctuation}\n\nHere, we focus only on the first phase fluctuation parameter $U$,\nwhich is related to antibunching if $U$ is below its value for the coherent\nstate (i.e., 0.5), remaining consistent with the Barnett-Pegg formalism\n\cite{gupta2007reduction,pathak2000phase}. One can observe that the\nphase fluctuation parameter is able to detect nonclassicality (specifically\nantibunching) only in three cases where the role of the photon subtraction\nis relevant (cf. Figure \ref{fig:Phase fluctuation}). This observation\ncan be seen to be analogous to that for Vogel's nonclassicality\ncriterion.\n\n\begin{figure}\n\begin{centering}\n\begin{tabular}{cc}\n\includegraphics[width=70mm]{Q-fun-PADFS.jpg} \n\end{tabular}\n\par\end{centering}\n\caption{\label{fig:Q function} $Q$ function for PASDFS $|\psi\left(k,q,n,\alpha\right)\rangle$\nwith (a) $k=q=n=1,$ (b) $k=2,\,q=n=1,$ and (c) $q=2,\,k=n=1$ with\n$\alpha=\frac{1}{5\sqrt{2}}\exp\left(\iota\pi\/4\right)$. (d) Similarly,\n$Q$ function of PASDFS with $q=2,\,k=n=1$ and $\alpha=\sqrt{2}\exp\left(\iota\pi\/4\right)$.\n$Q$ function for $|\psi\left(k,q,n,\alpha\right)\rangle$ with (e)\n$k=q=1,\,n=2$ and (f) $q=1,\,k=n=2$ for $\alpha=\frac{1}{5\sqrt{2}}\exp\left(\iota\pi\/4\right)$.}\n\end{figure}\n\n\section{Quasidistribution function: $Q$ function \label{sec:Qfn}}\n\nHere, we will establish the non-Gaussianity inducing behavior of photon\naddition and the Fock parameter (cf. Figure \ref{fig:Q function}), which\nhave so far been illustrated as nonclassicality inducing and phase altering\noperations. Clearly, with photon addition, the tendency of the quasidistribution\nto deviate from Gaussian behavior is visible, while with photon subtraction,\nsqueezing along a particular phase angle chosen by the displacement parameter\ncan be observed. This squeezing can be noticed to be more appreciable\nfor higher values of the displacement parameter (cf. Figure \ref{fig:Q function}\n(c)-(d)). 
From Figure \ref{fig:Q function} (e)-(f), it can be observed\nthat the Fock parameter and photon addition have a similar effect in\nphase space. As zeros of the $Q$ function are a signature of nonclassicality,\nPASDFS shows nonclassicality in Figure \ref{fig:Q function} (b),\n(e), and (f). This establishes that the use of more\nthan one state engineering tool may be helpful in the generation of nonclassical\nstates. It would be interesting to verify whether one more tool (say,\nsqueezing or photon catalysis) may further enhance the nonclassical\nproperties. \n\n\section{Conclusions \label{sec:Conclusions-5}}\n\nIn this chapter, we have investigated the nonclassical behavior of\nPASDFS using different witnesses of lower- and higher-order nonclassicality.\nThe significance of this choice of state lies in the fact that a\nclass of engineered quantum states can be obtained as reduced\ncases of PASDFS $|\psi\left(k,q,n,\alpha\right)\rangle$, like photon\nadded DFS $\left(q=0\right)$, photon subtracted DFS $\left(k=0\right)$,\nDFS $\left(q=k=0\right)$, photon added coherent state $\left(n=q=0\right)$,\nphoton subtracted coherent state $\left(n=k=0\right)$, coherent state\n$\left(n=k=q=0\right)$, and Fock state $\left(n=k=q=\alpha=0\right)$.\nSome of the reduced states have been experimentally realized and in\nsome cases optical schemes for generation have been proposed, so this\nfamily of states is apt for various challenging tasks to establish\nquantum dominance. The state under consideration requires various\nnon-Gaussianity inducing quantum engineering operations, and thus our\nfocus here was to analyze the relevance of each operation independently\nin the nonclassical features (listed in Table \ref{tab:Properties of PASDFS})\nobserved in PASDFS. To study the nonclassical properties of PASDFS,\nwe have used a set of moments-based criteria, namely Klyshko's, Agarwal-Tara's, and\nVogel's criteria, as well as lower- and higher-order antibunching,\nHOSPS, and squeezing. Further, the phase properties of the same state\nare also studied using the phase distribution function and phase fluctuation.\nFinally, the non-Gaussianity and nonclassicality of PASDFS are also studied\nusing the $Q$ function.\n\n\begin{table}\n\begin{centering}\n\begin{tabular}{ccc}\n\toprule \nS. No. & Nonclassical Properties & Observed in PASDFS\tabularnewline\n\midrule \n1 & Lower-order and higher-order antibunching & yes\tabularnewline\n2 & Higher-order sub-Poissonian photon statistics & yes\tabularnewline\n3 & Lower-order and higher-order squeezing & yes\tabularnewline\n4 & Klyshko's criterion & yes\tabularnewline\n5 & Agarwal-Tara's criterion & yes\tabularnewline\n6 & Vogel's criterion & yes\tabularnewline\n7 & Phase distribution function & -\tabularnewline\n8 & Phase fluctuation & yes\tabularnewline\n9 & $Q$ function & yes\tabularnewline\n\bottomrule\n\end{tabular}\n\par\end{centering}\n\caption{\label{tab:Properties of PASDFS} Summary of the nonclassical properties\nof PASDFS. }\n\end{table}\nThe present study reveals that with an increase in the order of nonclassicality\nthe depth of the nonclassicality witnesses increases. Additionally, higher-order\nnonclassicality criteria were able to detect nonclassicality in the\ncases where the corresponding lower-order criteria failed to do so. Different\nnonclassical features are observed for smaller values of the displacement\nparameter, which can be sustained for higher values by increasing\nthe number of subtracted photons. 
Photon addition generally improves\nnonclassicality, and this advantage can be further enhanced for the\nhigher (smaller) values of the displacement parameter using photon subtraction\n(the Fock parameter). The HOSPS nonclassical feature is only observed\nfor the odd orders. As far as squeezing is concerned, only photon\nsubtraction could induce this nonclassicality. A large number of photon\nadditions can be used to observe squeezing at higher values of the displacement\nparameter at the cost of that present for smaller $\alpha$. Photon\nsubtraction alters the phase properties more than photon addition,\nwhile the Fock parameter has an effect opposite to that of photon addition\/subtraction.\nThe nonclassicality revealed through the phase fluctuation parameter shows\nbehavior similar to that of Vogel's criterion. Finally, we have shown the\nnonclassicality and non-Gaussianity of PASDFS with the help of a quasidistribution\nfunction, namely the $Q$ function.\n\chapter{Manipulating nonclassicality via quantum state engineering processes:\nVacuum filtration and single photon addition\textsc{\label{cha:QSE-1}}}\n\nIn this chapter, the objective is to study the nonclassical properties\nassociated with two different quantum state engineering processes\nwith a specific focus on nonclassicality witnesses and measures. The\nwork done in this chapter is published in ~\cite{malpani2019filter}.\n\n\section{Introduction \label{cha:QSE}\label{introduction:chapter5}}\n\nSo far, we have discussed the nonclassical properties of engineered quantum\nstates in detail. Here, we wish to extend the discussion and investigate\nthe possibilities of manipulating or controlling the nonclassicality\npresent in the system by using two specific processes of quantum state\nengineering, namely vacuum filtration and single photon\naddition. To introduce the idea of these quantum state engineering\noperations, we can write the photon number distribution of an arbitrary\nquantum state in terms of the Glauber-Sudarshan $P\left(\alpha\right)$\nfunction as \n\begin{equation}\np_{n}=\int P\left(\alpha\right)\left|\langle n|\alpha\rangle\right|^{2}d^{2}\alpha.\label{eq:pnd}\n\end{equation}\nIf $p_{n}$ vanishes for a particular value of the Fock state parameter\n$n$, we refer to that as a ``hole'' or a hole in the photon number\ndistribution at position $n$ \cite{escher2004controlled}. Notice\nthat $p_{n}=0$ reveals that $P\left(\alpha\right)<0$ for some $\alpha$,\nwhich is the signature of nonclassicality. Thus, the existence of\na hole in the photon number distribution implies that the corresponding\nstate is nonclassical, and the corresponding technique of quantum state\nengineering used to create a hole is called hole burning \cite{gerry2002hole}.\nInterestingly, this result also implies that qudits, which are $d$-dimensional\n(finite dimensional) quantum states, are always nonclassical, as we\ncan see that in such a state $p_{d}=p_{d+1}=\ldots=0$. In principle,\nthe hole can be created for an arbitrary $n$, but here, for the sake\nof a comparative study, we restrict ourselves to the situation where\nthe hole is created at $n=0,$ i.e., the desired engineered state\nhas zero probability of yielding the vacuum state on measurement in the Fock\nbasis (in other words, $p_{0}=0).$ In fact, Lee \cite{lee1995theorem}\nhad shown that a state with $p_{n}=0$ is maximally nonclassical\nas long as the nonclassicality is quantitatively measured using the nonclassical\ndepth. Such a state can be constructed in various ways. 
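\n\nTo make the role of Eq. (\ref{eq:pnd}) explicit for the hole at the vacuum, note that\nfor any state whose $P\left(\alpha\right)$ is non-negative, $p_{0}=\int P\left(\alpha\right)e^{-\mid\alpha\mid^{2}}d^{2}\alpha$\nis strictly positive, since $e^{-\mid\alpha\mid^{2}}>0$ everywhere; hence $p_{0}=0$\nimmediately certifies nonclassicality. A minimal Python sketch of Eq. (\ref{eq:pnd})\nis given below; it is purely illustrative and uses the thermal state, whose $P$ function\nis a non-negative Gaussian, as a standard example of a classical state, comparing the\nnumerically integrated $p_{n}$ with the known thermal distribution $p_{n}=\bar{n}^{n}\/(1+\bar{n})^{n+1}$.\n\begin{verbatim}\n# Illustrative sketch: Eq. (eq:pnd) for a thermal state with mean photon\n# number nbar, whose P function is P(alpha) = exp(-|alpha|^2\/nbar)\/(pi*nbar).\n# The angular integral is trivial, so only a radial quadrature is needed.\nfrom math import exp, factorial\n\ndef p_n_from_P(n, nbar, r_max=12.0, steps=6000):\n    dr = r_max \/ steps\n    total = 0.0\n    for i in range(1, steps + 1):\n        r = i * dr\n        piP = exp(-r * r \/ nbar) \/ nbar    # pi times P(alpha) at |alpha| = r\n        overlap = exp(-r * r) * r ** (2 * n) \/ factorial(n)  # |<n|alpha>|^2\n        total += 2.0 * piP * overlap * r * dr   # angular integral gives 2*pi\n    return total\n\nnbar = 0.7\nfor n in range(4):\n    print(n, p_n_from_P(n, nbar), nbar ** n \/ (1 + nbar) ** (n + 1))\n# p_0 stays strictly positive: no hole at the vacuum can appear while P >= 0.\n\end{verbatim}\nThe engineered states considered in this chapter are designed precisely to evade this\nbound by making $P\left(\alpha\right)$ negative somewhere, which is what the hole at\n$n=0$ certifies. 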
To elaborate\non how such a hole can be created, we describe an arbitrary pure quantum state as a superposition\nof the Fock states \n\begin{equation}\n|\psi\rangle=\sum\limits _{n=0}^{\infty}c_{n}|n\rangle,\label{eq:fock-superposition}\n\end{equation}\nwhere $c_{n}$ is the probability amplitude of the state $|n\rangle$.\nA hole can be created at $n=0$ by adding a single photon to obtain\n\begin{equation}\n|\psi_{1}\rangle=N_{1}a^{\dagger}|\psi\rangle,\label{eq:a-dagger-for-hole}\n\end{equation}\nwhere $N_{1}=\left(\langle\psi|aa^{\dagger}|\psi\rangle\right)^{-1\/2}$\nis the normalization constant. If we consider the initial quantum\nstate $|\psi\rangle$ as a coherent state, the addition of a single\nphoton would lead to a photon added coherent state, which has been\nexperimentally realized \cite{zavatta2004quantum} and extensively\nstudied \cite{hong1999nonclassical} because of its interesting nonclassical\nproperties and potential applications. Thus, in quantum state engineering,\ntechniques for photon addition are known \cite{thapliyal2017comparison,malpani2019lower,malpani2019quantum}\nand experimentally realized.\n\nAn alternative technique to create a hole at the vacuum is vacuum filtration.\nThe detailed procedure of this technique was recently discussed in\n\cite{Meher2018}. Vacuum filtration implies removal of the coefficient\nof the vacuum state, $c_{0}$, in Eq. (\ref{eq:fock-superposition})\nand subsequent normalization. Clearly, this procedure would yield \n\begin{equation}\n|\psi_{2}\rangle=N_{2}\sum\limits _{n=1}^{\infty}c_{n}^{\prime}|n\rangle,\label{eq:c0-is-removed}\n\end{equation}\nwhere the normalization constant $N_{2}=\left(1-\left|c_{0}\right|^{2}\right)^{-1\/2}$.\nBoth these states (i.e., $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$)\nare maximally nonclassical as far as Lee's result related to the nonclassical\ndepth is concerned \cite{lee1991measure}. However, the lower-order\nnonclassical properties of $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$\nin (\ref{eq:a-dagger-for-hole})-(\ref{eq:c0-is-removed}) have recently been reported\nto be different for $|\psi\rangle$ chosen as a coherent state \cite{Meher2018}.\nThis led to several interesting questions, like: What happens if the\ninitial state on which the photon addition or vacuum filtration process\nis to be applied is already nonclassical (specifically, a pure state\nother than a coherent state \cite{hillery1985classical})? How do these\nprocesses affect the higher-order nonclassical properties of the quantum\nstates? How does the depth of nonclassicality corresponding to a particular\nwitness of nonclassicality change with the parameters of the quantum\nstate for these processes? The present chapter aims to answer these\nquestions through a comparative study using a set of interesting quantum\nstates $|\psi\rangle$ (and the corresponding single photon added\n$|\psi_{1}\rangle$ and vacuum filtered $|\psi_{2}\rangle$ states),\neach of which can be reduced to many more states. Specifically, in\nwhat follows, we study the lower- and higher-order nonclassical\nproperties of the single photon addition and vacuum filtration of the even\ncoherent state (ECS), binomial state (BS) and Kerr state (KS). In\nfact, the quantum state engineering processes described mathematically\nin Eqs. 
(\ref{eq:a-dagger-for-hole})-(\ref{eq:c0-is-removed}) can\nbe used to prepare a set of engineered quantum states, namely vacuum\nfiltered ECS (VFECS), vacuum filtered BS (VFBS), vacuum filtered KS\n(VFKS), photon added ECS (PAECS), photon added BS (PABS) and photon\nadded KS (PAKS). We aim to look at the nonclassical properties of these\nstates with a focus on higher-order nonclassical properties and subsequently\nquantify the amount of nonclassicality in all these states. In what\nfollows, the higher-order nonclassical properties are illustrated\nthrough the criteria of HOA, HOS and HOSPS with a brief discussion of\nlower-order antibunching and squeezing.\n\section{Quantum states of interest\label{sec:Quantum-states-of-1}}\n\nIn this chapter, we have selected a set of three widely studied and\nimportant quantum states: (i) ECS, (ii) BS and (iii) KS. We subsequently\nnoted that these states can further be engineered to generate the corresponding\nvacuum filtered states and single photon added states. For example,\none can generate VFBS and PABS from BS by using the vacuum filtration\n\cite{Meher2018} and photon addition \cite{zavatta2004quantum} processes,\nrespectively. In a similar manner, these processes can also generate\nVFECS and PAECS from ECS, and VFKS and PAKS from KS. In this section,\nwe briefly describe ECS, BS, KS, VFBS, PABS, VFECS, PAECS, VFKS and\nPAKS. Specifically, we describe the three parent states as Fock superposition\nstates. Similarly, the six engineered states are also expressed as\nFock superposition states for the convenience of identifying the corresponding\nphoton number distributions (each of which essentially contains a\nhole at the vacuum). In the rest of the study, we wish to compare\nthe impact of these two quantum state engineering processes (i.e.,\nvacuum filtration and photon addition processes) on the nonclassical\nproperties of the engineered states. In the above, we have described\nthe six (three) quantum states of our interest as Fock superposition states\nhaving (without) holes at the vacuum. In what follows, these expressions\nwill be used to study the nonclassical properties of these states\nusing a set of witnesses of nonclassicality. Specifically, we will\nuse a set of witnesses of nonclassicality which are based on moments\nof the annihilation and creation operators. Keeping this in mind, in the\nfollowing subsection, we report the general form of such moments for\nall the six engineered states of our interest and the corresponding\nthree parent states (thus overall nine states).\n\n\subsection{Expressions for moments of annihilation and creation operators}\n\nIn 1992, Agarwal and Tara \cite{agarwal1992nonclassical} introduced\na criterion of nonclassicality in the form of a matrix of moments\nof creation and annihilation operators. 
This criterion was further\nmodified to propose a moment-based criteria of entanglement \\cite{shchukin2005inseparability}\nand nonclassicality \\cite{miranowicz2010testing,miranowicz2009inseparability}.\nTherefore, it is convenient to find out the expectation value of the\nmost general term describing higher-order moment $\\langle\\hat{a}^{\\dagger j}\\hat{a}^{k}\\rangle$\nfor a given state to investigate the nonclassicality using the set\nof moment-based criteria.\n\n\\subsubsection{Expectation values for even coherent states and the corresponding\nengineered states}\n\nThe analytic expression of $\\langle\\hat{a}^{\\dagger j}\\hat{a}^{k}\\rangle_{i}$\nis obtained for the quantum states $i\\in\\{{\\rm ECS},{\\rm VFECS,PAECS\\}}$\nusing Eqs. (\\ref{eq:VFECS-EXPANDED}) and (\\ref{eq:PAECS}). For ECS\nand VFECS, expressions of the moments can be written in a compact\nform as\n\n\\begin{equation}\n\\begin{array}{lcl}\n\\langle\\hat{a}^{\\dagger j}\\hat{a}^{k}\\rangle_{{\\rm ECS}} & = & \\frac{\\exp\\left[-\\mid\\alpha\\mid^{2}\\right]}{2\\left(1+\\exp\\left[-2\\mid\\alpha\\mid^{2}\\right]\\right)}\\sum\\limits _{n=0}^{\\infty}\\frac{\\alpha^{n}(\\alpha^{\\star})^{n-k+j}}{\\left(n-k\\right)!}\\mathcal{G}_{n,j,k}.\\end{array}\\label{eq:ECS-moment}\n\\end{equation}\nand \n\\begin{equation}\n\\begin{array}{lcl}\n\\langle\\hat{a}^{\\dagger j}\\hat{a}^{k}\\rangle_{{\\rm VFECS}} & = & \\left\\{ \\begin{array}{l}\nN_{{\\rm VFECS}}^{2}\\sum\\limits _{n=1}^{\\infty}\\frac{\\alpha^{n}(\\alpha^{\\star})^{n-k+j}}{\\left(n-k\\right)!}\\mathcal{G}_{n,j,k}\\,\\,\\,\\,\\mathrm{for}\\,\\,k\\leq j,\\\\\nN_{{\\rm VFECS}}^{2}\\sum\\limits _{n=1}^{\\infty}\\frac{\\alpha^{\\star n}\\alpha^{n+k-j}}{\\left(n-j\\right)!}\\mathcal{G}_{n,j,k}\\,\\,\\,\\mathrm{for}\\,\\,k>j,\n\\end{array}\\right.\\end{array}\\label{eq:VFECS-moment}\n\\end{equation}\nrespectively. 
Similarly, we obtained analytic expression for $\\langle\\hat{a}^{\\dagger j}\\hat{a}^{k}\\rangle_{{\\rm PAECS}}$\nfor PAECS as \n\\begin{equation}\n\\begin{array}{lcl}\n\\langle\\hat{a}^{\\dagger j}\\hat{a}^{k}\\rangle_{{\\rm PAECS}} & = & N_{{\\rm PAECS}}^{2}\\sum\\limits _{n=0}^{\\infty}\\frac{\\alpha^{n}(\\alpha^{\\star})^{n-k+j}\\left(n+1\\right)\\left(n-k+j+1\\right)}{\\left(n+1-k\\right)!}\\mathcal{G}_{n,j,k}.\\end{array}\\label{eq:PAECS-moment}\n\\end{equation}\nHere, $\\mathcal{G}_{n,j,k}=\\left(1+\\left(-1\\right)^{n}\\right)\\left(1+\\left(-1\\right)^{n-k+j}\\right)$.\nThe above mentioned quantities are also functions of displacement\nparameter of ECS used to generate the engineered states, which will\nbe used as a control parameter while discussion of nonclassicality\ninduced due to engineering operations.\n\n\\subsubsection{Expectation values for binomial state and the corresponding engineered\nstates}\n\nSimilarly, the compact analytic form of $\\langle\\hat{a}^{\\dagger t}\\hat{a}^{r}\\rangle_{{\\rm BS}}$\ncan be written as \n\\begin{equation}\n\\begin{array}{lcl}\n\\langle\\hat{a}^{\\dagger t}\\hat{a}^{r}\\rangle_{{\\rm BS}} & = & \\sum\\limits _{n=0}^{M}\\mathcal{I}_{p,M,n,r,t}\\frac{M!}{(n-r)!}.\\end{array}\\label{eq:BS-moment}\n\\end{equation}\nIn case of VFBS and PABS, the analytic form of $\\langle\\hat{a}^{\\dagger t}\\hat{a}^{r}\\rangle_{{i}}$\nis obtained as \n\\begin{equation}\n\\begin{array}{lcl}\n\\langle\\hat{a}^{\\dagger t}\\hat{a}^{r}\\rangle_{{\\rm VFBS}} & = & \\left\\{ \\begin{array}{l}\nN_{{\\rm VFBS}}^{2}\\sum\\limits _{n=1}^{M}\\mathcal{I}_{p,M,n,r,t}\\frac{M!}{(n-r)!}\\,\\,\\,\\,\\,{\\rm {for}\\,r\\leq t,}\\\\\nN_{{\\rm VFBS}}^{2}\\sum\\limits _{n=1}^{M}\\mathcal{I}_{p,M,n,-r,-t}\\frac{M!}{(n-t)!}\\,\\,\\,\\,\\,{\\rm {for}\\,r>t,}\n\\end{array}\\right.\\end{array}\\label{eq:VFBS-moment}\n\\end{equation}\nand \n\\begin{equation}\n\\begin{array}{lcl}\n\\langle\\hat{a}^{\\dagger t}\\hat{a}^{r}\\rangle_{{\\rm PABS}} & = & N_{{\\rm PABS}}^{2}\\sum\\limits _{n=0}^{M}\\mathcal{I}_{p,M,n,r,t}\\frac{M!(n+1)!(n+1-r+t)!}{n!(n+1-r)!(n-r+t)!},\\end{array}\\label{eq:PABS-moment}\n\\end{equation}\nrespectively, with $\\mathcal{I}_{p,M,n,r,t}=\\left[\\frac{p^{2n-r+t}\\left(1-p\\right)^{2M-2n+r-t}}{(M-n)!(M-n+r-t)!}\\right]^{1\/2}$.\nHere, the obtained average values of moments are also dependent on\nBS parameters, which will be used to enhance\/control the nonclassicality\nfeatures in the generated states.\n\n\\subsubsection{Expectation values for Kerr state and the corresponding engineered\nstates}\n\nFor Kerr state, Vacuum filtered kerr state and Photon added kerr state,\nwe use the same approach to obtain a compact generalized forms of\n$\\langle\\hat{a}^{\\dagger q}\\hat{a}^{s}\\rangle_{{i}}$; and our computation\nyielded \n\\begin{equation}\n\\begin{array}{lcl}\n\\langle\\hat{a}^{\\dagger q}\\hat{a}^{s}\\rangle_{{\\rm KS}} & = & \\exp\\left[-\\mid\\alpha\\mid^{2}\\right]\\sum\\limits _{n=0}^{\\infty}\\frac{\\alpha^{n}(\\alpha^{\\star})^{n-s+q}}{\\left(n-s\\right)!}\\mathcal{F}_{n,s,q},\\end{array}\\label{eq:kS-moment}\n\\end{equation}\n\\begin{equation}\n\\begin{array}{l}\n\\begin{array}{l}\n\\langle\\hat{a}^{\\dagger q}\\hat{a}^{s}\\rangle_{{\\rm VFKS}}=\\left\\{ \\begin{array}{l}\nN_{{\\rm VFKS}}^{2}\\sum\\limits _{n=1}^{\\infty}\\frac{\\alpha^{n}(\\alpha^{\\star})^{n-s+q}}{\\left(n-s\\right)!}\\mathcal{F}_{n,s,q},\\,{\\rm {for}\\,\\,s\\leq q,}\\\\\nN_{{\\rm VFKS}}^{2}\\sum\\limits _{n=1}^{\\infty}\\frac{\\alpha^{\\star 
n}\alpha^{n+s-q}}{\left(n-q\right)!}\mathcal{F}_{n,-s,-q}^{\star},\,{\rm {for}\,\,s>q,}\n\end{array}\right.\end{array}\end{array}\label{eq:VFkS-moment}\n\end{equation}\nand \n\begin{equation}\n\begin{array}{lcl}\n\langle\hat{a}^{\dagger q}\hat{a}^{s}\rangle_{{\rm PAKS}} & = & N_{{\rm PAKS}}^{2}\sum\limits _{n=0}^{\infty}\frac{\alpha^{n}(\alpha^{\star})^{n-s+q}\left(n+1\right)!\left(n-s+q+1\right)!}{n!\left(n-s+q\right)!(n+1-s)!}\mathcal{F}_{n,s,q}.\end{array}\label{eq:PAKS-moment}\n\end{equation}\nHere, $\mathcal{F}_{n,s,q}=\exp\left[\iota\chi\left(q-s\right)\left(2n+q-s-1\right)\right]$.\nFrom the above expressions, it is clear that when $q=s$, there is\nno role of $\chi$ and the behavior of KS is similar to that of a\ncoherent state. So the effect of this parameter $\left(\chi\right)$\ncan be observed only in HOS, which also depends on higher-order\nmoments other than the moments of the number operator, i.e., $\langle\hat{a}^{\dagger q}\hat{a}^{s}\rangle_{{i}}\,:q\neq s$.\nIn what follows, we use the expressions of the moments given in Eqs. (\ref{eq:VFECS-moment})-(\ref{eq:PAKS-moment})\nto study various lower- and higher-order nonclassicality witnesses.\n\n\section{Nonclassicality witnesses\label{sec:Nonclassicality-witnesses-1}}\n\nThere are various criteria of nonclassicality; most of them are sufficient\nbut not necessary, in the sense that satisfaction of such a criterion\ncan identify a nonclassical feature, but failure does not ensure that\nthe state is classical. Further, most of the criteria (specifically,\nall the criteria studied here) do not provide any quantitative measure\nof the nonclassicality present in a state, and so they are referred to\nas witnesses of nonclassicality.\nThese witnesses are based on either quasiprobability distributions\nor moments of the annihilation and creation operators. In the present\nwork, we have used a set of moment-based criteria to investigate the nonclassical\nproperties of our desired engineered quantum states. Specifically,\nwe have investigated the possibilities of observing lower-order squeezing\nand antibunching as well as HOA, HOSPS, and HOS for all the states\nof our interest. To begin the investigation and the comparison process,\nlet us start with the study of antibunching.\n\n\subsection{Lower- and higher-order antibunching}\n\nThe phenomenon of lower-order antibunching is closely associated with\nlower-order sub-Poissonian photon statistics \cite{brown1956correlation}.\nHowever, they are not equivalent \cite{zou1990photon}. The concept\nof HOA also plays an important role in identifying the presence of\nweaker nonclassicality \cite{allevi2012measuring,hamar2014non}. It\nwas first introduced in 1990 based on the majorization technique \cite{lee1990higher},\nfollowed by some of its modifications \cite{an2002multimode,pathak2006control}.\nIn this section, we study the generalized HOA criterion introduced\nby Pathak and Garcia \cite{pathak2006control} to investigate lower-order\nantibunching and HOA. The analytic expressions of the moments (\ref{eq:VFECS-moment})-(\ref{eq:PAKS-moment})\ncan be used to investigate the nonclassicality using inequality (\ref{eq:HOA-1})\nfor the set of states. The obtained results are illustrated in Figure\n\ref{fig:HOA}, where we have compared the results for the vacuum\nfiltered and single photon added states. 
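\n\nBefore discussing these results, we note that the compact moment expressions above, e.g.,\nEqs. (\ref{eq:ECS-moment})-(\ref{eq:PAECS-moment}), can be cross-checked by building the\nstates directly in a truncated Fock basis. A minimal Python sketch of such a cross-check\nfor ECS and its engineered variants is given below; it is only illustrative, the truncation\ndimension and the value of $\alpha$ are chosen arbitrarily, and the antibunching witness is\nwritten in the commonly used moment form $\langle\hat{a}^{\dagger l}\hat{a}^{l}\rangle-\langle\hat{a}^{\dagger}\hat{a}\rangle^{l}<0$,\nwhose convention may differ from that of Eq. (\ref{eq:HOA-1}).\n\begin{verbatim}\n# Illustrative cross-check (not part of the thesis): ECS, VFECS and PAECS\n# built in a truncated Fock basis, with moments evaluated directly.\nfrom math import factorial, sqrt\n\nDIM = 60              # truncation dimension, assumed large enough for alpha\n\ndef normalize(c):\n    s = sqrt(sum(abs(x) ** 2 for x in c))\n    return [x \/ s for x in c]\n\ndef ecs(alpha):\n    # even coherent state ~ |alpha> + |-alpha>: only even Fock terms survive\n    return normalize([alpha ** m \/ sqrt(factorial(m)) if m % 2 == 0 else 0j\n                      for m in range(DIM)])\n\ndef vacuum_filter(c):  # remove the n = 0 component, then renormalize\n    return normalize([0j] + list(c[1:]))\n\ndef photon_add(c):     # c'_{m+1} = sqrt(m+1) c_m, then renormalize\n    return normalize([0j] + [sqrt(m + 1) * c[m] for m in range(DIM - 1)])\n\ndef moment(c, j, k):   # <a^dagger^j a^k> for a pure state with coefficients c\n    tot = 0j\n    for m in range(k, DIM):\n        mp = m - k + j\n        if mp < DIM:\n            tot += (c[mp].conjugate() * c[m]\n                    * sqrt(factorial(m) \/ factorial(m - k))\n                    * sqrt(factorial(mp) \/ factorial(m - k)))\n    return tot\n\nalpha, l = 1.0 + 0j, 2\nfor name, c in [('ECS', ecs(alpha)), ('VFECS', vacuum_filter(ecs(alpha))),\n                ('PAECS', photon_add(ecs(alpha)))]:\n    witness = (moment(c, l, l) - moment(c, 1, 1) ** l).real\n    print(name, witness)  # negative values would signal antibunching\n\end{verbatim}\nSuch a direct construction also makes it easy to verify the expected holes in the photon\nnumber distributions of the engineered states, since $|c_{0}|^{2}=0$ by construction. 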
During this attempt, we also\ndiscuss the nonclassicality present in the quantum states used for\nthe preparation of the engineered quantum states (cf. Figs. \ref{fig:HOA}\n(a)-(c)). In Figs. \ref{fig:HOA} (b)-(c), we have shown the results\nfor the photon added and vacuum filtered BS and KS, where it can be observed\nthat the depths of both the lower- and higher-order witnesses in the negative\nregion are larger for the photon added BS and KS in comparison with the\nvacuum filtered BS and KS, respectively. However, an opposite nature\nis observed for ECS, where the depth of the lower- and higher-order witnesses\nis larger for vacuum filtration in comparison with photon addition\nif the values of $\alpha$ remain below certain values, whereas for\nphoton addition the depth of the lower- and higher-order antibunching\nwitnesses is found to be greater than that for vacuum filtration for\nhigher values of $\alpha$ (cf. Figure \ref{fig:HOA} (a)). However,\nHOA is not observed for the parent ECS and KS, and thus both operations can\nbe ascribed as nonclassicality inducing operations as far as this\nnonclassical feature is concerned.\n\n\begin{figure}\n\centering{} %\n\begin{tabular}{cc}\n\centering{}\includegraphics[width=60mm]{AB-l-2-compare.pdf} & \includegraphics[width=60mm]{Binomial-AB-l-2.pdf}\tabularnewline\n(a) & (b) \tabularnewline\n\includegraphics[width=60mm]{kerr-AB-l-2-compare.pdf} & \tabularnewline\n(c) & \tabularnewline\n\end{tabular}\caption{\label{fig:HOA} Lower- and higher-order antibunching witnesses as\nfunctions of the displacement parameter $\alpha$ (for ECS and KS) and\nprobability $p$ (for BS with parameter $M=10$) for (a) ECS, PAECS\nand VFECS, (b) BS, PABS and VFBS, and (c) KS, PAKS and VFKS. The quantities\nshown in all the plots are dimensionless.}\n\end{figure}\n\n\subsection{Lower- and higher-order squeezing}\n\nThe concept of squeezing originates from the uncertainty relation.\nThere is a minimum value of the uncertainty relation involving quadrature\noperators, where the variances of two non-commuting quadratures (say\nposition and momentum) are equal and their product satisfies the minimum\nuncertainty relation. Such a situation is closest to the classical\nscenario, in the sense that there is no uncertainty in the classical\npicture and this is the closest point that one can approach while remaining\nwithin the framework of quantum mechanics. The coherent state satisfies\nthis minimum uncertainty relation and is referred to as a classical state\n(or, more precisely, the state closest to a classical one). If any of the quadrature\nvariances reduces below the corresponding value for a minimum uncertainty\n(coherent) state (at the cost of an increase in the fluctuations in the\nother quadrature), then the corresponding state is called a squeezed\nstate.\n\nThe higher-order nonclassical properties can be investigated by studying\nHOS. There are two different criteria for HOS \cite{hong1985generation,hong1985higher,hillery1987amplitude}:\nthe Hong-Mandel criterion \cite{hong1985higher} and the Hillery criterion\n\cite{hillery1987amplitude}. The concept of HOS was first introduced\nby Hong and Mandel using higher-order moments of the quadrature operators\n\cite{hong1985higher}. According to this criterion, HOS is observed\nif a higher-order moment of a quadrature operator for a quantum\nstate is less than the corresponding coherent state\nvalue. 
Another type of HOS was introduced by Hillery, who introduced\namplitude powered quadratures and used the variance of such a quadrature\nto define HOS \cite{hillery1987amplitude}. Here, we aim to analyze\nthe possibility of HOS using the Hong-Mandel criterion for $l$th order\nsqueezing. We have investigated the possibility of observing HOS analytically\nusing Eqs. (\ref{eq:VFECS-moment})-(\ref{eq:PAKS-moment}) and inequality\n(\ref{eq:Hong-def2-2}) for all the engineered quantum states of our interest\nand have shown the corresponding results in Figs. \ref{fig:HOS-1}\n(a)-(c), where we have compared the HOS in the set of quantum states\nand the states obtained by photon addition and vacuum filtration.\nThese operations fail to induce this nonclassical feature in the engineered\nstates prepared from ECS, which also did not show signatures of squeezing.\nIn Figure \ref{fig:HOS-1} (a), we illustrate the Hong-Mandel type HOS\nwith respect to the parameter $p$, where we have shown the existence of HOS\nfor BS, VFBS and PABS. It can be observed that the state engineering\noperations fail to increase this particular feature of nonclassicality\nin BS. Additionally, higher-order nonclassicality is absent for higher\nvalues of $p$ when the corresponding lower-order squeezing is present.\nIn the case of KS, PAKS and VFKS, HOS is observed\nwhen the values of $\alpha$ are greater than certain values for the\nindividual curves of the corresponding states (cf. Figure \ref{fig:HOS-1}\n(b)). Note that photon addition may provide some advantage in this\ncase, but vacuum filtration would not, as for the same value of the displacement\nparameter KS and PAKS (VFKS) have (has not) shown squeezing. Interestingly,\nthe presence of squeezing also depends upon the Kerr nonlinearity\nparameter $\chi$, which is shown in Figure \ref{fig:HOS-1} (c).\nSimilar to Figure \ref{fig:HOS-1} (b), photon addition shows an advantage\nover KS, which disappears for larger values of $\chi$, while vacuum\nfiltering is not beneficial.\n\nIn Figure \ref{fig:HOS-contour}, we have shown, using the dark (blue)\ncolor in the contour plots of the HOS witness for PAKS, that squeezing\ncan be observed for higher values of $|\alpha|$ and smaller values\nof $\chi$. Additionally, the phase parameter $\theta$ of $\alpha$\nis also relevant for observing the nonclassicality, as squeezing occurs\nin the vicinity of $\theta=m\pi$, while it disappears for $\theta=\frac{m\pi}{2}$\nwith integer $m$. Similar behavior is observed in KS and VFKS (not\nshown here).\n\n\begin{figure}\n\centering{} %\n\begin{tabular}{cc}\n\centering{}\includegraphics[width=60mm]{Binomial-HOS-l-2.pdf} & \includegraphics[width=60mm]{kerr-HOS-l-2.pdf}\tabularnewline\n(a) & (b) \tabularnewline\n\includegraphics[width=60mm]{kerr-HOS-l-2-diff-chi.pdf} & \tabularnewline\n(c) & \tabularnewline\n\end{tabular}\caption{\label{fig:HOS-1} Illustration of lower- and higher-order squeezing\nfor (a) BS, PABS and VFBS; (b) KS, PAKS and VFKS at the fixed value\nof $\chi=0.02$; (c) KS, VFKS and PAKS as a function of $\chi$ with\n$\alpha=1$. 
The negative regions of the curves illustrate the presence\nof squeezing.}\n\\end{figure}\n\n\\begin{figure}\n\\centering \\includegraphics[width=120mm]{Slide1.jpg}\n\\caption{\\label{fig:HOS-contour}The dependence of HOS witness ($l=4$) on\nKerr parameter $\\chi$ and displacement parameters $\\left|\\alpha\\right|$\nand $\\theta$ for PAKS with (a) $\\left|\\alpha\\right|=3$, (b) $\\chi=0.02$,\n(c) $\\theta=0$.}\n\\end{figure}\n\n\\subsection{Higher-order sub-Poissonian photon statistics}\n\nThe higher-order moments in Eqs. (\\ref{eq:VFECS-moment})-(\\ref{eq:PAKS-moment})\nare used to calculate the above inequality $(\\ref{eq:hosps22-1})$\nwith the help of (\\ref{eq:HOA-1}) for states obtained after vacuum\nfiltration and photon addition in ECS, BS and KS as well as the parent\nstates, and the corresponding results are depicted in Figure \\ref{fig:HOSPS}.\nNonclassicality is not revealed by HOSPS criteria of even orders in\ncase of ECS, while corresponding engineered states show nonclassicality.\nAdditionally, nonclassicality is induced by vacuum filtration for\nodd orders while it was not observed in the parent state (cf. Figure\n\\ref{fig:HOSPS} (a)). This clearly shows the role of hole burning\noperations in inducing nonclassicality for odd orders. However, in\ncase of even orders, the same operations are also observed to destroy\nthe nonclassicality in the parent state. From Figs. \\ref{fig:HOSPS}\n(b) and (c), it is observed that BS and KS do not show HOSPS for the\nodd values of $l$ even after application of state engineering operations.\nAdditionally, HOSPS is not observed for the KS for even values of\n$l$, too. Consequently, the nonclassical feature witnessed through\nthe HOSPS criterion in PAKS can be attributed solely to the hole burning\nprocess.\n\nNonclassicality in the engineered quantum states can also be studied\nusing quasidistribution functions \\cite{thapliyal2015quasiprobability}\nbut here we are going to quantify the amount of nonclassicality in\nthese states. Further, the effect of decoherence on the observed nonclassicality\n\\cite{banerjee2007dynamics,banerjee2010dynamics,banerjee2010entanglement,naikoo2018probing}\nand phase diffusion \\cite{banerjee2007phaseQND,banerjee2007phase}\ncan be studied.\n\n\\begin{figure}\n\\centering{} %\n\\begin{tabular}{cc}\n\\centering{}\\includegraphics[width=60mm]{HOSPS-l-2-compare.pdf} & \\includegraphics[width=60mm]{Binomial-HOSPS-l-2.pdf}\\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{kerr-HOSPS-l-2.pdf} & \\tabularnewline\n(c) & \\tabularnewline\n\\end{tabular}\\caption{\\label{fig:HOSPS} Illustration of HOSPS as a function of displacement\nparameter $\\alpha$ (for ECS and KS) and probability $p$ (for BS)\nfor (a) ECS, (b) BS, and (c) KS and corresponding engineered states.\nHOSPS is not observed in KS.}\n\\end{figure}\n\\begin{figure}\n\\centering %\n\\begin{tabular}{cc}\n\\includegraphics[width=60mm]{Linear-entropy-ECS.pdf} & \\includegraphics[width=60mm]{Linear-entropy-BS.pdf}\\tabularnewline\n(a) & (b) \\tabularnewline\n\\includegraphics[width=60mm]{Linear-entropy-KS.pdf} & \\includegraphics[width=60mm]{Linear-entropy-KS-chi.pdf}\\tabularnewline\n(c) & (d) \\tabularnewline\n\\end{tabular}\\caption{\\label{fig:Linear-Entropy} Illustration of linear entropy for (a)\nECS, PAECS and VFECS, (b) BS, PABS and VFBS, (c) KS, PAKS and VFKS\nwith $\\alpha$ or $p$ for $\\chi=0.02$. 
(d) Dependence of nonclassicality\nin KS, PAKS and VFKS on $\\chi$ for $\\alpha=1$.}\n\\end{figure}\n\n\\section{Nonclassicality measure}\n\nWe have obtained the analytic expressions of linear entropy for ECS,\nKS, BS and corresponding engineered states which are given as \n\\begin{equation}\n\\begin{array}{lcl}\n\\mathcal{L}_{{\\rm ECS}} & = & 1-\\frac{\\exp\\left[-2\\mid\\alpha\\mid^{2}\\right]}{4\\left(1+\\exp\\left[-2\\mid\\alpha\\mid^{2}\\right]\\right)^{2}}\\sum\\limits _{n,m,r=0}^{\\infty}f_{n,m,r}\\sum\\limits _{k_{1}=0}^{n}\\frac{^{n}C_{k_{1}}\\,{}^{r}C_{r+k_{1}-m}}{2^{n+r}},\\end{array}\\label{eq:LE-ECS}\n\\end{equation}\nfor VFECS \n\\begin{equation}\n\\begin{array}{lcl}\n\\mathcal{L}_{{\\rm VFECS}} & = & 1-\\left(N_{{\\rm VFECS}}\\right)^{4}\\sum\\limits _{n,m,r=1}^{\\infty}f_{n,m,r}\\sum\\limits _{k_{1}=0}^{n}\\frac{^{n}C_{k_{1}}\\,{}^{r}C_{r+k_{1}-m}}{2^{n+r}},\\end{array}\\label{eq:LE-VFECS}\n\\end{equation}\nand for PAECS \n\\begin{equation}\n\\begin{array}{lcl}\n\\mathcal{L}_{{\\rm PAECS}} & = & 1-\\left(N_{{\\rm PAECS}}\\right)^{4}\\sum\\limits _{n,m,r=0}^{\\infty}f_{n,m,r}\\left(m+1\\right)\\left(n-m+r+1\\right)\\\\\n & \\times & \\sum\\limits _{k_{1}=0}^{n+1}\\frac{^{n+1}C_{k_{1}}\\,{}^{r+1}C_{r+k_{1}-m}}{2^{n+r+2}},\n\\end{array}\\label{eq:LE-PAECS}\n\\end{equation}\nwhere $f_{n,m,r}=\\frac{\\mid\\alpha\\mid^{2n+2r}\\left(1+\\left(-1\\right)^{n}\\right)\\left(1+\\left(-1\\right)^{m}\\right)\\left(1+\\left(-1\\right)^{r}\\right)\\left(1+\\left(-1\\right)^{n+r-m}\\right)}{n!r!}$.\n\nSimilarly, analytical expression for linear entropy of BS \n\\begin{equation}\n\\begin{array}{lcl}\n\\mathcal{L}_{{\\rm BS}} & = & 1-\\sum\\limits _{n,m,r=0}^{M}g_{p,M,n,m,r}\\sum\\limits _{k_{1}=0}^{n}\\frac{^{n}C_{k_{1}}\\,{}^{r}C_{r+k_{1}-m}}{2^{n+r}},\\end{array}\\label{eq:LE-BS}\n\\end{equation}\nfor VFBS \n\\begin{equation}\n\\begin{array}{lcl}\n\\mathcal{L}_{{\\rm VFBS}} & = & 1-\\left(N_{{\\rm VFBS}}\\right)^{4}\\sum\\limits _{n,m,r=1}^{M}g_{p,M,n,m,r}\\sum\\limits _{k_{1}=0}^{n}\\frac{^{n}C_{k_{1}}\\,{}^{r}C_{r+k_{1}-m}}{2^{n+r}},\\end{array}\\label{eq:LE-VFBS}\n\\end{equation}\nand for PABS \n\\begin{equation}\n\\begin{array}{lcl}\n\\mathcal{L}_{{\\rm PABS}} & = & 1-\\left(N_{{\\rm PABS}}\\right)^{4}\\sum\\limits _{n,m,r=0}^{M}g_{p,M,n,m,r}\\left(m+1\\right)\\left(n-m+r+1\\right)\\\\\n & \\times & \\sum\\limits _{k_{1}=0}^{n+1}\\frac{^{n+1}C_{k_{1}}\\,{}^{r+1}C_{r+k_{1}-m}}{2^{n+r+2}}\n\\end{array}\\label{eq:LE-PABS}\n\\end{equation}\nare obtained. 
Here, $g_{p,M,n,m,r}=\\frac{1}{n!r!}\\left[\\frac{(M!)^{4}p^{2(n+r)}(1-p)^{4M-2n-2r}}{(M-n)!(M-m)!(M-r)!(M-n-r+m)!}\\right]^{1\/2}$.\n\nFinally, analytical expression for linear entropy of KS, VFKS, PAKS\ncan be given as \n\\begin{equation}\n\\begin{array}{lcl}\n\\mathcal{L}_{{\\rm KS}} & = & 1-\\exp\\left[-2\\mid\\alpha\\mid^{2}\\right]\\sum\\limits _{n,m,r=0}^{\\infty}h_{n,m,r}\\sum\\limits _{k_{1}=0}^{n}\\frac{^{n}C_{k_{1}}\\,{}^{r}C_{r+k_{1}-m}}{2^{n+r}},\\end{array}\\label{eq:LS-KS}\n\\end{equation}\n\\begin{equation}\n\\begin{array}{lcl}\n\\mathcal{L}_{{\\rm VFKS}} & = & 1-\\left(N_{{\\rm VFKS}}\\right)^{4}\\sum\\limits _{n,m,r=1}^{\\infty}h_{n,m,r}\\sum\\limits _{k_{1}=0}^{n}\\frac{^{n}C_{k_{1}}\\,{}^{r}C_{r+k_{1}-m}}{2^{n+r}},\\end{array}\\label{eq:LE-VFKS}\n\\end{equation}\nand \n\\begin{equation}\n\\begin{array}{lcl}\n\\mathcal{L}_{{\\rm PAKS}} & = & 1-\\left(N_{{\\rm PAKS}}\\right)^{4}\\sum\\limits _{n,m,r=0}^{\\infty}h_{n,m,r}\\left(m+1\\right)\\left(n-m+r+1\\right)\\\\\n & \\times & \\sum\\limits _{k_{1}=0}^{n+1}\\frac{^{n+1}C_{k_{1}}\\,{}^{r+1}C_{r+k_{1}-m}}{2^{n+r+2}},\n\\end{array}\\label{eq:LE-PAKS}\n\\end{equation}\nrespectively, with $h_{n,m,r}=\\frac{\\left|\\alpha\\right|^{2n+2r}\\exp\\left[2\\iota\\chi(m-n)(m-r)\\right]}{n!r!}$.\n\nIn general, significance of hole burning operations can be clearly\nestablished through corresponding results shown in Figure \\ref{fig:Linear-Entropy}.\nSpecifically, one can clearly see the amount of nonclassciality (revealed\nthrough the amount of entanglement it can generate at a beam splitter)\nincreases due to these operations.\n\nFrom Figs. \\ref{fig:Linear-Entropy} (a) and (c), it can be observed\nthat vacuum filtered ECS and KS are more nonclassical than corresponding\nphoton added counterparts. However, in case of BS and its engineered\nstates, it is observed that only up to a certain value of $p$ VFBS\nis more nonclassical than PABS (cf. Figure \\ref{fig:Linear-Entropy}\n(a)). In fact, the amount of additional nonclassicality induced due\nto filtration decreases with $p$ and eventually becomes zero (i.e.,\nthe amount of nonclassicality of VFBS becomes equal to that of BS\nas far as linear entropy is considered as a measure of nonclassicality).\nIt is interesting to observe the effect of Kerr coupling parameter\n$\\chi$ on the amount of nonclassicality induced due to nonlinearity.\nIt is observed that for small (relatively large) values of $\\chi$\nnonclassicality present in VFKS (PAKS) is more than that in PAKS (VFKS)\n(cf. Figure \\ref{fig:Linear-Entropy} (d)). This dependence is more\nclearly visible in Figure \\ref{fig:LE}, where one can observe strong\nnonclassicality in PAKS and VFKS (KS) favor (favors) smaller (higher)\nvalues of $\\alpha$ and large $\\chi$.\n\n\\begin{figure}\n\\centering{}\\includegraphics[width=120mm]{fig7.jpg}\n\\caption{\\label{fig:LE} Illustration of linear entropy for (a) KS (b) VFKS\n(c) PAKS.}\n\\end{figure}\n\n\\section{Conclusion \\label{sec:Conclusion}}\n\nIn summary, this chapter is focused on the comparison of the effects\nof two processes (vacuum state filtration and single photon addition)\nused in quantum state engineering to burn hole at vacuum as far as\nthe higher-order nonclassical properties of the quantum states prepared\nusing these two processes are concerned. Specifically, various quantum\nstate engineering processes for burning holes at vacuum lead to different\n$\\sum_{m=1}c_{m}|m\\rangle$ as far as the values of $c_{m}$s are\nconcerned (even when the parent state is the same). 
To study its significance\nin the nonclassical properties of the engineered states, we considered\na small set of finite and infinite dimensional quantum states (namely,\nECS, BS, and KS). This provided us with a set of six engineered quantum\nstates, namely VFECS, PAECS, VFBS, PABS, VFKS, and PAKS, and three\nparent states for our analysis. This set of engineered quantum states\ncan be of great importance in quantum information processing and\nquantum optics as they are found to be highly nonclassical, especially\nsince some exciting applications of their parent states have already been\ninvestigated with relevance to continuous variable quantum information\nprocessing and\/or quantum optics. The present study also addresses\nthe significance of these hole burning processes in inducing (enhancing)\nparticular nonclassical features in the large set of engineered and\nparent quantum states.\n\nThe general expressions for the moments of the set of states are reported\nin compact analytic form, and these are used here to investigate the nonclassical\nfeatures of these states using a set of criteria of higher-order nonclassicality\n(e.g., the criteria of HOA, HOS and HOSPS). The obtained expressions can\nbe further used to study other moment-based criteria of nonclassicality.\nThe hole burning operations are found to be extremely relevant, as\nthe states studied here are found to be highly nonclassical when quantified\nthrough a measure of nonclassicality (entanglement potential).\n In brief, both the vacuum filtration and photon addition operations\ncan be ascribed as antibunching inducing operations in KS and ECS,\nand as antibunching enhancing operations for BS. As far as HOS is concerned,\nno such advantage of these operations is visible, as these operations\nfail to induce squeezing in ECS and often decrease the amount of squeezing\npresent in the parent state. Additionally, the operations are successful\nin inducing HOSPS in KS and enhance this feature in the rest of the\nparent states. The relevance of higher-order nonclassicality in the\ncontext of the present study can be understood from the fact that\nthese hole burning operations show an increase in the depth of the HOA\nwitness and a decrease in the amount of HOS with order. In the case\nof HOSPS, even orders show nonclassicality whereas odd orders fail\nto detect it. Finally, the measure of nonclassicality reveals vacuum\nfiltration as a more powerful tool than photon addition for enhancing\nnonclassicality in the parent state, but photon addition is observed\nto be advantageous in some specific cases.\n\chapter{Conclusions And Scope For Future Work\textsc{\label{cha:Conclusions-and-Scope}}}\nThis concluding chapter aims to briefly summarize the results obtained\nin this thesis work, and it also aims to provide some insights into\nthe scope of future works. To begin with, we may note that this thesis\nis a theoretical work focused on the nonclassical and phase properties\nof some of the engineered\nquantum states of the radiation field. Here, lower- and higher-order nonclassical\nproperties of PADFS, PSDFS, PASDFS, ECS, VFECS, PAECS, BS, VFBS, PABS,\nKS, VFKS and PAKS have been witnessed through lower- and higher-order\nantibunching, higher-order sub-Poissonian photon statistics, higher-order\nsqueezing, Klyshko's criterion, Vogel's criterion, Agarwal-Tara's\ncriterion, $Q$ function, Mandel $Q_{M}$ parameter, etc. 
Further,\nthe phase properties of these states have been investigated with the help\nof the phase distribution function, phase dispersion, phase fluctuation,\nphase uncertainty parameter and angular $Q$ function. These investigations\nhave revealed that the state engineering processes may help us to\nintroduce and manipulate the nature and amount of nonclassicality\npresent in a quantum state. Keeping this in mind, at the end of the\nthesis two quantum state engineering processes, which can be used\nto generate holes at the vacuum in the photon number distribution, have been\ncompared. This systematic and rigorous study of the nonclassical and\nphase properties of the above mentioned engineered quantum states\nhas led to many new findings, some of which are already mentioned\nat the end of the individual chapters. In what follows, we list the\nmajor findings of the present thesis.\n\n\section{Conclusion}\n\nThe main observations of the present thesis may be summarized as follows: \n\begin{enumerate}\n\item It is observed that photon addition and subtraction are not only non-Gaussianity\nand nonclassicality inducing operations but they can also boost the\nnonclassicality present in the DFS. \n\item The results indicate that the amount of nonclassicality in PADFS and\nPSDFS can be controlled by the Fock state parameter, the displacement\nparameter, and the number of photons added and\/or subtracted. \n\item The higher-order squeezing witness and the $Q$ function are observed to be\ndependent on the phase of the displacement parameter. However, only\nthe higher-order squeezing criterion was found to be able to detect nonclassicality,\nand it is thus established that this phase parameter can also be used to\ncontrol the amount of nonclassicality. \n\item It is observed that the depth of nonclassicality witnesses increases\nwith the order of nonclassicality. \n\item The phase distribution and angular $Q$ functions are found to be\nsymmetric about the value of the phase of the displacement parameter. \n\item Photon addition\/subtraction and the Fock parameter are found to induce\nopposite effects on the phase distribution. Between the photon addition and\nsubtraction operations, subtracting a photon modifies the phase properties\nmore than photon addition. Interestingly, the phase\nproperties are associated with the average photon number of the state\nas well. Photon subtraction increases the average photon number as\nphoton addition does. However, photon addition creates a hole at the vacuum,\nunlike photon subtraction.\n\item The three phase fluctuation parameters given by Carruthers and Nieto\nreveal the phase properties of PADFS and PSDFS, although one of them,\nthe $U$ parameter, indicates antibunching in both PADFS and PSDFS. \n\item Phase dispersion, quantifying phase fluctuation, remains unity for the Fock\nstate, reflecting a uniform distribution, and can be observed to decrease\nwith increasing displacement parameter. This may be attributed to\nthe number-phase complementarity, as the higher values of variance\nwith increasing displacement parameter lead to smaller phase fluctuation. \n\item The present investigation has revealed the advantage of the PADFS\nand PSDFS in quantum phase estimation and has obtained the set of\noptimized parameters in the PADFS\/PSDFS. \n\item The nonclassicality and non-Gaussianity of PASDFS, viewed with the\nhelp of a quasidistribution function, namely the $Q$ function, are shown\nin the present thesis. 
\n\\item The present study also provides a glimpse of the significance of the hole burning processes in inducing particular nonclassical features in the family of engineered and parent quantum states. The hole burning operations are found to be potentially relevant, as the quantum states studied in this work are observed to be highly nonclassical when quantified through a measure of nonclassicality. \n\\end{enumerate}\n\n\\section{Scope for future work}\n\nThe work reported in the present thesis gives a general idea of how to investigate the phase properties and nonclassical features present in a family of engineered quantum states. This work can be further extended in various ways. Some of the possible extensions of the present work are listed below, with a focus on the possibilities that may be realized in the near future. \n\\begin{enumerate}\n\\item The work may be continued to find out the non-Gaussianity of the studied states. Subsequently, the nonclassicality and non-Gaussianity observed in these states can be used to realize various applications in quantum information processing tasks. In particular, one may look for specific applications of the nonclassical properties of the aforesaid states which are otherwise impossible to achieve using other types of (classical or nonclassical) states.\n\\item A major part of the results presented here can be experimentally verified using the available technology. Along this line, it would be interesting to perform a resource comparison (e.g., total number of beam splitters, photodetectors, nonlinear gadgets, etc.) for the generation of the aforesaid nonclassical states using quantum state engineering methods.\n\\item The work can be extended to quantify the amount of nonclassicality present in quantum states using different nonclassicality measures. \n\\item The methods adopted here and the results obtained here can be helpful in further theoretical studies on the nonclassical and phase properties of other engineered quantum states (both finite as well as infinite dimensional). Many such states could be obtained using several other quantum state engineering tools, for instance squeezing, photon catalysis, etc. \n\\item Attempts can be made to observe the effect of noise on these states. Specifically, one may further study the robustness of the observed nonclassical properties of PADFS, PSDFS, PASDFS, BS, VFBS, PABS, KS, VFKS, and PAKS under photon loss as well as under the inefficiency of photodetectors.\n\\end{enumerate}\nWe expect that the theoretical work done in this thesis will be realized experimentally and that this will lead to some important applications. We also hope that this work will be useful in quantum optics. With these hopes, this thesis is concluded. 
\n\n\n\n\n\\backmatter\n\\pagenumbering{roman}\\setcounter{page}{1}\n\t\\addcontentsline{toc}{chapter}{References}\n\\renewcommand\\bibname{\\bf References}\n\\setlength{\\bibsep}{0pt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Checklist}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proofs of Section \\ref{sec:elimination_at_sampling}}\n\\label{app:proofs_of_sampling}\n\n\\subsection{On Assumption \\ref{ass:beta_bounds}}\n\nWe illustrate first how Assumption~\\ref{ass:beta_bounds} covers the threshold obtained by the most common technique: first get a concentration bound valid for one time of the form $f(1\/\\delta)$, where usually $f(x) = \\log(x) + C\\log\\log(x)$ (for $C$ a dimension-dependent constant), then obtain a time-uniform concentration threshold by setting for example $\\beta_{t, \\delta} = f(\\frac{\\pi^2 t^2}{6 \\delta})$.\n\nLet us suppose then that $\\beta_{t, \\delta} = f(\\frac{C' t^2}{\\delta})$ for $f(x) = \\log(x) + C\\log\\log(x)$ and $C' \\ge 1$. We will now prove that if verifies Assumption~\\ref{ass:beta_bounds}.\n\\begin{align*}\n\\overline{\\log}(C' t^2\/\\delta)\n&= \\log \\frac{C' t^2}{\\delta} + C\\log(\\log\\frac{C' }{\\delta} + \\log t^2)\n\\\\\n&\\ge \\log \\frac{C' }{\\delta} + C\\log\\log\\frac{C' }{\\delta} + 2 \\log(t)\n\\\\\n&= \\overline{\\log}\\frac{1}{\\delta} + 2 \\log(t)\n\\: .\n\\end{align*}\nwhere $\\overline{\\log}(x) = \\log(C' x) + C\\log\\log(C' x)$. Using the concavity of $\\log$, we also have an upper bound\n\\begin{align*}\n\\overline{\\log}(C' t^2\/ \\delta)\n&= \\log \\frac{C' t^2}{\\delta} + C\\log(\\log\\frac{C' }{\\delta} + \\log t^2)\n\\\\\n&= \\log \\frac{C' }{\\delta} + C\\log\\log\\frac{C' }{\\delta} + 2 \\log(t) + C(\\log(\\log\\frac{C' }{\\delta} + \\log t^2) - \\log\\log\\frac{C' }{\\delta})\n\\\\\n&\\le \\overline{\\log}\\frac{1}{\\delta} + 2 (C+1)\\log(t)\n\\: .\n\\end{align*}\nWe have found a function $\\overline{\\log}$ such that for $c_1 = 2$ and $c_2=2(C+1)$,\n\\begin{align*}\n\\overline{\\log}(\\frac{1}{\\delta}) + c_1 \\log(t)\n\\le \\beta_{t, \\delta}\n\\le \\overline{\\log}(\\frac{1}{\\delta}) + c_2 \\log(t)\n\\: .\n\\end{align*}\nIt remains to show the condition on $\\overline{\\log}$, i.e. find $x_0$ such that for $a \\ge 2$ and $x \\ge x_0$, $\\overline{\\log}(x^a) \\le a \\overline{\\log}(x)$.\n\\begin{align*}\n\\overline{\\log}(x^a)\n= \\log(C' x^a) + C \\log\\log(C' x^a)\n&\\le \\log((C' x)^a) + C \\log\\log((C' x)^a)\n\\\\\n&= a \\log (C' x) + C \\log\\log(C' x) + C \\log a\n\\end{align*}\nIt remains to find $x_0$ such that for $x \\ge x_0$, $\\log\\log(C' x) + \\log a \\le a \\log\\log(C' x)$. We find that $x_0 = \\exp(a^{1\/(a-1)})\/C'$ is suitable. Since $a \\ge 2$, we have $x_0 \\le e^2\/C'$.\n\n\n\\begin{lemma}\\label{lem:new_beta-diff-between-phases}\nUnder Assumption~\\ref{ass:beta_bounds}, for any time $t$ and $0 \\leq j \\leq j(t)$, \n\\begin{align*}\n\\beta_{t,\\delta}\n&\\leq \\beta_{\\overline{t}_{j},\\delta} + (2^{j(t)+1-j} c_2 - c_1)\\log(\\overline{t}_{j}),\n\\: , \\\\\n\\beta_{t,1\/t^2}\n&\\leq \\frac{c_2}{c_1} 2^{j(t)-j+1} \\beta_{\\overline{t}_j,1\/\\overline{t}_j^2}.\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nWe have $\\bar{t}_0 = \\max\\{2, \\sqrt{x_0}\\}$. 
Note that for all $j$, $\bar{t}_j = \bar{t}_0^{2^j}$.\nBy using first the upper bound of Assumption~\ref{ass:beta_bounds}, then $t \leq \bar{t}_0^{2^{j(t)+1}}$ and at the end the lower bound of Assumption~\ref{ass:beta_bounds},\n\begin{align*}\n\beta_{t, \delta}\n\le \overline{\log}(\frac{1}{\delta}) + c_2 \log(t)\n\le \overline{\log}(\frac{1}{\delta}) + c_2 \log(\bar{t}_0^{2^{j(t)+1}})\n&= \overline{\log}(\frac{1}{\delta}) + c_2 \log((\bar{t}_0^{2^j})^{2^{j(t)+1-j}})\n\\\n&= \overline{\log}(\frac{1}{\delta}) + c_2 2^{j(t)+1-j} \log(\bar{t}_0^{2^j})\n\\\n&\le \beta_{\overline{t}_j, \delta} + \log(\overline{t}_j)(2^{j(t)+1-j} c_2 - c_1)\n\: .\n\end{align*}\n\nWe have $\overline{t}_j \ge \bar{t}_0 \ge \sqrt{x_0}$, hence the inequality $\overline{\log}(x^a) \le a\overline{\log}(x)$ can be used. Then\n\begin{align*}\n\beta_{t, 1\/t^2}\n\le \overline{\log}(t^2) + c_2 \log(t)\n&\le \overline{\log}(\bar{t}_0^{2^{j(t)+2}}) + c_2 \log(\bar{t}_0^{2^{j(t)+1}})\n\\\n&= \overline{\log}(((\bar{t}_0^{2^j})^2)^{2^{j(t)+1-j}}) + c_2 \log((\bar{t}_0^{2^j})^{2^{j(t)+1-j}})\n\\\n&= \overline{\log}((\overline{t}_j^2)^{2^{j(t)+1-j}}) + c_2 2^{j(t)+1-j} \log(\overline{t}_j)\n\\\n&\le 2^{j(t)+1-j}\overline{\log}(\overline{t}_j^2) + c_2 2^{j(t)+1-j} \log(\overline{t}_j)\n\\\n&\le \frac{c_2}{c_1}\left( 2^{j(t)+1-j}\overline{\log}(\overline{t}_j^2) + c_1 2^{j(t)+1-j} \log(\overline{t}_j) \right)\n\\\n&\le \frac{c_2}{c_1} 2^{j(t)+1-j} \beta_{\overline{t}_j, 1\/\overline{t}_j^2}\n\: .\n\end{align*}\n\n\end{proof}\n\n\n\n\subsection{Proof of Theorem \ref{th:sampling-rule-with-elim}}\n\nWe now derive an important result for sampling rules combined with elimination (either full or selective). It essentially shows that we cannot eliminate the closest alternative piece to $\theta$ from the sampling rule without making the algorithm stop.\n\n\begin{lemma}\label{lem:closest-alternatives-not-eliminated-in-sampling}\nLet $t \ge \bar{t}_1$ be any time step at which the algorithm did not stop. Suppose that some piece index $p\in\cP(i^\star)$ of the true correct answer $i^\star$ has been eliminated from the sampling rule (i.e., $p\notin\cP_{t}^{\mathrm{smp}}(i^\star)$). 
Then, under event $E_t$ (see Equation \\ref{eq:Et}),\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda_p(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) > \\inf_{\\lambda \\in \\Lambda(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda).\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nLet us proceed by contradiction: suppose that $p\\notin\\cP_{t}^{\\mathrm{smp}}(i^\\star)$ while $\\inf_{\\lambda \\in \\Lambda_p(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) = \\inf_{\\lambda \\in \\Lambda(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)$ and $E_t$ holds.\n\nSince $\\cP_{t}^{\\mathrm{smp}}(i^\\star)$ is the intersection of all active sets from $\\overline{t}_{j(t)-1}$ to $t$, if $p\\notin\\cP_{t}^{\\mathrm{smp}}(i^\\star)$, then there exists $s$ with $\\overline{t}_{j(t)-1} \\leq s \\leq t$ such that\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda_p(i)} L_s(\\hat{\\theta}_s,\\lambda) \\geq \\alpha_{s,\\delta}.\n\\end{align*}\nTherefore,\n\\begin{align*}\n\\sqrt{\\alpha_{s,\\delta}}\n \\stackrel{(a)}{\\leq} \\sqrt{\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} L_{s}(\\hat{\\theta}_{s},\\lambda)}\n &\\stackrel{(b)}{\\leq} \\sqrt{\\inf_{\\lambda \\in \\Lambda_p(i^\\star)}\\sum_{k\\in[K]}N_{s}^k\\KL_k(\\theta,\\lambda)} + \\sqrt{L_{s}(\\hat{\\theta}_{s},\\theta)}\n \\\\ &\\stackrel{(c)}{\\leq} \\sqrt{\\inf_{\\lambda \\in \\Lambda_p(i^\\star)}\\sum_{k\\in[K]}N_{s}^k\\KL_k(\\theta,\\lambda)} + \\sqrt{\\beta_{t,1\/t^2}}\n \\\\ &\\stackrel{(d)}{\\leq} \\sqrt{\\inf_{\\lambda \\in \\Lambda_p(i^\\star)}\\sum_{k\\in[K]}N_{t}^k\\KL_k(\\theta,\\lambda)} + \\sqrt{\\beta_{t,1\/t^2}}\n \\\\ & \\stackrel{(e)}{=} \\sqrt{\\inf_{\\lambda \\in \\Lambda(i^\\star)}\\sum_{k\\in[K]}N_{t}^k\\KL_k(\\theta,\\lambda)} + \\sqrt{\\beta_{t,1\/t^2}},\n\\end{align*}\nwhere (a) is from the elimination condition, (b) uses Lemma \\ref{lem:llr-to-kl-lin-gauss}, (c) uses that $E_t$ holds, (d) uses that the number of pulls of each arm is non-decreasing in time, and (e) holds by our assumption. Recall that, for any $\\theta,\\lambda\\in\\mathbb{R}^d$, $\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) = \\frac{1}{2}\\|\\theta-\\lambda\\|_{V_t}^2$. Therefore, by the triangle inequality,\n\\begin{align*}\n\\sqrt{\\sum_{k\\in[K]}N_{t}^k\\KL_k(\\theta,\\lambda)}\n= \\frac{1}{\\sqrt{2}}\\|\\theta-\\lambda\\|_{V_t}\n&\\leq \\frac{1}{\\sqrt{2}}\\|\\theta-\\hat{\\theta}_t\\|_{V_t} + \\frac{1}{\\sqrt{2}}\\|\\hat{\\theta}_t-\\lambda\\|_{V_t}\n\\\\\n&= \\sqrt{L_t(\\hat{\\theta}_t, \\lambda)} + \\sqrt{L_t(\\hat{\\theta}_t, \\theta)}.\n\\end{align*}\nCombining this with the previous chain of inequalities,\n\\begin{align*}\n\\sqrt{\\alpha_{s,\\delta}} \\leq \\sqrt{\\inf_{\\lambda \\in \\Lambda(i^\\star)}L_t(\\hat{\\theta}_t, \\lambda)} + \\sqrt{L_t(\\hat{\\theta}_t, \\theta)} + \\sqrt{\\beta_{t,1\/t^2}}\n &\\leq \\sqrt{\\inf_{\\lambda \\in \\Lambda(i^\\star)}L_t(\\hat{\\theta}_t, \\lambda)} + 2\\sqrt{\\beta_{{t},1\/{t}^2}}\n \\\\ &\\leq \\sqrt{\\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_t))}L_t(\\hat{\\theta}_t, \\lambda)} + 2\\sqrt{\\beta_{{t},1\/{t}^2}},\n\\end{align*}\nwhere we used again that $E_t$ holds to concentrate the LLR between $\\hat{\\theta}_t$ and $\\theta$. The last inequality is easy to check since, if $i^\\star \\neq i^\\star(\\hat{\\theta}_t)$, then $\\inf_{\\lambda \\in \\Lambda(i^\\star)}L_t(\\hat{\\theta}_t, \\lambda) = 0$. 
Finally, since the algorithm did not stop at $t$, it must be that $\\inf_{\\lambda \\in \\Lambda(i^\\star)}L_t(\\hat{\\theta}_t, \\lambda) < \\beta_{t,\\delta}$ (otherwise all alternative pieces of $i^\\star(\\hat{\\theta}_t)$ would be eliminated at $t$). Therefore,\n\\begin{align*}\n\\sqrt{\\alpha_{s,\\delta}} < \\sqrt{\\beta_{t,\\delta}} + 2\\sqrt{\\beta_{{t},1\/{t}^2}}.\n\\end{align*}\nSince $s\\geq \\overline{t}_{j(t)-1}$, from Lemma \\ref{lem:new_beta-diff-between-phases} we have that\n\\begin{align*}\n\\beta_{t,\\delta} &\\leq \\beta_{\\overline{t}_{j(t)-1},\\delta} + (4 c_2 - c_1)\\log(\\overline{t}_{j(t)-1}) \\leq \\beta_{s,\\delta} + (4 c_2 - c_1)\\log(s),\n\\\\ \\beta_{t,1\/t^2} &\\leq 4 \\frac{c_2}{c_1} \\beta_{\\overline{t}_{j(t)-1},1\/\\overline{t}_{j(t)-1}^2} \\leq 4 \\frac{c_2}{c_1} \\beta_{s,1\/s^2}.\n\\end{align*}\nPlugging this into our previous bound, we conclude that\n\\begin{align*}\n\\sqrt{\\alpha_{s,\\delta}} < \\sqrt{\\beta_{s,\\delta} + (4 c_2 - c_1)\\log(s)} + 4\\sqrt{\\frac{c_2}{c_1}\\beta_{s,1\/s^2}}.\n\\end{align*}\nThis is clearly a contradiction w.r.t. our definition of $\\alpha_{t,\\delta}$.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{th:sampling-rule-with-elim}]\nTake any step $t \\ge \\bar{t}_1$ where $E_t$ holds. Lemma \\ref{lem:closest-alternatives-not-eliminated-in-sampling} ensures that all pieces of $i^\\star$ which are at minimal distance from $\\theta$, i.e., such that\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda_p(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) = \\inf_{\\lambda \\in \\Lambda(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)\n\\end{align*}\nare not eliminated for the sampling rule at time $t$, i.e., $p\\in\\cP_t^{\\mathrm{smp}}(i^\\star)$. This implies that\n\\begin{align*}\n\\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) = \\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda).\n\\end{align*}\nThus, if Assumption \\ref{asm:sampling-rule-v2} holds, it must be that\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)\n &\\geq \\max_{\\omega\\in\\Delta_K}\\sum_{s=1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda) - R(\\theta,t)\n \\\\ &\\geq \\max_{\\omega\\in\\Delta_K}\\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda) - R(\\theta,t) = tH^\\star(\\theta) - R(\\theta,t),\n\\end{align*}\nwhere the second inequality is trivial from $\\cP_{s-1}^{\\mathrm{smp}}(i^\\star) \\subseteq \\cP(i^\\star)$ for all $s\\geq 1$. Therefore, we proved that the condition of Assumption \\ref{asm:sampling-rule} holds as well for all $t \\ge \\bar{t}_1$. We can now have it for all $t$ by adding $\\bar{t}_1 H^\\star(\\theta)$ to $R(\\theta, t)$ to obtain another regret function which verifies the condition of Assumption~\\ref{asm:sampling-rule}. The second statement is a direct consequence of the fact that Theorem \\ref{th:piece-elim} holds for any sampling rule that satisifes the latter assumption.\n\\end{proof}\n\n\n\\section{Experiment Details and Additional Results}\n\\label{app:experiments}\n\n\\subsection{Reproducibility details}\n\nWe provide the main details to reproduce our experiments. 
For all the details, we refer the reader to our implementation at \\url{https:\/\/github.com\/AndreaTirinzoni\/bandit-elimination}.\n\nIn all experiments, we used $\\delta=0.01$ and a heuristic threshold $\\beta_{t,\\delta} = \\log(1\/\\delta) + \\log(1+t)$ for all elimination rules and LLR stopping. This is slightly larger than the heuristic threshold proposed by \\citep{garivier2016optimal} and adopted in many recent works. We implemented the elimination rules as described in Appendix \\ref{app:problems}. When using elimination at both stopping and sampling, we maintained only one set of active pieces (the one for stopping) instead of keeping a separate set with very lazy resets for sampling as suggested by theory. That set is shared by both sampling and stopping rules, and is never reset.\n\n\\paragraph{Computational infrastructure}\n\nAll experiments were run on a Dell XPS 13 laptop with an Intel Core i7-7560U (2.40GHz) CPU and 8GB of RAM.\n\n\\subsection{Bandit instances}\\label{app:instances}\n\nWe provide details on how we generated the bandit instances considered in the experiments presented in the main paper and later in this section.\n\n\\paragraph{Linear instances (experiments of Figure \\ref{fig:all} and Table \\ref{tab:lin_elim})}\n\nWe set $K=50$ and $d=10$. The true parameter is $\\theta = [1,1,\\dots,1]^T$, while we generated the arm features randomly. The arm feature of the first arm is $\\phi_1 = [1,0,0,\\dots,0]^T$. Then, up to reaching 50 arms, we repeated the following procedure. First, we generated a 3-dimensional vector $v\\in\\mathbb{R}^3$ by drawing its elements uniformly in $[-1,1]^3$ and then normalizing to have unit norm. Then, we added 3 feature vectors $v_1 = [0,v,0,0,0,0,0,0]$, $v_2 = [0,0,0,0,v,0,0,0]$, and $v_3 = [0,0,0,0,0,0,0,v]$, but only if $v_1^T\\theta \\in [0, 0.8]$. In this way, we obtain linear instances with 50 arms where arm $1$ is optimal with value $\\mu_1(\\theta) = 1$, while all other arms have minimum sub-optimality of $0.2$ and maximum sub-optimality gap of $0.8$.\n\n\\paragraph{Linear instances (experiments of Table \\ref{tab:lin_all})}\n\nWe set $K=50$ and $d=20$. The first 10 arms are set to the canonical basis of $\\mathbb{R}^{10}$, i.e., $\\phi_k = e_k$ for $k=1,\\dots,10$. The generation of the true parameter $\\theta$ and of the remaining 40 arms is slightly different from BAI\/Top-m and OSI.\n\nFor BAI and Top-m, the true parameter $\\theta$ has the first element equal to $1$, elements from the second to the fifth equal to $0.9$, and elements from the sixth to the tenth equal to $0.8$. The remaining 10 elements are uniformly drawn in $[-0.5,0.5]^{10}$. The remaining 40 arms are randomly generated as follows. First, we draw a vector $v$ uniformly in $[-1,1]^{20}$ and normalize it to have unit norm. Then, if $v^T\\theta \\leq 0.5$, we add $v$ to the set of arms. Otherwise, we reject the vector and keep repeating this procedure until we reach a total of 50 arms. In this way, we obtain random linear instances where the first arm is optimal with value $\\mu_1(\\theta) = 1$, the next 9 arms are hard to discriminate from it since they have small gap (either $0.1$ or $0.2$), and all remaining 40 arms have moderate to large gap (at least $0.5$) and are thus easy to eliminate.\n\nFor OSI, the true parameter $\\theta$ has the first ten elements uniformly drawn in $([-0.2,-0.1]\\cup[0.1,0.2])^{10}$ and the second ten elements uniformly drawn in $[-0.5,0.5]^{10}$. 
Similarly as before, to generate the remaining 40 arms we first draw a vector $v$ uniformly in $[-1,1]^{20}$ and normalize it to have unit norm. Then, if $|v^T\\theta| \\geq 0.5$, we add $v$ to the set of arms. Otherwise, we reject the vector and keep repeating this procedure until we reach a total of 50 arms. We thus obtain random linear instances where the first 10 arms are hard to learn since they have small gap (i.e., the absolute mean, which is between $0.1$ and $0.2$), and all remaining 40 arms have moderate to large gap (at least $0.5$) and are thus easy to eliminate.\n\n\\paragraph{Unstructured instances (experiments of Appendix \\ref{app:uns_results})}\n\nWe used $K=40$ arms. For BAI and Top-m, the mean reward of the first 5 arms is $\\mu_1 = 1$, $\\mu_2 = 0.9$, $\\mu_3 = 0.8$, $\\mu_4 = 0.7$, and $\\mu_5 = 0.6$. For all remaining arms the mean reward is uniformly drawn in $[0,0.5]$. For OSI, the mean reward of the first 4 arms is $\\mu_1 = 0.1$, $\\mu_2 = -0.2$, $\\mu_3 = 0.3$, and $\\mu_4 = -0.4$. For all remaining arms the mean reward is uniformly drawn in $[-1,-0.5]\\cup[0.5,1]$.\n\n\\subsection{Additional Results}\n\n\\subsubsection{Full versus selective elimination}\\label{app:full_vs_emp_results}\n\nWe report in Table \\ref{tab:lin_elim} the full results of the experiment comparing elimination rules (full vs selective) from which we extracted Figure \\ref{fig:all}\\emph{(middle)}. We recall that the linear instances for this experiment were generated as explained in the first paragraph of Appendix \\ref{app:instances}. We did not compare full and selective elimination rules on OSI since, as explained in Appendix \\ref{app:problems}, they are actually equivalent in such a setting.\n\nWhile we saw in Figure \\ref{fig:all}\\emph{(middle)} that the full elimination rule allows eliminating some arms earlier than the selective one, we notice from Table \\ref{tab:lin_elim} that the former rule actually yields no advantage in terms of sample complexity. Moreover, its computational overhead makes it much slower than the selective elimination rule. Therefore, in practice we suggest using the selective elimination rule, which always yields reduced computation times and often improved sample complexity.\n\n\\begin{table*}[t!]\n\\centering\n\\small\n\\begin{tabular}{@{}clcccccc@{}} \n\\toprule\n & & \\multicolumn{2}{c}{No elim. 
(LLR)} & \\multicolumn{2}{c}{Selective elim.} & \\multicolumn{2}{c}{Full elim.} \\\\\n\\cmidrule(r){3-8}\n& Algorithm & Samples & Time & Samples & Time & Samples & Time \\\\\n\\cmidrule{1-8}\n\\multirow{11}{*}{\\rotatebox[origin=c]{90}{BAI}} \n& LinGapE & $4.51 \\pm 1.3$ & $0.19$ & $4.49 \\pm 1.3$ & $0.17$ & $4.49 \\pm 1.3$ & $0.57$ \\\\\n& LinGapE + elim & & & $4.16 \\pm 1.4$ & $0.15$ & $4.17 \\pm 1.4$ & $0.58$ \\\\\n& LinGame & $5.28 \\pm 1.7$ & $0.21$ & $5.09 \\pm 1.8$ & $0.19$ & $5.09 \\pm 1.8$ & $0.6$ \\\\\n& LinGame + elim & & & $4.05 \\pm 1.2$ & $0.17$ & $4.05 \\pm 1.2$ & $0.65$ \\\\\n& FWS & $4.68 \\pm 4.2$ & $0.84$ & $4.68 \\pm 4.2$ & $0.82$ & $4.68 \\pm 4.2$ & $1.39$ \\\\\n& FWS + elim & & & $4.21 \\pm 1.4$ & $0.58$ & $4.21 \\pm 1.4$ & $1.16$ \\\\\n& Lazy TaS & $9.99 \\pm 8.8$ & $0.45$ & $9.75 \\pm 8.9$ & $0.45$ & $9.75 \\pm 8.9$ & $0.78$ \\\\\n& Lazy TaS + elim & & & $8.7 \\pm 8.9$ & $0.38$ & $8.7 \\pm 8.9$ & $0.73$ \\\\\n& Oracle & $6.65 \\pm 1.8$ & $0.04$ & $6.55 \\pm 1.9$ & $0.02$ & $6.55 \\pm 1.9$ & $0.31$ \\\\\n& XY-Adaptive & & & & & $13.89 \\pm 6.0$ & $2.23$ \\\\\n& RAGE & & & & & $16.28 \\pm 6.2$ & $0.02$ \\\\\n\\cmidrule{1-8}\n\\multirow{10}{*}{\\rotatebox[origin=c]{90}{Top-m ($m=3$)}} \n& m-LinGapE & $6.26 \\pm 1.2$ & $0.29$ & $6.21 \\pm 1.2$ & $0.24$ & $6.21 \\pm 1.2$ & $1.35$ \\\\\n& m-LinGapE + elim & & & $5.77 \\pm 1.2$ & $0.19$ & $5.77 \\pm 1.2$ & $1.29$ \\\\\n& MisLid & $7.06 \\pm 1.4$ & $0.34$ & $6.81 \\pm 1.5$ & $0.27$ & $6.81 \\pm 1.5$ & $1.48$ \\\\\n& MisLid + elim & & & $5.89 \\pm 1.1$ & $0.22$ & $5.89 \\pm 1.1$ & $1.42$ \\\\\n& FWS & $5.91 \\pm 1.7$ & $1.51$ & $5.9 \\pm 1.7$ & $1.46$ & $5.9 \\pm 1.7$ & $2.66$ \\\\\n& FWS + elim & & & $5.84 \\pm 1.7$ & $0.83$ & $5.84 \\pm 1.7$ & $2.02$ \\\\\n& Lazy TaS & $13.1 \\pm 6.5$ & $0.71$ & $12.85 \\pm 6.4$ & $0.67$ & $12.85 \\pm 6.4$ & $1.57$ \\\\\n& Lazy TaS + elim & & & $11.34 \\pm 6.3$ & $0.56$ & $11.34 \\pm 6.3$ & $1.47$ \\\\\n& Oracle & $8.74 \\pm 1.8$ & $0.1$ & $8.65 \\pm 1.8$ & $0.04$ & $8.65 \\pm 1.8$ & $1.02$ \\\\\n& LinGIFA & $5.58 \\pm 1.1$ & $1.8$ & $5.57 \\pm 1.1$ & $1.75$ & $5.57 \\pm 1.1$ & $2.68$ \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Experiments on linear instances with $K=50$ and $d=10$. The \"Time\" columns report average times per iteration in milliseconds (i.e., the total time the algorithm took divided by the number of samples). Each entry reports the mean across $100$ runs plus\/minus standard deviation (which is omitted for compute times due to space constraints). The ``+ elim'' variant of some algorithms indicates that the corresponding sampling rule is combined with elimination. Samples are scaled down by a factor $10^3$.}\\label{tab:lin_elim}\n\\end{table*}\n\n\\subsubsection{Unstructured instances}\\label{app:uns_results}\n\nWe report the results on unstructured bandit instances (generated according to the procedure of Appendix \\ref{app:instances}) in Table \\ref{tab:uns_all}. The algorithm k-Learner is the unstructured variant of LinGame proposed by \\citep{degenne2019non}. We note that the results are coherent with those for linear instances presented in the main paper. In particular, we observe a reduction in computation times when combining adaptive algorithms with selective elimation. The reduction is however less evident than in the linear case. This is expected since, in general, eliminations are easier in structured problems than in unstructured ones. 
We also note that combining sampling rules with selective elimination slightly improves the sample complexity of all algorithms.\n\n\\begin{table*}[t!]\n\\centering\n\\small\n\\begin{tabular}{@{}clcccccc@{}} \n\\toprule\n & & \\multicolumn{2}{c}{No elim. (LLR)} & \\multicolumn{2}{c}{Elim. stopping} & \\multicolumn{2}{c}{Elim. stopping + sampling} \\\\\n\\cmidrule(r){3-8}\n& Algorithm & Samples & Time & Samples & Time & Samples & Time \\\\\n\\cmidrule{1-8}\n\\multirow{7}{*}{\\rotatebox[origin=c]{90}{BAI}} \n& k-Learner & $18.76 \\pm 6.5$ & $0.49$ & $18.12 \\pm 6.6$ & $0.44$ & $14.82 \\pm 4.6$ & $0.4$ \\\\\n& FWS & $14.5 \\pm 4.6$ & $1.25$ & $14.44 \\pm 4.7$ & $1.24$ & $13.94 \\pm 4.4$ & $1.16$ \\\\\n& Lazy TaS & $26.18 \\pm 8.0$ & $0.32$ & $24.78 \\pm 7.7$ & $0.31$ & $20.66 \\pm 7.4$ & $0.33$ \\\\\n& Oracle & $27.49 \\pm 3.7$ & $0.07$ & $27.0 \\pm 3.7$ & $0.04$ & & \\\\\n& LUCB & $14.2 \\pm 5.2$ & $0.11$ & $14.18 \\pm 5.2$ & $0.06$ & $13.57 \\pm 4.7$ & $0.06$ \\\\\n& UGapE & $15.13 \\pm 5.0$ & $0.43$ & $15.13 \\pm 5.0$ & $0.39$ & & \\\\\n& Racing & & & & & $34.55 \\pm 7.6$ & $0.01$ \\\\\n\\cmidrule{1-8}\n\\multirow{7}{*}{\\rotatebox[origin=c]{90}{Top-m ($m=3$)}} \n& k-Learner & $25.84 \\pm 6.2$ & $0.67$ & $25.06 \\pm 6.3$ & $0.57$ & $17.65 \\pm 4.9$ & $0.51$ \\\\\n& FWS & $17.68 \\pm 4.7$ & $2.52$ & $17.67 \\pm 4.7$ & $2.48$ & $17.63 \\pm 4.6$ & $2.0$ \\\\\n& Lazy TaS & $38.89 \\pm 10.4$ & $0.5$ & $37.84 \\pm 10.7$ & $0.43$ & $27.74 \\pm 6.4$ & $0.46$ \\\\\n& Oracle & $34.17 \\pm 4.5$ & $0.15$ & $33.68 \\pm 4.9$ & $0.07$ & & \\\\\n& LUCB & $17.61 \\pm 4.6$ & $0.24$ & $17.58 \\pm 4.5$ & $0.13$ & $17.15 \\pm 5.4$ & $0.13$ \\\\\n& UGapE & $17.87 \\pm 4.3$ & $0.54$ & $17.87 \\pm 4.3$ & $0.43$ & & \\\\\n& Racing & & & & & $22.67 \\pm 3.1$ & $0.01$ \\\\\n\\cmidrule{1-8}\n\\multirow{5}{*}{\\rotatebox[origin=c]{90}{OSI}}\n& k-Learner & $8.55 \\pm 1.7$ & $0.61$ & $8.38 \\pm 1.8$ & $0.55$ & $5.43 \\pm 1.2$ & $0.52$ \\\\\n& FWS & $5.54 \\pm 1.3$ & $1.48$ & $5.53 \\pm 1.3$ & $1.47$ & $5.47 \\pm 1.4$ & $1.4$ \\\\\n& Lazy TaS & $12.83 \\pm 3.1$ & $0.72$ & $12.27 \\pm 3.1$ & $0.7$ & $8.74 \\pm 1.8$ & $0.79$ \\\\\n& Oracle & $11.55 \\pm 1.6$ & $0.07$ & $11.41 \\pm 1.6$ & $0.04$ & & \\\\\n& LUCB & $5.5 \\pm 1.4$ & $0.11$ & $5.5 \\pm 1.4$ & $0.1$ & $5.49 \\pm 1.4$ & $0.1$ \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Experiments on unstructured instances with $K=40$. The \"Time\" columns report average times per iteration in milliseconds (i.e., the total time the algorithm took divided by the number of samples). Each entry reports the mean across $100$ runs plus\/minus standard deviation (which is omitted for compute times due to space constraints). Algorithms for which the third column is missing cannot be combined with elimination at sampling, while algorithms for which the first two columns are missing are natively elimination-based. Samples are scaled down by a factor $10^3$.}\\label{tab:uns_all}\n\\end{table*}\n\n\n\\section{Elimination stopping rules for adaptive algorithms}\n\\label{sec:elimination_stopping_rules}\n\nWe show how to modify the stopping rule of adaptive algorithms using LLR stopping to perform elimination. 
We assume that the alternatives sets $\\Lambda(i)$ can be decomposed into a union of sets which we refer to as \\emph{alternative pieces} (or simply pieces), with the property that computing the infimum LLR over these sets is computationally easy\n\n\\begin{assumption}\\label{ass:union_of_sets}\nFor all $i \\in \\cI$, there exist pieces $(\\Lambda_p(i))_{p \\in \\cP(i)}$, where $\\cP(i)$ is a finite set of \npiece indexes, such that $\\Lambda(i) = \\bigcup_{p\\in\\cP(i)} \\Lambda_p(i)$ and $\\inf_{\\lambda \\in \\Lambda_p(i)}L_t(\\hat{\\theta}_t,\\lambda)$ can be efficiently computed for all $p \\in \\cP(i)$ and $t >0$.\n\\end{assumption}\nThis assumption is satisfied in many problems of interest, including BAI, Top-$m$ identification, and thresholding bandits (see Appendix \\ref{app:problems}).\nIndeed, in all applications we consider in this paper, the sets of Assumption~\\ref{ass:union_of_sets} are half-spaces. In our linear BAI example, the piece indexes are simply arms. For $i,j\\in[K]$ we can define $\\Lambda_j(i) = \\{\\lambda\\in\\cM \\mid \\phi_j^\\top \\lambda > \\phi_i^\\top \\lambda\\}$. Then, $\\Lambda(i) = \\bigcup_{j\\in [K]\\setminus \\{i\\}} \\Lambda_j(i)$. Moreover, the infimum LLR (and the corresponding minimizer) can be computed in closed form as \\citep[e.g.,][]{fiez2019sequential}\n$\n\\inf_{\\lambda \\in \\Lambda_j(i)}L_t(\\hat{\\theta}_t,\\lambda) = \\max\\{\\hat{\\theta}_t^T(\\phi_i-\\phi_j),0\\}^2 \/ \\| \\phi_i - \\phi_j \\|_{V_{N_t}^{-1}}^2.\n$\n\n\\paragraph{Elimination stopping}\n\nThe main idea is that it is not necessary to exclude all $\\Lambda_p(i)$ for $p\\in\\cP(i)$ at the \\emph{same time}, as LLR stopping \\eqref{eq:llr-stop} does, in order to know that the algorithm can stop and return answer $i$. Instead, each piece can be pruned as soon as we have enough information to do so. \n\n\\begin{definition}\\label{def:elimination}\nA set $S \\subseteq \\mathbb{R}^d$ is said to be eliminated at time $t$ if, for all $\\lambda\\in S$, $L_t(\\hat{\\theta}_t,\\lambda) \\ge \\beta_{t,\\delta}$.\n\\end{definition}\nFrom the concentration property~\\eqref{eq:concentration-beta}, we obtain that the probability that $\\theta \\in S$ and $S$ is eliminated is less than $\\delta$.\nLLR stopping interrupts the algorithm when the alternative set $\\Lambda(i^\\star(\\hat{\\theta}_t))$ can be eliminated.\nIn elimination stopping, we eliminate smaller sets gradually, instead of the whole alternative at once.\nFormally, let us define, for all $i\\in\\cI$,\n\\begin{align}\n\\label{eq:active-pieces-t-only}\n\\overline{\\cP}_t(i;\\beta_{t,\\delta}) = \\left\\{ p \\in \\cP(i) : \\inf_{\\lambda \\in \\Lambda_p(i)} L_t(\\hat{\\theta}_t,\\lambda) < \\beta_{t,\\delta} \\right\\}\n\\end{align}\nas the subset of pieces for answer $i\\in\\cI$ whose infimum LLR at time $t$ is below a threshold $\\beta_{t,\\delta}$. That is, the indexes of pieces that are \\emph{not} eliminated at time $t$. Moreover, we define, for all $i\\in\\cI$, a set of \\emph{active pieces} $\\cP_t^{\\mathrm{stp}}(i)$ which is initialized as $\\cP_0^{\\mathrm{stp}}(i) = \\cP(i)$ (all piece indexes).\n\nOur \\emph{selective elimination} rule updates, at each time $t$, only the active pieces of the empirical answer $i^\\star(\\hat{\\theta}_t)$. 
That is, for $i=i^\\star(\\hat{\\theta}_t)$, it sets\n\\begin{align}\\label{eq:elimination-sets}\n\t\\cP_t^{\\mathrm{stp}}(i) := \\cP_{t-1}^{\\mathrm{stp}}(i) \\cap \\overline{\\cP}_t(i;\\beta_{t,\\delta}),\n\\end{align}\nwhile it sets $\\cP_t^{\\mathrm{stp}}(i) := \\cP_{t-1}^{\\mathrm{stp}}(i)$ for all $i\\neq i^\\star(\\hat{\\theta}_t)$. One might be wondering why not updating all answers at each round. The main reason is computational: as we better discuss at the end of this section, checking LLR stopping requires one minimization for \\emph{each} piece $p\\in\\cP(i^\\star(\\hat{\\theta}_t))$, while selective elimination requires only one for each \\emph{active} piece $p\\in\\cP_{t-1}^{\\mathrm{stp}}(i^\\star(\\hat{\\theta}_t))$. Thus, the latter becomes increasingly more computationally efficient as pieces are eliminated. For completeness, we also analyze the variant, that we call \\emph{full elimination}, which updates the active pieces according to \\eqref{eq:elimination-sets} for \\emph{all} answers $i\\in\\cI$ at each round. While we shall establish slightly better theoretical guarantees for this rule, it is computationally demanding and, as we shall see in our experiments, it does not significantly improve sample complexity w.r.t. selective elimination, which remains our recommended choice.\n\n\nLet $\\tau_{\\mathrm{s. elim}} = \\inf_{t \\geq 1}\\{t \\mid \\cP_t^{\\mathrm{stp}}(i^\\star(\\hat{\\theta}_t)) = \\emptyset\\}$ and $\\tau_{\\mathrm{f. elim}} := \\inf_{t \\geq 1}\\{t \\mid \\exists i\\in\\cI : \\cP_t^{\\mathrm{stp}}(i) = \\emptyset\\}$ be the stopping times of selective and full elimination, respectively. Intuitively, these two rules stop when one of the updated answers has all its pieces eliminated (and return that answer). We show that, as far as $\\beta_{t,\\delta}$ is chosen to ensure concentration of $\\hat{\\theta}_t$ to $\\theta$, those two stopping rules are $\\delta$-correct.\n\n\\begin{lemma}[$\\delta$-correctness]\\label{lem:delta-correct}\nSuppose that $\\beta_{t,\\delta}$ guarantees \\eqref{eq:concentration-beta} and that the algorithm verifies that, whenever it stops, there exists $i_{\\emptyset}\\in\\cI$ such that $\\cP_\\tau^{\\mathrm{stp}}(i_{\\emptyset}) = \\emptyset$ and $\\ihat = i_{\\emptyset}$. Then, $\\mathbb{P}_\\theta(\\ihat \\ne i^\\star(\\theta)) \\le \\delta$.\n\\end{lemma}\nAll proofs for this section are in Appendix~\\ref{app:proofs-elim-stopping}. If an algorithm verifies the conditions of Lemma~\\ref{lem:delta-correct} and has a sampling rule that makes it stop almost surely, then it is $\\delta$-correct. Interestingly, we can prove a stronger result than $\\delta$-correctness: under the same sampling rule, the elimination stopping rules never trigger later than the LLR one \\emph{almost surely}. In other words, any algorithm equipped with elimination stopping suffers a sample complexity that is never worse than the one of the same algorithm equipped with LLR stopping. Let $\\tau_{\\mathrm{llr}} := \\inf_{t \\geq 1}\\{t \\mid \\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_t))} L_t(\\hat{\\theta}_t,\\lambda) \\geq \\beta_{t,\\delta}\\}$.\n\n\\begin{theorem}\\label{th:elim-better-than-llr}\nFor any sampling rule, almost surely $\\tau_{\\mathrm{f. elim}} \\le \\tau_{\\mathrm{s. 
elim}} \\le \\tau_{\\mathrm{llr}}$~.\n\\end{theorem}\n\nThe proof of this theorem is very simple: if $\\tau_{\\mathrm{llr}} = t$, then at $t$ all pieces $\\Lambda_p(i^\\star(\\hat{\\theta}_t))$ for $p\\in\\cP(i^\\star(\\hat{\\theta}_t))$ can be eliminated, hence $\\tau_{\\mathrm{s. elim}} \\le t$. The proof that $\\tau_{\\mathrm{f. elim}} \\le \\tau_{\\mathrm{s. elim}}$ follows from the observation that full elimination always has less active pieces than selective elimination. Note that all three stopping rules must use the same threshold $\\beta_{t,\\delta}$ to be comparable. Although simple, Theorem \\ref{th:elim-better-than-llr} has an important implication: we can take any existing algorithm that uses LLR stopping, equip it with elimination stopping instead, and obtain a new strategy that is never worse in terms of sample complexity and for which the original theoretical results on the stopping time still hold.\n\nFinally, it is important to note that, while defining the elimination rule in the general form \\eqref{eq:elimination-sets} allows us to unify many settings, storing\/iterating over all sets $\\cP_t^{\\mathrm{stp}}(i)$ would be intractable in problems with large number of answers (e.g., top-m identification or thresholding bandits, where the latter is exponential in $K$).\nFortunately, we show in Appendix \\ref{app:problems} that this is not needed and efficient implementations exist for these problems that take only polynomial time and memory.\n\n\\subsection{Elimination time of alternative pieces}\n\\label{sub:changing_the_stopping_rule_only}\n\nWe now show that\nelimination stopping can indeed discard certain alternative pieces much earlier that the stopping time.\nWhile all results so far hold for any distribution and bandit structure, in the remaining we focus on Gaussian linear bandits. Other distribution classes beyond Gaussians could be used with minor modifications (see Appendix \\ref{sub:beyond_gaussians}) but the Gaussian case simplifies the exposition. Since most existing adaptive sampling rules target the optimal proportions from the lower bound of \\citep{garivier2016optimal}, we unify them under the following assumption.\n\\begin{assumption}\\label{asm:sampling-rule}\nConsider the concentration events\n\\begin{align}\\label{eq:Et}\nE_t := \\left\\{ \\forall s \\leq t: L_s(\\hat{\\theta}_s,\\theta) \\leq \\beta_{t,1\/t^2} \\right\\} \\: .\n\\end{align}\nA sampling rule is said to have low information regret if there exists a problem-dependent function $R(\\theta,t)$ which is sub-linear in $t$ such that for each time $t$ where $E_t$ holds,\n\\begin{align}\\label{eq:no-regret-property}\n\\inf_{\\lambda \\in \\Lambda(i^\\star(\\theta))} \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) \\geq t H^\\star(\\theta) - R(\\theta,t).\n\\end{align}\n\\end{assumption}\nThe left-hand side of \\eqref{eq:no-regret-property} can be understood as the information collected by the sampling rule at time $t$ to discriminate $\\theta$ with all its alternatives. 
Therefore, Assumption \\ref{asm:sampling-rule} requires that information to be comparable (up to a low-order term $R(\\theta,t)$) with the maximal one from the lower bound.\nIn Appendix \\ref{app:assumptions}, we show that this is satisfied by both Track-and-Stop \\citep{garivier2016optimal} and the approach in \\citep{degenne2019non}.\n\nLet $H_p(\\omega, \\theta) := \\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\theta))} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda)$, the information that sampling with proportions $\\omega$ brings to discriminate $\\theta$ from the alternative piece $\\Lambda_p(i^\\star(\\theta))$. Note that $H^\\star(\\theta) = \\max_{\\omega\\in\\Delta_K}\\min_{p\\in\\cP(i^\\star(\\theta))}H_p(\\omega, \\theta)$. For $\\epsilon\\geq 0$, let $\\Omega_\\epsilon(\\theta) := \\{ \\omega\\in\\Delta_K \\mid \\inf_{\\lambda \\in \\Lambda(i^\\star(\\theta))} \\sum_k \\omega^k\\KL_k(\\theta,\\lambda) \\geq H^\\star(\\theta) - \\epsilon\\}$ be the set of $\\epsilon$-optimal proportions.\n\n\\begin{theorem}[Piece elimination]\\label{th:piece-elim}\nThe stopping time of any sampling rule having low information regret, combined with LLR stopping, satisfies $\\mathbb{E}[\\tau] \\leq \\bar{t} + 2$, where $\\bar{t}$ is the first integer such that\n\\begin{align}\\label{eq:llr-stopping-ineq}\nt \\geq \\left(\\left(\\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}\\right)^2 + R(\\theta,t)\\right) \/ H^\\star(\\theta).\n\\end{align}\nWhen the same sampling rule is combined with elimination stopping, let $\\tau_p$ be the time at which $p \\in \\cP(i^\\star(\\theta))$ is eliminated. Then, $\\mathbb{E}[\\tau_p] \\leq \\min\\{\\bar{t}_p, \\bar{t} \\} + 2$, where $\\bar{t}_p$ is the first integer such that\n\\begin{align}\\label{eq:elimination_time}\nt \\geq \\max\\left\\{\\frac{\\left(\\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}\\right)^2}{\\min_{\\omega \\in \\Omega_{R(\\theta,t)\/t}(\\theta)}H_p(\\omega, \\theta)}, G(\\theta,t) \\right\\},\n\\end{align}\nwith $G(\\theta,t) = 0$ for full elimination and $G(\\theta,t) = \\frac{ 4\\beta_{t,1\/t^2} + R(\\theta,t)}{H^\\star(\\theta)}$ for selective elimination.\n\\end{theorem}\n\nFirst, the bound we obtain on the elimination time of pieces in $\\cP(i^\\star(\\theta))$ is not worse than the bound we obtain on the stopping time of LLR stopping.\nSecond, with elimination stopping, such eliminations can actually happen much sooner. Intuitively, sampling rules with low information regret play arms with proportions that are close to the optimal ones. If all of such ``good'' proportions provide large information for eliminating some piece $p\\in\\cP(i^\\star(\\theta))$, then $p$ is eliminated much sooner than the actual stopping time (which requires eliminating the worst-case piece in the same set).\n\nWhile both elimination rules are provably efficient, with full elimination enjoying slighly better guarantees\\footnote{Note that $G(\\theta,t)$ for selective elimination contributes only a finite (in $\\delta$) sample complexity.}, selective elimination provably never worsens (and possibly improves) the computational complexity over LLR stopping. 
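\nTo make this computational comparison concrete in our running linear BAI example, one round of the selective-elimination bookkeeping can be sketched as follows. This is an illustrative Python\/numpy snippet, not the implementation used in our experiments; it relies on the closed-form infimum LLR quoted after Assumption~\ref{ass:union_of_sets} and, as in our experiments, keeps a single set of active pieces for the empirical answer (the names \texttt{V\_inv}, \texttt{active} and \texttt{beta} are ours).\n\begin{verbatim}\nimport numpy as np\n\ndef selective_elimination_step(theta_hat, V_inv, arms, active, beta):\n    # arms: (K, d) array of features; active: set of still-active piece\n    # indexes (sub-optimal arms); beta: threshold beta_{t, delta}.\n    i_star = int(np.argmax(arms @ theta_hat))   # empirical answer\n    kept = set()\n    for j in active - {i_star}:\n        diff = arms[i_star] - arms[j]\n        gap = max(float(theta_hat @ diff), 0.0)\n        # closed-form infimum LLR over the half-space piece Lambda_j(i_star)\n        inf_llr = gap ** 2 \/ float(diff @ V_inv @ diff)\n        if inf_llr < beta:          # piece j is not eliminated yet\n            kept.add(j)\n    return i_star, kept             # stop and return i_star once kept is empty\n\end{verbatim}\nHere \texttt{beta} plays the role of $\beta_{t,\delta}$ and \texttt{V\_inv} of $V_{N_t}^{-1}$.\n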
In all applications we consider, implementing LLR stopping requires one minimization for each of the same alternative pieces we use for elimination stopping.\nTherefore, the total number of minimizations required by LLR stopping is $\\sum_{t=1}^{\\tau_{\\mathrm{llr}}} |\\cP(i^\\star(\\hat{\\theta}_t))|$ versus $\\sum_{t=1}^{\\tau_{\\mathrm{s.elim}}} |\\cP_t^{\\mathrm{stp}}(i^\\star(\\hat{\\theta}_t))|$ for selective elimination.\nThe second is never larger since $\\tau_{\\mathrm{s.elim}} \\leq \\tau_{\\mathrm{llr}}$ by Theorem \\ref{th:elim-better-than-llr} and $\\cP_t^{\\mathrm{stp}}(i^\\star(\\hat{\\theta}_t)) \\subseteq \\cP(i^\\star(\\hat{\\theta}_t))$ for all $t$, and much smaller if eliminations happen early, as we shall verify in experiments.\nIn our linear BAI example we need to perform $(K-1)$ minimizations at each step, one for each sub-optimal arm, in order to implement LLR stopping.\nOn the other hand, we need only $|\\cP_t^{\\mathrm{stp}}(i^\\star(\\hat{\\theta}_t))|$ minimizations with selective elimination, one for each active sub-optimal arm, while full elimination takes $\\sum_{i\\in[K]}|\\cP_t^{\\mathrm{stp}}(i)|$ to update all the sets.\n\nNote that Theorem \\ref{th:piece-elim} does not provide a better bound on $\\mathbb{E}[\\tau]$ for elimination stopping than for LLR stopping. In fact, when evaluating the bound on $\\mathbb{E}[\\tau_p]$ for the worst-case piece in $p\\in\\cP(i^\\star(\\theta))$, we recover the one on $\\mathbb{E}[\\tau]$. This is intuitive since the sampling rule is playing proportions that try to eliminate all alternative pieces at once. The following result formalizes this intuition. \n\\begin{theorem}\\label{th:elim-vs-llr-fixed-sampling}\nSuppose that we can write $\\beta_{t, \\delta} = \\log\\frac{1}{\\delta} + \\xi(t, \\delta)$ with $\\lim_{\\delta \\to 0}\\xi(t, \\delta)\/\\log(1\/\\delta) = 0$. Then for any sampling rule that satisfies Assumption \\ref{asm:sampling-rule},\n\\begin{align*}\n\\mathbb{E}[\\tau_{\\mathrm{llr}}] \\le \\mathbb{E}[\\tau_{\\mathrm{elim}}] + f(\\theta, \\delta) \\: .\n\\end{align*}\nwith $\\lim_{\\delta \\to 0} f(\\theta, \\delta)\/\\log(1\/\\delta) = 0$. Here $\\tau_{\\mathrm{elim}}$ can stand for either full or selective elimination.\n\\end{theorem}\nSee Appendix~\\ref{subapp:proof_elim_vs_llr} for $f$. This result shows that when the sampling rule is tailored to the LLR stopping rule, the expected LLR and elimination stopping times differ by at most low-order (in $\\log(1\/\\delta)$) terms. As $\\delta\\rightarrow 0$ the two expected stopping times converge to the same value $H^\\star(\\theta)^{-1}\\log(1\/\\delta)$, which is the asymptotically-optimal sample complexity prescribed by the lower bound~\\eqref{eq:lower_bound}. \n\n\nWe showed that, for both elimination rules, some pieces of the alternative are discarded sooner than the stopping time, and that the overall sample complexity of the method can only improve over LLR stopping.\nHowever, since the sampling rule of the algorithm was not changed, elimination does not change the computational cost of each sampling step, only the cost of checking the stopping rule.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\iffalse\n\\subsection{Lower Bounds}\n\\label{sub:lower_bounds}\n\nWe show that the use of a particular stopping rule translates into lower bounds for the identification algorithm. We prove two such bounds, one for LLR stopping and one for Elimination stopping. 
The lower bound obtained for LLR stopping is stronger than for elimination.\n\nWe first report an intermediate step in the proof of the lower bound of \\citep{garivier2016optimal} (see~\\eqref{eq:lower_bound}). We write that step as a bound on a log-likelihood ratio, which we can more easily compare to our results. \n\\begin{theorem}\\label{thm:lower_bound_GK16}\nFor any $\\delta$-correct algorithm, for any $\\theta \\in \\cM$,\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda(i^\\star(\\theta))} \\mathbb{E}\\left[ L_\\tau(\\theta, \\lambda)\\right]\n&\\ge \\log\\frac{1}{2.4 \\delta}\n\\: .\n\\end{align*}\n\\end{theorem}\n\n\\paragraph{LLR Stopping}\n\nWe study LLR stopping with a threshold $\\beta_{t,\\delta} = \\log\\frac{1}{\\delta} + \\gamma(t)$ which verifies that for all $\\delta' \\in (0,1]$, with probability $1 - \\delta'$, for all $t \\in \\mathbb{N}$, the quantity $\\beta_{t,\\delta'}$ is an upper bound for $L_t(\\hat{\\theta}_t, \\theta)$. We call LLR stopping with such a threshold LLR$\\gamma$.\nThis property is verified by one of the LLR stopping thresholds proposed in \\citep{garivier2016optimal} and used for example in \\citep{degenne2019non}.\nThose algorithms are subject to a stronger lower bound than the one of Theorem~\\ref{thm:lower_bound_GK16}.\n\n\\begin{theorem}\\label{thm:lower_bound_llr}\nAlgorithms stopping with the LLR$\\gamma$ rule verify for all $\\theta \\in \\cM$\n\\begin{align*}\n\\mathbb{E}\\left[ \\inf_{\\lambda \\in \\Lambda(i^\\star(\\theta))} L_\\tau(\\theta, \\lambda)\\right]\n&\\ge \\log \\frac{1}{e \\delta} - \\sqrt{2 \\delta (2 + \\mathbb{E}[\\gamma(\\tau)^2])}\n\\: .\n\\end{align*}\n\\end{theorem}\n\nThe last term is usually negligible if $\\delta$ is not too large. Indeed the usual function $\\gamma$ is $\\gamma(t) = d \\log \\log(t)$. This is concave, such that $\\mathbb{E}[\\gamma(\\tau)^2] \\le (d \\log\\log \\mathbb{E}[\\tau])^2$ and for $\\delta \\le 1\/d^2$ the square root term is negligible.\n\nThe main difference with Theorem~\\ref{thm:lower_bound_GK16} is the presence of $\\mathbb{E}\\left[ \\inf L_\\tau(\\theta, \\lambda)\\right]$ instead of $\\inf \\mathbb{E}\\left[ L_\\tau(\\theta, \\lambda)\\right]$. 
That is, the expectation and the infimum are reversed.\n\n\\paragraph{The expected infimum LLR}\n\nBy concavity of the infimum, $\\mathbb{E}\\left[ \\inf L_\\tau(\\theta, \\lambda)\\right] \\le \\inf \\mathbb{E}\\left[ L_\\tau(\\theta, \\lambda)\\right]$, hence the lower bound for LLR stopping is tighter than the generic lower bound (up to the square root term).\nHowever, algorithms based on LLR stopping have been shown to asymptotically match the lower bound of~\\eqref{eq:lower_bound} and indeed the two bounds coincide in the limit $\\delta \\to 0$.\n\nWe now quantify how much tighter Theorem~\\ref{thm:lower_bound_llr} is compared to Theorem~\\ref{thm:lower_bound_GK16}, in a regime where $\\delta$ is not too small.\n\n\n\\todo[inline]{quantify the difference on fixed proportion sampling.}\n\n\\paragraph{Elimination Stopping}\n\nWe also prove a new lower bound for elimination stopping, which is also stronger that Theorem~\\ref{thm:lower_bound_GK16}, but lower than the one we presented for LLR stopping.\n\n\\begin{theorem}\\label{thm:lower_bound_elim}\nAlgorithms stopping with the full elimination rule verify for all $\\theta \\in \\cM$\n\\begin{align*}\n\\mathbb{E}&\\left[ \\min_m \\max_{s \\le \\tau} \\inf_{\\lambda \\in \\Lambda_m(i^\\star)}L_s(\\theta, \\lambda)\\right]\n\\\\\n&\\ge \\log \\frac{1}{\\delta} - 1 - \\sqrt{2 \\delta (2 + \\mathbb{E}[\\gamma(\\tau)^2])}\n\\: .\n\\end{align*}\n\\end{theorem}\nThe proof of Appendix~\\ref{app:lower_bound_proofs} proves both this result and Theorem~\\ref{thm:lower_bound_llr}. This bound is not as strong as the one we proved for LLR stopping, and the difference is in the order of the minimization over $m$ and the maximization over time.\nWe cannot conclude from these two lower bounds that elimination stopping allows a smaller sample complexity than LLR stopping since these are only lower bounds and not characterizations of the sample complexity (upper bounds are missing). It could be that a stronger lower bound holds for elimination stopping as well.\nHowever, we raise that possibility as an open question: although LLR stopping is optimal asymptotically, is it bound by Theorem~\\ref{thm:lower_bound_llr} to be sub-optimal for finite $\\delta$? If so, is elimination stopping enough to overcome that limitation?\n\n\\fi\n\\section{Lower bound proofs}\n\\label{app:lower_bound_proofs}\n\nFor any answer $i$, we have $P_i$ sets $\\Lambda_p(i)$, such that $\\Lambda(i) = \\bigcup_p \\Lambda_p(i)$. The elimination stopping time is\n\\begin{align*}\n\\tau\n= \\min \\left\\{ t \\in \\mathbb{N} \\mid \\max_i \\min_p \\max_{s \\le t} (L_s(\\hat{\\mu}_s, \\Lambda_p(i)) - \\overline{\\log}\\frac{1}{\\delta} - \\gamma(s)) \\ge 0 \\right\\}\n .\n\\end{align*}\nThis is simply a rewriting of the following description: $\\tau$ is the first time $t$ at which there exists an answer $i$ for which for all sets $\\Lambda_p(i)$, there was a time $s \\le t$ at which the set was eliminated.\n\nSince $\\ihat$ is the argmax for $t = \\tau$ in the definition of $\\tau$ above, we have $\\min_p \\max_{s \\le \\tau} (L_s(\\hat{\\mu}_s, \\Lambda_p(\\ihat)) - \\overline{\\log}\\frac{1}{\\delta} - \\gamma(s)) \\ge 0$.\n\nImportant equality: $L_s(\\hat{\\mu}_s, \\Lambda_p(i)) = L_s(\\hat{\\mu}_s, \\mu) + L_s(\\mu, \\Lambda_p(i))$.\n\nWith probability $1 - \\delta'$, for all $s \\in \\mathbb{N}$, $L_s(\\hat{\\mu}_s, \\mu) \\le \\overline{\\log} \\frac{1}{\\delta'} + \\gamma(s)$. 
Hence with probability $1 - \\delta'$,\n\\begin{align*}\n&\\max_i \\min_p \\max_{s \\le \\tau} (L_s(\\mu, \\Lambda_p(i)) - \\overline{\\log}\\frac{1}{\\delta} + \\overline{\\log}\\frac{1}{\\delta'}) \\ge 0 \\: ,\n\\\\\n\\text{i.e. }& \\overline{\\log}\\frac{1}{\\delta} - \\max_i \\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i)) \\le \\overline{\\log}\\frac{1}{\\delta'} \\: .\n\\end{align*}\nLet $\\overline{\\exp}$ be the inverse of $\\overline{\\log}$ \\todo{$\\overline{\\log}$ is supposed increasing and smaller than $2\\log$}. We have for all $x \\in \\overline{\\log}^{-1}([1, +\\infty))$,\n\\begin{align*}\n\\mathbb{P}\\left( \\overline{\\log}\\frac{1}{\\delta} - \\max_i \\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i)) > x \\right) \\le 1\/\\overline{\\exp}(x) \\: .\n\\end{align*}\nSince we supposed that $\\overline{\\log} \\le 2 \\log$, we have $\\overline{\\exp}(x) \\ge \\exp(x\/2)$.\n\\begin{align*}\n\\mathbb{P}\\left( \\overline{\\log}\\frac{1}{\\delta} - \\max_i \\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i)) > x \\right) \\le e^{-x\/2} \\: .\n\\end{align*}\n\nWe can get a bound in expectation:\n\\begin{align*}\n&\\overline{\\log} \\frac{1}{\\delta} - \\mathbb{E}[\\max_i \\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i))]\n\\\\\n&\\le \\mathbb{E}[(\\overline{\\log}\\frac{1}{\\delta} - \\max_i \\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i)))\\mathbb{I}\\{\\overline{\\log}\\frac{1}{\\delta} - \\max_i \\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i)) \\ge 0\\}]\n\\\\\n&= \\int_0^{+\\infty} \\mathbb{P}(\\overline{\\log}\\frac{1}{\\delta} - \\max_i \\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i)) > x) dx\n\\\\\n&\\le \\overline{\\exp}(1) + \\int_{\\overline{\\exp}(1)}^{+\\infty} e^{-x\/2} dx\n\\\\\n&\\le e^{1\/2} + \\int_{0}^{+\\infty}e^{-x\/2} dx = e^{1\/2} + 2\n\\: .\n\\end{align*}\n\n\\todo[inline]{propagate the changes below}\n\n\\begin{align*}\n\\mathbb{E}[\\max_i \\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i))] \\ge \\log \\frac{1}{\\delta} - 1 \\: .\n\\end{align*}\n\nBy similar computations with the inequality involving $\\ihat$,\n\\begin{align*}\n\\mathbb{E}[\\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(\\ihat))] \\ge \\log \\frac{1}{\\delta} - 1 \\: .\n\\end{align*}\n\n\nFor all $i \\ne i^\\star$, there exists $m$ such that $\\mu \\in \\Lambda_p(i)$. For that $m$, $L_s(\\mu, \\Lambda_p(i)) \\le L_s(\\mu, \\mu) = 0$ for all $s \\in \\mathbb{N}$. 
Hence for $i\\ne i^\\star$, $\\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i)) \\le 0$.\n\\begin{align*}\n\\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(\\ihat))\n&= \\mathbb{I}\\{\\ihat = i^\\star\\}\\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(\\ihat)) + \\mathbb{I}\\{\\ihat \\ne i^\\star\\}\\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(\\ihat))\n\\\\\n&\\le \\mathbb{I}\\{\\ihat = i^\\star\\}\\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(\\ihat))\n\\\\\n&= \\mathbb{I}\\{\\ihat = i^\\star\\}\\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i^\\star))\n\\\\\n&= \\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i^\\star)) - \\mathbb{I}\\{\\ihat \\ne i^\\star\\}\\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i^\\star))\n\\\\\n&\\le \\min_p \\max_{s \\le \\tau} L_s(\\mu, \\Lambda_p(i^\\star)) + \\mathbb{I}\\{\\ihat \\ne i^\\star\\} \\max_{s \\le \\tau} L_s(\\hat{\\mu}_s, \\mu)\n\\end{align*}\n\n\\paragraph{Bound on the expectation}\nWe prove $\\mathbb{E}[L_\\tau(\\hat{\\mu}_\\tau, \\mu)] \\le \\mathbb{E}[\\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu)] \\le 1 + \\mathbb{E}[\\gamma(\\tau)]$. In the computations below, we use that $s \\mapsto \\gamma(s)$ is non-decreasing.\n\\begin{align*}\n\\mathbb{E}[\\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu) - \\gamma(\\tau)]\n&\\le \\mathbb{E}[\\max\\{0, \\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu) - \\gamma(\\tau)\\}]\n\\\\\n&= \\int_{x=0}^{+\\infty}\\mathbb{P}(\\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu) - \\gamma(\\tau) > x) dx\n\\\\\n&\\le \\int_{x=0}^{+\\infty}\\mathbb{P}(\\exists t \\in \\mathbb{N}, L_t(\\hat{\\mu}_t, \\mu) > x + \\gamma(t)) dx\n\\\\\n&\\le \\int_{x=0}^{+\\infty} e^{- x} d x = 1\n\\: .\n\\end{align*}\n\n\\paragraph{Bound on the expectation of the truncated LLR}\n\\begin{align*}\n\\mathbb{E}[\\mathbb{I}\\{\\ihat \\ne i^\\star\\} \\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu)]\n&\\le \\sqrt{\\mathbb{P}\\{\\ihat \\ne i^\\star\\} \\mathbb{E}[(\\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu))^2]}\n\\end{align*}\n\n\\begin{align*}\n\\mathbb{E}[(\\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu))^2 - 2\\gamma(\\tau)^2]\n&\\le \\mathbb{E}[\\max\\{0, (\\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu))^2 - 2\\gamma(\\tau)^2\\}]\n\\\\\n&= \\int_{x=0}^{+\\infty}\\mathbb{P}((\\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu))^2 - 2\\gamma(\\tau)^2 > x) dx\n\\\\\n&\\le \\int_{x=0}^{+\\infty}\\mathbb{P}((\\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu))^2 > (\\gamma(\\tau) + \\sqrt{x\/2})^2) dx\n\\\\\n&= \\int_{x=0}^{+\\infty}\\mathbb{P}(\\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu) > \\gamma(\\tau) + \\sqrt{x\/2} ) dx\n\\\\\n&\\le \\int_{x=0}^{+\\infty}\\mathbb{P}(\\exists t \\in \\mathbb{N}, L_t(\\hat{\\mu}_t, \\mu) > \\gamma(t) + \\sqrt{x\/2}) dx\n\\\\\n&\\le \\int_{x=0}^{+\\infty} e^{- \\sqrt{x\/2}} d x = 4\n\\: .\n\\end{align*}\n\n\\begin{align*}\n\\mathbb{E}[\\mathbb{I}\\{\\ihat \\ne i^\\star\\} \\max_{s \\le \\tau}L_s(\\hat{\\mu}_s, \\mu)]\n&\\le \\sqrt{\\mathbb{P}\\{\\ihat \\ne i^\\star\\} (4 + \\mathbb{E}[\\gamma(\\tau)^2])]}\n\\: .\n\\end{align*}\nSince the algorithm is $\\delta$-correct, $\\mathbb{P}\\{\\ihat \\ne i^\\star\\} \\le \\delta$.\n\n\\paragraph{Special case of LLR stopping}\n\n\nNote: for LLR stopping, $M_i = 1$ for all $i$ and we get\n\\begin{align*}\n\\mathbb{E}[\\max_i \\max_{s \\le \\tau} L_s(\\mu, \\Lambda(i))]\n&\\ge \\log \\frac{1}{\\delta} - 1\n\\: , &\n\\mathbb{E}[\\max_{s \\le \\tau} L_s(\\mu, \\Lambda(\\ihat))]\n&\\ge \\log \\frac{1}{\\delta} - 1\n\\: .\n\\end{align*}\nAnd due to the way the algorithm stops, we could go 
through the same proof but with $\\max_{s \\le \\tau}$ replaced by taking the value at $\\tau$ for the answer $\\ihat$. That would give\n\\begin{align*}\n\\mathbb{E}[L_\\tau(\\mu, \\Lambda(\\ihat))]\n&\\ge \\log \\frac{1}{\\delta} - 1\n\\: .\n\\end{align*}\n\n\\begin{align*}\nL_\\tau(\\mu, \\Lambda(\\ihat))\n&= \\mathbb{I}\\{\\ihat = i^\\star\\}L_\\tau(\\mu, \\Lambda(\\ihat)) + \\mathbb{I}\\{\\ihat \\ne i^\\star\\}L_\\tau(\\mu, \\Lambda(\\ihat))\n\\\\\n&\\le \\mathbb{I}\\{\\ihat = i^\\star\\}L_\\tau(\\mu, \\Lambda(\\ihat)) + \\mathbb{I}\\{\\ihat \\ne i^\\star\\}L_\\tau(\\mu, \\mu)\n\\\\\n&= \\mathbb{I}\\{\\ihat = i^\\star\\}L_\\tau(\\mu, \\Lambda(\\ihat))\n\\\\\n&= \\mathbb{I}\\{\\ihat = i^\\star\\}L_\\tau(\\mu, \\Lambda(i^\\star))\n\\\\\n&= L_\\tau(\\mu, \\Lambda(i^\\star)) - \\mathbb{I}\\{\\ihat \\ne i^\\star\\} L_\\tau(\\mu, \\Lambda(i^\\star))\n\\\\\n&= L_\\tau(\\mu, \\Lambda(i^\\star)) + \\mathbb{I}\\{\\ihat \\ne i^\\star\\} L_\\tau(\\hat{\\mu}_\\tau, \\mu)\n\\end{align*}\n\n\n\\subsection{Expectation of the infimum}\n\\label{sub:expectation_of_the_infimum}\n\nFor a set $\\Lambda$ and $\\varepsilon \\ge 0$, let $\\Lambda_{[0,\\varepsilon], N_t} = \\{\\lambda \\in \\Lambda \\mid \\sum_k N_t^k \\KL_k(\\theta, \\lambda) \\le \\inf_{\\eta \\in \\Lambda} \\sum_k N_t^k \\KL_k(\\theta, \\eta) + \\varepsilon\\}$. Then since $\\Lambda_{[0,\\varepsilon], N_t} \\subseteq \\Lambda$,\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda} L_t (\\theta, \\lambda)\n&\\le \\inf_{\\lambda \\in \\Lambda_{[0,\\varepsilon], N_t}} L_t (\\theta, \\lambda)\n\\\\\n&= \\inf_{\\lambda \\in \\Lambda_{[0, \\varepsilon], N_t}}\\left(\\sum_k N_t^k \\KL_k(\\theta, \\lambda)\n\t- \\sum_{s=1}^t (\\KL_{k_s}(\\theta, \\lambda) - \\log \\frac{d \\mathbb{P}_\\theta}{d \\mathbb{P}_\\lambda}(X_s^{k_s})) \\right)\n\\\\\n&\\le \\inf_{\\lambda \\in \\Lambda} \\sum_k N_t^k \\KL_k(\\theta, \\lambda) + \\varepsilon\n\t- \\sup_{\\lambda \\in \\Lambda_{[0, \\varepsilon], N_t}}\\left( \\sum_{s=1}^t (\\KL_{k_s}(\\theta, \\lambda) - \\log \\frac{d \\mathbb{P}_\\theta}{d \\mathbb{P}_\\lambda}(X_s^{k_s})) \\right)\n\\: ,\\\\\n\\mathbb{E}[\\inf_{\\lambda \\in \\Lambda} L_t (\\theta, \\lambda)]\n&\\le \\inf_{\\lambda \\in \\Lambda}\\sum_k \\mathbb{E}[N_t^k] \\KL_k(\\theta, \\lambda) + \\varepsilon\n\t- \\mathbb{E}\\left[\\sup_{\\lambda \\in \\Lambda_{[0, \\varepsilon], N_t}}\\left( \\sum_{s=1}^t (\\KL_{k_s}(\\theta, \\lambda) - \\log \\frac{d \\mathbb{P}_\\theta}{d \\mathbb{P}_\\lambda}(X_s^{k_s})) \\right)\\right]\n\\: .\n\\end{align*}\n\nFor Gaussian distributions,\n\\begin{align*}\n\\mathbb{E}\\left[\\sup_{\\lambda \\in \\Lambda_{[0, \\varepsilon], N_t}}\\left( \\sum_{s=1}^t (\\KL_{k_s}(\\theta, \\lambda) - \\log \\frac{d \\mathbb{P}_\\theta}{d \\mathbb{P}_\\lambda}(X_s^{k_s})) \\right)\\right]\n&= \\mathbb{E}\\left[\\sup_{\\lambda \\in \\Lambda_{[0, \\varepsilon], N_t}} \\sum_k N_t^k (\\hat{\\mu}_{t,k} - \\mu_k(\\theta)) (\\mu_k(\\lambda) - \\mu_k(\\theta))\\right]\n\\end{align*}\n\nOur goal now is to show that this expectected supremum is large, which means that the expected infimum LLR is lower than the infimum of the weighted sum of the KL, hence a lower bound on the infimum LLR is stronger than a lower bound on the weighted sum of KLs.\n\nLet's look at the case of fixed design ($N_t$ is deterministic). 
Let $\\KL(t, \\Lambda) = \\inf_{\\lambda \\in \\Lambda}\\sum_k N_t^k \\KL_k(\\theta, \\lambda)$ and let $u_{\\lambda, N_t}$ be the vector with components $u_{\\lambda, N_t}^k = \\frac{\\sqrt{N_t^k}(\\mu_k(\\lambda) - \\mu_k(\\theta))}{\\Vert \\mu(\\lambda) - \\mu(\\theta)\\Vert_{N_t}} = \\frac{\\sqrt{N_t^k}(\\mu_k(\\lambda) - \\mu_k(\\theta))}{\\sqrt{2 \\sum_j N_t^j \\KL_j(\\theta, \\lambda)}}$. The vector $u_{\\lambda, N_t}$ has Euclidean norm 1 for all $\\lambda$. Finally, let $Z$ be the vector with components $Z_k = \\sqrt{N_t^k} (\\hat{\\mu}_{t,k} - \\mu_k(\\theta))$; $Z$ has law $\\mathcal N(0, I_K)$.\n\\begin{align*}\n&\\mathbb{E}\\left[\\sup_{\\lambda \\in \\Lambda_{[0, \\varepsilon], N_t}} \\sum_k N_t^k (\\hat{\\mu}_{t,k} - \\mu_k(\\theta)) (\\mu_k(\\lambda) - \\mu_k(\\theta))\\right]\n\\\\\n&= \\mathbb{E}\\left[\\sup_{\\lambda \\in \\Lambda_{[0, \\varepsilon], N_t}} \\sqrt{2 \\sum_k N_t^k \\KL_k(\\theta, \\lambda)} Z^\\top u_{\\lambda, N_t}\\right]\n\\\\\n&\\ge \\sqrt{2} \\mathbb{E}\\left[\\sup_{\\lambda \\in \\Lambda_{[0, \\varepsilon], N_t}} \\sqrt{\\KL(t, \\Lambda) + \\varepsilon \\mathbb{I}\\{Z^\\top u_{\\lambda, N_t}\\le 0\\}} Z^\\top u_{\\lambda, N_t}\\right]\n\\end{align*}\nFor $\\varepsilon = 0$, using that $N_t$ is not random (hence $\\KL(t, \\Lambda)$ isn't either) this is\n\\begin{align*}\n\\mathbb{E}\\left[\\sup_{\\lambda \\in \\Lambda_{\\{0\\}, N_t}} \\sqrt{2 \\KL(t, \\Lambda)} Z^\\top u_{\\lambda, N_t}\\right]\n&= \\sqrt{2 \\KL(t, \\Lambda)} \\mathbb{E}\\left[\\sup_{\\lambda \\in \\Lambda_{\\{0\\}, N_t}} Z^\\top u_{\\lambda, N_t}\\right]\n\\end{align*}\nThe value $\\mathbb{E}\\left[\\sup_{\\lambda \\in \\Lambda_{\\{0\\}, N_t}} Z^\\top u_{\\lambda, N_t}\\right]$ is the Gaussian width of the set $\\{u_{\\lambda, N_t} \\mid \\lambda \\in \\Lambda_{\\{0\\}, N_t}\\}$, which is a subset of the sphere $\\{u \\mid \\Vert u \\Vert = 1\\}$.\n\nFor unstructured best arm identification, this set contains up to $K-1$ points ($K-1$ exactly at the $N_t$ which maximizes $V_t$). The Gaussian width scales as $\\sqrt{\\log K}$ in that case.\n\n\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe proposed a selective elimination rule, which successively prunes the pieces of the empirical answer and can be easily combined with existing adaptive algorithms for general identification problems. We proved that it reduces their computational complexity, never worsens their sample complexity guarantees, and provably discards certain answers early. Our experiments on different pure exploration problems and bandit structures show that existing adaptive algorithms often benefit from a reduced sample complexity when combined with selective elimination, while achieving significant gains in computation time. Moreover, they show that selective elimination is overall better (in terms of samples vs time) than its full variant which repeatedly updates the pieces of all answers.\n\n\n\nInteresting directions for future work include investigating whether better guarantees on the stopping time can be derived for algorithms combined with elimination as compared to their LLR counterparts, and designing adaptive algorithms which are specifically tailored for elimination.\n\\section{Elimination at sampling}\n\\label{sec:elimination_at_sampling}\n\nWe show how to adapt sampling rules in order to accommodate piece elimination. 
There are two reasons for doing this: first, adapting the sampling to ignore pieces that have been discarded could reduce the sample complexity; second, the amount of computation needed to update the sampling strategy is often proportional to the number of pieces and decreasing it can reduce the overall time.\n\nWe start from an algorithm using LLR stopping, for which we change the stopping rule as above. The algorithms that we can adapt are those that choose the next sample by aggregating information from each alternative piece.\nFor example, methods that mimic the lower bound allocation~\\eqref{eq:lower_bound}, like Track-and-Stop \\citep{garivier2016optimal}, LinGame \\citep{degenne2020gamification}, or FWS \\citep{wang2021fast}, use the decomposition of the alternative into pieces: they need to compute the divergence to each of those pieces as well as the closest points. This is also the case for other algorithms like LinGapE \\citep{xu2018fully}. At time $t$, it loops over all currently sub-optimal arms to compute an optimistic gap, and samples the arm that makes the maximal gap decrease the most.\nThat loop over arms is a loop over the alternative pieces $\\{\\lambda \\mid \\phi_j^\\top \\lambda \\ge \\phi_{i^\\star(\\hat{\\theta}_t)}^\\top \\lambda\\}$. Eliminating pieces at sampling simply means omitting from that computation the arms that were deemed sub-optimal.\n\n\\subsection{Eliminating pieces at sampling}\n\\label{sub:eliminating_pieces_at_sampling}\n\nWe need a hypothesis on the shape of the threshold, used to compare thresholds at different times.\n\n\\begin{assumption}\\label{ass:beta_bounds}\nThere exist two positive reals $c_1$ and $c_2$ that may depend on the parameters of the problem known to the algorithm ($d$, $K$, etc.) and a function $\\overline{\\log}$ such that the threshold $\\beta_{t,\\delta}$ verifies\n\\begin{align*}\n\\overline{\\log}(1\/\\delta) + c_1 \\log(t)\n\\le \\beta_{t, \\delta}\n\\le \\overline{\\log}(1\/\\delta) + c_2 \\log(t)\n\\: .\n\\end{align*}\nFurthermore, $\\overline{\\log}$ verifies that there exists $x_0$ such that for all $a \\ge 2$ and $x \\ge x_0$, $\\overline{\\log}(x^a) \\le a \\overline{\\log}(x)$. \n\\end{assumption}\nThe function $\\overline{\\log}$ represents an almost logarithmic function and corresponds to the function of order $\\log + \\sqrt{\\log}$ in \\citep{abbasi2011improved} or $\\log + d\\log\\log$ in \\citep{tirinzoni2020asymptotically,reda2021dealing}.\nFor instance, a threshold of the form $\\beta_{t,\\delta} = \\log(1\/\\delta) + 2\\log(t)$ satisfies Assumption \\ref{ass:beta_bounds} with $\\overline{\\log} = \\log$, $c_1 = c_2 = 2$ and $x_0 = 1$.\nOther thresholds have a $O(\\log\\log(t))$ dependence instead of $O(\\log(t))$, and what follows could be extended to them in a similar way.\n\nSimilarly to elimination stopping, the idea of our elimination at sampling is to maintain sets of active pieces $\\cP_{t}^{\\mathrm{smp}}(i)$ for each $i\\in\\cI$. Note that these are different from the ones introduced in Section \\ref{sec:elimination_stopping_rules} for the stopping rule. The sampling set is updated at each step with a different threshold $\\alpha_{t,\\delta}$, while we reset it very infrequently at steps $t\\in\\{\\bar{t}_0^{2^j}\\}_{j\\geq 0}$, where $\\bar{t}_0 = \\max\\{2, \\sqrt{x_0}\\}$. 
Formally, let us define the helper sets $\\tilde{\\cP}_{t}^{\\mathrm{smp}}(i)$ as $\\tilde{\\cP}_{0}^{\\mathrm{smp}}(i) := \\cP(i)$ and\n\\begin{align*}\n\\tilde{\\cP}_t^{\\mathrm{smp}}(i) := \\begin{cases}\n\\tilde{\\cP}_{t-1}^{\\mathrm{smp}}(i) \\cap \\overline{\\cP}_t(i;\\alpha_{t,\\delta}) & \\text{if } t\\notin\\{\\bar{t}_0^{2^j}\\}_{j\\geq 0}\n\\\\ \\overline{\\cP}_t(i;\\alpha_{t,\\delta}) & \\text{otherwise},\n\\end{cases}\n\\end{align*}\nwhere $\\overline{\\cP}_t$ was defined in \\eqref{eq:active-pieces-t-only}.\nLet $\\overline{t}_j := \\bar{t}_0^{2^j}$ be the time step at which the $j$-th reset is performed and $j(t) := \\lfloor \\log_2\\log_{\\bar{t}_0} t \\rfloor$ be the index of the last reset before $t$.\nThose reset times are the elements of our implementation that would change if the threshold is not $O(\\log(t))$. They are chosen such that $\\beta_{t,1\/t^2}$ roughly doubles from one reset to the next.\n\nWe define ${\\cP}_t^{\\mathrm{smp}}(i) := \\tilde{\\cP}_t^{\\mathrm{smp}}(i) \\cap \\tilde{\\cP}_{\\overline{t}_{j(t)} - 1}^{\\mathrm{smp}}(i)$. This implies that ${\\cP}_t^{\\mathrm{smp}}(i)$ is the intersection of all active pieces from the second-last reset up to $t$, i.e.,\n\\begin{align*}\n{\\cP}_t^{\\mathrm{smp}}(i) = \\bigcap_{s=\\overline{t}_{j(t)-1}}^t \\overline{\\cP}_s(i;\\alpha_{s,\\delta}).\n\\end{align*}\nAs before, we can instantiate both selective and full elimination to update these sets. Since the resets are very infrequent, this definition only drops a small number of rounds from the intersection (less than $\\sqrt{t}$).\nThis is not much different from ${\\cP}_t^{\\mathrm{stp}}(i)$, which instead takes the intersection of all $t$ sets.\nWe require $\\alpha_{t,\\delta} \\ge \\beta_{t,\\delta}$, so that pieces that are eliminated for the sampling rule\nare also eliminated for the stopping rule.\nWe set\n\\begin{align*}\n\\alpha_{t,\\delta} := \\left(\\sqrt{\\beta_{t,\\delta} + (4 c_2 - c_1)\\log(t)} + 4\\sqrt{\\frac{c_2}{c_1}\\beta_{t,1\/t^2}}\\right)^2\n\\: .\n\\end{align*}\nThe reason for having different sets of active pieces for stopping and sampling (with rare resets in the latter) is that it allows us to derive guarantees on the expected stopping times. If one is only interested in high-probability results, it is possible to use the same set for both components. A schematic sketch of this bookkeeping is given below.\n\n\\subsection{Properties}\n\\label{sub:properties}\n\n\nWe consider a counterpart of Assumption \\ref{asm:sampling-rule} for sampling rules combined with piece elimination.\n\n\\begin{assumption}\\label{asm:sampling-rule-v2}\nThere exists a sub-linear (in $t$) problem-dependent function $R(\\theta,t)$ such that, for each time $t$ where $E_t$ (defined in Equation \\ref{eq:Et}) holds,\n\\begin{align*}\n\\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star(\\theta))}\\! \\! \\! \\! \\! H_p(N_t, \\theta) \n\\geq \\max_{\\omega\\in\\Delta_K} \\sum_{s=1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star(\\theta))}\\! \\! \\! \\! \\! H_p(\\omega, \\theta) {-} R(\\theta,t).\n\\end{align*}\n\\end{assumption}\n\nIntuitively, the sampling rule maximizes the information for discriminating $\\theta$ with all its alternatives from the sequence of active pieces $(\\cP_{s-1}^{\\mathrm{smp}}(i^\\star(\\theta)))_{s=1}^t$. 
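Before turning to the analysis, we give the schematic sketch of the sampling-set bookkeeping announced above. It is only a minimal Python illustration of the update and reset logic, not the implementation used in the experiments: the routine computing the surviving pieces $\\overline{\\cP}_t(i;\\alpha_{t,\\delta})$ (i.e., the infimum-LLR tests with threshold $\\alpha_{t,\\delta}$) is left as a placeholder argument \\texttt{surviving}, and all names and constants are illustrative.\n\\begin{verbatim}\nimport math\n\ndef is_reset_time(t, t0=2):\n    # reset times are t0, t0^2, t0^4, ..., i.e. t = t0^(2^j) for some j >= 0\n    if t < t0:\n        return False\n    j = math.log2(math.log(t, t0))\n    return abs(j - round(j)) < 1e-9\n\nclass SamplingSets:\n    # Active pieces used by the sampling rule, for each answer i.\n    def __init__(self, all_pieces, t0=2):\n        self.t0 = t0\n        # helper sets (tilde P_t^smp) and the sets frozen at the previous reset\n        self.tilde = {i: set(ps) for i, ps in all_pieces.items()}\n        self.prev_phase = {i: set(ps) for i, ps in all_pieces.items()}\n\n    def update(self, t, i, surviving):\n        # surviving plays the role of the pieces passing the alpha test at time t\n        if is_reset_time(t, self.t0):\n            # freeze the helper set of the finished phase, then reset\n            self.prev_phase[i] = set(self.tilde[i])\n            self.tilde[i] = set(surviving)\n        else:\n            self.tilde[i] &= set(surviving)\n\n    def active(self, i):\n        # P_t^smp(i): intersection of the current helper set with the one\n        # frozen just before the last reset\n        return self.tilde[i] & self.prev_phase[i]\n\\end{verbatim}\nWith selective elimination, \\texttt{update} would be called at each step only for the empirical answer $i^\\star(\\hat{\\theta}_t)$; with full elimination, it would be called for every answer.\n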
We prove in Appendix \\ref{app:assumptions} that the algorithms for which we proved Assumption \\ref{asm:sampling-rule} also satisfy Assumption \\ref{asm:sampling-rule-v2} when their sampling rules are combined with either full or selective elimination.\n\n\\begin{theorem}\\label{th:sampling-rule-with-elim}\nConsider a sampling rule that verifies Assumption \\ref{asm:sampling-rule-v2} and uses either full or selective elimination with the sets ${\\cP}_t^{\\mathrm{smp}}$. Then, Assumption \\ref{asm:sampling-rule} holds as well. Moreover, when using the same elimination rule at stopping, such a sampling rule verifies Theorem \\ref{th:piece-elim}, i.e., it enjoys the same guarantees as without elimination at sampling.\n\\end{theorem}\n\nThe proof is in Appendix~\\ref{app:proofs_of_sampling}. Theorem \\ref{th:sampling-rule-with-elim} shows that for an algorithm using elimination at sampling and stopping, we get bounds on the times at which pieces of $\\Lambda(i^\\star(\\theta))$ are discarded from the stopping rule which are not worse than those we obtained for the same algorithm without elimination at sampling.\nThis result is non-trivial. We know that the sampling rule collects information to discriminate $\\theta$ with its closest alternatives, and eliminating a piece cannot make the resulting ``optimal'' proportions worse at this task. However, it could make them worse at discriminating $\\theta$ with alternatives that are not the closest.\nThis would imply that the elimination times for certain pieces could actually increase w.r.t. not eliminating at sampling.\nTheorem \\ref{th:sampling-rule-with-elim} guarantees that this does not happen: eliminating pieces at sampling cannot worsen our guarantees.\nWe shall see in our experiments that eliminating pieces in both the sampling and stopping rules often yields improved sample complexity.\n\n\n\n\\section{Proofs of Section \\ref{sec:elimination_stopping_rules}}\\label{app:proofs-elim-stopping}\n\n\n\\subsection{Proof of Lemma \\ref{lem:delta-correct}}\n\n\\begin{proof}\nFix any $\\theta\\in\\mathcal{M}$ and let $\\hat{i}$ be the answer returned by the algorithm at the stopping time $\\tau$. Then,\n\\begin{align*}\n\\mathbb{P}\\left(\\hat{i} \\neq i^\\star(\\theta)\\right) \n&\\stackrel{(a)}{\\leq} \\mathbb{P}\\left(\\exists i \\neq i^\\star(\\theta) : \\cP_\\tau(i) = \\emptyset\\right)\n\\\\\n&\\stackrel{(b)}{\\leq} \\mathbb{P}\\left(\\exists i \\neq i^\\star(\\theta),\\forall p\\in\\cP(i), \\exists t \\leq \\tau : \\inf_{\\lambda \\in \\Lambda_{p}(i)} L_t(\\hat{\\theta}_t, \\lambda) \\geq \\beta_{t,\\delta}\\right)\n\\\\\n&\\stackrel{(c)}{\\leq} \\mathbb{P}\\left(\\exists i \\neq i^\\star(\\theta),\\forall p\\in\\cP(i), \\exists t \\geq 1 : \\inf_{\\lambda \\in \\Lambda_{p}(i)} L_t(\\hat{\\theta}_t, \\lambda) \\geq \\beta_{t,\\delta}\\right)\n\\end{align*}\nwhere (a) is from the definition of stopping rule, (b) is from the fact that, if $\\cP_\\tau(i)$ is empty, then all the pieces of $i$ have been eliminated at some times before $\\tau$, and (c) follows trivially by relaxing the condition $t\\leq \\tau$ to $t\\geq 1$. Take any $t\\geq 1$. Now note that, for any wrong answer $i\\neq i^\\star(\\theta)$, $\\theta \\in \\Lambda(i)$. By definition of the decomposition into pieces of $\\Lambda(i)$, this means that there exists $\\bar{p}_i \\in \\cP(i)$ such that $\\theta \\in \\Lambda_{\\bar{p}_i}(i)$. 
Therefore, continuing the chain of inequalities above, we get that\n\\begin{align*}\n\\mathbb{P}\\left(\\hat{i} \\neq i^\\star(\\theta)\\right) \n&\\stackrel{(d)}{\\leq} \\mathbb{P}\\left(\\exists i \\neq i^\\star(\\theta), \\exists t \\geq 1 : \\inf_{\\lambda \\in \\Lambda_{\\bar{p}_i}(i)} L_t(\\hat{\\theta}_t, \\lambda) \\geq \\beta_{t,\\delta}\\right)\n\\\\ &\\stackrel{(e)}{\\leq} \\mathbb{P}\\left(\\exists t \\geq 1 : L_t(\\hat{\\theta}_t, \\theta) \\geq \\beta_{t,\\delta}\\right) \n\\stackrel{(f)}{\\leq} \\delta,\n\\end{align*}\nwhere (d) holds since the event under which all pieces for $i$ have been eliminated implies that $\\bar{p}_i$ has been eliminated as well, (e) holds since $\\theta \\in \\Lambda_{\\bar{p}_i}(i)$, and (f) is from the assumption on threshold $\\beta_{t,\\delta}$. This concludes the proof.\n\\end{proof}\n\n\\subsection{Proof of Theorem \\ref{th:elim-better-than-llr}}\\label{app:monotonicity}\n\nTheorem~\\ref{th:elim-better-than-llr} was proved in the main text. We prove here a more general result about the monotonicity of the stopping time with respect to the piece decomposition.\nWe show that if two algorithms use the same sampling rules and use elimination stopping based on different piece decompositions, the algorithm using the finer decomposition (in the sense defined below) will stop earlier.\n\n\\begin{definition}\nWe say that a piece decomposition $(\\Lambda_p(i))_{i \\in \\mathcal I,p \\in \\mathcal P(i)}$ is finer than another one $(\\tilde{\\Lambda}_p(i))_{i \\in \\mathcal I,p \\in \\tilde{\\mathcal P}(i)}$ if for all $i \\in \\mathcal I, p \\in \\tilde{\\mathcal P}(i)$, there exists a set $S \\subseteq \\mathcal P(i)$ such that $\\tilde{\\Lambda}_p(i) = \\bigcup_{q \\in S}\\Lambda_q(i)$.\n\\end{definition}\n\n\n\\begin{theorem}\\label{thm:elimination_mono_decomposition}\nLet $\\mathcal D = (\\Lambda_p(i))_{i \\in \\mathcal I,p \\in \\mathcal P(i)}$ be a finer decomposition than $\\tilde{\\mathcal D} = (\\tilde{\\Lambda}_p(i))_{i \\in \\mathcal I,p \\in \\tilde{\\mathcal P}(i)}$. For $i \\in \\mathcal I,\\tilde{p} \\in \\tilde{\\mathcal P}(i)$, let $\\tau_{\\tilde{p}}$ and $\\tilde{\\tau}_{\\tilde{p}}$ be the times at which $\\tilde{\\Lambda}_{\\tilde{p}}(i)$ is eliminated by the two corresponding algorithms (in the sense of Definition~\\ref{def:elimination}). Then almost surely $\\tau_{\\tilde{p}} \\le \\tilde{\\tau}_{\\tilde{p}}$.\n\\end{theorem}\nRoughly, if a piece in the tilde decomposition corresponds to several pieces in the other, then it is faster to eliminate it as several pieces than as one piece.\n\n\\begin{proof}\nIt is enough to prove that whenever the elimination stopping rule of $\\tilde{\\Lambda}_{\\tilde{p}}(i)$ triggers for $\\tilde{\\mathcal D}$, it triggers for $\\mathcal D$ too. So, let $t\\geq 1$ and suppose that\n\\begin{align*}\n\\inf_{\\lambda \\in \\tilde{\\Lambda}_{\\tilde{p}}(i)} L_t(\\hat{\\theta}_t, \\lambda) \\geq \\beta_{t,\\delta}.\n\\end{align*}\nLet $S_{\\tilde{p}} \\subseteq \\cP(i)$ be the set corresponding to $\\tilde{p}$ in the definition of ``finer decomposition''. We first argue that\n\\begin{align*}\n\\inf_{\\lambda \\in \\tilde{\\Lambda}_{\\tilde{p}}(i)} L_t(\\hat{\\theta}_t, \\lambda)\n&= \\inf_{q\\in S_{\\tilde{p}}}\\inf_{\\lambda \\in \\Lambda_q(i)} L_t(\\hat{\\theta}_t, \\lambda) \\: .\n\\end{align*}\nIndeed, this is simply writing the infimum over a union as an infimum of infima. 
Hence if the $\\tilde{p}$ piece is eliminated in $\\tilde{\\mathcal D}$, we have\n\\begin{align*}\n\\inf_{q\\in S_{\\tilde{p}}}\\inf_{\\lambda \\in \\Lambda_q(i)} L_t(\\hat{\\theta}_t, \\lambda)\n\\ge \\beta_{t, \\delta}\n\\end{align*}\nand every piece $\\Lambda_q(i)$ for $q \\in S_{\\tilde{p}}$ is eliminated as well in $\\mathcal D$. We get that the set $\\tilde{\\Lambda}_{\\tilde{p}}(i)$ is also eliminated in $\\mathcal D$.\n\\end{proof}\n\nAs a corollary, the finest possible decomposition is the one in which $\\mathcal P(i) = \\Lambda(i)$ and $\\Lambda_p(i) = \\{p\\}$, i.e., every point of $\\Lambda(i)$ is its own piece. This is not a computationally usable decomposition, but it is the theoretically best one for the sample complexity metric (for a fixed sampling rule).\n\nTheorem~\\ref{th:elim-better-than-llr} for full elimination compared to LLR stopping follows from Theorem~\\ref{thm:elimination_mono_decomposition} by setting $\\tilde{\\mathcal P}(i) = \\{0\\}$ and $\\tilde{\\Lambda}_p(i) = \\Lambda(i)$. Then the elimination stopping rule uses a finer decomposition than the LLR stopping rule.\n\n\n\\subsection{Proof of Theorem \\ref{th:piece-elim}}\n\nWe first present two important lemmas which will be used to prove the main statement for full elimination and selective elimination, respectively. For full elimination, the following lemma shows that, if a piece has not been eliminated, the information collected by the algorithm about it must be small.\n\n\n\n\n\\begin{lemma}\\label{lemma:upper-bound-sum-inf}\nConsider an algorithm that uses the full elimination stopping rule \\eqref{eq:elimination-sets}. Let $p\\in \\cP(i^\\star(\\theta))$. Then, for each time $t$ such that $E_t$ (Equation \\ref{eq:Et}) holds and $p\\in \\cP_t(i^\\star(\\theta))$,\n\\begin{align*}\n\\sqrt{\\inf_{\\lambda \\in \\Lambda_p(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)} < \\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}.\n\\end{align*}\nOn the other hand, if the algorithm uses the LLR stopping rule, for each time $t$ such that $E_t$ holds and the algorithm did not stop,\n\\begin{align*}\n\\sqrt{\\inf_{\\lambda \\in \\Lambda(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)} < \\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}.\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nSince $p\\in \\cP_t(i^\\star(\\theta))$ (i.e., $p$ has not been eliminated at time $t$), we have from \\eqref{eq:elimination-sets} and Lemma \\ref{lem:llr-to-kl-lin-gauss} that\n\\begin{align*}\n\\beta_{t,\\delta}\n &> \\inf_{\\lambda \\in \\Lambda_p(i^{\\star})} L_t(\\hat{\\theta}_t, \\lambda) \\geq \\inf_{\\lambda \\in \\Lambda_p(i^{\\star})}\\left(\\sqrt{\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)} - \\sqrt{L_t(\\hat{\\theta}_t,\\theta)}\\right)^2.\n\\end{align*}\nBy definition of event $E_t$, this implies\n\\begin{align*}\n\\sqrt{\\inf_{\\lambda \\in \\Lambda_p(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)} < \\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}.\n\\end{align*} \nThis yields the first statement. The second result can be shown analogously by using the full set of alternatives to $\\theta$.\n\n\\end{proof}\n\nSelective elimination updates, at each step $t$, only the set of active pieces of the empirical answer $i^\\star(\\hat{\\theta}_t)$. Therefore, to bound the elimination times of the pieces $\\Lambda_p(i^\\star)$ for $p\\in\\cP(i^\\star)$ we need to show that $i^\\star(\\hat{\\theta}_t) = i^\\star$ after a certain time. 
We show that Assumption \\ref{asm:sampling-rule} alone is sufficient to guarantee this.\n\n\\begin{lemma}\\label{lem:empirical-vs-true-answer}\nConsider a sampling rule satisfying Assumption \\ref{asm:sampling-rule}. Under event $E_t$, a sufficient condition for $i^\\star(\\hat{\\theta}_t) = i^\\star$ is\n\\begin{align}\nt \\geq \\frac{ 4\\beta_{t,1\/t^2} + R(\\theta,t)}{H^\\star(\\theta)}.\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nFirst note that, under $E_t$, if\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda(i^{\\star}(\\hat{\\theta}_t))} L_t(\\hat{\\theta}_t, \\lambda) > \\beta_{t,1\/t^2},\n\\end{align*}\nthen $i^\\star(\\hat{\\theta}_t) = i^\\star$. In fact, if this was not the case, we would have $\\theta \\in \\Lambda(i^{\\star}(\\hat{\\theta}_t))$ and thus $L_t(\\hat{\\theta}_t, \\theta) > \\beta_{t,1\/t^2}$, which is a contradiction with event $E_t$ itself. Let us now look for a sufficient condition on $t$ to satisfy this inequality. Take $t$ and suppose it does not satisfy it. Then,\n\\begin{align*}\n\\sqrt{\\beta_{t,1\/t^2}}\n \\geq \\sqrt{\\inf_{\\lambda \\in \\Lambda(i^{\\star}(\\hat{\\theta}_t))} L_t(\\hat{\\theta}_t, \\lambda)} \n &\\stackrel{(a)}{\\geq} \\sqrt{\\inf_{\\lambda \\in \\Lambda(i^{\\star}(\\hat{\\theta}_t))}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)} - \\sqrt{L_t(\\hat{\\theta}_t,\\theta)}\n \\\\ &\\stackrel{(b)}{\\geq} \\sqrt{\\inf_{\\lambda \\in \\Lambda(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)} - \\sqrt{\\beta_{t,1\/t^2}}\n \\\\ &\\stackrel{(c)}{\\geq} \\sqrt{t H^\\star(\\theta) - R(\\theta,t)} - \\sqrt{\\beta_{t,1\/t^2}},\n\\end{align*}\nwhere (a) is from Lemma \\ref{lem:llr-to-kl-lin-gauss}, (b) from $E_t$ and the fact that either $i^\\star(\\hat{\\theta}_t) = i^\\star$ or the infimum is zero, and (c) from Assumption \\ref{asm:sampling-rule}. Rearranging this inequality yields the desired condition on $t$.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{th:piece-elim}]\nWe start by proving the sample complexity bound for LLR stopping. Since it is a bound on the elimination time of the whole alternative $\\Lambda(i^\\star)$, by definition it also holds for the elimination times of its pieces obtained with either full or selective elimination. We then move to full elimination stopping and selective elimination stopping. Finally, we prove that the left-hand side of the maximum in \\eqref{eq:elimination_time} is never larger than the bound for LLR stopping.\n\n\\paragraph{LLR stopping}\n\nLet $t > 0$ such that $E_t$ holds and the algorithm did not stop. The second statement in Lemma \\ref{lemma:upper-bound-sum-inf} yields\n\\begin{align*}\n\\sqrt{\\inf_{\\lambda \\in \\Lambda(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)} < \\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}.\n\\end{align*}\nMoreover, from Assumption \\ref{asm:sampling-rule}\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda(i^\\star)}\\sum_{k\\in[K]}N_t^k \\KL_k(\\theta,\\lambda) \\geq t H^\\star(\\theta) - R(\\theta,t).\n\\end{align*}\nThe combination of these two inequalities directly yields the stated inequality on $t$. The result in expectation is obtained by applying Lemma 19 in \\citep{reda2021dealing} together with $\\mathbb{P}(\\neg E_t) \\leq 1\/t^2$.\n\n\\paragraph{Full elimination stopping}\n\nLet $t > 0$ such that $E_t$ holds. 
By Assumption \\ref{asm:sampling-rule},\n\\begin{align*}\nH^\\star(\\theta) \\leq \\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}\\frac{N_t^k}{t} \\KL_k(\\theta,\\lambda) + \\frac{R(\\theta,t)}{t}\n\\end{align*}\nThis implies that $\\frac{N_t}{t}\\in \\Omega_{R(\\theta,t)\/t}$.\n\nNow fix a piece $p\\in \\cP(i^\\star)$ such that $p\\in \\cP_t(i^\\star)$. Under event $E_t$, we know from Lemma \\ref{lemma:upper-bound-sum-inf} that\n\\begin{align*}\n\\sqrt{\\inf_{\\lambda \\in \\Lambda_p(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)}\n< \\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}.\n\\end{align*}\nSince $\\frac{N_t}{t}\\in \\Omega_{R(\\theta,t)\/t}$,\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda_p(i^{\\star})}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)\n \\geq t \\min_{\\omega \\in \\Omega_{R(\\theta,t)\/t}}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda).\n\\end{align*}\nCombining the last two displays, we obtain that, if $p$ is not eliminated at time $t$ and $E_t$ holds, $t$ itself must satisfy\n\\begin{align*}\nt < \\frac{\\left(\\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}\\right)^2}{\\min_{\\omega \\in \\Omega_{R(\\theta,t)\/t}}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda)}.\n\\end{align*}\nThe result in expectation is obtained by applying Lemma 19 in \\citep{reda2021dealing} together with $\\mathbb{P}(\\neg E_t) \\leq 1\/t^2$.\n\n\\paragraph{Selective elimination stopping}\n\nGiven Lemma \\ref{lem:empirical-vs-true-answer}, the proof of Theorem \\ref{th:piece-elim} for selective elimination stopping is very simple. Simply take a time $t$ such that $E_t$ holds and which verifies the condition in Lemma \\ref{lem:empirical-vs-true-answer}. Then, for such a $t$ the first claim of Lemma \\ref{lemma:upper-bound-sum-inf} can be verified analogously since the empirical answer (the one for which the set of active pieces is updated) is exactly the correct answer. Given Lemma \\ref{lemma:upper-bound-sum-inf}, the same derivation as in the proof for full elimination can be carried out. This yields the following sufficient condition on the time $t$ to eliminate a piece $p\\in\\cP(i^\\star)$:\n\\begin{align*}\nt \\geq \\max\\left\\{\\frac{\\left(\\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}\\right)^2}{\\min_{\\omega \\in \\Omega_{R(\\theta,t)\/t}}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda)} , \\frac{ 4\\beta_{t,1\/t^2} + R(\\theta,t)}{H^\\star(\\theta)}\\right\\}.\n\\end{align*}\n\n\\paragraph{Comparison of the bounds for full elimination and LLR stopping}\n\nWe finally prove that a sufficient condition for $t \\geq \\bar{t}_p$, with $\\bar{t}_p$ associated to full elimination, is $t\\geq\\bar{t}$.\n\nTake any $p\\in \\cP(i^\\star)$. 
By definition of the set $\\Omega_\\epsilon$, we have that, for any $\\omega\\in\\Omega_\\epsilon$,\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda) \\geq \\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda) \\geq H^\\star(\\theta) - \\epsilon.\n\\end{align*}\nThis implies that\n\\begin{align*}\n\\min_{\\omega\\in\\Omega_\\epsilon}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda) \\geq \\max_{\\omega\\in\\Delta_K}\\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda) - \\epsilon.\n\\end{align*}\nA sufficient condition for satisfying the inequality for piece elimination is thus\n\\begin{align*}\nt \\geq \\frac{\\left(\\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}\\right)^2}{\\max_{\\omega\\in\\Delta_K}\\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda)- R(\\theta,t)\/t}.\n\\end{align*}\nLet $\\gamma \\in (0,1)$. For $R(\\theta,t)\/t \\leq \\gamma H^\\star(\\theta)$ we have that\n\\begin{align*}\nt \\geq \\frac{\\left(\\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}\\right)^2}{(1-\\gamma)\\max_{\\omega\\in\\Delta_K}\\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda)}\n\\end{align*}\nsuffices. Therefore, taking the maximum between the condition above and $t \\geq \\frac{R(\\theta,t)}{\\gamma H^\\star(\\theta)}$ and optimizing over $\\gamma \\in (0,1)$, the inequality for piece elimination is verified if\n\\begin{align*}\nt \\geq \\frac{1}{H^\\star(\\theta)}\\inf_{\\gamma \\in (0,1)}\\max\\left\\{\\frac{\\left(\\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}\\right)^2}{(1-\\gamma)}, \\frac{R(\\theta,t)}{\\gamma } \\right\\}\n\\end{align*}\nOptimizing over $\\gamma$, which amounts to setting $\\gamma = \\frac{R(\\theta,t)}{\\left(\\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}\\right)^2 + R(\\theta,t)}$, yields the desired statement.\n\n\\end{proof}\n\n\\subsection{Proof of Theorem \\ref{th:elim-vs-llr-fixed-sampling}}\\label{subapp:proof_elim_vs_llr}\n\n\\begin{proof}\nTake any time $t$ and suppose that $E_t$ holds while the algorithm did not stop with $\\tau_{\\mathrm{llr}}$ yet. Using the first result of Theorem \\ref{th:piece-elim} yields\n\\begin{align*}\nt < H^\\star(\\theta)^{-1}\\left( \\left(\\sqrt{\\beta_{t,\\delta}} + \\sqrt{\\beta_{t,1\/t^2}}\\right)^2 + R(\\theta,t) \\right)\n\\: .\n\\end{align*}\nFrom Lemma 19 in \\citep{reda2021dealing}, we get that $\\mathbb{E}[\\tau_{\\mathrm{llr}}] \\leq \\bar{t} + 2$, where $\\bar{t}$ is the first time that does not satisfy the inequality above (it is a function of $\\theta$ and $\\delta$). This means that\n\\begin{align*}\n\\mathbb{E}[\\tau_{\\mathrm{llr}}] \\leq H^\\star(\\theta)^{-1}\\left( \\left(\\sqrt{\\beta_{\\bar{t},\\delta}} + \\sqrt{\\beta_{\\bar{t},1\/\\bar{t}^2}}\\right)^2 + R(\\theta,\\bar{t}) \\right) + 3.\n\\end{align*}\n\nNow let us take the same algorithm with $\\tau_{\\mathrm{elim}}$. 
We know it is $\\delta$-correct, so the standard lower bound states that\n\\begin{align*}\n\\mathbb{E}[\\tau_{\\mathrm{elim}}] \\geq H^\\star(\\theta)^{-1}\\log(1\/2.4\\delta).\n\\end{align*}\nWe get that\n\\begin{align*}\n&H^\\star(\\theta)\\mathbb{E}[\\tau_{\\mathrm{llr}}]\n\\\\\n&\\le H^\\star(\\theta)\\mathbb{E}[\\tau_{\\mathrm{elim}}] + 3H^\\star(\\theta) + \\left(\\sqrt{\\beta_{\\bar{t},\\delta}} + \\sqrt{\\beta_{\\bar{t},1\/\\bar{t}^2}}\\right)^2 + R(\\theta,\\bar{t}) - \\log \\frac{1}{2.4 \\delta}\n\\\\\n&= H^\\star(\\theta)\\mathbb{E}[\\tau_{\\mathrm{elim}}]+ 3H^\\star(\\theta) + \\beta_{\\bar{t},\\delta} - \\log\\frac{1}{\\delta} + 2 \\sqrt{\\beta_{\\bar{t},\\delta}\\beta_{\\bar{t},1\/\\bar{t}^2}} + \\beta_{\\bar{t},1\/\\bar{t}^2} + R(\\theta,\\bar{t}) + \\log (2.4) \\: .\n\\end{align*}\nWe now use the hypothesis $\\beta_{\\bar{t}, \\delta} = \\log\\frac{1}{\\delta} + \\xi(\\bar{t}, \\delta)$:\n\\begin{align*}\nH^\\star(\\theta)\\mathbb{E}[\\tau_{\\mathrm{llr}}]\n&\\le H^\\star(\\theta)\\mathbb{E}[\\tau_{\\mathrm{elim}}] + 3H^\\star(\\theta) + \\xi(\\bar{t}, \\delta) + 2 \\sqrt{\\beta_{\\bar{t},\\delta}\\beta_{\\bar{t},1\/\\bar{t}^2}} + \\beta_{\\bar{t},1\/\\bar{t}^2} + R(\\theta,\\bar{t}) + \\log (2.4) \\: .\n\\end{align*}\n\\end{proof}\n\n\n\\section{Existing Algorithms Satisfy Assumption \\ref{asm:sampling-rule} and \\ref{asm:sampling-rule-v2}}\\label{app:assumptions}\n\nIn this section, we show that existing sampling rules that target the optimal value of the max-inf game from the general lower bound of \\citep{kaufmann2016complexity} satisfy Assumption \\ref{asm:sampling-rule}. Moreover, we show that, when such sampling rules are combined with elimination (as in Section \\ref{sec:elimination_at_sampling}), they also satisfy Assumption \\ref{asm:sampling-rule-v2}. We do that explicitly for two algorithms: the game-theoretic approach based on no-regret learners of \\citep{degenne2019non} and an optimistic variant of the Track-and-Stop algorithm of \\citep{kaufmann2016complexity}. These two algorithms are sufficiently general to be representative of many existing approaches for pure exploration: the latter represents those that repeatedly solve the optimization problem from the lower bound to get optimal allocations, while the former represents those that solve such problem incrementally. We now describe these two algorithmic techniques. Then, we show, through a unified proof, that they satisfy Assumption \\ref{asm:sampling-rule} and \\ref{asm:sampling-rule-v2}.\n\n\\subsection{Sampling Rules}\n\nWe describe the sampling rules of interest while trying to keep some of their design choices (e.g., confidence intervals, tracking, optimization, etc.) as general as possible. We do this because these sampling rules have been adapted to different pure exploration problems and bandit structures in the literature, for which such components would be different. 
This will allow us to have unified proofs that are actually agnostic to the specific setting.\n\n\\subsection{Common Assumptions}\n\nWe first state some assumptions on the main common components of these algorithms.\n\n\\paragraph{Bounded closest alternatives}\n\nFirst, we shall make a mild regularity assumption about the considered identification problem: the distance between any closest alternative and $\\theta$ is bounded.\n\\begin{assumption}\\label{asm:boundedness}\nThere exists a constant $B > 0$ such that, for any $\\omega \\in \\mathbb{R}_{\\geq 0}^K$ and any subset $\\Lambda\\subseteq\\cM$ which is exactly the union of arbitrary pieces, there exists a closest alternative\n\\begin{align*}\n\\lambda_\\omega \\in \\argmin_{\\lambda\\in\\Lambda}\\sum_{k=1}^K \\omega^k \\KL_k(\\theta,\\lambda)\n\\end{align*}\nsuch that\n\\begin{align*}\n\\max_{k\\in[K]} \\KL_k(\\theta,\\lambda_\\omega) \\leq B.\n\\end{align*}\n\\end{assumption}\nWe note that this assumption is satisfied in the identification problems we consider (see Appendix \\ref{app:problems}), for Gaussian rewards where pieces are half-spaces.\n\n\\paragraph{Tracking}\n\nAll the sampling rules we consider choose a sequence of proportions $(\\omega_t)_{t\\geq 1}$, where $\\omega_t\\in\\Delta_K$, and use tracking to select the next arm to play based on these. Formally, the arm played at time $t$ is\n\\begin{align}\\label{eq:tracking}\nk_t := \\mathrm{Track}\\left(\\sum_{s=1}^t\\omega_s, N_{t-1}\\right),\n\\end{align}\nwhere $\\mathrm{Track}: \\mathbb{R}_+^K \\times \\mathbb{R}_+^K \\rightarrow [K]$ is some tracking function. To gain generality, we shall keep the tracking function implicit in the remainder, while only requiring the following assumption (which is actually guaranteed by existing methods).\n\\begin{assumption}\\label{asm:tracking}\nThere exists a constant $C_{\\mathrm{track}} > 0$ such that\n\\begin{align*}\n\\forall t > 0, k\\in[K] : N_t^k \\geq \\sum_{s=1}^t \\omega_s^k -C_{\\mathrm{track}}.\n\\end{align*}\n\\end{assumption}\nFor instance, the widely-adopted cumulative tracking,\n\\begin{align*}\n\\mathrm{Track}\\left(\\sum_{s=1}^t\\omega_s, N_{t-1}\\right) = \\argmin_{k\\in[K]} \\left( N_{t-1}^k - \\sum_{s=1}^t\\omega_s^k \\right),\n\\end{align*}\nsatisfies this assumption.\n\n\\paragraph{Confidence intervals}\n\nThese sampling rules maintain confidence intervals $(c_t^k)_{t\\geq 1, k\\in[K]}$ about the expected return of each arm. We shall also keep them implicit as their specific form depends on the bandit structure under consideration (e.g., linear vs unstructured). We will only require the following assumption, which is satisfied by common choices as described below.\n\\begin{assumption}\\label{asm:confidence intervals}\nUnder event $E_t$,\n\\begin{align*}\n\\forall s \\leq t: \\KL_k(\\hat{\\theta}_{s},\\lambda) - c_{s}^k \\leq \\KL_k(\\theta,\\lambda) \\leq \\KL_k(\\hat{\\theta}_{s},\\lambda) + c_{s}^k.\n\\end{align*}\nMoreover, there exists a sub-linear (in $t$) function $C_{\\mathrm{conf}}(t)$ such that\n\\begin{align*}\n\\sum_{s=1}^t\\sum_{k\\in[K]}\\omega_s^kc_{s-1}^k \\leq C_{\\mathrm{conf}}(t).\n\\end{align*}\n\\end{assumption}\nIn bandits with linear structure, the confidence intervals typically take the form $c_t^k \\propto \\|\\phi_k\\|_{V_t^{-1}}$. 
In this case finding an upper bound on $\\sum_{s=1}^t\\sum_{k\\in[K]}\\omega_s^kc_{s-1}^k$ would reduce to applying an elliptical potential lemma \\citep{abbasi2011improved} plus the tracking property, and the resulting upper bound would be $C_{\\mathrm{conf}}(t) \\propto \\sqrt{dt}$. In unstructured bandits we can take $c_t^k \\propto 1\/\\sqrt{N_t^k}$ and finding $C_{\\mathrm{conf}}(t)$ would require applying the standard pigeon-hole principle plus the tracking property, for which one obtains $C_{\\mathrm{conf}}(t) \\propto \\sqrt{Kt}$.\n\n\n\\subsubsection{Optimistic Track-and-Stop}\\label{sec:optimistic-ts}\n\nHere we describe an optimistic variant of the Track-and-Stop algorithm by \\citep{kaufmann2016complexity}. It was originally introduced by \\citep{degenne2019non} in order to get rid of forced exploration, one of the main causes behind the poor empirical performance of Track-and-Stop. The idea is to solve, at each time step $t$, an optimistic variant of the optimization problem from the lower bound,\n\\begin{align}\\label{eq:optimistic-ts-optimization}\n\\omega_t := \\argmax_{\\omega\\in\\Delta_K}\\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_{t-1}))} \\sum_{k\\in[K]}\\omega^k \\left(\\KL_k(\\hat{\\theta}_{t-1},\\lambda) + c_{t-1}^k\\right),\n\\end{align}\nwhere $c_{t-1}^k$ is a per-arm confidence interval that satisfies Assumption \\ref{asm:confidence intervals}, thus ensuring optimism $\\KL_k(\\hat{\\theta}_{t-1},\\lambda) + c_{t-1}^k \\geq \\KL_k(\\theta,\\lambda)$ with high probability. The solution to this optimization problem yields proportions $\\omega_t$ which are tracked by the sampling rule. Then, the arm played at time $t$ is given by the tracking rule \\eqref{eq:tracking}.\n\n\\paragraph{Combining with elimination}\n\nIn order to combine this sampling rule with elimination, we simply redefine the optimization problem as\n\\begin{align}\\label{eq:optimistic-ts-optimization-elim}\n\\omega_t := \\argmax_{\\omega\\in\\Delta_K}\\min_{p\\in{\\cP}_{t-1}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{t-1}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{t-1}))} \\sum_{k\\in[K]}\\omega^k \\left(\\KL_k(\\hat{\\theta}_{t-1},\\lambda) + c_{t-1}^k\\right).\n\\end{align}\nThat is, we simply replace a minimization over the whole alternative with one over the active pieces only.\n\n\\subsubsection{Game-Theoretic Approach with No-Regret Learners}\\label{sec:no-regret}\n\nThe idea behind the game-theoretic approach of \\citep{degenne2019non} is to avoid recomputing the full optimistic problem \\eqref{eq:optimistic-ts-optimization} at each step, while solving it incrementally by means of no-regret learners. Given some online-learning algorithm $\\cL$ working on the $K$-dimensional simplex, the sampling rule works as follows. At each time $t$, first $\\cL$ outputs new proportions $\\omega_t$. Then, we compute the closest alternative\n\\begin{align*}\n\\hat{\\lambda}_t := \\argmin_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_{t-1}))}\\sum_{k\\in[K]}\\omega_t^k \\KL_k(\\hat{\\theta}_{t-1},\\lambda).\n\\end{align*}\nFinally, $\\cL$ is updated with the (concave) gain function\n\\begin{align*}\ng_t(\\omega) := \\sum_{k\\in[K]}\\omega^k \\left(\\KL_k(\\hat{\\theta}_{t-1},\\hat{\\lambda}_t) + c_{t-1}^k\\right).\n\\end{align*}\nFinally, the sampling rule uses tracking exactly as in \\eqref{eq:tracking} to decide the next arm to pull. As before, we will only require the tracking rule to satisfy Assumption \\ref{asm:tracking} without specifying an explicit form. 
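To fix ideas, the following is a minimal Python sketch of the computational primitive that both sampling rules share once combined with elimination: for given proportions, the minimum over the active pieces of the optimistic divergence to each piece, as in the inner minimization of \\eqref{eq:optimistic-ts-optimization-elim}. The sketch is specialized to an unstructured Gaussian bandit with unit variance and best arm identification, where the pieces of the alternative to an answer $i$ are the half-spaces $\\{\\lambda \\mid \\lambda_j \\ge \\lambda_i\\}$ and the inner infimum has a closed form; the random search over the simplex, the constants and all names are illustrative placeholders, not the optimizers used by the actual algorithms.\n\\begin{verbatim}\nimport numpy as np\n\ndef piece_inf(w, mu, i, j):\n    # inf over the half-space (lambda_j >= lambda_i) of sum_k w_k KL_k(mu, lambda)\n    # for unit-variance Gaussians: (mu_i - mu_j)^2 / (2 (1/w_i + 1/w_j)) if mu_i > mu_j\n    gap = mu[i] - mu[j]\n    if gap <= 0:\n        return 0.0\n    return 0.5 * gap ** 2 / (1.0 / max(w[i], 1e-12) + 1.0 / max(w[j], 1e-12))\n\ndef min_over_active_pieces(w, mu, i_hat, active, bonus):\n    # min over active pieces of inf_lambda sum_k w_k (KL_k(mu, lambda) + bonus_k);\n    # the bonus does not depend on lambda, so it separates as the additive term w . bonus\n    vals = {j: piece_inf(w, mu, i_hat, j) for j in active}\n    j_min = min(vals, key=vals.get)\n    return vals[j_min] + float(np.dot(w, bonus)), j_min\n\ndef optimistic_ts_step(mu_hat, i_hat, active, bonus, n_trials=500, seed=0):\n    # optimistic Track-and-Stop with elimination maximizes the value above over the\n    # simplex (here by naive random search); the game-theoretic rule would instead\n    # feed that value to a no-regret learner as the gain of its proposed proportions\n    rng = np.random.default_rng(seed)\n    K = len(mu_hat)\n    best_w, best_val = np.ones(K) / K, -np.inf\n    for _ in range(n_trials):\n        w = rng.dirichlet(np.ones(K))\n        val, _ = min_over_active_pieces(w, mu_hat, i_hat, active, bonus)\n        if val > best_val:\n            best_w, best_val = w, val\n    return best_w\n\nmu_hat = np.array([1.0, 0.8, 0.3, 0.75])  # empirical means, empirical best arm is 0\nw_t = optimistic_ts_step(mu_hat, i_hat=0, active=[1, 3], bonus=0.05 * np.ones(4))\nprint(np.round(w_t, 3))  # most mass typically on arms 0, 1, 3: piece 2 is eliminated\n\\end{verbatim}\nThe resulting proportions are then tracked via \\eqref{eq:tracking} exactly as without elimination.\n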
As with the tracking rule, we keep the learner implicit as long as it satisfies the following no-regret property.\n\\begin{assumption}\\label{asm:no-regret}\nThe learner $\\mathcal{L}$ is no-regret: there exists a sub-linear (in $t$) function $C_{\\mathcal{L}}(t)$ such that, for any $t\\geq 1$ and any sequence of gains $\\{g_s(\\omega)\\}_{s\\leq t}$,\n$$\n\\max_{\\omega\\in\\Delta_K}\\sum_{s=1}^t \\big( g_s(\\omega) -g_s(\\omega_s) \\big) \\leq C_{\\mathcal{L}}(t) \\: .\n$$\n\\end{assumption}\n\n\\paragraph{Combining with elimination}\n\nFor the game-theoretic approach, we only need to redefine the closest alternative used in the gains as\n\\begin{align*}\n(\\hat{p}_t, \\hat{\\lambda}_t) := \\argmin_{p\\in{\\cP}_{t-1}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{t-1})), \\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{t-1}))}\\sum_{k\\in[K]}\\omega_t^k \\KL_k(\\hat{\\theta}_{t-1},\\lambda).\n\\end{align*}\n\n\\subsection{Assumption \\ref{asm:sampling-rule} Holds}\n\nWe need to show that, for any time $t \\geq 1$ where the good event $E_t$ (Equation \\ref{eq:Et}) holds, the two sampling rules presented above satisfy\n\\begin{align*}\nt H^\\star(\\theta) \\leq \\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) + R(\\theta,t)\n\\end{align*}\nfor suitable choices of the function $R(\\theta,t)$. We shall write a unified proof for these two sampling rules while explicitly mentioning where they differ. Let us suppose that their algorithmic components satisfy the assumptions stated above, i.e., Assumption \\ref{asm:tracking} for tracking, Assumption \\ref{asm:confidence intervals} for the confidence intervals, and Assumption \\ref{asm:no-regret} for the no-regret learner.\n\nTake any time step $t\\geq 1$ and suppose that $E_t$ holds. Let $\\lambda_t := \\argmin_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)$. 
Then, using the tracking property (Assumption \\ref{asm:tracking}) together with Assumption \\ref{asm:boundedness},\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)\n &= \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda_t) \n \\\\\n &\\geq \\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda_t) - C_{\\mathrm{track}}\\sum_{k\\in[K]}\\KL_k(\\theta,\\lambda_t) \n \\\\ &\\geq \\inf_{\\lambda \\in \\Lambda(i^\\star)}\\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda) - C_{\\mathrm{track}}\\sum_{k\\in[K]}\\KL_k(\\theta,\\lambda_t) \n \\\\ &\\geq \\inf_{\\lambda \\in \\Lambda(i^\\star)}\\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda) - C_{\\mathrm{track}}KB.\n\\end{align*}\nWe can now lower bound the first term as\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda(i^\\star)}\\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda)\n &\\stackrel{(a)}{\\geq} \\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]} \\omega_s^k\\KL_k(\\theta,\\lambda)\n \\\\ &\\stackrel{(b)}{\\geq} \\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega_s^k\\KL_k(\\theta,\\lambda)\n \\\\ &\\stackrel{(c)}{\\geq} \\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega_s^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) - c_{s-1}^k \\right)\n \\\\ &= \\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega_s^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right) - 2\\sum_{s=1}^t\\sum_{k\\in[K]}\\omega_s^kc_{s-1}^k\n \\\\ &\\stackrel{(d)}{\\geq} \\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega_s^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right) - 2C_{\\mathrm{conf}}(t),\n\\end{align*}\nwhere (a) is from the concavity of the infimum, (b) holds since either $\\Lambda(i^\\star(\\hat{\\theta}_{s-1})) = \\Lambda(i^\\star)$ or $\\theta \\in \\Lambda(i^\\star(\\hat{\\theta}_{s-1}))$ (in which case the infimum would be zero), (c) is from the validity of the confidence intervals under $E_t$, and (d) is from Assumption \\ref{asm:confidence intervals}. Now note that, when applying the game-theoretic approach (Section \\ref{sec:no-regret}), the first term on the right-hand side is exactly the sum of gains fed into the learner. Thus, using the no-regret property (Assumption \\ref{asm:no-regret}),\n\\begin{align*}\n&\\inf_{\\lambda \\in \\Lambda(i^\\star)}\\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda)\n\\\\\n&\\geq \\max_{\\omega\\in\\Delta_K}\\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right) - 2C_{\\mathrm{conf}}(t) - C_{\\cL}(t).\n\\end{align*}\nIf instead we are applying optimistic Track-and-Stop (Section \\ref{sec:optimistic-ts}), the first term on the right-hand side is exactly the sum of optimal values of the objective functions maximized by the algorithm. 
Thus,\n\\begin{align*}\n&\\inf_{\\lambda \\in \\Lambda(i^\\star)}\\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda)\n\\\\\n&= \\sum_{s=1}^t \\max_{\\omega\\in\\Delta_K}\\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right) - 2C_{\\mathrm{conf}}(t)\n\\\\ &\\geq \\max_{\\omega\\in\\Delta_K} \\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right) - 2C_{\\mathrm{conf}}(t).\n\\end{align*}\nTherefore, we only need to lower bound the first term above, which is common between the two considered algorithms. We have\n\\begin{align*}\n&\\max_{\\omega\\in\\Delta_K} \\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right)\n\\\\\n&\\stackrel{(e)}{\\geq} \\max_{\\omega\\in\\Delta_K} \\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right)\n\\\\ &\\stackrel{(f)}{\\geq} \\max_{\\omega\\in\\Delta_K} \\sum_{s=1}^t \\inf_{\\lambda \\in \\Lambda(i^\\star)} \\sum_{k\\in[K]} \\omega^k \\KL_k({\\theta},\\lambda)\n\\\\ &\\stackrel{(g)}{\\geq} t H^\\star(\\theta),\n\\end{align*}\nwhere (e) follows from the same reasoning as step (b) above, (f) is from the fact that confidence intervals are valid under $E_t$, and (g) is from the definition of the game in the lower bound. Putting all together, we proved that optimistic Track-and-Stop satisfies Assumption \\ref{asm:sampling-rule} with\n\\begin{align*}\nR(\\theta,t) = C_{\\mathrm{track}}KB + 2C_{\\mathrm{conf}}(t),\n\\end{align*}\nwhile the game-theoretic approach satisfies it with\n\\begin{align*}\nR(\\theta,t) = C_{\\mathrm{track}}KB + 2C_{\\mathrm{conf}}(t) + C_{\\cL}(t).\n\\end{align*}\n\n\\subsection{Assumption \\ref{asm:sampling-rule-v2} Holds}\n\nWe now show that Assumption \\ref{asm:sampling-rule-v2} holds for these sampling rules combined with elimination as formally explained above. The main steps are very similar as before, with the additional complications posed by eliminating pieces at sampling. \n\nRecall that we want to show that, for any time $t \\geq 1$ where the good event $E_t$ (Equation \\ref{eq:Et}) holds, the two sampling rules presented above satisfy\n\\begin{align*}\n\\max_{\\omega\\in\\Delta_K}\\sum_{s=1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda) \\leq \\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) + R(\\theta,t)\n\\end{align*}\nfor suitable choices of the function $R(\\theta,t)$.\n\nTake any time step $t\\geq 1$ and suppose that $E_t$ holds. Let \n$$(p_t,\\lambda_t) \\in \\argmin_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star), \\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)$$ be the closest piece and alternative at time $t$. 
Then, using the tracking property (Assumption \\ref{asm:tracking}) and Assumption \\ref{asm:boundedness},\n\\begin{align*}\n\\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)}\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)\n &= \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda_t) \n\\\\ &\\geq \\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda_t) - C_{\\mathrm{track}}KB\n \\\\ &\\geq \\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)}\\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda) - C_{\\mathrm{track}}KB.\n\\end{align*}\nRecall that $\\overline{t}_j := {\\bar{t}_0}^{2^j}$ is the time step at which the $j$-th reset is performed and $j(t) := \\lfloor \\log_2\\log_{\\bar{t}_0} t \\rfloor$ is the index of the last reset before $t$. Let $\\bar{t} := \\overline{t}_{j(t)-1}$ be the time of the second-last reset before $t$. Note that\n\\begin{align*}\n\\bar{t} := {\\bar{t}_0}^{2^{j(t)-1}} = \\bar{t}_0^{\\frac{1}{2}2^{\\lfloor \\log_{2}\\log_{\\bar{t}_0} t \\rfloor}} = \\sqrt{\\bar{t}_0^{2^{\\lfloor \\log_{2}\\log_{\\bar{t}_0} t \\rfloor}}} = \\sqrt{\\overline{t}_{j(t)}}\\leq \\sqrt{t}.\n\\end{align*}\nWe can now lower bound the first term as\n\\begin{align*}\n&\\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)}\\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda)\n \\\\ &\\stackrel{(a)}{\\geq} \\sum_{s=1}^t \\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]} \\omega_s^k\\KL_k(\\theta,\\lambda)\n \\\\ &\\stackrel{(b)}{\\geq} \\sum_{s=\\bar{t}+1}^t \\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]} \\omega_s^k\\KL_k(\\theta,\\lambda)\n \\\\ &\\stackrel{(c)}{\\geq} \\sum_{s=\\bar{t}+1}^t \\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{s-1}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega_s^k\\KL_k(\\theta,\\lambda)\n \\\\ &\\stackrel{(d)}{\\geq} \\sum_{s=1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{s-1}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega_s^k\\KL_k(\\theta,\\lambda) - B\\sqrt{t}\n \\\\ &\\stackrel{(e)}{\\geq} \\sum_{s=1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{s-1}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega_s^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) - c_{s-1}^k \\right) - B\\sqrt{t}\n \\\\ &\\stackrel{(f)}{\\geq} \\sum_{s=1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{s-1}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega_s^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right) - 2C_{\\mathrm{conf}}(t) - B\\sqrt{t},\n\\end{align*}\nwhere (a) is from the concavity of the infimum, (b) drops the first $\\bar{t}$ rounds, (c) uses Lemma \\ref{lem:change-inf-from-istar}, (d) uses that $\\cP_t^{\\mathrm{smp}}$ is contained in all active sets from $\\bar{t}$ to $t$ and completes the sum with the first $\\bar{t}$ rounds (while bounding $\\bar{t} \\leq \\sqrt{t}$), (e) is from the validity of the confidence intervals under $E_t$, and (f) is from Assumption \\ref{asm:confidence intervals}.\n\nNow note that, when applying the game-theoretic approach (Section \\ref{sec:no-regret}), the first term on the right-hand side is exactly the sum of gains fed into the learner. 
Thus, using the no-regret property (Assumption \\ref{asm:no-regret}),\n\\begin{align*}\n\\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star)}&\\inf_{\\lambda \\in \\Lambda_p(i^\\star)}\\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda) \n\\\\ &\\geq \\max_{\\omega\\in\\Delta_K}\\sum_{s=1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{s-1}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right)\n\\\\&\\quad - 2C_{\\mathrm{conf}}(t) - C_{\\cL}(t) - B\\sqrt{t}.\n\\end{align*}\nIf instead we are applying optimistic Track-and-Stop (Section \\ref{sec:optimistic-ts}), the first term on the right-hand side is exactly the sum of optimal values of the objective functions maximized by the algorithm. Thus,\n\\begin{align*}\n&\\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)}\\sum_{k\\in[K]}\\sum_{s=1}^t\\omega_s^k\\KL_k(\\theta,\\lambda) \n\\\\ &\\geq \\sum_{s=1}^t \\max_{\\omega\\in\\Delta_K}\\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{s-1}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right) - 2C_{\\mathrm{conf}}(t) - B\\sqrt{t}\n\\\\ &\\geq \\max_{\\omega\\in\\Delta_K}\\sum_{s=1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{s-1}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right) - 2C_{\\mathrm{conf}}(t) - B\\sqrt{t}.\n\\end{align*}\nTherefore, we only need to lower bound the first term above, which is common between the two considered algorithms. We have\n\\begin{align*}\n&\\max_{\\omega\\in\\Delta_K} \\sum_{s=1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{s-1}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right)\n\\\\ &\\stackrel{(g)}{\\geq}\n\\max_{\\omega\\in\\Delta_K} \\sum_{s=\\bar{t}+1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{s-1}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{s-1}))} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right)\n\\\\ &\\stackrel{(h)}{\\geq} \\max_{\\omega\\in\\Delta_K} \\sum_{s=\\bar{t}+1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]} \\omega^k \\left( \\KL_k(\\hat{\\theta}_{s-1},\\lambda) + c_{s-1}^k \\right)\n\\\\ &\\stackrel{(i)}{\\geq} \\max_{\\omega\\in\\Delta_K} \\sum_{s=\\bar{t}+1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]} \\omega^k \\KL_k({\\theta},\\lambda)\n\\\\ &\\stackrel{(j)}{\\geq} \\max_{\\omega\\in\\Delta_K} \\sum_{s=1}^t \\min_{p\\in\\cP_{s-1}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]} \\omega^k \\KL_k({\\theta},\\lambda) - B\\sqrt{t},\n\\end{align*}\nwhere (g) drops the first $\\bar{t}$ rounds, (h) uses Lemma \\ref{lem:change-inf-from-ihat}, (i) is from the fact that confidence intervals are valid under $E_t$, and (j) adds the missing first $\\bar{t}$ rounds. 
Putting all together, we proved that optimistic Track-and-Stop satisfies Assumption \\ref{asm:sampling-rule-v2} with\n\\begin{align*}\nR(\\theta,t) = C_{\\mathrm{track}}KB + 2C_{\\mathrm{conf}}(t) + 2B\\sqrt{t},\n\\end{align*}\nwhile the game-theoretic approach satisfies it with\n\\begin{align*}\nR(\\theta,t) = C_{\\mathrm{track}}KB + 2C_{\\mathrm{conf}}(t) + C_{\\cL}(t) + 2B\\sqrt{t}.\n\\end{align*}\n\n\\begin{lemma}\\label{lem:theta-not-elim-in-last-two-phases}\nUnder event $E_t$, for any $s\\in\\mathbb{N}$ with $\\overline{t}_{j(t)-1} \\leq s \\leq t$,\n\\begin{align*}\nL_s(\\hat{\\theta}_s,\\theta) < \\alpha_{s,\\delta}.\n\\end{align*}\nMoreover, for any $s,s'\\in\\mathbb{N}$ with $\\overline{t}_{j(t)-1} \\leq s' \\leq s \\leq t$,\n\\begin{align*}\nL_{s'}(\\hat{\\theta}_{s'},\\hat{\\theta}_s) < \\alpha_{s',\\delta}.\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nUsing the good event $E_t$ followed by an application of Lemma \\ref{lem:new_beta-diff-between-phases} together with $s \\geq \\overline{t}_{j(t)-1}$,\n\\begin{align*}\nL_s(\\hat{\\theta}_s,\\theta) \\leq \\beta_{t,1\/t^2} \\leq 4 \\frac{c_2}{c_1}\\beta_{\\overline{t}_{j(t)-1},1\/\\overline{t}_{j(t)-1}^2} \\leq 4\\frac{c_2}{c_1}\\beta_{s,1\/s^2} < \\alpha_{s,\\delta}.\n\\end{align*}\nThis proves the first claim. To prove the second one, note that\n\\begin{align*}\n\\sqrt{L_{s'}(\\hat{\\theta}_{s'},\\hat{\\theta}_s)}\n \\stackrel{(a)}{=} \\sqrt{\\sum_{k\\in[K]}N_{s'}^k\\KL_k(\\hat{\\theta}_{s'},\\hat{\\theta}_s)}\n &\\stackrel{(b)}{\\leq} \\sqrt{\\sum_{k\\in[K]}N_{s'}^k\\KL_k(\\hat{\\theta}_{s'},\\theta)} + \\sqrt{\\sum_{k\\in[K]}N_{s'}^k\\KL_k(\\hat{\\theta}_s,\\theta)}\n \\\\ &\\stackrel{(c)}{\\leq} \\sqrt{\\sum_{k\\in[K]}N_{s'}^k\\KL_k(\\hat{\\theta}_{s'},\\theta)} + \\sqrt{\\sum_{k\\in[K]}N_{s}^k\\KL_k(\\hat{\\theta}_s,\\theta)}\n \\\\ &\\stackrel{(d)}{=} \\sqrt{L_{s'}(\\hat{\\theta}_{s'},\\theta)} + \\sqrt{L_{s}(\\hat{\\theta}_{s},\\theta)} \\stackrel{(e)}{\\leq} 2\\sqrt{\\beta_{t,1\/t^2}},\n\\end{align*}\nwhere (a) is from Corollary~\\ref{cor:llr-lin-gauss}, (b) is from the triangle inequality (recall that, for Gaussian distributions, the square root of the weighted sum of KLs is a norm), (c) is from the fact that the pull counts are non-decreasing and $s\\geq s'$, (d) is again from Corollary~\\ref{cor:llr-lin-gauss}, and (e) is from event $E_t$. Using Lemma \\ref{lem:new_beta-diff-between-phases} as before, we have $2\\sqrt{\\beta_{{t},1\/{t}^2}} \\leq 2\\sqrt{\\frac{c_2}{c_1}\\beta_{s',1\/s'^2}} < \\sqrt{\\alpha_{s',\\delta}}$. This proves the second statement.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:change-inf-from-istar}\nUnder event $E_t$, for any $i\\in\\cI$ and $\\omega\\in\\Delta_K$,\n\\begin{align*}\n \\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i^\\star)}\\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]} \\omega^k\\KL_k(\\theta,\\lambda)\n \\geq \\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i)}\\inf_{\\lambda \\in \\Lambda_p(i)} \\sum_{k\\in[K]} \\omega^k\\KL_k(\\theta,\\lambda).\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nThe statement follows trivially if $i=i^\\star$. So suppose $i\\neq i^\\star$. Since $i$ is not the answer of $\\theta$, by the union property of the decomposition into pieces, there exists $p\\in\\cP(i)$ such that $\\theta\\in\\Lambda_p(i)$. 
By Lemma \\ref{lem:theta-not-elim-in-last-two-phases}, for any $\\overline{t}_{j(t)-1} \\leq s \\leq t$,\n\\begin{align*}\n\\inf_{\\lambda\\in\\Lambda_p(i)} L_{s}(\\hat{\\theta}_{s},\\lambda) \\leq L_{s}(\\hat{\\theta}_{s},\\theta) < \\alpha_{s,\\delta}.\n\\end{align*}\nThis implies that $p\\in\\cP_{t}^{\\mathrm{smp}}(i)$ since that set is defined as the intersection of all active sets from $\\overline{t}_{j(t)-1}$ to $t$. Finally, we conclude that\n\\begin{align*}\n\\min_{p\\in\\cP_{t}^{\\mathrm{smp}}(i)}\\inf_{\\lambda \\in \\Lambda_p(i)} \\sum_{k\\in[K]} \\omega^k\\KL_k(\\theta,\\lambda) \\leq \\sum_{k\\in[K]} \\omega^k\\KL_k(\\theta,\\theta) = 0,\n\\end{align*}\nand the result follows since the left-hand side of the claimed inequality is a minimum of sums of KL divergences and is therefore non-negative.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:change-inf-from-ihat}\nUnder event $E_t$, for any $\\overline{t}_{j(t)-1} \\leq s \\leq t$, $i\\in\\cI$, and $\\omega\\in\\Delta_K$,\n\\begin{align*}\n \\min_{p\\in\\cP_{s}^{\\mathrm{smp}}(i^\\star(\\hat{\\theta}_{s}))}\\inf_{\\lambda \\in \\Lambda_p(i^\\star(\\hat{\\theta}_{s}))} \\sum_{k\\in[K]} \\omega^k \\KL_k(\\hat{\\theta}_{s},\\lambda)\n \\geq \\min_{p\\in\\cP_{s}^{\\mathrm{smp}}(i)}\\inf_{\\lambda \\in \\Lambda_p(i)} \\sum_{k\\in[K]} \\omega^k\\KL_k(\\hat{\\theta}_{s},\\lambda).\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nThe proof is very similar to the one of Lemma \\ref{lem:change-inf-from-istar}. The statement follows trivially if $i=i^\\star(\\hat{\\theta}_{s})$. So suppose $i\\neq i^\\star(\\hat{\\theta}_{s})$. Since $i$ is not the answer of $\\hat{\\theta}_{s}$, by the union property of the decomposition into pieces, there exists $p\\in\\cP(i)$ such that $\\hat{\\theta}_{s}\\in\\Lambda_p(i)$. By Lemma \\ref{lem:theta-not-elim-in-last-two-phases}, for any $\\overline{t}_{j(t)-1} \\leq s'\\leq s \\leq t$,\n\\begin{align*}\n\\inf_{\\lambda\\in\\Lambda_p(i)} L_{s'}(\\hat{\\theta}_{s'},\\lambda) \\leq L_{s'}(\\hat{\\theta}_{s'},\\hat{\\theta}_s) < \\alpha_{s',\\delta}.\n\\end{align*}\nThis implies that $p\\in\\cP_{s}^{\\mathrm{smp}}(i)$ since that set is defined as the intersection of all active sets from $\\overline{t}_{j(t)-1}$ to $s$. Therefore, we conclude that\n\\begin{align*}\n\\min_{p\\in\\cP_{s}^{\\mathrm{smp}}(i)}\\inf_{\\lambda \\in \\Lambda_p(i)} \\sum_{k\\in[K]} \\omega^k\\KL_k(\\hat{\\theta}_{s},\\lambda) \\leq \\sum_{k\\in[K]} \\omega^k\\KL_k(\\hat{\\theta}_{s},\\hat{\\theta}_{s}) = 0,\n\\end{align*}\nand the result again follows from the non-negativity of the left-hand side of the claimed inequality.\n\\end{proof}\n\n\n\\section{Introduction}\n\\label{sec:introduction}\n\nThe multi-armed bandit is a sequential decision-making task which is now extensively studied (see, e.g., \\citep{lattimore2020bandit} for a recent review). In this problem, an algorithm interacts with its environment by sequentially ``pulling'' one among $K \\in \\mathbb{N}$ arms and observing a sample from a corresponding distribution. Among the possible objectives, we focus on \\emph{fixed-confidence identification} \\citep{even2002pac, jamieson2014best,garivier2016optimal,chen2015optimal}. In this setting, the algorithm successively collects samples until it decides to stop and return an answer to a given query about the distributions. Its task is to return the correct answer with probability of error at most $\\delta$, and its secondary goal is to do so while stopping as early as possible.
This problem is called ``fixed-confidence'' as opposed to ``fixed-budget'', where the goal is to minimize the error probability with at most a given number of samples \\citep{bubeck2009pure,audibert2010best,gabillon2012best,carpentier2016tight,abbasi2018best}.\n\n\nThe most studied query is \\emph{best arm identification} (BAI), where the aim is to return the arm whose distribution has the highest mean. A variant is Top-m identification \\citep{kalyanakrishnan2012pac}, where the goal is to find the $m$ arms with highest means. While these are the most common, other queries have been studied, including thresholding bandits \\citep{carpentier2016tight}, minimum threshold \\citep{kaufmann2018sequential}, and multiple correct answers \\citep{degenne2019pure}.\n\nAlgorithms for fixed-confidence identification can be generally divided into two classes: those based on \\emph{adaptive sampling} and those based on \\emph{elimination}. Adaptive algorithms \\citep[e.g.,][]{gabillon2012best,kalyanakrishnan2012pac,garivier2016optimal,xu2018fully} update their sampling strategy at each round and typically stop when they can simultaneously assess the correctness of every answer. They often enjoy strong theoretical guarantees.\nFor instance, some of them \\citep{garivier2016optimal,degenne2019non,degenne2020gamification,wang2021fast} have been shown to be optimal as $\\delta\\rightarrow 0$. However, since they repeatedly test the correctness of every answer, they are often computationally demanding.\nElimination-based strategies \\citep[e.g.,][]{even2002pac,kaufmann2013information,soare2014best, fiez2019sequential,tao2018best} maintain a set of ``active'' answers (those that are still likely to be the correct one) and stop when only one remains. They typically update their sampling rules and\/or the active answers infrequently. This, together with the fact that eliminations reduce the problem size over time, makes them more computationally efficient, but at the cost of a larger sample complexity in practice.\nMoreover, while adaptive algorithms for general identification problems (i.e., with arbitrary queries) exist \\citep{garivier2016optimal,degenne2019non,wang2021fast}, elimination-based strategies are not easy to design at such a level of generality. In particular, they are not easy to extend to combinatorial problems (such as Top-m), where the number of answers is exponential in the problem dimension.\n\nIn this paper, we design a novel elimination rule for general identification problems which we call \\emph{selective} elimination. It can be easily combined with existing adaptive strategies, both in their stopping and sampling rules, making them achieve the best properties of the two classes mentioned above.\nIn particular, we prove that (1) selective elimination never suffers worse sample complexity than the original algorithm, and hence remains asymptotically optimal whenever the base algorithm is; (2) it provably discards some answers much earlier than the stopping time; (3) it improves the computational complexity of the original algorithm when some answers are eliminated early. Experimentally, we compare several existing algorithms for three identification problems (BAI, Top-m, and thresholding bandits) on two bandit structures (linear and unstructured).
We find that, coherently across all experiments, existing adaptive strategies achieve significant gains in computation time and, to a smaller extent, in sample complexity when combined with selective elimination.\n\n\\subsection{Bandit fixed-confidence identification}\n\\label{sub:bandit_identification}\n\nAn algorithm interacts with an environment composed of $K>1$ \\emph{arms}. At each time $t \\in \\mathbb{N}$, the algorithm picks an arm $k_t$ and observes $X_t^{k_t} \\sim \\nu_{k_t}$, where $\\nu_{k_t}$ is the distribution of arm $k_t$. \nAt a time $\\tau$, the algorithm stops and returns an answer $\\ihat$ from a finite set $\\mathcal I$. \nFormally, let $\\cF_t$ be the $\\sigma$-algebra generated by the observations up to time $t$. An identification algorithm is composed of\n\\begin{enumerate}[nosep]\n\t\\item \\emph{Sampling rule}: the sequence $(k_t)_{t \\in \\mathbb{N}}$, where $k_t$ is $\\mathcal F_{t-1}$-measurable.\n\t\\item \\emph{Stopping rule}: a stopping time $\\tau$ with respect to $(\\cF_t)_{t\\in\\mathbb{N}}$ and a random variable $\\ihat \\in \\mathcal I$, i.e., the answer returned when stopping at time $\\tau$.\n\\end{enumerate}\nNote that, while it is common to decouple $\\tau$ and $\\ihat$, we group them to emphasize that the time at which an algorithm stops depends strongly on the answer it plans on returning.\n\nWe assume that the arm distributions depend on some unknown parameter $\\theta\\in\\cM$, where $\\cM \\subseteq \\mathbb{R}^d$ is the set of possible parameters, and write $\\nu_k(\\theta)$ for $k\\in[K]$ to make this dependence explicit. For simplicity, we shall use $\\theta$ to refer to the bandit problem $(\\nu_k(\\theta))_{k\\in[K]}$.\nThis assumption allows us to include linear bandits in our analysis.\nWe let $i^\\star: \\cM \\to \\mathcal I$ be the function, known to the algorithm, which returns the unique correct answer for each problem. The algorithm is correct on $\\theta$ if $\\ihat = i^\\star(\\theta)$. \n\n\\begin{definition}[$\\delta$-correct algorithm]\nAn algorithm is said to be $\\delta$-correct on $\\mathcal M \\subseteq \\mathbb{R}^d$ if for all $\\theta \\in \\mathcal M$, $\\tau < +\\infty$ almost surely and\n$\n\\mathbb{P}_\\theta(\\ihat \\ne i^\\star(\\theta) )\n\\le \\delta \\: .\n$\n\\end{definition}\n\nWe want to design algorithms that, given a value $\\delta$, are $\\delta$-correct on $\\mathcal M$ and have minimal expected sample complexity $\\mathbb{E}_\\theta[\\tau]$ for all $\\theta\\in\\cM$. A lower bound on $\\mathbb{E}_\\theta[\\tau]$ was proved in \\citep{garivier2016optimal}. In order to present it, we introduce the concept of \\emph{alternative} set to an answer $i\\in\\cI$: $\\Lambda(i) := \\{\\lambda \\in \\mathcal M \\mid i^\\star(\\lambda) \\ne i\\}$, the set of parameters for which the correct answer is not $i$. 
Let us denote by $\\KL_k(\\theta,\\lambda)$ the Kullback-Leibler (KL) divergence between the distribution of arm $k$ under $\\theta$ and $\\lambda$.\nThen the lower bound states that for any algorithm that is $\\delta$-correct on $\\cM$ and any problem $\\theta\\in\\cM$,\n\\begin{align}\n\\mathbb{E}_\\theta[\\tau]\n&\\ge \\log(1\/(2.4\\delta))\/H^\\star(\\theta)\n\\: ,\n\\text{with }\nH^\\star(\\theta)\n:= \\max_{\\omega\\in\\Delta_K}\\inf_{\\lambda \\in \\Lambda(i^\\star(\\theta))} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda)\n\\: .\\label{eq:lower_bound}\n\\end{align}\n\n\n\\paragraph{Example: BAI in Gaussian linear bandits}\n\nWhile our results apply to general queries, we illustrate all statements of this paper on the widely-studied task of BAI in Gaussian linear bandits \\citep{soare2014best,xu2018fully,zaki2020explicit,degenne2020gamification,jedra2020optimal}. \nIn this setting, each arm $k\\in[K]$ has a Gaussian distribution $\\mathcal N(\\mu_k(\\theta), 1)$ with mean $\\mu_k(\\theta) = \\phi_k^\\top \\theta$, a linear function of the unknown parameter $\\theta \\in \\mathbb{R}^d$ (and $\\mathcal M = \\mathbb{R}^d$) and of known arm features $\\phi_k\\in\\mathbb{R}^d$. The set of answers is $\\mathcal I = [K]$ and the correct answer is $i^\\star(\\theta) := \\argmax_{k\\in[K]}\\phi_k^\\top \\theta$.\n\nFinally, for $x \\in \\mathbb{R}^d$ and $A \\in \\mathbb{R}^{d \\times d}$, we define $\\Vert x \\Vert_A := \\sqrt{x^\\top A x}$. For $\\omega \\in \\mathbb{R}^K$, let $V_\\omega := \\sum_{k=1}^K \\omega^k \\phi_k \\phi_k^\\top$. With this notation, we have $\\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda) = \\frac{1}{2}\\Vert \\theta - \\lambda \\Vert_{V_\\omega}^2$.\n\n\\subsection{Log-likelihood ratio stopping rules}\n\\label{sub:stopping_rules}\n\nMost existing adaptive algorithms use a log-likelihood ratio (LLR) test in order to decide when to stop. Informally, they check whether sufficient information has been collected to confidently discard at once all answers except one. Since such LLR tests are crucial for the design of our general elimination rules, we now describe their principle.\n \nGiven two parameters $\\theta,\\lambda \\in \\cM$, the LLR of observations $X_{[t]} = (X_1^{k_1}, \\ldots, X_t^{k_t})$ between models $\\theta$ and $\\lambda$ is\n$\nL_t(\\theta, \\lambda)\n:= \\log \\frac{d \\mathbb{P}_\\theta}{d \\mathbb{P}_\\lambda}(X_{[t]})\n= \\sum_{s=1}^t \\log \\frac{d \\mathbb{P}_\\theta}{d \\mathbb{P}_\\lambda}(X_s^{k_s})\n$~. Let $\\hat{\\theta}_t := \\argmax_{\\lambda\\in\\cM} \\log \\mathbb{P}_\\lambda(X_{[t]})$ be the maximum likelihood estimator of $\\theta$ from $t$ observations. In Gaussian linear bandits, we have\n$\nL_t(\\theta, \\lambda)\n= \\frac{1}{2}\\Vert \\theta - \\lambda \\Vert^2_{V_{N_t}} + (\\theta - \\lambda)^\\top V_{N_t} (\\hat{\\theta}_t - \\theta)\n$~, where $N_t^k := \\sum_{s=1}^t \\indi{k_s=k}$.\nSee Appendix~\\ref{sec:exponential_families} for more details. $L_t(\\theta, \\lambda)$ is closely related to $\\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda)$, a quantity that appears frequently in our results. Indeed, the difference between these quantities is a martingale, which is a lower order term compared to them. The LLR stopping rule was introduced to the bandit literature in \\citep{garivier2016optimal}. 
At each step $t \\in \\mathbb{N}$, the algorithm computes the infimum LLR to the alternative set of $i^\\star(\\hat{\\theta}_t)$ and stops if it exceeds a threshold, i.e., if\n\\begin{align}\\label{eq:llr-stop}\n\\inf_{\\lambda \\in \\Lambda(i^\\star(\\hat{\\theta}_t))} L_t(\\hat{\\theta}_t, \\lambda)\n\\ge \\beta_{t, \\delta} \\: ,\n\\end{align}\nwhere the function $\\beta_{t, \\delta}$ can vary, notably based on the shape of the alternative sets. The recommendation rule is then $\\ihat = i^\\star(\\hat{\\theta}_t)$. Informally, the algorithm stops if it has enough information to exclude all points $\\lambda$ for which the answer is not $i^\\star(\\hat{\\theta}_t)$.\nThis stopping rule enforces $\\delta$-correctness, provided that the sampling rule ensures $\\tau < + \\infty$ a.s. and that $\\beta_{t,\\delta}$ is properly chosen. The most popular choice is to ensure a concentration property of $L_t(\\hat{\\theta}_t, \\theta)$. For example, if for all $\\delta$, $\\beta_{t,\\delta}$ guarantees that\n\\begin{align}\\label{eq:concentration-beta}\n\\mathbb{P}\\left( \\exists t \\geq 1 : L_t(\\hat{\\theta}_t, \\theta) \\geq \\beta_{t,\\delta}\\right) \\leq \\delta,\n\\end{align}\nLLR stopping with that threshold returns a wrong answer with probability at most $\\delta$. Such concentration bounds can be found in \\citep{abbasi2011improved,magureanu2014lipschitz} for linear and unstructured bandits, respectively. This LLR stopping rule is used in many algorithms \\citep{garivier2016optimal,xu2018fully,degenne2019non,degenne2020gamification,jedra2020optimal,wang2021fast}\\footnote{LinGapE \\citep{xu2018fully} does not use LLR stopping explicitly, but its stopping rule is equivalent to it. We can write it as: stop if for all points inside a confidence region a gap is small enough, that is if all those points do not belong to the alternative of $i^\\star(\\hat{\\theta}_t)$. The contrapositive of that statement is exactly LLR stopping.}.\nSome of them have been proven\nto be \\emph{asymptotically optimal}: their sample complexity upper bound matches the lower bound~\\eqref{eq:lower_bound} when $\\delta \\to 0$. 
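\n\nTo make the rule concrete, the following minimal sketch (purely illustrative; it assumes unit-variance Gaussian linear BAI as in the example above, \\texttt{numpy}, an invertible design matrix, and a user-supplied threshold playing the role of $\\beta_{t,\\delta}$) evaluates the stopping condition \\eqref{eq:llr-stop} through the closed-form half-space projections detailed in Appendix~\\ref{app:problems}.\n\\begin{verbatim}\nimport numpy as np\n\ndef llr_stop_bai(theta_hat, V_t, Phi, beta_t_delta):\n    # theta_hat: (d,) MLE at time t; V_t: (d, d) design matrix (assumed invertible)\n    # Phi: (K, d) arm features; beta_t_delta: threshold beta_{t, delta}\n    means = Phi @ theta_hat\n    i_hat = int(np.argmax(means))          # empirical answer i*(theta_hat)\n    V_inv = np.linalg.inv(V_t)\n    llr_min = np.inf\n    for j in range(Phi.shape[0]):\n        if j == i_hat:\n            continue\n        gap = means[i_hat] - means[j]      # >= 0 by definition of i_hat\n        diff = Phi[i_hat] - Phi[j]\n        # value of L_t(theta_hat, .) at the closest point of the half-space\n        # where arm j beats arm i_hat\n        llr_min = min(llr_min, 0.5 * gap ** 2 \/ (diff @ V_inv @ diff))\n    return llr_min >= beta_t_delta, i_hat  # (stop?, recommended answer)\n\\end{verbatim}\nEach call performs $K-1$ half-space minimizations per round.\n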
However, improvements to these asymptotically optimal algorithms are still possible: their sample complexity for moderate $\\delta$ may not be optimal and their computational complexity can be reduced, as we will see.\n\n\\section{Deleting Pieces}\n\\label{sec:deleting_pieces}\n\nThe main question we answer in this section is ``what happens to the set of optimal allocations when we delete a piece of the alternative?''.\n\nFor a set $\\Lambda$, let $\\Omega(\\Lambda) = \\argmax_{\\omega \\in \\triangle_K} \\inf_{\\lambda \\in \\Lambda} \\sum_{k = 1}^K \\omega^k \\KL_k(\\mu, \\lambda)$. $\\Omega(\\Lambda)$ is a convex and closed set for any set $\\Lambda$.\n\n\\begin{lemma}\\label{lem:optimal_alloc_restrict_alternative_subset}\nIf $\\Lambda' \\subseteq \\Lambda$ and for some $\\varepsilon > 0$, $\\{\\lambda \\in \\Lambda \\mid \\exists \\omega \\in \\Omega(\\Lambda), \\sum_{k = 1}^K \\omega^k \\KL_k(\\mu, \\lambda) \\le \\inf_{\\eta \\in \\Lambda} \\sum_{k = 1}^K \\omega^k \\KL_k(\\mu, \\eta) + \\varepsilon\\} \\subseteq \\Lambda'$, then $\\Omega(\\Lambda) \\subseteq \\Omega(\\Lambda')$. \n\\end{lemma}\n\\begin{proof}\nWe assume that, for every $\\omega \\in \\Omega(\\Lambda)$, there exists a distribution $q_\\omega$ over points in $\\Lambda$ such that $(\\omega, q_\\omega)$ is a Nash equilibrium.\n\nThose $q_\\omega$ distributions are also distributions over $\\Lambda'$ since they are supported on points in $\\{\\lambda \\in \\Lambda \\mid \\sum_{k = 1}^K \\omega^k \\KL_k(\\mu, \\lambda) = \\inf_{\\eta \\in \\Lambda} \\sum_{k = 1}^K \\omega^k \\KL_k(\\mu, \\eta)\\}$.
Then for any $\\omega \\in \\triangle_K$ and $\\omega^* \\in \\Omega(\\Lambda)$ we have \n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda'} \\sum_k \\omega^{k} \\KL_k(\\mu, \\lambda)\n&\\le \\mathbb{E}_{\\lambda \\sim q_{\\omega^*}}\\sum_k \\omega^k \\KL_k(\\mu, \\lambda)\n\\\\\n&\\stackrel{(a)}{\\le} \\mathbb{E}_{\\lambda \\sim q_{\\omega^*}}\\sum_k \\omega^{*k} \\KL_k(\\mu, \\lambda)\n\\stackrel{(b)}{=} \\inf_{\\lambda \\in \\Lambda} \\sum_k \\omega^{*k} \\KL_k(\\mu, \\lambda)\n\\stackrel{(c)}{\\le} \\inf_{\\lambda \\in \\Lambda'} \\sum_k \\omega^{*k} \\KL_k(\\mu, \\lambda)\n\\: .\n\\end{align*}\n$(a)$ and $(b)$ are consequences of the fact that $(\\omega^*, q_{\\omega^*})$ is a Nash equilibrium. $(c)$ comes from $\\Lambda' \\subseteq \\Lambda$.\n\nWe have thus proved that, on $\\Lambda'$, the value of any allocation $\\omega^* \\in \\Omega(\\Lambda)$ is at least as large as the value of any other allocation. Hence $\\Omega(\\Lambda) \\subseteq \\Omega(\\Lambda')$.\n\\end{proof}\n\n\\begin{theorem}\\label{thm:restricting_does_not_change_optimal_allocations}\nLet $\\varepsilon > 0$ and let $\\Lambda_{\\varepsilon} \\subseteq \\Lambda$ be such that for all $\\lambda_\\varepsilon \\in \\Lambda_{\\varepsilon}$ and all $\\omega \\in \\Omega(\\Lambda)$, $\\sum_{k = 1}^K \\omega^k \\KL_k(\\mu, \\lambda_\\varepsilon) \\ge \\inf_{\\lambda \\in \\Lambda} \\sum_{k = 1}^K \\omega^k \\KL_k(\\mu, \\lambda) + \\varepsilon$. Then $\\Omega(\\Lambda \\setminus \\Lambda_\\varepsilon) = \\Omega(\\Lambda)$. \n\\end{theorem}\n\\begin{proof}\nWe have the inclusion $\\Omega(\\Lambda) \\subseteq \\Omega(\\Lambda \\setminus \\Lambda_\\varepsilon)$ by Lemma~\\ref{lem:optimal_alloc_restrict_alternative_subset}.\n\nNow suppose that there exists $\\omega' \\in \\Omega(\\Lambda \\setminus \\Lambda_\\varepsilon) \\setminus \\Omega(\\Lambda)$. Let $\\omega^* \\in \\Omega(\\Lambda)$ be such that for all $r \\in (0,1]$, $\\omega_r := r \\omega' + (1 - r) \\omega^* \\notin \\Omega(\\Lambda)$. We can get such an $\\omega^*$ by taking an arbitrary $\\omega^*_0 \\in \\Omega(\\Lambda)$ and defining $\\omega^* = r^* \\omega' + (1 - r^*)\\omega_0^*$, where $r^* = \\sup \\{r \\in [0,1] \\mid r \\omega' + (1 - r)\\omega_0^* \\in \\Omega(\\Lambda)\\}$. We know that $r^* < 1$ since $\\Omega(\\Lambda)$ is closed.\n\nBy convexity of $\\Omega(\\Lambda \\setminus \\Lambda_\\varepsilon)$ and since both $\\omega'$ and $\\omega^*$ belong to that set, we have $\\omega_r \\in \\Omega(\\Lambda \\setminus \\Lambda_\\varepsilon)$ for all $r \\in [0,1]$.\n\nFix $r \\in (0,1]$. Since $\\omega_r \\notin \\Omega(\\Lambda)$, there exists $\\lambda_r \\in \\Lambda$ such that $\\sum_{k = 1}^K \\omega_r^k \\KL_k(\\mu, \\lambda_r) < \\inf_{\\lambda \\in \\Lambda} \\sum_{k = 1}^K \\omega^{*k} \\KL_k(\\mu, \\lambda)$. It now suffices to show that for some $r \\in (0,1]$, we have such a $\\lambda_r$ with $\\lambda_r \\notin \\Lambda_\\varepsilon$.
Indeed, under that condition we have that\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda \\setminus \\Lambda_\\varepsilon} \\sum_{k = 1}^K \\omega_r^k \\KL_k(\\mu, \\lambda)\n\\stackrel{(a)}{\\le} \\sum_{k = 1}^K \\omega_r^k \\KL_k(\\mu, \\lambda_r)\n&\\stackrel{(b)}{<} \\inf_{\\lambda \\in \\Lambda} \\sum_{k = 1}^K \\omega^{*k} \\KL_k(\\mu, \\lambda)\n\\stackrel{(c)}{=} \\inf_{\\lambda \\in \\Lambda \\setminus \\Lambda_\\varepsilon} \\sum_{k = 1}^K \\omega^{*k} \\KL_k(\\mu, \\lambda)\n\\: .\n\\end{align*}\nInequality $(a)$ is due to the hypothesis $\\lambda_r \\in \\Lambda \\setminus \\Lambda_\\varepsilon$, $(b)$ is the definition of $\\lambda_r$ and $(c)$ comes from the fact that minimizers over $\\Lambda$ can't belong to $\\Lambda_\\varepsilon$ by definition of $\\Lambda_\\varepsilon$.\nWe then conclude that $\\omega_r$ does not belong to $\\Omega(\\Lambda \\setminus \\Lambda_\\varepsilon)$, which is a contradiction.\n\nLet us now prove that there exists an $r \\in (0,1]$ for which there exists $\\lambda_r \\in \\Lambda \\setminus \\Lambda_\\varepsilon$ with $\\sum_{k = 1}^K \\omega_r^k \\KL_k(\\mu, \\lambda_r) < \\inf_{\\lambda \\in \\Lambda} \\sum_{k = 1}^K \\omega^{*k} \\KL_k(\\mu, \\lambda)$.\n\nIf there is no such point, then since $\\omega_r \\notin \\Omega(\\Lambda)$ there exists $\\lambda_{r,\\varepsilon} \\in \\Lambda_\\varepsilon$ such that $\\sum_{k = 1}^K \\omega_r^k \\KL_k(\\mu, \\lambda_{r, \\varepsilon}) < \\inf_{\\lambda \\in \\Lambda} \\sum_{k = 1}^K \\omega^{*k} \\KL_k(\\mu, \\lambda)$.\nBut if $r$ is small enough we have\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda} \\sum_{k = 1}^K \\omega^{*k} \\KL_k(\\mu, \\lambda)\n> \\sum_{k = 1}^K \\omega_r^k \\KL_k(\\mu, \\lambda_{r, \\varepsilon})\n&\\ge (1 - r)\\sum_{k = 1}^K \\omega^{*k} \\KL_k(\\mu, \\lambda_{r, \\varepsilon})\n\\\\\n&\\ge (1 - r)\\left( \\inf_{\\lambda \\in \\Lambda} \\sum_{k = 1}^K \\omega^{*k} \\KL_k(\\mu, \\lambda) + \\varepsilon \\right)\n\\\\\n&\\ge \\inf_{\\lambda \\in \\Lambda} \\sum_{k = 1}^K \\omega^{*k} \\KL_k(\\mu, \\lambda)\n\\: ,\n\\end{align*}\nwhich is a contradiction.\n\n\\end{proof}\n\nRemark: the hypothesis $\\varepsilon > 0$ is necessary in Theorem~\\ref{thm:restricting_does_not_change_optimal_allocations}. Indeed consider the unstructured Gaussian BAI problem in two dimensions in which $\\mu = (0,2)$.\nThen $\\Lambda = \\{\\lambda \\in \\mathbb{R}^2 \\mid \\lambda_1 \\ge \\lambda_2 \\}$ and the optimal allocation set is $\\Omega(\\Lambda) = \\{(1\/2, 1\/2)\\}$.\nThe only point with minimal value for that allocation is $\\lambda_0 = (1,1)$.\nConsider now $\\Lambda_0 = \\{\\lambda_0\\}$. Then $\\Omega(\\Lambda_0) = \\triangle_2 \\ne \\Omega(\\Lambda)$. In that restriction, we removed points arbitrarily close to the infimum value and Theorem~\\ref{thm:restricting_does_not_change_optimal_allocations} does not apply.\n\n\\section{Log-likelihood ratio in exponential families}\n\\label{sec:exponential_families}\n\n\nWe suppose in this section that all arms have distributions in a one-parameter exponential family (the same for all arms, for simpler notations). An arm distribution can thus be described by any one of three parameters: the arm feature vector $\\phi_k \\in \\mathbb{R}^d$, the arm mean $\\mu_k(\\theta) = \\phi_k^\\top \\theta$ and its natural parameter $\\eta_k(\\theta)$. 
The last two are functions of the model $\\theta$.\nFor two models $\\theta$ and $\\lambda$, let $f$ be a function such that the KL between the arm distributions with those parameters is $d_f(\\eta_k(\\lambda), \\eta_k(\\theta))$, where $d_f$ is the Bregman divergence associated with $f$. Let $f^*$ be the convex conjugate of $f$. The Kullback-Leibler divergence between the arm distributions under models $\\theta$ and $\\lambda$ is also equal to $d_{f^*}(\\mu_k(\\theta), \\mu_k(\\lambda))$.\n\nIf $\\eta_k(\\theta)$ is the natural parameter of arm $k$, we have $\\mu_k(\\theta) = f'(\\eta_k(\\theta))$, and since $(f^*)' = (f')^{-1}$ we have $\\eta_k(\\theta) = (f^*)'(\\mu_k(\\theta))$.\n\n\\begin{lemma}\nFor all $\\theta, \\lambda \\in \\mathcal M$, the quantity $L_t(\\theta, \\lambda) - \\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda)$ is a martingale if the observations come from the model $\\theta$. This does not depend on the hypothesis that the distributions belong to an exponential family but only requires $\\mathbb{P}_\\theta \\ll \\mathbb{P}_\\lambda$.\n\\end{lemma}\n\\begin{proof}\nWe can expand the LLR to obtain a sum over times and write the KL as an expected log-likelihood ratio,\n\\begin{align*}\nL_t(\\theta, \\lambda) - \\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda)\n&= \\sum_{s=1}^t \\left(\\log \\frac{d \\mathbb{P}_\\theta}{d \\mathbb{P}_\\lambda}(X_s^{k_s}) - \\mathbb{E}_{X \\sim \\nu_{k_s}(\\theta)}\\left[ \\log \\frac{d \\mathbb{P}_\\theta}{d \\mathbb{P}_\\lambda}(X) \\right] \\right)\n\\: .\n\\end{align*}\nThe martingale property is then immediate.\n\\end{proof}\n\nMany of our proofs depend on the informal statement that the martingale $L_t(\\theta, \\lambda) - \\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda)$ concentrates, and is thus a lower order term which is negligible for $t$ large enough.\n\n\\begin{lemma}\\label{lem:LLR_exp_family}\nFor all $\\theta, \\lambda$ and all $X_{[t]}$,\n\\begin{align*}\nL_t(\\theta, \\lambda)\n&= \\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda) - \\sum_{k=1}^K N_t^k (\\eta_k(\\lambda) - \\eta_k(\\theta)) (\\hat{\\mu}_{t,k} - \\mu_k(\\theta))\n\\: .\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nFor an observation $X$ of arm $k$, write $\\log \\frac{d \\mathbb{P}_\\theta}{d \\mathbb{P}_\\lambda}(X) = \\eta_k(\\theta) X - f(\\eta_k(\\theta)) - (\\eta_k(\\lambda) X - f(\\eta_k(\\lambda)))$ and develop the Bregman divergence (the KL) on the right.\n\\end{proof}\n\n\n\\begin{lemma}\\label{lem:sub_gaussian_KL_bounds}\nIf the distribution of arm $k$ for model $\\theta$ is $\\sigma^2$-sub-Gaussian, then for all $\\lambda$,\n\\begin{align*}\n\\frac{1}{2 \\sigma^2} (\\mu_k(\\lambda) - \\mu_k(\\theta))^2\n&\\le \\KL_k(\\lambda, \\theta)\n\\: , \\\\\n\\frac{1}{2 \\sigma^2} \\sum_{k=1}^K N_t^k (\\mu_k(\\hat{\\theta}_t) - \\mu_k(\\theta))^2\n&\\le L_t(\\hat{\\theta}_t, \\theta)\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nWe first prove that the sub-Gaussian hypothesis is equivalent to both these inequalities:\n\\begin{align*}\n\\forall \\lambda, d_f(\\eta_k(\\lambda), \\eta_k(\\theta)) \\le \\frac{1}{2}\\sigma^2 (\\eta_k(\\lambda) - \\eta_k(\\theta))^2\n\\: , \\\\\n\\forall \\lambda, d_{f^*}(\\mu_k(\\lambda), \\mu_k(\\theta)) \\ge \\frac{1}{2 \\sigma^2} (\\mu_k(\\lambda) - \\mu_k(\\theta))^2\n\\: .\n\\end{align*}\nThe first result is then a simple consequence of that second inequality and the equality $\\KL_k(\\lambda, \\theta) = d_{f^*}(\\mu_k(\\lambda), \\mu_k(\\theta))$.
The second result of the lemma can be obtained by applying the first one to $\\lambda = \\hat{\\theta}_t$ for all arms, then summing over arms.\n\nThe cumulant generating function at parameter $\\eta_k(\\theta)$ is $\\xi \\mapsto d_f(\\eta_k(\\theta) + \\xi, \\eta_k(\\theta))$. The sub-Gaussian hypothesis is that this function is lower than $\\frac{1}{2}\\sigma^2 \\xi^2$. For the second inequality, we first remark that the convex conjugate of $\\xi \\mapsto d_f(\\eta_k(\\theta) + \\xi, \\eta_k(\\theta))$ is $x \\mapsto d_{f^*}(\\mu_k(\\theta) + x, \\mu_k(\\theta))$, and write\n\\begin{align*}\nd_{f^*}(\\mu_k(\\lambda), \\mu_k(\\theta))\n&= \\sup_\\xi \\xi (\\mu_k(\\lambda) - \\mu_k(\\theta)) - d_f(\\eta_k(\\theta) + \\xi, \\eta_k(\\theta))\n\\\\\n&\\ge \\sup_\\xi \\xi (\\mu_k(\\lambda) - \\mu_k(\\theta)) - \\frac{1}{2}\\sigma^2 \\xi^2\n\\\\\n&= \\frac{1}{2 \\sigma^2} (\\mu_k(\\lambda) - \\mu_k(\\theta))^2 \\: .\n\\end{align*}\n\\end{proof}\n\n\\begin{corollary}\\label{cor:martingale_bound_exp_family}\nIf the distribution of arm $k$ for model $\\theta$ is $\\sigma^2$-sub-Gaussian, then for all $\\lambda$,\n\\begin{align*}\n\\left\\vert \\sum_{k=1}^K N_t^k (\\eta_k(\\lambda) - \\eta_k(\\theta)) (\\mu_k(\\hat{\\theta}_t) - \\mu_k(\\theta)) \\right\\vert\n&\\le 2 \\sqrt{L_t(\\hat{\\theta}_t, \\theta)} \\sqrt{\\sum_{k=1}^K N_t^k \\frac{1}{2} \\sigma^2 (\\eta_k(\\lambda) - \\eta_k(\\theta))^2 }\n\\: .\n\\end{align*}\n\\end{corollary}\n\nRemark: for Gaussians with variance $\\sigma^2$, we also have $\\sum_{k=1}^K N_t^k \\frac{1}{2} \\sigma^2 (\\eta_k(\\lambda) - \\eta_k(\\theta))^2 = \\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda)$. But that is not the case in general, and the sub-Gaussian assumption tells us that the sum of squares is larger than a KL, while we would like the reverse inequality.\n\n\\begin{proof}\nApply the Cauchy-Schwarz inequality, then Lemma~\\ref{lem:sub_gaussian_KL_bounds}:\n\\begin{align*}\n\\left\\vert \\sum_{k=1}^K N_t^k (\\eta_k(\\lambda) - \\eta_k(\\theta)) (\\mu_k(\\hat{\\theta}_t) - \\mu_k(\\theta)) \\right\\vert\n&\\le \\sqrt{ \\sum_{k=1}^K N_t^k (\\eta_k(\\lambda) - \\eta_k(\\theta))^2 \\sum_{k=1}^K N_t^k (\\mu_k(\\hat{\\theta}_t)- \\mu_k(\\theta))^2 }\n\\\\\n&\\le 2 \\sqrt{L_t(\\hat{\\theta}_t, \\theta)} \\sqrt{\\sum_{k=1}^K N_t^k \\frac{1}{2} \\sigma^2 (\\eta_k(\\lambda) - \\eta_k(\\theta))^2 }\n\\: .\n\\end{align*}\n\\end{proof}\n\n\n\n\\subsection{Log-likelihood ratio for Gaussian linear models}\n\\label{sec:log_likelihood_ratio}\n\n\\begin{lemma}\\label{lem:llr-lin-gauss}\nFor any $\\theta\\in\\cM$ and $k\\in[K]$, let $\\nu_k(\\theta)$ be Gaussian with unit variance and linear mean $\\mu_k(\\theta) = \\theta^T\\phi_k$. 
Then for any $\\theta,\\lambda\\in\\cM$, any $t>0$ and any sequence of observations,\n\\begin{align*}\nL_t(\\theta,\\lambda) &= \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) - (\\lambda - \\theta)^\\top V_t (\\hat{\\theta}_t - \\theta)\n\\: .\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nApply Lemma~\\ref{lem:LLR_exp_family} to the Gaussian case, where $\\eta(\\theta) = \\mu(\\theta) = \\phi_k^\\top \\theta$.\n\\begin{align*}\nL_t({\\theta},\\lambda)\n&= \\sum_{k\\in[K]} N_t^k\\KL_k(\\theta, \\lambda) - \\sum_{k=1}^K N_t^k (\\phi_k^\\top\\lambda - \\phi_k^\\top\\theta) (\\hat{\\mu}_{t,k} - \\phi_k^\\top \\theta)\n\\end{align*}\nFor Gaussian linear models, we have\n\\begin{align*}\n\\sum_{k=1}^K N_t^k (\\phi_k^\\top\\lambda - \\phi_k^\\top\\theta) \\phi_k^\\top \\hat{\\theta}_t\n&= (\\lambda - \\theta)^\\top V_{N_t} \\hat{\\theta}_t\n= (\\lambda - \\theta)^\\top \\sum_{k=1}^K N_t^k \\hat{\\mu}_{t,k} \\phi_k\n= \\sum_{k=1}^K N_t^k (\\phi_k^\\top \\lambda - \\phi_k^\\top \\theta) \\hat{\\mu}_{t,k}\n\\: .\n\\end{align*}\nWe can use this to replace the sum involving $\\hat{\\mu}_{t,k}$ in the expression of $L_t(\\theta, \\lambda)$ by one involving $\\hat{\\theta}_t$.\n\\begin{align*}\nL_t({\\theta},\\lambda)\n&= \\sum_{k\\in[K]} N_t^k\\KL_k(\\theta, \\lambda) - (\\lambda - \\theta)^\\top (\\sum_{k=1}^K N_t^k \\phi_k \\phi_k^\\top) (\\hat{\\theta}_{t} - \\theta)\n\\\\\n&= \\sum_{k\\in[K]} N_t^k\\KL_k(\\theta, \\lambda) - (\\lambda - \\theta)^\\top V_t (\\hat{\\theta}_{t} - \\theta)\n\\: .\n\\end{align*}\n\\end{proof}\n\n\\begin{corollary}\\label{cor:llr-lin-gauss}\nFor the linear Gaussian model of Lemma \\ref{lem:llr-lin-gauss}, for any $\\lambda\\in\\cM$,\n\\begin{align*}\nL_t(\\hat{\\theta}_t,\\lambda) = \\sum_{k\\in[K]}N_t^k\\KL_k(\\hat{\\theta}_t,\\lambda).\n\\end{align*}\n\\end{corollary}\n\n\\begin{lemma}\\label{lem:llr-to-kl-lin-gauss}\nFor any $\\theta\\in\\cM$ and $k\\in[K]$, let $\\nu_k(\\theta)$ be Gaussian with unit variance and linear mean $\\mu_k(\\theta) = \\theta^T\\phi_k$. Then, for any $\\lambda\\in\\cM$ and $t>0$,\n\\begin{align*}\n\\left(\\sqrt{\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)} - \\sqrt{L_t(\\hat{\\theta}_t,\\theta)}\\right)^2 \\leq L_t(\\hat{\\theta}_t,\\lambda) \\leq \\left(\\sqrt{\\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda)} + \\sqrt{L_t(\\hat{\\theta}_t,\\theta)}\\right)^2.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nWe decompose the LLR as\n\\begin{align*}\nL_t(\\hat{\\theta}_t,\\lambda)\n&= L_t(\\theta, \\lambda) + L_t(\\hat{\\theta}_t, \\theta)\n\\\\\n&= \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) - (\\lambda - \\theta)^\\top V_t (\\hat{\\theta}_t - \\theta) + L_t(\\hat{\\theta}_t, \\theta)\n\\end{align*}\nThe second term is bounded by Corollary~\\ref{cor:martingale_bound_exp_family} (with $\\sigma^2 = 1$) and the remark below it. We get\n\\begin{align*}\nL_t(\\hat{\\theta}_t, \\lambda)\n&\\le \\sum_{k\\in[K]}N_t^k\\KL_k(\\theta,\\lambda) + 2 \\sqrt{L_t(\\hat{\\theta}_t, \\theta)} \\sqrt{\\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda) } + L_t(\\hat{\\theta}_t, \\theta)\n\\\\\n&= \\left( \\sqrt{\\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda) } + \\sqrt{L_t(\\hat{\\theta}_t, \\theta)} \\right)^2\n\\: .\n\\end{align*}\n\nThe proof of the lower bound is similar.\n\\end{proof}\n\n\\subsection{Beyond Gaussians}\n\\label{sub:beyond_gaussians}\n\nWhile the proofs in the next two sections are specialized to Gaussian rewards, it is possible to extend them to more exponential families under slight assumptions, similarly to what was done in \\citep{degenne2019non}. 
If the arm distributions are known to belong to a $\\sigma^2$-sub-Gaussian exponential family, with the additional restriction that the distribution parameters should belong to a compact subset of the open interval on which the family is defined, then there exists a constant $c$ such that \n\\begin{align*}\n\\frac{1}{\\sigma^2}\\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda)\n\\le \\sum_{k=1}^K N_t^k \\frac{1}{2} (\\eta_k(\\lambda) - \\eta_k(\\theta))^2\n\\le c\\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda) \\: .\n\\end{align*}\nAnd $\\sum_{k=1}^K N_t^k \\KL_k(\\theta, \\lambda)$ is also close to $\\Vert \\lambda - \\theta \\Vert_{V_t}^2$, up to constant factors.\n\nWe can then recover bounds on the likelihood ratio of the same shape as in Lemma~\\ref{lem:llr-to-kl-lin-gauss}, up to constant factors depending on $c$ and $\\sigma^2$. The proofs of Appendix~\\ref{app:proofs-elim-stopping} and Appendix~\\ref{app:proofs_of_sampling} then proceed similarly, up to the additional constants.\n\n\\section{Experiments}\n\\label{sec:experiments}\n\n\\begin{figure*}[t]\n\\centering\n\t\\includegraphics[scale=0.21]{figures\/lin_BAI_K50_d20_elim.pdf}\n\t\\includegraphics[scale=0.21]{figures\/lin_BAI_K50_d10_full_vs_emp.pdf}\n\t\\includegraphics[scale=0.21]{figures\/lin_BAI_K50_d10_delta.pdf}\n\t\\caption{Experiments on linear instances with $K=50$, $d=10$, averaged over 100 runs, with the right plot showing standard deviations. (left) How different adaptive algorithms eliminate arms in BAI when using elimination stopping. (middle) LinGame on BAI when combined with full and selective elimination rules, either only at stopping or both at stopping and at sampling. (right) Ratio between the LLR and elimination stopping times of different algorithms as a function of $\\log(1\/\\delta)$.}\n\t\\label{fig:all}\n\\end{figure*}\n\nOur experiments aim at addressing the following questions: (1) how do existing adaptive strategies behave when combined with elimination at stopping and (when possible) at sampling? How do they compare with native elimination-based methods? (2) What is the difference between selective and full elimination? (3) How do LLR and elimination stopping compare as a function of $\\delta$?\\footnote{Our code is available at \\url{https:\/\/github.com\/AndreaTirinzoni\/bandit-elimination}.}\n\n\n\nWe ran experiments on two bandit structures: linear (where $d < K$) and unstructured (where $K=d$ and the arms are the canonical basis of $\\mathbb{R}^d$). For each of them, we considered 3 pure exploration problems: BAI, Top-m, and online sign identification (OSI) \\citep{carpentier2016tight,ouhamma2021online}, also called thresholding bandits. All experiments use $\\delta=0.01$ and are averaged over 100 runs.\n\nWe combined adaptive algorithms which are natively based on LLR stopping with our elimination stopping rules and, whenever possible, we extended their sampling rule to use elimination. The selected baselines are the following. For linear BAI, LinGapE \\citep{xu2018fully}, LinGame \\citep{degenne2020gamification}, Frank-Wolfe Sampling (FWS) \\citep{wang2021fast}, Lazy Track-and-Stop (TaS) \\citep{jedra2020optimal}, XY-Adaptive \\citep{soare2014best}, and RAGE \\citep{fiez2019sequential} (the latter two are natively elimination based). For linear Top-m, m-LinGapE \\citep{reda2021top}, MisLid \\citep{reda2021dealing}, FWS, Lazy TaS\\footnote{Lazy TaS, while analyzed only for BAI, can be applied to any problem since it is a variant of Track-and-Stop.}, and LinGIFA \\citep{reda2021top}. 
For linear OSI, LinGapE\\footnote{LinGapE was originally proposed only for BAI in \\citep{xu2018fully}, but its extension to OSI is trivial.}, LinGame, FWS, and Lazy TaS.\nFor unstructured instances, linear algorithms are still applicable, and we further implemented LUCB \\citep{kalyanakrishnan2012pac}, UGapE \\citep{gabillon2012best}, and the Racing algorithm \\citep{kaufmann2013information} for BAI and Top-m. We also tested an ``oracle'' sampling rule which uses the optimal proportions from the lower bound. Due to space constraints, we present only the results on linear structures. Those on unstructured problems can be found in Appendix~\\ref{app:experiments}.\nThe first experiments use randomly generated instances with $K=50$ arms and dimension $d=10$.\n\n\\textbf{Comparison of elimination times.} We analyze how different adaptive algorithms eliminate pieces when combined with selective elimination at stopping. To this end, we focus on BAI, where the sets of pieces can be conveniently reduced to a set of active arms, those that are still likely to be the optimal one.\nFigure \\ref{fig:all}\\emph{(left)} shows how the set of active arms evolves over time for the 5 adaptive baselines.\nNotably, many arms are eliminated very quickly, with most baselines able to halve the set of active arms in the first 3000 steps.\nThe problem size is thus quickly reduced over time. As we shall see in the last experiment, this will yield significant computational gains.\nWe further note that the ``oracle'' strategy, which plays fixed proportions, seems the slowest at eliminating arms.\nThe reason is that the optimal proportions from the lower bound focus on discriminating the ``hardest'' arms, while the extra randomization in adaptive rules might indeed eliminate certain ``easier'' arms sooner.\n\n\\begin{table*}[t!]\n\\centering\n\\small\n\\begin{tabular}{@{}clcccccc@{}} \n\\toprule\n & & \\multicolumn{2}{c}{No elimination (LLR)} & \\multicolumn{2}{c}{Elim. stopping} & \\multicolumn{2}{c}{Elim. stopping + sampling} \\\\\n
\\cmidrule(r){3-8}\n& Algorithm & Samples & Time & Samples & Time & Samples & Time \\\\\n\\cmidrule{1-8}\n\\multirow{7}{*}{\\rotatebox[origin=c]{90}{BAI}} \n& LinGapE & $33.19 \\pm 8.7$ & $0.23$ & $33.11 \\pm 8.7$ & $0.2$ & $29.89 \\pm 8.6$ & $0.18$ \\\\\n& LinGame & $45.34 \\pm 14.2$ & $0.23$ & $43.67 \\pm 13.4$ & $0.21$ & $32.49 \\pm 8.1$ & $0.18$ \\\\\n& FWS & $42.26 \\pm 60.1$ & $0.73$ & $42.25 \\pm 60.1$ & $0.7$ & $32.62 \\pm 18.0$ & $0.45$ \\\\\n& Lazy TaS & $76.33 \\pm 65.8$ & $0.15$ & $74.08 \\pm 65.8$ & $0.13$ & $64.48 \\pm 81.8$ & $0.12$ \\\\\n& Oracle & $56.36 \\pm 9.1$ & $0.05$ & $55.36 \\pm 9.3$ & $0.02$ & & \\\\\n& XY-Adaptive & & & & & $87.08 \\pm 29.1$ & $0.44$ \\\\\n& RAGE & & & & & $106.87 \\pm 30.7$ & $0.02$ \\\\\n\\cmidrule{1-8}\n\\multirow{6}{*}{\\rotatebox[origin=c]{90}{Top-m ($m=5$)}} \n& m-LinGapE & $63.69 \\pm 11.1$ & $0.56$ & $63.48 \\pm 11.0$ & $0.41$ & $59.57 \\pm 9.4$ & $0.24$ \\\\\n& MisLid & $87.77 \\pm 20.4$ & $0.55$ & $85.95 \\pm 20.5$ & $0.4$ & $69.58 \\pm 16.0$ & $0.25$ \\\\\n& FWS & $78.28 \\pm 65.0$ & $3.0$ & $78.23 \\pm 65.0$ & $2.85$ & $77.79 \\pm 65.0$ & $0.97$ \\\\\n& Lazy TaS & $161.43 \\pm 96.9$ & $0.57$ & $159.86 \\pm 96.9$ & $0.43$ & $146.06 \\pm 82.6$ & $0.36$ \\\\\n& Oracle & $102.45 \\pm 16.1$ & $0.2$ & $101.53 \\pm 16.4$ & $0.08$ & & \\\\\n& LinGIFA & $58.31 \\pm 10.8$ & $2.46$ & $58.31 \\pm 10.8$ & $2.33$ & & \\\\\n\\cmidrule{1-8}\n\\multirow{5}{*}{\\rotatebox[origin=c]{90}{OSI}}\n& LinGapE & $17.31 \\pm 2.3$ & $0.22$ & $17.29 \\pm 2.2$ & $0.19$ & $14.71 \\pm 2.0$ & $0.17$ \\\\\n& LinGame & $23.77 \\pm 4.1$ & $0.25$ & $23.05 \\pm 3.9$ & $0.21$ & $14.87 \\pm 2.0$ & $0.19$ \\\\\n& FWS & $15.26 \\pm 2.0$ & $0.83$ & $15.24 \\pm 2.0$ & $0.81$ & $14.99 \\pm 2.1$ & $0.56$ \\\\\n& Lazy TaS & $35.11 \\pm 10.2$ & $0.32$ & $33.98 \\pm 9.7$ & $0.3$ & $23.51 \\pm 5.6$ & $0.34$ \\\\\n& Oracle & $29.1 \\pm 4.8$ & $0.06$ & $28.65 \\pm 5.0$ & $0.03$ & & \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Experiments on linear instances with $K=50$ and $d=20$. The ``Time'' columns report average times per iteration in milliseconds (i.e., the total time the algorithm took divided by the number of samples). Each entry reports the mean across $100$ runs plus\/minus standard deviation (which is omitted for compute times due to space constraints). Algorithms for which the third column pair is missing cannot be combined with elimination at sampling, while algorithms for which the first two column pairs are missing are natively elimination-based. Samples are scaled down by a factor $10^3$.}\\label{tab:lin_all}\n\\end{table*}\n\n\\textbf{Full versus selective elimination.} We combine the different algorithms with full and selective elimination, both at sampling and stopping.\nDue to space constraints, Figure \\ref{fig:all}\\emph{(middle)} shows the results only for LinGame (see Appendix~\\ref{app:experiments} for the others).\nWe note that full elimination seems faster at discarding arms in earlier steps, as we would expect theoretically.\nHowever, it never stops earlier than its selective counterpart. Moreover, its extra computational overhead brings no advantage. Overall, we concluded that our selective elimination rule is the best choice and we shall thus focus on it in the remainder. Finally, we remark that combining the sampling rule with elimination (of either type) seems to discard arms faster in later steps, and could eventually make the algorithm stop sooner.
\n\n\\textbf{LLR versus elimination stopping.} We now compare LLR and elimination stopping as a function of $\\delta$.\nWe know from theory that both stopping rules allow to achieve asymptotic optimality. Hence for asymptotically optimal sampling rules the resulting stopping times with LLR and elimination should tend to the same quantity as $\\delta\\rightarrow 0$.\nFigure \\ref{fig:all}\\emph{(right)}, where we report the ratio between the LLR stopping time and the elimination one for different algorithms, confirms that this is the case.\nSome algorithms (LinGapE and FWS) seem to benefit less from elimination stopping than the others, i.e., they achieve smaller ratios of stopping times.\nWe believe this to be a consequence of their mostly ``greedy'' nature, while the extra randomization of the other algorithms might help in this aspect.\n\n\\textbf{Sample complexities and computation times.} We finally compare our baselines in all three exploration tasks, in terms of sample complexity and computation time.\nFor this experiment, we selected a larger linear instance with $K=50$ and $d=20$, randomly generated (see the protocol in Appendix~\\ref{app:experiments}).\nFrom the results in Table \\ref{tab:lin_all}, we highlight three points.\n(1) The computation times of all adaptive algorithms decrease when using selective elimination stopping instead of LLR and further decrease when also using elimination at sampling. In the case of Top-m (i.e., the hardest combinatorial problem), most adaptive algorithms become at least twice faster with elimination at stopping and sampling instead of LLR.\n(2) Elimination at sampling improves the sample complexity of all algorithms.\n(3) For BAI, the natively elimination-based algorithm RAGE, which updates its strategy infrequently, is the fastest in terms of computation time but the slowest in terms of samples. 
Adaptive algorithms using elimination achieve run times that are within an order of magnitude of those of RAGE, while outperforming it in terms of sample complexity by a factor 2 to 3.\n\n\n\\section{Notation}\\label{app:notation}\n\n\\begin{table*}[h]\n\\centering\n\\begin{tabular}{@{}ll@{}} \n\\toprule\nSymbol & Meaning \\\\\n\\cmidrule{1-2}\n$[K] = \\{1,2,\\dots,K\\}$ & Set of $K$ arms\\\\\n$\\Delta_K$ & $K$-dimensional simplex\\\\\n$d\\in\\mathbb{N}_{>0}$ & Dimension of parameter space\\\\\n$\\cM \\subseteq \\mathbb{R}^d$ & Set of possible reward parameters\\\\\n$\\mathbb{P}_\\theta$ & Distribution of observations in bandit $\\theta\\in\\cM$\\\\\n$\\nu_k(\\theta)$ & Reward distribution of arm $k$ in bandit $\\theta\\in\\cM$\\\\ \n$\\mu_k(\\theta) := \\mathbb{E}_{x\\sim \\nu_k(\\theta)}[x]$ & Mean reward of arm $k$ in bandit $\\theta\\in\\cM$\\\\ \n$\\cI$ & Set of answers\\\\\n$i^\\star(\\theta)$ & Correct answer for bandit problem $\\theta\\in\\cM$\\\\\n$\\Lambda(i) := \\{\\lambda\\in\\cM : i^\\star(\\lambda) \\neq i\\}$ & Set of alternatives to answer $i\\in\\cI$\\\\\n$\\cP(i)$ & Set of alternative piece indexes for answer $i\\in\\cI$\\\\\n$P_i := |\\cP(i)|$ & Number of pieces for answer $i\\in\\cI$\\\\\n$\\Lambda_p(i)$ & Piece $p\\in\\cP(i)$ for answer $i\\in\\cI$\\\\\n$X_{[t]} := (X_1^{k_1}, \\ldots, X_t^{k_t})$ & Vector of $t$ observations\\\\\n$L_t(\\theta, \\lambda)\n:= \\log \\frac{d \\mathbb{P}_\\theta}{d \\mathbb{P}_\\lambda}(X_{[t]})$ & LLR of $t$ observations between $\\theta$ and $\\lambda$ \\\\\n$\\KL_k(\\theta,\\lambda) := \\KL(\\nu_k(\\theta),\\nu_k(\\lambda))$ & KL divergence between $\\nu_k(\\theta)$ and $\\nu_k(\\lambda)$\\\\\n$\\hat{\\theta}_t := \\argmax_{\\lambda\\in\\cM} \\log \\mathbb{P}_\\lambda(X_{[t]})$ & Maximum likelihood estimator for $\\theta$\\\\\n$\\hat{\\mu}_t^k := \\frac{1}{N_t^k}\\sum_{s=1}^t X_s^{k_s}\\indi{k_s=k}$ & Empirical mean of arm $k$ (different from $\\mu_k(\\hat{\\theta}_t)$)\\\\\n$H_p(\\omega, \\theta) := \\inf_{\\lambda \\in \\Lambda_p(i^\\star)} \\sum_{k\\in[K]}\\omega^k\\KL_k(\\theta,\\lambda)$ & Information of $\\omega\\in\\Delta_K$ for piece $\\Lambda_p(i^\\star)$\\\\\n$H^\\star(\\theta) := \\max_{\\omega\\in\\Delta_K}\\min_{p\\in\\cP(i^\\star)}H_p(\\omega, \\theta)$ & Optimal constant from the lower bound \\eqref{eq:lower_bound}\\\\\n$\\Omega_\\epsilon(\\theta)$ & Set of $\\epsilon$-optimal proportions\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{The notation adopted in this paper.}\n\\label{tab:notation}\n\\end{table*}\n\n\\section{Identification Problems}\\label{app:problems}\n\nIn this section, we show that popular identification problems satisfy Assumption \\ref{ass:union_of_sets} and are thus suitable for elimination-based algorithms. We shall focus on \\emph{Gaussian linear bandits}, where, for any $\\theta\\in\\cM$ and $k\\in[K]$, $\\nu_k(\\theta)$ is Gaussian with unit variance and linear mean $\\mu_k(\\theta) = \\theta^T\\phi_k$. For each identification problem, we first show how to decompose the sets of alternatives into pieces for which the closest alternatives can be found efficiently. Moreover, we report the closed-form equations for computing such closest alternatives in Gaussian linear bandits and in the special case of unstructured bandits (where $\\cM = \\mathbb{R}^K$ and $\\phi_k = e_k$, the canonical basis of $\\mathbb{R}^K$).
Finally, we show how to efficiently implement elimination strategies in each of these identification problems even when enumerating over all possible answers is intractable (e.g., for problems where the number of answers is exponential in the problem dimension).\n\n\\paragraph{The LLR in Gaussian linear bandits}\n\nFor all identification problems presented later, we need to show that $\\inf_{\\lambda \\in \\Lambda_p(i)}L_t(\\hat{\\theta}_t,\\lambda)$ can be computed efficiently for any piece. In Gaussian linear bandits, such a log-likelihood ratio is actually equivalent to a KL divergence (see Corollary \\ref{cor:llr-lin-gauss}),\n\\begin{align*}\nL_t(\\hat{\\theta}_t,\\lambda) = \\sum_{k\\in[K]}N_t^k\\KL_k(\\hat{\\theta}_t,\\lambda) = \\frac{1}{2}\\|\\hat{\\theta}_t-\\lambda\\|_{V_t}^2,\n\\end{align*}\nwhich in turn is a quadratic form weighted by the design matrix $V_t := \\sum_{s=1}^t \\phi_{k_s}\\phi_{k_s}^T$. Therefore, with greater generality, in the rest of this section we shall focus on showing that $\\inf_{\\lambda \\in \\Lambda_p(i)}\\|\\theta-\\lambda\\|_{V_N}^2$ can be computed efficiently for any $\\theta\\in\\cM$, piece $\\Lambda_p(i)$, and (positive-definite) matrix $V_N := \\sum_{k\\in[K]} N^k \\phi_k\\phi_k^T$ with $N\\in\\mathbb{R}^K_{\\geq 0}$. In all cases, this will require minimizing quadratic forms over half-spaces.\n\n\\subsection{Best-arm Identification}\n\nIn BAI, the goal is to find the arm with largest mean. The set of answers is therefore $\\cI = [K]$ and the correct answer of $\\theta\\in\\cM$ is $i^\\star(\\theta) = \\argmax_{k\\in[K]}\\theta^T\\phi_k$. \n\n\\paragraph{Decomposition into pieces}\n\nFor each $i\\in\\cI$, the set of alternatives $\\Lambda(i)$ can be decomposed into half-spaces,\n\\begin{align*}\n\\Lambda(i) = \\bigcup_{k\\in[K], k\\neq i} \\left\\{ \\lambda \\in \\cM : \\lambda^T\\phi_k > \\lambda^T\\phi_i\\right\\}.\n\\end{align*}\nTherefore, we can take $\\cP(i) = [K] \\setminus \\{i\\}$ with $P_i = K-1$ and $\\Lambda_p(i) = \\{\\lambda\\in\\cM : \\lambda^T\\phi_p > \\lambda^T\\phi_i\\}$ for $p \\in [K]\\setminus\\{i\\}$.\n\n\\paragraph{Closest alternatives}\n\n\n\nTake any $j,k\\in[K]$ with $j\\neq k$. For linear problems, for any $\\theta\\in\\mathbb{R}^d$,\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda_j(k)}\\Vert {\\theta} - \\lambda \\Vert_{V_N}^2 = \\begin{cases}\n\t\\frac{(\\theta^T(\\phi_k-\\phi_j))^2}{\\| \\phi_k - \\phi_j \\|_{V_N^{-1}}^2} &\\text{if } \\theta^T(\\phi_k-\\phi_j) \\geq 0, \\\\\n\t0 &\\text{otherwise}.\n\\end{cases}\n\\end{align*}\nFor the special case of unstructured problems, for any $\\theta\\in\\mathbb{R}^K$,\n\\begin{align*}\n\\inf_{\\lambda \\in \\Lambda_j(k)}\\Vert {\\theta} - \\lambda \\Vert_{V_N}^2 = \\begin{cases}\n\t\\frac{N_jN_k}{N_j + N_k}(\\theta^T(\\phi_k-\\phi_j))^2 &\\text{if } \\theta^T(\\phi_k-\\phi_j) \\geq 0\\ \\text{and } N_j + N_k > 0, \\\\\n\t0 &\\text{otherwise}.\n\\end{cases}\n\\end{align*}\n\n\\paragraph{Efficient implementation}\n\nIn BAI, for each answer (i.e., arm) $i\\in\\cI$, the piece indexes $p\\in\\cP(i)$ are themselves answers (different from $i$). Implementing the elimination stopping rule in its general form requires storing and iterating over $K(K-1)$ elements (all items in $\\cP(i)$ for each $i\\in\\cI$). However, much better implementations exist that require storing at most $K$ elements (one for each answer).
Here we propose two such implementations: the first, for full elimination, is more statistically efficient, while the second one (for selective elimination) is more computationally efficient.\n\n\\paragraph{Full elimination (statistically-efficient implementation)}\n\nDue to the structure of the problem, whenever we eliminate one piece $\\Lambda_p(i)$, we actually know that the mean reward of arm $p$ cannot be better than that of arm $i$. In other words, $p$ cannot be the right answer. Therefore, we can maintain a list of active arms $\\cI_t$ which is initialized as $\\cI_0 = [K]$ and updated as\n\\begin{align*}\n\\mathcal{I}_t := \\mathcal{I}_{t-1} \\setminus \\left\\{ j\\in\\cI_{t-1} \\big| \\max_{i\\neq j} \\inf_{\\lambda \\in \\Lambda_j(i)} L_t(\\hat{\\theta}_t,\\lambda) \\geq \\beta_{t,\\delta}\\right\\}.\n\\end{align*}\nThen, we stop whenever $|\\cI_t| = 1$ and return the single arm left active. Due to the inner maximization, this implementation requires performing $|\\cI_{t-1}|(K-1)$ minimizations over half-spaces at each step to check elimination.\n\n\n\\begin{proposition}\nAn algorithm using the statistically-efficient implementation above never discards pieces later than (and thus never stops later than) the full elimination rule of \\eqref{eq:elimination-sets} almost surely. \n\\end{proposition}\n\\begin{proof}\nIf the algorithm eliminates a piece $\\Lambda_p(i)$ at time $t$ with \\eqref{eq:elimination-sets},\n\\begin{align*}\n\\beta_{t,\\delta} \\leq \\inf_{\\lambda \\in \\Lambda_p(i)} L_t(\\hat{\\theta}_t,\\lambda) \\leq \\max_{j\\neq p} \\inf_{\\lambda \\in \\Lambda_p(j)} L_t(\\hat{\\theta}_t,\\lambda).\n\\end{align*}\nThis implies that the elimination condition in the statistically efficient implementation triggers as well.\n\\end{proof}\n\n\\paragraph{Selective elimination (computationally-efficient implementation)} \n\nAn even simpler implementation is to check elimination, at each time $t$, only for pieces related to the empirical optimal arm $i^\\star(\\hat{\\theta}_t)$. That is, we update $\\cI_t$ as\n\\begin{align*}\n\\mathcal{I}_t := \\mathcal{I}_{t-1} \\setminus \\left\\{ j\\in\\cI_{t-1} \\big| \\inf_{\\lambda \\in \\Lambda_j(i^\\star(\\hat{\\theta}_t))} L_t(\\hat{\\theta}_t,\\lambda) \\geq \\beta_{t,\\delta}\\right\\}.\n\\end{align*}\nThis requires linear (in $K$) per-round memory and time complexity. Moreover, checking this stopping rule is more time-efficient than the LLR one. The latter requires performing $K-1$ tests at each step, while the elimination one only performs $O(|\\cI_{t-1}|)$ tests at each round $t$, thus becoming faster as arms are eliminated.\n\n\\begin{proposition}\nAn algorithm using the computationally-efficient implementation above never discards pieces later than (and thus never stops later than) the selective elimination rule almost surely. \n\\end{proposition}\n\\begin{proof}\nIf the algorithm eliminates a piece of the empirical optimal arm at time $t$ using the selective elimination rule, the arm corresponding to that piece is also eliminated from the set $\\cI_t$ above.\n\\end{proof}\n\n\n\\subsection{Top-m Identification}\n\nIn top-$m$ identification, the goal is to find the $m > 0$ arms with largest mean. BAI is therefore a special case of this problem when $m=1$.
The set of answers is $\\cI = \\{\\cS \\subseteq [K] : |\\cS|=m\\}$ with size $|\\cI| = {K \\choose m}$ and the correct answer of $\\theta\\in\\cM$ is $i^\\star(\\theta) = \\argmax_{k\\in[K]}^m\\theta^T\\phi_k$, where we use $\\argmax^m : \\mathbb{R}^K \\mapsto {K \\choose m}$ to denote the function returning the set of $m$ largest values. \n\n\\paragraph{Decomposition into pieces}\n\nLet us denote each $i\\in\\cI$ as a tuple $i = (k_1,\\dots, k_m)$ of $m$ arms. Similarly to BAI, it is known that the set of alternatives $\\Lambda(i)$ can be decomposed into half-spaces \\citep{reda2021dealing},\n\\begin{align*}\n\\Lambda(i) = \\bigcup_{j\\in i, k \\in [K] \\setminus i} \\left\\{ \\lambda \\in \\cM : \\lambda^T\\phi_k > \\lambda^T\\phi_j\\right\\}.\n\\end{align*}\nTherefore, we have $\\cP(i) = i \\times ([K] \\setminus i)$ with $P_i = m(K-m)$ and $\\Lambda_p(i) = \\{\\lambda\\in\\cM : \\lambda^T\\phi_k > \\lambda^T\\phi_j\\}$ when $p = (j,k)$.\n\n\\paragraph{Closest alternatives}\n\nNote that each set $\\Lambda_p(i)$ is still a half-space of the same form as the one we have for BAI. Hence, the same closed form expression for the closest alternative derived for BAI can be adopted for top-$m$ identification (see the closed-form expressions in the previous section).\n\n\\paragraph{Efficient implementation}\n\nNote that, differently from BAI, here the set of answers is of combinatorial size. It is therefore intractable to store and enumerate all sets of active pieces $\\cP_t(i)$. However, thanks to the structure of the problem, this is not necessary and there exists an efficient implementation for the elimination stopping rule. First note that, while there are $m(K-m)$ pieces for each of $K \\choose m$ possible answers, the total number of half-spaces is only $K(K-1)$, one for each couple of different arms. With some abuse of notation, let us denote by $\\Lambda_{k,j} := \\left\\{ \\lambda \\in \\cM : \\lambda^T\\phi_k > \\lambda^T\\phi_j\\right\\}$ the half-space associated with arms $k$ and $j$. The elimination stopping rule, which checks whether all the pieces in $\\cP(i)$ for some answer $i$ have been discarded, is equivalent to checking whether there exist $m$ arms $k_1,\\dots,k_m$ such that $\\Lambda_{k,k_l}$ has been eliminated for all $k\\notin \\{k_1,\\dots,k_m\\}$ and $l\\in[m]$. Therefore, in our implementations we will only store whether each half-space $\\Lambda_{k,k_l}$ has been eliminated or not. As before, we now see two possible implementations, one more computationally efficient and the other more statistically efficient.\n\n\\paragraph{Full elimination (statistically-efficient implementation)}\n\nThe idea is to check, at each time step $t$, the elimination condition for all half-spaces which have not been previously discarded. In particular, for each arm $j\\in[K]$ we keep a set $\\cS_t(j)$ storing those arms which are ``worse'' than $j$. Formally, we initially set $\\cS_0(j) = \\emptyset$ and update it as\n\\begin{align*}\n\\cS_t(j) := \\begin{cases}\n\t\\cS_{t-1}(j) \\cup \\left\\{ k\\notin \\cS_{t-1}(j)\\cup\\{j\\} \\big| \\inf_{\\lambda \\in \\Lambda_{k,j}} L_t(\\hat{\\theta}_t,\\lambda) \\geq \\beta_{t,\\delta}\\right\\} &\\text{if } |\\cS_{t-1}(j)| < K-m, \\\\\n\t\\cS_{t-1}(j) &\\text{otherwise}.\n\\end{cases}\n\\end{align*}\nThat is, when a half-space $\\Lambda_{k,j}$ is eliminated, we conclude that arm $k$ is ``worse'' than arm $j$ and thus add the former to $\\cS_t(j)$. 
\n\n\paragraph{Full elimination (statistically-efficient implementation)}\n\nThe idea is to check, at each time step $t$, the elimination condition for all half-spaces which have not been previously discarded. In particular, for each arm $j\in[K]$ we keep a set $\cS_t(j)$ storing those arms which are ``worse'' than $j$. Formally, we initially set $\cS_0(j) = \emptyset$ and update it as\n\begin{align*}\n\cS_t(j) := \begin{cases}\n\t\cS_{t-1}(j) \cup \left\{ k\notin \cS_{t-1}(j)\cup\{j\} \big| \inf_{\lambda \in \Lambda_{k,j}} L_t(\hat{\theta}_t,\lambda) \geq \beta_{t,\delta}\right\} &\text{if } |\cS_{t-1}(j)| < K-m, \\\n\t\cS_{t-1}(j) &\text{otherwise}.\n\end{cases}\n\end{align*}\nThat is, when a half-space $\Lambda_{k,j}$ is eliminated, we conclude that arm $k$ is ``worse'' than arm $j$ and thus add the former to $\cS_t(j)$. In order to decide when to stop, we use the following intuition: whenever we find that $|\cS_t(j)| \geq K-m$ for some arm $j\in[K]$, then we know that $j$ must be in the top-$m$ arms of $\theta$ and we can thus stop updating the set $\cS_t(j)$. Therefore, we can stop whenever there exist $m$ arms satisfying this property. This can be checked efficiently by keeping track of how many arms reach the condition $|\cS_t(j)| \geq K-m$ and stopping when the number of such arms reaches $m$. This approach takes $O(K(K-m))$ memory in the worst case to store the sets $\cS_t(j)$. At each step, it performs exactly $\sum_{j: |\cS_{t-1}(j)| < K-m} (K - |\cS_{t-1}(j)| - 1)$ minimizations over half-spaces to check the elimination conditions, which gives $O(K(K-1))$ time complexity in the worst case.\n\n\begin{proposition}\nAn algorithm using the statistically-efficient implementation above never discards pieces later than (and thus never stops later than) the full elimination rule of \eqref{eq:elimination-sets} almost surely.\n\end{proposition}\n\begin{proof}\nNote that the elimination condition for single half-spaces is exactly the same in the general elimination rule of \eqref{eq:elimination-sets} and in its implementation above. If \eqref{eq:elimination-sets} eliminates a piece $\Lambda_p(i)$ at time $t$, this implies some half-space $\Lambda_{k,j}$ is eliminated. Then, we have two possible cases: if $|\cS_{t-1}(j)| < K-m$, then we have $k\in\cS_t(j)$ by the condition above, i.e., $k$ is detected as ``worse'' than $j$ and it will never be checked again. On the other hand, if $|\cS_{t-1}(j)| \geq K-m$, then $j$ has already been labeled as belonging to the final answer. Thus, no minimization over its corresponding half-spaces (including the one for $k$) will be checked anymore, which is the same as saying that $\Lambda_{k,j}$ has already been eliminated.\n\end{proof}\n\n\paragraph{Selective elimination (computationally-efficient implementation)} \n\nSimilarly to what we did for BAI, the most computationally-efficient implementation consists in checking the elimination condition only for the alternative pieces (i.e., the half-spaces) of the empirical correct answer at each step. We modify the update rule of the statistically-efficient implementation as\n\begin{align*}\n\t\cS_t(j) := \cS_{t-1}(j) \cup \left\{ k\notin i^\star(\hat{\theta}_t) \cup \cS_{t-1}(j) \big| \inf_{\lambda \in \Lambda_{k,j}} L_t(\hat{\theta}_t,\lambda) \geq \beta_{t,\delta}\right\}\n\end{align*}\nif $j \in i^\star(\hat{\theta}_t) \text{ and } |\cS_{t-1}(j)| < K-m$, and $\cS_t(j) := \cS_{t-1}(j)$ otherwise.\nThat is, at each step we only check elimination for half-spaces associated with the top-$m$ arms of $\hat{\theta}_t$, excluding those that have already been eliminated and those that have already reached the threshold for being among the final answer. Note that this implementation performs $\sum_{j\in i^\star(\hat{\theta}_t),|\cS_{t-1}(j)| < K-m}(K - |i^\star(\hat{\theta}_t) \cup \cS_{t-1}(j)|) \leq m(K-m)$ minimizations over half-spaces at each step $t$. In contrast, the LLR stopping rule always performs $m(K-m)$ minimizations and is thus less efficient.
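The bookkeeping just described can be summarized by the following sketch (again illustrative): worse\_than[j] plays the role of $\cS_t(j)$, min\_glr\_halfspace(k, j) is an assumed routine returning $\inf_{\lambda \in \Lambda_{k,j}} L_t(\hat{\theta}_t,\lambda)$, and beta stands for $\beta_{t,\delta}$.
\begin{verbatim}
# Sketch of the S_t(j) bookkeeping for top-m identification (illustrative).
# worse_than: list of K sets, worse_than[j] ~ S_t(j).

def topm_full_step(worse_than, K, m, min_glr_halfspace, beta):
    """Statistically-efficient round: test every half-space Lambda_{k,j} not
    yet discarded, skipping arms j already certified to be in the top-m."""
    for j in range(K):
        if len(worse_than[j]) >= K - m:              # j already certified
            continue
        for k in range(K):
            if k != j and k not in worse_than[j]:
                if min_glr_halfspace(k, j) >= beta:
                    worse_than[j].add(k)             # k is now known worse than j

def topm_selective_step(worse_than, empirical_topm, K, m, min_glr_halfspace, beta):
    """Computationally-efficient round: only half-spaces attached to the
    empirical top-m answer are tested (at most m(K-m) tests)."""
    for j in empirical_topm:
        if len(worse_than[j]) >= K - m:
            continue
        for k in range(K):
            if k not in empirical_topm and k not in worse_than[j]:
                if min_glr_halfspace(k, j) >= beta:
                    worse_than[j].add(k)

def topm_stop(worse_than, K, m):
    """Stop once m arms each have K-m arms known to be worse; they are the answer."""
    certified = {j for j in range(K) if len(worse_than[j]) >= K - m}
    return len(certified) >= m, certified
\end{verbatim}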
\n\n\begin{proposition}\nAn algorithm using the computationally-efficient implementation above never discards pieces later than (and thus never stops later than) the selective elimination rule almost surely.\n\end{proposition}\n\begin{proof}\nThe proof is the same as for BAI: if an arm is discarded by the selective elimination rule, then it is also discarded from the sets above.\n\end{proof}\n\n\n\n\n\subsection{Thresholding Bandits}\n\nIn the thresholding bandit problem, the goal is to learn whether the mean of each arm is above or below some given threshold. As usual, without loss of generality, we shall take zero as our threshold, for which the problem reduces to learning the sign of the mean reward of each arm. Let $\sign(x) := \indi{x \geq 0}$. Then, the set of answers is $\cI = \{0,1\}^K$ with size $|\cI| = 2^K$. The correct answer of problem $\theta\in\cM$ is $i^\star(\theta) = (\sign(\theta^T\phi_k))_{k\in[K]}$. \n\n\paragraph{Decomposition into pieces}\n\nFor each $i\in \{0,1\}^K$ (represented as a $K$-dimensional binary vector), the set of alternatives $\Lambda(i)$ can be decomposed into pieces as\n\begin{align*}\n\Lambda(i) = \bigcup_{k\in[K]} \left\{\lambda\in\cM : \sign(\lambda^T\phi_k) \neq i^k \right\}.\n\end{align*}\nTherefore, we have $\cP(i) = [K]$ with $P_i = K$ and $\Lambda_p(i) = \{\lambda\in\cM : \sign(\lambda^T\phi_p) \neq i^p\}$. As for BAI, the computation of the closest alternative over such pieces can be performed efficiently.\n\n\paragraph{Closest alternatives}\n\nLet $\theta\in\cM$ and $N\in\mathbb{R}_{\geq 0}^K$. The computation of the closest alternatives over pieces $\Lambda_p(i)$ can be reduced to the following optimization problem. For any arm $k\in[K]$ and any $b\in\{0,1\}$, we need to find\n\begin{align*}\n\inf_{\lambda \in \cM : \sign(\lambda^T\phi_k) \neq b} \Vert {\theta} - \lambda \Vert_{V_N}^2.\n\end{align*}\nIt is easy to see that this is zero when $b\neq\sign(\theta^T\phi_k)$ (since $\theta$ itself is feasible). In case $b=\sign(\theta^T\phi_k)$, for unstructured problems ($\cM=\mathbb{R}^K$), the solution is to take $\lambda$ equal to $\theta$ at all components except the $k$-th one, where it is set to zero. This gives\n\begin{align*}\n\inf_{\lambda \in \mathbb{R}^K : \sign(\lambda^T\phi_k) \neq b} \Vert {\theta} - \lambda \Vert_{V_N}^2 = \begin{cases}\n\tN^k(\theta^T\phi_k)^2 &\text{if } \sign(\theta^T\phi_k) = b, \\\n\t0 &\text{otherwise}.\n\end{cases}\n\end{align*}\nIn the linear case, again under the assumption that $V_N$ is positive definite, we get\n\begin{align*}\n\inf_{\lambda \in \mathbb{R}^d : \sign(\lambda^T\phi_k) \neq b} \Vert {\theta} - \lambda \Vert_{V_N}^2 = \begin{cases}\n\t\frac{(\theta^T\phi_k)^2}{\| \phi_k \|_{V_N^{-1}}^2} &\text{if } \sign(\theta^T\phi_k) = b, \\\n\t0 &\text{otherwise}.\n\end{cases}\n\end{align*}
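These closed forms translate directly into code; the sketch below is illustrative only, with theta, phi\_k, N\_k and Vn\_inv (the inverse of $V_N$, assumed positive definite as above) being placeholder names.
\begin{verbatim}
# Direct transcription of the two closed forms above (illustrative).

def sign(x):
    return 1 if x >= 0 else 0                    # sign(x) := 1{x >= 0}

def closest_alt_unstructured(theta, phi_k, N_k, b):
    """Unstructured case: N^k (theta^T phi_k)^2 if sign(theta^T phi_k) = b, else 0."""
    val = float(theta @ phi_k)
    return N_k * val ** 2 if sign(val) == b else 0.0

def closest_alt_linear(theta, phi_k, Vn_inv, b):
    """Linear case: (theta^T phi_k)^2 / ||phi_k||^2_{V_N^{-1}} if signs match, else 0."""
    val = float(theta @ phi_k)
    return val ** 2 / float(phi_k @ Vn_inv @ phi_k) if sign(val) == b else 0.0
\end{verbatim}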
\n\n\paragraph{Efficient implementation}\n\nFor this problem, implementing the general elimination stopping rule would require storing and iterating over $2^K K$ pieces, which is clearly intractable. However, this problem introduces a high redundancy in the alternative pieces that we can exploit for an efficient implementation which takes only linear (in $K$) time and space. Differently from BAI and top-$m$ identification, the procedure highlighted below is \emph{exactly} an implementation of the ``theoretical'' elimination rules presented in the main paper, with the full and selective elimination rules reducing to the same rule.\n\nNote that, for any $p\in[K]$ and $i,j\in\{0,1\}^K$ such that $i^p = j^p$, we have $\Lambda_p(i) = \Lambda_p(j)$. That is, whenever we eliminate some piece $\Lambda_p(i)$ for $p\in[K]$ and $i\in\{0,1\}^K$, we actually eliminate all problems in $\cM$ whose sign of the $p$-th mean reward is different from $i^p$. In other words, we learn that the $p$-th position of the correct answer for $\theta$ is indeed $i^p$. Therefore, an efficient implementation is as follows: we keep a set $\cA_t$ of active arms (those for which we still have to learn the corresponding component in the correct answer). This set is initialized as $\cA_0 = [K]$ and updated as\n\begin{align*}\n\mathcal{A}_t := \mathcal{A}_{t-1} \setminus \left\{ j\in\cA_{t-1} \big| \max_{i\in\{0,1\}^K} \inf_{\lambda \in \Lambda_j(i)} L_t(\hat{\theta}_t,\lambda) \geq \beta_{t,\delta}\right\}.\n\end{align*}\nWhile the maximization over $2^K$ elements might appear intractable, the structure of the problem allows us to entirely avoid it. Note that, for fixed $j$, the sets $\Lambda_j(i)$ are fully specified by the $j$-th component of $i$. Moreover, the inf is zero whenever $i^j \neq \sign(\hat{\theta}_t^T\phi_j)$ since that would imply $\hat{\theta}_t \in \Lambda_j(i)$. Therefore, the elimination condition can be equivalently rewritten in the convenient form\n\begin{align*}\n\mathcal{A}_t := \mathcal{A}_{t-1} \setminus \left\{ j\in\cA_{t-1} \big| \inf_{\lambda \in \cM : \sign(\lambda^T\phi_j) \neq \sign(\hat{\theta}_t^T\phi_j) } L_t(\hat{\theta}_t,\lambda) \geq \beta_{t,\delta}\right\}.\n\end{align*}\nMoreover, whenever the elimination condition above triggers for some arm $j\in[K]$, we set a variable $S_j := \sign(\hat{\theta}_t^T\phi_j)$ with the correct sign for the $j$-th component. We stop whenever $\cA_t = \emptyset$ (i.e., when all signs have been learned) and return $\hat{i} := (S_1,S_2,\dots, S_K)$. Similarly to BAI, this requires performing only $|\cA_{t-1}|$ tests at each step $t$. On the other hand, the LLR stopping rule would perform $K$ tests at each step.
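The resulting loop can be sketched as follows (illustrative). Here min\_glr\_sign(j, b) is an assumed routine returning $\inf_{\lambda \in \cM : \sign(\lambda^T\phi_j) \neq b} L_t(\hat{\theta}_t,\lambda)$, which can be instantiated through the closed forms above up to how $L_t$ relates to the $V_N$-distance, and beta stands for $\beta_{t,\delta}$.
\begin{verbatim}
# Illustrative sketch of the thresholding implementation above. `active` plays
# the role of A_t and `signs` collects the learned components S_j.

def thresholding_step(active, signs, theta_hat, phis, min_glr_sign, beta):
    """One round: certify the sign of every active arm whose single remaining
    alternative piece exceeds the threshold; stop once all signs are known."""
    for j in list(active):
        b = 1 if float(theta_hat @ phis[j]) >= 0 else 0   # sign(theta_hat^T phi_j)
        if min_glr_sign(j, b) >= beta:
            signs[j] = b                                   # learned j-th component
            active.discard(j)
    return len(active) == 0        # when True, return the vector (S_1,...,S_K)
\end{verbatim}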
\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbmfm b/data_all_eng_slimpj/shuffled/split2/finalzzbmfm new file mode 100644 index 0000000000000000000000000000000000000000..c2c1128636b2945c13ba4f33a4797acba58645cd --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbmfm @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe standard stochastic multi-armed bandit framework captures the exploration-exploitation trade-off in sequential decision making problems under partial feedback constraints. The objective is to actively identify the best member, or members, of a community comprising stochastic sources, termed arms, while suffering the relative loss of non-ideal choices. Often, the arms yield rewards that belong to a known probability distribution with hidden parameters, and upon choosing an arm, the decision maker observes the reward from that arm directly. For ergodic reward distributions, \cite{gittins1979bandit} shows that the dynamic programming solution takes the form of an index policy\footnote{The original formulation of \cite{gittins1979bandit} was in the Bayesian framework.}, called dynamic allocation indices, which motivated the rich body of work that led to the arm-selection rules that eventually achieved the asymptotic regret lower bounds of \cite{lai1985asymptotically}. Alternatively, when the probability law that governs the rewards is defined conditionally with respect to a hidden state that represents the ``changing world'', the restless bandit framework of \cite{whittle1988restless} leads to arm-selection policies that are often computationally demanding. A key challenge that we address here is to develop index policies that identify the best source in an environment where the underlying state of the world changes erratically and hence, the observations are generated from different probability distributions at each point in time. \n\nSpecifically, we consider the case where each arm represents a stochastic expert providing opinions on changing tasks and thus, upon consulting an expert, the decision maker observes an opinion, rather than a direct reward. Stochastic experts are sources of subjective information that might fail but not purposefully deceive, as discussed in \cite{cesa2006prediction}, and often, expert suggestions, or opinions, are used with the aid of side-information: Feedback from past states of the world is used in boosting, \cite{schapire2012boosting}, while models of expert stochasticity, or direct information of expert reliability, or competence, are often used in the Bayesian framework, \cite{poor2013introduction}. In the absence of \textit{any} side information, the decision maker operates in a regime that can be termed \textit{unsupervised}, relying solely on the information in the opinions. Unsupervised opinion aggregation methods such as expectation maximization (EM) \cite{welinder2010online}, belief propagation (BP) \cite{karger2011iterative}, and spectral meta-learner (SML) \cite{parisi2014ranking} exhibit an interesting phenomenon: The reliability of experts is inferred as a side-product of the underlying optimization for estimating past states based on a block of opinions. On the other hand, joint sampling and consultation of experts without supervision, or blind exploration and exploitation (BEE) as termed here, requires instantaneously available statistics that would allow reliable inference of expert reliabilities at any and all states of the world. \n\nWe propose a method that relies solely on opinions to infer the competence of an expert by re-defining the notion of competence as the probability of agreeing with peers rather than being objectively correct. The proposed method not only allows empirical inference of competence without any supervision but also enables the use of index policies to efficiently address the exploration-exploitation dilemma when the underlying task changes at random. We show that standard, or supervised, exploration-exploitation (SEE) strategies extend their use to the BEE problem by consulting multiple experts for each task, equivalent to sampling multiple arms in the standard framework. 
Specifically, we consider the index rules that rely on posterior sampling, \cite{thompson1933likelihood}, upper-confidence bounds such as UCB1, \cite{auer2002finite}, and KL-UCB, \cite{garivier2011klucb}, minimum empirical Kullback-Leibler divergence, in particular, IMED, \cite{honda2015imed}, and the minimax rule MOSS of \cite{audibert2009minimax}. We investigate two operational regimes: First,\na fixed number of experts are consulted for each task and the opinion of the expert who is believed to be most reliable at that time is chosen. Second, upon consulting a group of experts, a decision is formed by aggregating their opinions without further supervision. We empirically compare the performance of different BEE index rules and demonstrate that exploration-exploitation-based choice of experts leads to results comparable to those of the original algorithms in the unsupervised framework. \n\nThe organization of this paper is as follows: We summarize the notation used in this paper, provide a background on stochastic experts, and define the BEE problem formally in Section \ref{sec:probdef}. We discuss the motivation, formal definition, and properties of our technique for unsupervised reliability inference in Section \ref{sec:pseudocomp}. Then, we discuss the fundamental properties of the BEE index rules in Section \ref{sec:bee}. The experiments for comparing different BEE algorithms as well as comparing them to their SEE counterparts are in Section \ref{sec:experiments}. The proofs are deferred to the appendix. \n\n\section{Notation, Background, and Problem Formulation}\n\label{sec:probdef}\nWe begin with a brief overview of the notation used in this paper. Then, we formally define the key concepts regarding stochastic experts. We conclude this section by defining the BEE problem. \n\subsection{Notation}\nA probability space is a triplet $\yay{\Omega, \mathscr{F}, \mathbb{P}}$, where $\Omega$ is the event space, $\mathscr{F}$ is the sigma-field defined on $\Omega$, and $\mathbb{P}$ is the probability measure. Random variables are denoted by capital letters with the corresponding samples being denoted by lowercase letters: $(X,x)$. A random process is an indexed collection of random variables: $\myset{Y(t): t\in \mathbb{T}}$, where $\mathbb{T}$ is the index set. Independent random variables $\yay{X_1, X_2}$ are denoted by $X_1 \perp X_2$ and conditionally independent random variables $\yay{X_1, X_2}$ conditioned on $Y$ are denoted by $X_1 - Y - X_2$. Expectation, conditional expectation, and conditional probability operators are denoted by $\expt{\cdot}$, $\condexpt{\cdot}{\cdot}$, and $\condprob{\cdot}{\cdot}$, respectively. The indicator function is denoted by $\ind{\cdot}$, where the domain is to be understood from context. We use $[T] \triangleq \myset{1,\cdots,T}$ to denote the positive natural numbers up to a finite limit $T<\infty$. All logarithms $\yay{\log}$ are taken with respect to the natural base. We use big $O$ notation when necessary. \n\subsection{Background}\nConceptually, stochastic experts are honest-but-fallible computational entities that do not deceive the decision maker deliberately. Here, we consider experts that do not collaborate while generating their opinions; \cite{cesa2006prediction} provides a detailed discussion. The goal of this paper is to propose techniques that identify the best stochastic experts, while dynamically consulting others on varying tasks. 
In that context, consulting an expert on a task is equivalent to pulling an arm in the standard multi-armed bandit framework. The true reward, however, remains hidden. \n\nFormally, let us begin with a random process $\myset{Y(t): t\in [T]}$ that represents binary states of the world, or tasks with binary labels: $Y(t) \in \myset{-1,1}$, $\forall t\in[T]$. We allow nature to generate tasks independently:\n\begin{equation}\n\t\label{independent_tasks}\n\tY\yay{t_1} \perp Y\yay{t_2},~\forall t_1\neq t_2 \in [T].\n\end{equation}\nFurthermore, let the random process $Y(t)$ that governs the evolution of tasks maximize the uncertainty: \n\begin{equation}\n\t\label{unif_distr}\n\t\prob{Y(t) =1} = \prob{Y(t)=-1} = \nicefrac{1}{2},~\forall t\in [T].\n\end{equation} \nIt is worth noting that any bias from non-uniform task generation can either be estimated directly from labeled data, or inferred without supervision via methods such as \cite{jaffe2016unsupervised}. Furthermore, while the independence assumption appears to be restrictive, it is common in stochastic multi-armed bandit formulations, \cite{bubeck2012regret}. \n\nFormal characterization of stochastic experts involves the reliability of their opinions and their statistical dependence on the others. The probability with which the opinion of an expert identifies the true state of the world correctly determines the reliability, or competence, of that expert: \n\begin{equation}\n\t\label{static_competence}\n\tp_i \triangleq \prob{X_i(t) =Y(t)},~\forall t\in[T].\n\end{equation} \nHere, the reliability of an expert does not depend on the underlying state of the world\footnote{A notable exception to this model is the ``two-coin'' model from \cite{dawid1979maximum}, where conditionally static competences are discussed.}. \nWe further allow that experts generate opinions $\myset{X_i(t): i\in [M]}$ independently from one another for every task $t\in[T]$. Formally: \n\begin{equation}\n\t\label{independent_generation}\n\tX_i(t) - Y(t) - X_j(t),~\forall i\neq j\in [M],~\forall t\in[T].\n\end{equation} \nConceptually, it makes sense that for meaningful inference, two different opinions on the same task should never be statistically independent. Furthermore, experts having conditionally independent opinions is equivalent to independence of rewards in the standard framework.\n\nGiven the probability law defined by eq.~\eqref{independent_tasks}-\eqref{independent_generation}, we can formally discuss why SEE algorithms require a toolset to address the impact of the underlying uncertainty. Observe that:\n\begin{equation}\n\t\label{motivation}\n\t\lim\limits_{t\rightarrow \infty} \frac{1}{t} \sum_{\tau =1}^{t} X_i(\tau) = 0,~\forall p_i\in\brac{0,1},\n\end{equation} \nwhich follows from the law of total probability, see appendix \ref{app:average_opinion}. Conceptually, eq.~\eqref{motivation} indicates that the average opinion does not reflect the competence of an expert, which is the true reward, posing a challenge for joint exploration and exploitation in the context of sequentially consulting stochastic experts, which we formally define next.
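As a quick illustration of eq.~\eqref{motivation} (ours, not part of the paper), the short simulation below draws uniform binary tasks and conditionally independent opinions with fixed, arbitrary competences: the time-averaged opinions concentrate around zero for every expert, while the agreement rate with the hidden tasks recovers $p_i$.
\begin{verbatim}
# Illustrative simulation of the task/opinion model above.
import numpy as np

rng = np.random.default_rng(0)
T = 100_000
competences = np.array([0.55, 0.7, 0.9])      # arbitrary p_i values

Y = rng.choice([-1, 1], size=T)               # uniform binary tasks Y(t)
agree = rng.random((len(competences), T)) < competences[:, None]
X = np.where(agree, Y, -Y)                    # X_i(t) = Y(t) with prob. p_i

print(X.mean(axis=1))         # ~0 for every expert, as in eq. (motivation)
print((X == Y).mean(axis=1))  # ~p_i: competences are visible only with labels
\end{verbatim}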
\n\n\subsection{Problem Definition}\nThe first objective of the BEE problem is to identify the best expert in a population while actively consulting members of that group on tasks that change from one consultation to another. The following notion of regret, written here in normalized form, formally captures this objective: \n\begin{equation}\n\t\label{real_regret}\n\tR_T = \frac{1}{T}\sum_{t=1}^{T} \ind{X^{*}(t)=Y(t)} - \frac{1}{T}\sum_{t=1}^{T} \ind{X_{I_t}(t)=Y(t)}.\n\end{equation}\nHere $X^{*}$ is the opinion of the most competent expert; $X^{*} = X_{i^{*}}$, where $i^{*} = \argmax_{i\in[M]} p_i$ and $I_t\in[M]$, $\forall t\in[T]$ is the expert chosen at time $t$. Observe that the regret, as defined in eq.~\eqref{real_regret}, depends on the sample path of opinions and hence, it is difficult to analyze rigorously. Nonetheless, it simplifies asymptotically:\n\begin{equation}\n\t\label{real_regret_asymptotic}\n\t\lim\limits_{T\rightarrow \infty}R_T = \max_{i\in[M]} p_i - \lim\limits_{T\rightarrow \infty}\frac{1}{T}\sum_{t=1}^{T} \ind{X_{I_t}(t)=Y(t)}.\n\end{equation}\nThe first term is a direct consequence of the ergodicity of the process $\ind{X^{*}(t)=Y(t)}$, which follows directly from eq.~\eqref{independent_tasks}-\eqref{static_competence}. Conceptually, this amounts to the fact that one can measure the true reliability of an expert given sufficiently many labeled tasks, as long as the reliability of the expert does not change across tasks, as is the case here. \n\nMotivated by similar asymptotic behaviors, a notion of \textit{pseudo regret} often arises in the context of stochastic bandits, see, for instance, \cite{bubeck2012regret}. In the context of stochastic experts, the pseudo regret is defined as follows: \n\begin{equation}\n\t\label{pseudo_regret}\n\t\tilde{R}_T = \max_{i\in[M]} p_i - \frac{1}{T}\expt{\sum_{t=1}^{T} \ind{X_{I_t}(t)=Y(t)}}.\n\end{equation}\nAnother notion of pseudo regret provides a reliable metric for the performance of BEE rules that aggregate opinions after consulting experts. Let a $m