\\section{Introduction}\nShift symmetries play a powerful role in diverse areas of physics: they provide a useful classification of low-energy effective theories\nand appear generically in any theory in which an internal or spacetime symmetry is spontaneously broken. In theories with spontaneously broken symmetries, masslessness of the Goldstone bosons is protected by symmetries that act like shift symmetries to leading order in powers of the fields. The avatars of these symmetries in scattering amplitudes are enhanced soft limits, the prototypical example of which is the Adler zero \\cite{Adler:1964um,Adler:1965ga}. Theories with shift symmetries are also known to enjoy various non-renormalization theorems \\cite{Luty:2003vm,Hinterbichler:2010xn,deRham:2014wfa,Goon:2016ihr}.\n\nThe simplest example of a shift-symmetric theory is a free massless scalar field in flat space. In fact, this theory has an infinite number of non-linearly realized symmetries which take the form of an infinite tower of shifts,\\footnote{The free scalar also has an infinite number of unbroken symmetries, which form the higher-spin algebra underlying Vasiliev's higher-spin theory \\cite{Eastwood:2002su,Vasiliev:2003ev,Bekaert:2005vh,Joung:2014qya}. The shift symmetries studied here are distinct from these.}\n\\begin{equation}\n\\delta\\phi=c+c_\\mu x^\\mu+c_{\\mu_1\\mu_2}x^{\\mu_1} x^{\\mu_2}+c_{\\mu_1\\mu_2\\mu_3} x^{\\mu_1}x^{\\mu_2}x^{\\mu_3} +\\cdots. \\label{flatshiftgen}\n\\end{equation}\nHere the $c_{\\mu_1\\cdots\\mu_k}$ are rank-$k$ symmetric and traceless constant tensors, and $x^\\mu$ are the Cartesian spacetime coordinates. \nWe call $k$ the {\\it level} of the shift symmetry.
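Note that each term in \\eqref{flatshiftgen} is a harmonic polynomial: the tracelessness of the constant tensors implies\n\\begin{equation}\n\\partial^2\\left(c_{\\mu_1\\cdots\\mu_k}x^{\\mu_1}\\cdots x^{\\mu_k}\\right)=k(k-1)\\, c^{\\mu}{}_{\\mu\\mu_3\\cdots\\mu_k}x^{\\mu_3}\\cdots x^{\\mu_k}=0 \\, ,\n\\end{equation}\nso each shift maps solutions of the free equation of motion to solutions and is therefore a symmetry of the free action.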
Interactions generically break the shift symmetries~\\eqref{flatshiftgen}; however, certain classes of interactions can preserve subsets of these symmetries. The symmetries preserved by interaction terms therefore provide a useful organizing principle for classifying derivatively-coupled effective field theories (EFTs) in flat space~\\cite{Hinterbichler:2014cwa,Griffin:2014bta,Cheung:2016drk,Padilla:2016mno,Bogers:2018zeg,Pajer:2018egx}.\n\nThe $k=0$ shift symmetry is the standard shift by a constant, $c$. Any interacting theory involving at least one derivative per field will preserve this symmetry, including ghost-free theories such as $P(X)$ theories. If we allow for multiple interacting fields, there exist interesting field-dependent deformations of the constant shift symmetry such that the symmetry generators form a non-abelian algebra. Field theories invariant under these deformed symmetries are of the non-linear $\\sigma$-model type. \n\nThe $k=1$ shift symmetry underlies the galileon~\\cite{Luty:2003vm,Nicolis:2008in}. 
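Explicitly, the galileon is invariant under shifts linear in the coordinates,\n\\begin{equation}\n\\delta\\phi=c+b_\\mu x^\\mu \\, ,\n\\end{equation}\nwith $c$ and $b_\\mu$ constant.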
Any interaction with at least two derivatives per field preserves this symmetry, but there is also a finite set of interactions, the galileon interactions, that have fewer than two derivatives per field and are ghost-free,\n\\begin{equation} \n{\\cal L}_n\\sim \\phi S_{n-1}(\\partial\\partial\\phi) \\,,\\qquad\\quad n=1,2,\\cdots,D+1 \\, ,\n\\end{equation}\nwhere $S_n$ are the symmetric polynomials defined in Appendix \\ref{xtensorappendix}.\nThese can be understood as Wess--Zumino terms for the galileon symmetry \\cite{Goon:2012dy} and, from the point of view of the $S$-matrix, as theories with enhanced soft limits~\\cite{Cheung:2014dqa,Cheung:2016drk,Padilla:2016mno}.\\footnote{Enhanced in this context means that scattering amplitudes vanish more quickly in the soft limit than would naively be expected from the number of derivatives per field.} In this case, there are again interesting non-abelian deformations of the symmetries involving field-dependent terms, but here the deformation can be achieved with a single scalar field. For example, both the Dirac--Born--Infeld (DBI) action and the action of the conformal dilaton possess non-abelian symmetries that start linear in the spacetime coordinates.\n\nFor $k=2$, any interaction with at least three derivatives per field preserves the symmetry, but the known interactions invariant under the abelian symmetry all result in higher-derivative equations of motion. 
However, there is a unique theory with second-order equations of motion that is invariant under the following deformed, non-abelian version of the symmetry:\n\\begin{equation}\n\\delta\\phi = c_{\\mu\\nu}x^{\\mu} x^{\\nu}+\\frac{1}{\\Lambda^6}c_{\\mu\\nu}\n\\partial^{\\mu} \\phi \\partial^{\\nu} \\phi, \\label{eq:sgsymmytry}\n\\end{equation} \nwhere $\\Lambda$ is the strong coupling scale of the theory.\nThis theory, known as the special galileon~\\cite{Cheung:2014dqa,Hinterbichler:2015pqa,Cheung:2016drk,Novotny:2016jkh}, has fewer than two derivatives per field and is ghost-free. It is a particular galileon theory made from all of the interactions of even order in the fields, with fixed relative coefficients. This theory has no free parameters, other than $\\Lambda$, and its $S$-matrix has a soft limit that is even more enhanced than the regular galileons as a result of the symmetry \\eqref{eq:sgsymmytry}. In fact, the theory is completely fixed by requiring this enhanced soft limit~\\cite{Cheung:2015ota,Bogers:2018zeg}.\n \nFor $k\\geq 3$, there is no known way to have ghost-free interactions \\cite{Hinterbichler:2014cwa,Griffin:2014bta}, and there are no on-shell constructible $S$-matrices with corresponding enhanced soft limits~\\cite{Cheung:2016drk}.\n\nIn this work, we extend this classification of shift-symmetric EFTs to maximally symmetric curved spacetimes and to particles with nonzero spin. The de Sitter (dS) and anti-de Sitter (AdS) space analogues of the flat space shifts, such as \\eqref{flatshiftgen}, are transformations that shift (A)dS fields by polynomials of ambient space coordinates.\nIn (A)dS space, in contrast to flat space, free fields with such shift symmetries have nonzero masses. For each non-negative integer $k$, there is a particular mass for which a given (A)dS field has the analogue of the level-$k$ flat space shift symmetry. 
\nFor example, scalar fields with masses given by\n\\begin{equation}\nm_k^2 = -k(k+D-1)H^2\n\\label{eq:scalarmass}\n\\end{equation}\nhave the (A)dS analogue of the flat space shift symmetry of level $k$. \n\nThe discrete nature of the mass values \\eqref{eq:scalarmass} is reminiscent of the phenomenon of partial masslessness in (A)dS space \\cite{Deser:1983tm,Deser:1983mm,Higuchi:1986py,Brink:2000ag,Deser:2001pe,Deser:2001us,Deser:2001wx,Deser:2001xr,Zinoviev:2001dt,Skvortsov:2006at,Boulanger:2008up,Skvortsov:2009zu,Alkalaev:2011zv,Joung:2012rv,Basile:2016aen}, whereby a massive spin-$s$ field develops a gauge invariance for particular discrete values of its mass. These masses are labelled by an integer, $t$, called the partially massless (PM) depth, where $t \\in \\{0, \\, \\ldots, \\, s-1\\}$ and $t=s-1$ corresponds to a massless field. In fact, there is a relationship between these two sets of discrete masses: if we start with a generic massive spin-$s$ field and send its mass to the $t=0$ PM value, then this massive field will break up into a PM field and a scalar with a shift symmetry of level $k=s-1$. The scalar shift symmetries then correspond to the PM reducibility parameters, {\\it i.e.}, the global symmetry part of the PM gauge symmetry.\n\nScalar fields with the mass values~\\eqref{eq:scalarmass} appear in a variety of contexts. In the AdS case, these scalars have positive masses and are unitary. In dS space, although they are tachyonic, scalar fields with these particular mass values also belong to unitary representations~\\cite{vilenkin1978special,Gazeau:2010mn}.\\footnote{We give a physical argument for their unitarity in dS space in Section~\\ref{sec:unitarity}. In terms of representation theory, these scalars belong to the so-called exceptional series~\\cite{Basile:2016aen}. 
In CFT language, the conformal primary associated to one of these bulk tachyonic scalars should correspond to the boundary value of the shift-invariant ``curvature\" obtained by taking $k+1$ derivatives of the bulk field. It would be interesting to further elucidate the representation theory of scalar fields belonging to the family~\\eqref{eq:scalarmass}.}\nSome aspects of their quantization are discussed in~\\cite{Folacci:1992xc, Bros:2010wa, Epstein:2014jaa}, while other features are noted in~\\cite{Shaynkman:2000ts, Chekmenev:2015kzf}. The $k=0$ case corresponds to the massless minimally coupled scalar and has been well studied (see, {\\it e.g.},~\\cite{Allen:1985ux,Allen:1987tz}), and the $k=1$ case has been studied as a toy model for the conformal mode of the graviton~\\cite{Antoniadis:1991fa,Folacci:1996dv}. These scalar tachyons also appear in the context of CFT entanglement entropy~\\cite{deBoer:2015kda, deBoer:2016pqk}, and scalars with negative integer conformal weights play an important role in the constructions of Ref.~\\cite{Arkani-Hamed:2018kmz}. As mentioned in~\\cite{Baumann:2017jvh} and discussed in Appendix~\\ref{app:GJMSscalar}, these scalars are also connected to higher-order conformal scalars in even dimensions.\n\nIn fact, the shift-symmetric scalar fields fit into a larger picture: for particular discrete masses, free fields with nonzero spin in (A)dS space are also invariant under shifts by certain polynomials of ambient space coordinates. The presence of these symmetries can similarly be traced to (higher-depth) PM fields.\nAs we explain, these fields are the longitudinal modes of massive fields as their masses approach the $t>0$ PM values, and the shift symmetries descend from PM reducibility parameters. \nWe explain how this works for fields of all integer spins.\n\nAfter classifying these shift symmetries for free bosonic fields in (A)dS space, we consider\ninteractions for (A)dS scalars that preserve the shift symmetries. 
The goal is to classify interesting (A)dS EFTs using shift symmetries as an organizing principle, in the same spirit as in flat space. There are interactions that are invariant under the undeformed symmetries of the free theory for any value of $k$, but only for $k=0$ and $k=1$ are there ghost-free interactions. The $k=0$ theories are $P(X)$ theories in (A)dS space, while the $k=1$ theory is the (A)dS galileon~\\cite{Goon:2011uw,Goon:2011qf,Burrage:2011bt}.\n\nFor $k=1, \\,2$, the algebra of symmetries underlying these scalar theories can be deformed in a unique way. The $k=1$ deformed algebra is a real form of $\\frak{so}(D+2)$.\\footnote{Here and throughout, we write the complexified algebra without specifying the real form, since the particular real form depends on the spacetime signature, whether we are in dS or AdS space, and on the sign of certain parameters in the algebras, for example $\\alpha$ in~\\eqref{k1commutatoralge} and~\\eqref{k2commutatoralge}. } Interacting theories are known from brane~\\cite{Goon:2011uw,Goon:2011qf,Burrage:2011bt} and coset~\\cite{Clark:2005ht,Hinterbichler:2012mv} constructions that realize symmetry breaking from $\\frak{so}(D+2)$ down to the $\\frak{so}(D+1)$ isometry algebra of (A)dS$_D$, and we describe the invariant interactions for a choice of field variables where the symmetry transformation takes a particular form in ambient space. For $k=2$, the deformed algebra is $\\frak{sl}(D+1)$ and we find in all dimensions a nonlinear theory that realizes the breaking of this symmetry to the (A)dS isometry algebra. This theory is the analogue of the special galileon in (A)dS space. It involves a rather intricate structure of interactions that are fixed by the (A)dS version of the special galileon symmetry. 
For example, in $D=4$ it takes the form\n\\begin{align}\n{1\\over \\sqrt{-g}}{\\cal L}_{\\rm SG}=& -\\frac{\\Lambda ^6 }{ H^2}\\frac{ (y^2-8 y+8) \\left(8 X^2-3 y^{3\/2} \\sqrt{X+y}+12 X y-3 X \\sqrt{y} \\sqrt{X+y}+3 y^2\\right)}{15y^3\n (X+y)^{3\/2}} \\nonumber\\\\ \n& -\\frac{\\Lambda ^6 }{ H^2} \\left(\\frac{ 5 (y-4) y+16}{10y^{5\/2}}-{1\\over 10}\\right) \n+\\frac{2 (y-4) \\phi }{15 X y^{5\/2}} \\left(\\frac{\\sqrt{y} (2 X+3 y)}{(X+y)^{3\/2}}-3\\right){H^2\\over \\Lambda ^6}\\partial^\\mu\\phi\\partial^\\nu\\phi X^{(1)}_{\\mu\\nu}(\\Pi) \\nonumber\\\\\n&+\\frac{y-2}{30 X^2 y^2} \\left(2 \\sqrt{y}-\\frac{2 X^2+3 X y+2 y^2}{(X+y)^{3\/2}}\\right){1\\over \\Lambda ^6}\\partial^\\mu\\phi\\partial^\\nu\\phi X^{(2)}_{\\mu\\nu}(\\Pi) \\nonumber\\\\\n&+\\frac{\\phi }{45 X^2 y^{3\/2}} \\left(\\frac{\\sqrt{y} (3 X+2 y)}{(X+y)^{3\/2}}-2\\right){H^2\\over \\Lambda^{12}}\\partial^\\mu\\phi\\partial^\\nu\\phi X^{(3)}_{\\mu\\nu}(\\Pi),\\nonumber\n\\end{align}\nwhere\n\\begin{equation} y\\equiv 1+4{H^4\\over \\Lambda^6}\\phi^2,\\quad X\\equiv {H^2\\over \\Lambda^6}(\\partial\\phi)^2, \\quad \\Pi_{\\mu \\nu} \\equiv \\nabla_{\\mu } \\nabla_{\\nu} \\phi \\, ,\\end{equation}\nand the tensors $X^{(j)}_{\\mu\\nu}$ are defined in Appendix~\\ref{xtensorappendix}.\nThe highest-derivative terms are those of the flat space special galileon, but there are highly nontrivial lower-derivative terms suppressed by the (A)dS radius, including a potential, that are fixed by the symmetry. We expect that this theory should have similarly compelling properties to the special galileon in flat space.\n\nWe begin in Section~\\ref{linesymmsectn} by describing certain shift symmetries enjoyed by free bosonic fields with particular discrete masses in maximally symmetric spacetimes. In Section~\\ref{PMsection}, we explain the relation between these fields and PM fields. We consider interactions in Section~\\ref{sec:interactingscalars}, focusing on the case of a single scalar field. 
We first classify possible deformations of the symmetry algebras of the free theories and then find interactions invariant under either the undeformed or deformed symmetries. We conclude in Section \\ref{sec:conclusions} and discuss possible generalizations of our results, including interactions for massive higher-spin particles. We include some technical results and reviews of useful material in the appendices:\nthe symmetric polynomials and the tensors $X_{\\mu \\nu}^{(n)}$ are defined in Appendix~\\ref{xtensorappendix}, the ambient space formalism and some useful embedding coordinates are reviewed in Appendix~\\ref{embbedappendix}, we describe the construction of scalar interactions using nonlinear realization techniques in Appendix~\\ref{cosetappendix}, the relation between the shift-symmetric scalars and conformal powers of the Laplacian is discussed in Appendix~\\ref{app:GJMSscalar}, and we discuss the PM decoupling limits of a massive spin-3 particle in Appendix \\ref{spin3Appendix}.\n\n \n\n\\vspace{.15cm}\n\\noindent\n{\\bf Conventions:}\nWe denote the spacetime dimension by $D$ and define $d \\equiv D-1$. We use the mostly plus metric signature convention. We denote the dS space Hubble scale as $H$, so that the Ricci scalar is $R=D(D-1)H^2>0$. We denote the AdS space radius by $L$, so that $R=-{D(D-1)\/ L^2}<0$. We can go between the two cases with the relation $H^2\\leftrightarrow {-1\/L^2}$. We sometimes use ${\\cal R}$ for the radius to cover both cases at once, so that ${\\cal R}=1\/H$ for dS space and ${\\cal R}=L$ for AdS space. Though we phrase things in Lorentzian signature, the theories we consider can be analytically continued to live on Euclidean spheres or hyperbolic spaces straightforwardly. 
Tensors are symmetrized and antisymmetrized with unit weight, {\\it e.g.}, $T_{(\\mu\\nu)}=\\frac{1}{2} \\left(T_{\\mu\\nu}+T_{\\nu\\mu}\\right)$, $T_{[\\mu\\nu]}=\\frac{1}{2} \\left(T_{\\mu\\nu}-T_{\\nu\\mu}\\right)$, and $(\\cdots)_T$ indicates that we take the symmetric fully traceless part of the enclosed indices, {\\it e.g.}, $T_{(\\mu\\nu)_T}=\\frac{1}{2} \\left(T_{\\mu\\nu}+T_{\\nu\\mu}\\right)-{1\\over D}g_{\\mu\\nu}T^\\rho{}_{\\rho}$. We denote Young projectors by ${\\cal Y}_{r_1,r_2,r_3,\\ldots}$, where the $r_i$ label the lengths of the rows of the corresponding Young diagrams. Our convention for the PM depth is that the depth, $t$, labels the number of indices of the PM gauge parameter.\n\n\n\n\n\n\\section{Shift symmetries of free fields in (A)dS space\\label{linesymmsectn}}\n\n\nWe begin by describing how free bosonic fields in (A)dS space with particular discrete masses enjoy extended shift symmetries that are polynomials in the ambient space coordinates. \n\n\\subsection{Scalar fields}\n\\label{sec:linearscalars}\nFirst we consider scalar fields.\nWhen we extend the free massless scalar action to (A)dS space, only the constant shift symmetry remains unbroken.\nHowever, for each $k$ there is a particular mass for which the massive (A)dS action is invariant under the analogue of the level-$k$ flat space shift symmetry.\n\nConsider the action\n\\begin{equation} \nS=\\int {\\rm d}^D x \\, \\sqrt{-g}\\left(-\\frac{1}{2}(\\partial\\phi)^2-\\frac{m_k^2}{2} \\phi^2\\right) ,\\label{massivegenadse}\n\\end{equation}\nwhere\n\\begin{equation} \nm_k^2=-k(k+D-1)H^2\\, .\\label{scalarmassvaluese}\n\\end{equation}\nThis action has a shift symmetry given by\n\\begin{equation}\n\\delta \\phi = K^{(0)},\\label{adsksymmembe}\n\\end{equation}\nwhere $K^{(0)}$ is the restriction to (A)dS space of a degree-$k$ homogeneous polynomial of ambient space coordinates,\n\\begin{equation}\nK^{(0)} = S_{A_1\\cdots A_k}X^{A_1}\\cdots X^{A_k}\\big|_{\\rho = \\mathcal{R}} \\, 
.\n\\end{equation}\nHere $X^{A}(x)$ is an embedding of (A)dS$_D$ into a $(D+1)$-dimensional flat ambient space, restriction to the (A)dS hyperboloid is denoted by $|_{\\rho = \\mathcal{R}}$, and $S_{A_1\\cdots A_k}$ is a constant ambient space tensor that is symmetric and traceless (for more details on the ambient space formalism, see Appendix \\ref{embbedappendix}). \nAssociated with $\\phi$ is an ambient space scalar $\\Phi$ that has homogeneity degree $k$ and equals $\\phi$ on the (A)dS surface,\n\\begin{equation}\n\\left(X^A\\partial_A - k\\right)\\Phi = 0\\,,~~~~~~~~~~~~~~~~~\\phi(x) = \\Phi(\\rho, x) \\big|_{\\rho = \\mathcal{R}} \\,,\n\\end{equation} \nso the shift symmetry \\eqref{adsksymmembe} corresponds to the ambient space transformation\n\\begin{equation} \\label{eq:scalarambientshift}\n\\delta \\Phi = S_{A_1\\cdots A_k}X^{A_1}\\cdots X^{A_k}.\n\\end{equation}\nThe number of independent components of $S_{A_1\\cdots A_k}$ is \n\\begin{equation}\nN_{\\rm symm.}=\\left( \\begin{matrix} D+k \\\\ k \\end{matrix} \\right)-\\left( \\begin{matrix} D+k-2\\\\ k-2 \\end{matrix} \\right) \\, ,\n\\label{eq:numcomponsymmtracetensor}\n\\end{equation}\nso there are this many independent symmetries for a given $k$.\n\nUnlike the massless scalar in flat space, which has a symmetry for each value of $k$, the (A)dS action \\eqref{massivegenadse} has the symmetry only for a single value of $k$. Placing the scalar theory on curved spacetime therefore splits the infinite number of shift symmetries of the flat space theory.\nConversely, in the flat limit the (A)dS shift symmetry of level $k$ becomes the flat space symmetries with levels $\\leq k$.\nFor example, the $k=1$ (A)dS symmetry has $D+1$ components, $S_A$, which in the flat limit become the $D$ galileon symmetries, $c_\\mu$, and the shift symmetry, $c$. The $k=2$ (A)dS symmetry is described by the $D(D+3)\/2$ components of the traceless symmetric tensor, $S_{AB}$. 
In the flat limit, these become the $(D+2)(D-1)\/2$ linear special galileon symmetries, $c_{\\mu\\nu}$, the $D$ galileon symmetries, $c_\\mu$, and the shift symmetry, $c$. In the general case, the splitting is described by the following branching rule:\n\\begin{equation}\n\\gyoung(_4k)^{\\,T}\\xrightarrow[{\\scriptscriptstyle D+1\\rightarrow D}]{}\n~\\gyoung(_4k)^{\\,T}\\,\n\\raisebox{0.235ex}{\\raisebox{-0.3ex}{\\scalebox{1.4}{$\\oplus$}}}~\\,\n\\gyoung(_3{k-1})^{\\,T}\\,\n\\raisebox{0.235ex}{\\raisebox{-0.3ex}{\\scalebox{1.4}{$\\oplus$}}}~\\,\n\\cdots\\,\n\\raisebox{0.235ex}{\\raisebox{-0.3ex}{\\scalebox{1.4}{$\\oplus$}}}~\\,\n\\gyoung(;)~\\,\n\\raisebox{0.235ex}{\\raisebox{-0.3ex}{\\scalebox{1.4}{$\\oplus$}}}~\\,\n\\bullet\\,.\n\\end{equation}\n\nIt is straightforward to verify that the free scalar action \\eqref{massivegenadse} has the symmetry \\eqref{adsksymmembe}. Due to the tracelessness of the ambient space tensor $S_{A_1\\cdots A_k}$, the ambient space transformation \\eqref{eq:scalarambientshift} satisfies\n\\begin{equation}\n\\partial_A \\partial^A \\delta\\Phi=0.\n\\end{equation}\nPulled back to the (A)dS space, this becomes\n\\begin{equation}\n\\left( \\square +k(k+D-1)H^2\\right)\\delta\\phi=0,\n\\end{equation} \nwhich is precisely the Klein--Gordon equation derived from the action \\eqref{massivegenadse}. Since the free action is quadratic in the field, any solution to the equation of motion is a symmetry of the action.\\footnote{The symmetry transformations in \\eqref{adsksymmembe} are precisely the spherical harmonics on (A)dS space, which are eigenfunctions of the (A)dS Laplacian with eigenvalues equal to minus the mass values \\eqref{scalarmassvaluese}. We could also consider shifts by other solutions to the free equations of motion. However, as we will see, the spherical harmonics are distinguished by their connection to PM reducibility parameters and lead to interesting invariant interactions. 
\n}\n\n\nAs discussed in the introduction, the scalar fields described by \\eqref{massivegenadse} appear in myriad theoretical contexts. Despite this, they are tachyonic in dS space and thus one might doubt whether they can be physical there. However, the naively worrisome growing modes of these fields can be removed precisely by the shift symmetries discussed in this section. To see this, recall that in the inflationary slicing of dS space the zero mode of a scalar field of mass $m$ evolves at late times as\n\\begin{equation}\n\\phi_{\\vec q=0}(\\eta) \\sim \\alpha\\eta^{\\Delta_{+}}+\\beta\\eta^{\\Delta_{-}},\n\\label{eq:dSgrowingmode}\n\\end{equation}\nwhere $\\Delta_{\\pm}$ are the two fall-offs defined in terms of the mass through the relation\n\\begin{equation}\n\\Delta_{\\pm} = \\frac{d}{2}\\pm \\sqrt{\\frac{d^2}{4}-\\frac{m^2}{H^2}}.\n\\end{equation}\nFor the mass values~\\eqref{scalarmassvaluese}, the two fall-offs correspond to\n\\begin{equation}\n\\Delta_- = -k,~~~~~~~~~~~~~~\\Delta_+ = d+k.\n\\end{equation}\nThe fact that $\\Delta_-$ is negative is a symptom of the tachyonic mass---at late times ($\\eta\\to 0$) the field is diverging. However, this dangerous zero mode can be removed by a shift transformation. To see this, we note that the inflationary slicing is given by the embedding~\\eqref{eq:dsflatembedding}, so that if we perform a shift~\\eqref{adsksymmembe} along the lightcone coordinate~\\eqref{eq:xpluslightcone}, we find\n\\begin{equation}\n\\delta\\phi = S_{+\\cdots +}X^{+}\\cdots X^{+}\\big|_{\\rho = \\mathcal{R}} \\propto \\eta^{-k},\n\\end{equation}\nwhich has precisely the same time dependence as the growing mode~\\eqref{eq:dSgrowingmode}. This is not completely satisfactory, because modes with arbitrarily small wave numbers will also seem unstable if we wait long enough. However, this is an artifact of the inflationary slicing. 
Going to global coordinates, there are only a finite number of modes that are sick: modes with spatial angular momentum $L\\leq k$ are tachyonic.\\footnote{More properly, they are zero-norm with respect to the Klein--Gordon inner product.} However, these are precisely the modes which can be generated or removed by the symmetries~\\eqref{adsksymmembe}, so that these scalar fields appear to be healthy.\n\nIn the AdS context, these scalars are not tachyonic, but rather correspond to conventional scalar representations. They lie well above the Breitenlohner--Freedman bound~\\cite{Breitenlohner:1982jf} and are therefore unitary. These particular mass values and their associated shift symmetries have not, to our knowledge, been much studied in AdS space, though we expect that they should play some interesting role in the AdS\/CFT correspondence.\n\n\\subsection{Symmetric tensor fields}\n\nShift symmetries for fields with spin $s\\geq 1$ have not been extensively studied, likely because Goldstone bosons for broken internal symmetries are always scalars.\\footnote{Additionally, massless Goldstone fields with nonzero spin coming from broken spacetime symmetries are subject to constraints, see {\\it e.g.},~\\cite{Klein:2018ylk}.} However, in certain cases massive Goldstone-like fields with spin arise in (A)dS space \\cite{deRham:2018svs} (soft limits for spin-1 theories in flat space have been studied in \\cite{Cheung:2018oki}).
We now discuss how the scalar shift symmetries generalize to symmetric tensor fields in (A)dS space.\n\nA free spin-$s$ bosonic field of mass $m$ on ${\\rm (A)dS}_D$ can be described by a completely symmetric tensor, $\\phi_{\\mu_1\\cdots\\mu_s}$, that obeys the following on-shell equations of motion:\n\\begin{equation}\n\\left(\\square-H^2\\left[s+D-2-(s-1)(s+D-4)\\right]-m^2\\right)\\phi_{\\mu_1\\cdots\\mu_s} = 0\\,,\\quad \\nabla^\\nu\\phi_{\\nu\\mu_2\\cdots\\mu_s} = 0\\,,\\quad\\phi^\\nu_{\\ \\nu\\mu_3\\cdots\\mu_{s}} = 0 \\,.\n\\label{massivefields}\n\\end{equation}\nThis theory develops a shift symmetry at the following values of the mass:\n\\begin{equation}\nm^2_{s,k}= -(k+2) (k+D-3+2 s)H^2 \\,, ~~\\qquad k=0,1,2,\\ldots,\\quad (s\\geq 1)\\,. \\label{spinsmassvalueseqe}\n\\end{equation}\nThe form of the level-$k$ shift symmetry acting on the spin-$s$ field is \n\\begin{equation} \\label{eq:tensorshift}\n\\delta \\phi_{\\mu_1\\cdots\\mu_s}=K_{\\mu_1\\cdots \\mu_s}^{(k)} \\, ,\n\\end{equation}\nwhere the tensor on the right-hand side is given by\n\\begin{equation} \nK_{\\mu_1\\cdots \\mu_s}^{(k)} = S_{A_1\\cdots A_{s+k},B_1\\cdots B_s}X^{A_1}\\cdots X^{A_{s+k}} {\\partial X^{B_1}\\over \\partial x^{\\mu_1}}\\cdots {\\partial X^{B_s}\\over \\partial x^{\\mu_s}}\\bigg|_{\\rho = \\mathcal{R}} \\,.\\label{Ktensordef1e}\n\\end{equation}\nThe ambient space tensor $S_{A_1\\cdots A_{s+k},B_1\\cdots B_s}$ is a fully traceless, constant tensor with the symmetries of the following Young tableau:\n\\begin{equation} \nS_{A_1\\cdots A_{s+k},B_1\\cdots B_s}\\in ~\\raisebox{1.15ex}{\\gyoung(_5{s+k},_3s)}^{\\,T} \\, .\n\\label{repKexpre}\n\\end{equation}\nAssociated with $\\phi_{\\mu_1\\cdots\\mu_s}$ is an ambient space tensor, $\\Phi_{A_1\\cdots A_s}$, with homogeneity degree $s+k$, that projects to $\\phi_{\\mu_1\\cdots\\mu_s}$ on the (A)dS surface,\n\\begin{equation}\n\\phi_{\\mu_1\\cdots\\mu_s}(x) = \\Phi_{A_1\\cdots A_s}(\\rho, x) {\\partial X^{A_1}\\over \\partial 
x^{\\mu_1}}\\cdots {\\partial X^{A_s}\\over \\partial x^{\\mu_s}}\\bigg|_{\\rho = \\mathcal{R}} \\,,\n\\end{equation}\nso the shift symmetry \\eqref{eq:tensorshift} corresponds to the ambient space transformation\n\\begin{equation}\n\\delta \\Phi_{B_1\\cdots B_s} = S_{A_1\\cdots A_{s+k},B_1\\cdots B_s}X^{A_1}\\cdots X^{A_{s+k}}.\n\\end{equation}\n\nThe tensors \\eqref{Ktensordef1e} are the spin-$s$ transverse-traceless spherical harmonics on (A)dS space. They also correspond to generalized traceless Killing tensors, which are the reducibility parameters for PM gauge transformations, as discussed in Section~\\ref{sec:killingtensors}. They solve the free equations of motion \\eqref{massivefields} with masses \\eqref{spinsmassvalueseqe}, so shifting by these preserves the massive equations of motion and the free action. \nThe massive spin-$s$ action for $s\\geq 3$ also necessarily contains auxiliary fields that vanish on shell, but these do not transform under the shift symmetry.\n\n\n\\subsection{Examples}\nTo be concrete, let us write down a few explicit free actions and their corresponding shift symmetries. We focus on fields of spin $s=0,1,2$, with $k=0$. These examples illustrate how the $k=0$ shifts correspond to traceless Killing tensors in (A)dS space, as discussed further in Section~\\ref{sec:killingtensors}. 
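For $k=0$, the scalar mass \\eqref{scalarmassvaluese} vanishes, while the spin-$s$ mass formula \\eqref{spinsmassvalueseqe} reduces to $m^2_{s,0}=-2(D+2s-3)H^2$, giving\n\\begin{equation}\nm^2_{1,0}=-2(D-1)H^2 \\, ,\\qquad m^2_{2,0}=-2(D+1)H^2 \\, ,\n\\end{equation}\nwhich are the mass values appearing in the spin-1 and spin-2 actions below.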
\n\nFor $s=0$ and $k=0$, the free action is just that of a massless scalar field,\n\\begin{equation} \nS=-\\frac{1}{2}\\int {\\rm d}^D x \\, \\sqrt{-g}(\\partial \\phi)^2\\, .\n\\end{equation}\nThe shift symmetry is a constant both in ambient space and in (A)dS space, \\textit{i.e.},\n\\begin{equation}\n\\delta \\phi=\\delta \\Phi = K^{(0)}\\,,\n\\end{equation}\nwhere $\\partial_\\mu K^{(0)} =0$.\n\nFor $s=1$ and $k=0$, the free action is that of a Proca field, $\\phi_\\mu$, of mass $m^2 = -2(D-1) H^2$,\n\\begin{equation} \nS=\\int {\\rm d}^D x \\, \\sqrt{-g}\\left(-\\frac{1}{4}(\\nabla_\\mu\\phi_\\nu-\\nabla_\\nu\\phi_\\mu)^2+(D-1) H^2\\phi_\\mu \\phi^\\mu\\right)\\, .\n\\end{equation}\nThe transformation of the ambient space vector is given by\n\\begin{equation}\n\\delta \\Phi_B = S_{AB}X^A \\, ,\n\\end{equation}\nwhere $S_{AB}$ is a constant antisymmetric tensor. This projects to the (A)dS space transformation\n\\begin{equation}\n\\delta \\phi_\\mu = K^{(0)}_\\mu \\, ,\n\\end{equation}\nwhere $K^{(0)}_{\\mu}$ is an (A)dS Killing vector,\n\\begin{equation}\n\\nabla_\\mu K^{(0)}_{\\nu}+\\nabla_\\nu K^{(0)}_{\\mu} =0 \\,.\n\\end{equation} \nThere are $D(D+1)\/2$ such vectors, in agreement with the number of components of $S_{AB}$.\n\nFor $s=2$ and $k=0$, the free action is that of a Fierz--Pauli massive graviton, $\\phi_{\\mu\\nu}$, with mass $m^2 = -2(D+1) H^2$,\n\\begin{align}\nS=\\int {\\rm d}^D x \\, \\sqrt{-g} & \\left(-\\frac{1}{2}\\nabla_\\alpha \\phi_{\\mu\\nu}\\nabla^\\alpha \\phi^{\\mu\\nu}+\\nabla_\\alpha \\phi_{\\mu\\nu}\\nabla^\\nu \\phi^{\\mu\\alpha}-\\nabla_\\mu \\phi^\\alpha_{~\\alpha} \\nabla_\\nu \\phi^{\\mu\\nu}+\\frac{1}{2}\\nabla_\\mu\\phi^\\alpha_{~\\alpha}\\nabla^\\mu\\phi^\\nu_{~\\nu} \\right. 
\\nonumber \\\\\n&~~\\left.+(D-1) H^2\\left(\\phi^{\\mu\\nu}\\phi_{\\mu\\nu} -\\frac{1}{2}\\phi^{\\mu}_{~\\mu}\\phi^{\\nu}_{~\\nu}\\right) +(D+1)H^2(\\phi^{\\mu\\nu}\\phi_{\\mu\\nu} -\\phi^{\\mu}_{~\\mu}\\phi^{\\nu}_{~\\nu})\\right)\\, .\n\\end{align}\nThe ambient space transformation is given by\n\\begin{equation}\n\\delta \\Phi_{B_1B_2} = S_{A_1 A_2, B_1 B_2} X^{A_1}X^{A_2}\\, ,\n\\end{equation}\nwhere $S_{A_1 A_2, B_1 B_2}$ is a fully traceless, constant tensor with the symmetries of the following Young tableau:\n\\begin{equation}\nS_{A_1 A_2, B_1 B_2} \\in\\,\\raisebox{1.175ex}{\\gyoung(;;,;;)}^{ \\,\\, T}.\n\\end{equation}\nThis projects to the (A)dS space transformation\n\\begin{equation}\n\\delta \\phi_{\\mu\\nu} = K^{(0)}_{\\mu\\nu} \\, ,\n\\end{equation}\nwhere $K^{(0)}_{\\mu\\nu}$ is a traceless (A)dS Killing tensor,\n\\begin{equation}\n\\nabla_\\mu K^{(0)}_{\\nu\\lambda}+\\nabla_\\nu K^{(0)}_{\\lambda\\mu}+\\nabla_\\lambda K^{(0)}_{\\mu \\nu} =0, \\quad K_\\mu^{(0) \\,\\mu} = 0. \\label{eq:Killingexample}\n\\end{equation}\nThe number of solutions to \\eqref{eq:Killingexample} is \\cite{doi:10.1063\/1.523488, doi:10.1063\/1.527288}\n\\begin{equation}\nN_{KT} = \\frac{(D+1)(D+2)(D+3)(D-2)}{12},\n\\end{equation}\nin agreement with the number of independent components of $S_{A_1 A_2, B_1 B_2}$.\n\n\n\\subsection{Algebra of linearized symmetries}\nWe have seen that for fields of any integer spin in (A)dS space, there are special values of the mass for which they develop shift symmetries that are polynomials of the ambient space coordinates. These are spacetime symmetries, so we would like to explore how these symmetries interact with the (A)dS isometries. In particular, we would like to compute the algebra of symmetries, involving both the extended shifts and spacetime Killing symmetries. 
We do this with an eye towards constructing deformations of these symmetry algebras.\n\nThe (A)dS isometries acting on the ambient space field $\\Phi_{A_1\\cdots A_s}$ take the form\n\\begin{equation}\n\\delta_{J_{AB}}\\Phi_{A_1\\cdots A_s}\\equiv J_{AB}\\Phi_{A_1\\cdots A_s}=\\left(X_A\\partial_B-X_B\\partial_A\\right)\\Phi_{A_1\\cdots A_s}+\\sum_{i=1}^s \\left({\\cal J}_{AB}\\right)_{A_i}^{\\ C}\\Phi_{A_1\\ldots A_{i-1} C A_{i+1} \\ldots A_s},\n\\label{eq:adssymmtrans}\n\\end{equation}\nwhere $\\left({\\cal J}_{AB}\\right)_{C}{}^{D}\\equiv\\eta_{AC}\\delta_B{}^{D}-\\eta_{BC}\\delta_A{}^{D}$ is the Lorentz generator in the vector representation. \nThe isometries satisfy the commutation relations of the algebra $\\mathfrak{so}(D+1)$,\n\\begin{equation} \\left[ J_{AB},J_{CD}\\right]= \\eta_{AC}J_{BD}-\\eta_{BC}J_{AD}+\\eta_{BD}J_{AC}-\\eta_{AD}J_{BC}.\\end{equation}\n\nThe level-$k$ shift symmetry takes the following form in the ambient space:\n\\begin{equation} \\delta_{S^{A_1\\cdots A_{s+k},B_1\\cdots B_s}} \\Phi_{C_1\\cdots C_s} \\equiv S^{A_1\\cdots A_{s+k},B_1\\cdots B_s} \\Phi_{C_1\\cdots C_s}={\\cal Y}^{(T)}_{s+k,s}\\left[ X^{A_1}\\cdots X^{A_{s+k}} \\delta^{B_1}_{(C_1}\\cdots \\delta^{B_s}_{C_s)_T}\\right],\n\\end{equation}\nwhere ${\\cal Y}^{(T)}_{s+k,s}$ is the Young projector onto the traceless tableau \\eqref{repKexpre}, which acts on the $A_i,B_i$ indices.\nThese symmetries are independent of the fields, so they have trivial commutators among themselves,\n\\begin{equation} \\left[S_{A_1\\dots A_{s+k},B_1\\dots B_s}, \\, S_{C_1\\cdots C_{s+k},D_1\\cdots D_s}\\right]=0\\,.\\label{linegenscommutee}\\end{equation}\nThe commutators of the shift generators with the (A)dS isometries are\n\\begin{equation} \\left[J_{BC}, S_{A_1\\cdots A_{2s+k}}\\right] = \\sum_{i=1}^{2s+k} \\left( \\eta_{BA_i}S_{A_1 \\dots A_{i-1} C A_{i+1}\\dots A_{2s+k}}-\\eta_{CA_i}S_{A_1 \\dots A_{i-1} B A_{i+1}\\dots A_{2s+k}} \\right),\\\\\n\\end{equation} \nwhich shows that they 
transform as tensors in (A)dS space.\nThe total symmetry algebra is thus the semi-direct product of the (A)dS algebra with the abelian algebra of its mixed-symmetry traceless tensor representation of type \\eqref{repKexpre}. An interesting question is whether this algebra can be deformed so that the shift generators no longer commute.\nIn Section \\ref{algebrassection}, we study possible deformations of this algebra in the scalar case, $s=0$.\n\n\n\\section{Shift symmetries from partially massless fields\\label{PMsection}}\nAlthough it is clear that the free theories with particular discrete masses described in Section~\\ref{linesymmsectn} possess polynomial shift symmetries, the underlying origin and importance of these symmetries may be somewhat obscure. Our goal in this section is to elucidate how these symmetries are intimately connected to the phenomenon of partial masslessness in (A)dS space.\nAs explained below, the shift-symmetric fields are the longitudinal modes of massive fields when we take various PM limits, with the shift symmetries arising from the reducibility parameters of the PM gauge transformations.\n\n\\subsection{Partially massless fields}\nWe first review some facts about PM fields.\nIn contrast to flat space, on (A)dS space there are representations that are neither strictly massive nor massless.\nThese PM fields are fields with particular values of the mass, $m$, for which the free action develops a gauge invariance~\\cite{Deser:1983tm,Deser:1983mm,Higuchi:1986py,Brink:2000ag,Deser:2001pe,Deser:2001us,Deser:2001wx,Deser:2001xr,Zinoviev:2001dt,Skvortsov:2006at,Boulanger:2008up,Skvortsov:2009zu,Alkalaev:2011zv,Joung:2012rv,Basile:2016aen}.\n\nA spin-$s$ field has $s$ PM points, which are labeled by an integer, $t$, called the PM {\\it depth}, $t\\in \\left\\{0,1,\\ldots, s-1\\right\\}$. 
The mass of a spin-$s$ depth-$t$ PM field is\n\\begin{equation}\n\\bar{m}^2_{s,t} = (s-t-1)(s+t+D-4)H^2\\,,\n\\label{pmpoints}\n\\end{equation}\nso a massless field corresponds to $t = s-1$.\nA depth-$t$ PM field possesses a gauge invariance with a rank-$t$ totally symmetric gauge parameter. This gauge invariance removes the helicity $0, \\, \\pm1, \\ldots, \\, \\pm t$ components of the massive field, leaving a field which propagates only the $\\pm(t+1),\\dots, \\, \\pm s$ helicities.\n\nCombining Eqs.~\\eqref{massivefields} and~\\eqref{pmpoints}, the on-shell equations of motion for a PM field of spin-$s$ and depth-$t$ are~\\cite{Deser:2001xr,Zinoviev:2001dt,Hallowell:2005np},\n\\begin{equation}\n\\left(\\square-H^2\\left[D+s-2-t(D+t-3)\\right]\\right)\\phi_{\\mu_1\\cdots\\mu_s} = 0, \\qquad \\nabla^\\nu\\phi_{\\nu\\mu_2\\cdots\\mu_s} = 0, \\qquad\\phi^\\nu_{\\ \\nu\\mu_3\\cdots\\mu_{s}} = 0.\n\\label{spinsdepthteom}\n\\end{equation}\nThe gauge transformation of the PM field is given by\n\\begin{equation}\n \\delta \\phi_{\\mu_1\\cdots\\mu_s} = \\nabla_{(\\mu_{t+1}}\\nabla_{\\mu_{t+2}}\\cdots\\nabla_{\\mu_{s}}\\xi_{\\mu_{1}\\cdots\\mu_{t})}+\\ldots \\label{pmintogt},\n\\end{equation}\nwhere the ellipses stand for ${\\cal O}(H^2)$ terms with fewer derivatives. 
Explicitly, we can write the full transformation in the following factorized form~\\cite{Hinterbichler:2016fgl}:\n\\begin{equation}\n \\delta \\phi_{\\mu_1\\cdots\\mu_s} = \n\\left\\{\\begin{array}{l}\n{\\cal Y}_{s}\\left(\\prod_{n=1}^{\\frac{s-t}{2}}\\left[\\nabla_{\\mu_n}\\nabla_{\\mu_{n+\\frac{s-t}{2}}}+(2n-1)^2H^2g_{\\mu_n\\mu_{n+\\frac{s-t}{2}}}\\right]\\right)\\xi_{\\mu_{s-t+1}\\cdots\\mu_s},~~~~~~~~~~\\,{\\rm for}~(s-t)~{\\rm even},\\ \\ \\vspace{.15cm}\\\\\n{\\cal Y}_{s}\\left(\\prod_{n=1}^{\\frac{s-t-1}{2}}\\left[\\nabla_{\\mu_n}\\nabla_{\\mu_{n+\\frac{s-t-1}{2}}}+(2n)^2H^2g_{\\mu_n\\mu_{n+\\frac{s-t-1}{2}}}\\right]\\right)\\nabla_{\\mu_{s-t}}\\xi_{\\mu_{s-t+1}\\cdots\\mu_s},~{\\rm for}~(s-t)~{\\rm odd},\n\\end{array}\\right. \\label{pmintogt2}\n\\end{equation}\nwhere ${\\cal Y}_s$ is the Young projector onto the totally symmetric part. \nThe gauge parameter, $\\xi_{\\mu_{1}\\cdots\\mu_{t}}$, is a totally symmetric tensor, which is itself restricted to satisfy the on-shell equations\n\\begin{equation} \n\\Big(\\square + H^2\\left[(s-1)(D+s-2)-t\\right]\\Big)\\xi_{\\mu_{1}\\cdots\\mu_{t}}=0,~~\\qquad \\nabla^{\\nu}\\xi_{\\nu\\mu_{2}\\cdots\\mu_{t}}=0,~~\\qquad \\xi^\\nu_{\\ \\nu\\mu_{3}\\cdots\\mu_{t}} =0.\\label{genresga}\\end{equation}\n\n\\subsection{Dual operators}\n\nThrough the AdS\/CFT correspondence, a spin-$s$ field in ${\\rm AdS}_D$ corresponds to a spin-$s$ primary operator in a ${\\rm CFT}_{d}$ with $d=D-1$. The mass of the field and the scaling dimension, $\\Delta$, of the conformal primary are related by\n\\begin{equation} m^2L^2=\\begin{cases} \\Delta(\\Delta-d), & s=0\\, ,\\\\ \\left(\\Delta+s-2\\right)\\left(\\Delta-s-d+2\\right), & s\\geq 1\\, . \\end{cases} \\label{AdSCFTformpe} \\end{equation}\nFor a given mass, there are two ways to quantize the field in AdS space (that is, assign it a dual operator). 
These correspond to the greater and smaller roots of \\eqref{AdSCFTformpe}, $\\Delta_{\\pm}$: $\\Delta_+$ corresponds to the ``standard quantization,'' which covers the primaries with $\\Delta>d\/2$, and $\\Delta_-$ corresponds to the ``alternate quantization'' \\cite{Klebanov:1999tb}, which covers the primaries with $\\Delta<d\/2$.\n\nIn dS space, a massive field whose mass lies below the Higuchi bound propagates ghostly helicity components and is non-unitary. The depth-$t>0$ PM points lie below the Higuchi bound but are unitary because these ghostly components are projected out by the PM gauge symmetry. \n\nWe can now imagine taking the PM decoupling limit starting with a higher-spin field with a mass in the healthy region above the Higuchi bound. As we approach the $t=0$ PM point, the representation splits into a depth-$0$ PM field and a scalar field with a shift symmetry of level $k=s-1$. Since we started in the unitary region and the PM field itself is unitary, we expect that the tachyonic scalar is also a unitary representation, by continuity.\\footnote{Note that if we take the $t=0$ decoupling limit from below the Higuchi bound, the theory we start with is non-unitary, but this manifests as an overall sign flip of the scalar mode in the decoupling limit, which is consistent with the scalars being unitary.}\n\nIn order to reach the $t>0$ PM points, corresponding to shift-symmetric fields with nonzero spin, a similar limiting procedure must first pass {\\it below} the Higuchi bound, so the massive representation we start with is not unitary. Since the PM points themselves are unitary, in this case continuity suggests that the decoupled tachyonic fields are not unitary on dS space. In this case, the non-unitarity is a result of relative signs between the helicity components of the shift-symmetric field.\n\nTo summarize, all of the shift-symmetric fields discussed in Section~\\ref{linesymmsectn} are unitary in AdS space, while we expect that only the scalar fields are unitary in dS space.\n\n\\section{Interacting scalars}\n\\label{sec:interactingscalars}\n\nSo far we have only considered shift symmetries of free theories. 
In this section, we move on to studying interactions that preserve the shift symmetries. \nWe will consider interactions of a single scalar field, leaving the interactions of fields with nonzero spin and multiple fields to future work. \nWe first look for possible deformations of the symmetry algebras that might be realized by scalar field theories. We then look for theories realizing either the undeformed or deformed symmetry algebras. For $k=1$ and $k=2$, we find interesting interacting examples, including the (A)dS analogue of the special galileon. \n\n\\subsection{Deformed symmetry algebras\\label{algebrassection}}\n\n\nWe start by reconsidering the symmetry algebras formed by the generators of shift symmetries and (A)dS isometries. Interactions can be classified according to whether or not they deform the symmetry algebra of the free theory. In the undeformed case, we call the theories, in a slight abuse of the term, {\\it abelian}; otherwise, we call them {\\it non-abelian}. \n\nThe (A)dS isometries always take the same ambient space form, \\eqref{eq:adssymmtrans}, regardless of interactions. 
Specialized to scalars, the transformation is\n\\begin{equation} \n\\delta_{J_{AB}}\\Phi \\equiv J_{AB}\\Phi=X_A\\partial_B\\Phi-X_B\\partial_A\\Phi \\,.\n\\end{equation}\nIt can be checked that these satisfy the $\\frak{so}(D+1)$ commutation relations,\n\\begin{equation} \\left[ J_{AB},J_{CD}\\right]= \\eta_{AC}J_{BD}-\\eta_{BC}J_{AD}+\\eta_{BD}J_{AC}-\\eta_{AD}J_{BC} \\,.\n\\label{eq:Jcomm}\n\\end{equation}\nIn contrast, the traceless shift symmetries acting on the scalar may acquire additional field-dependent terms in the presence of interactions, schematically of the form\n\\begin{equation} \n\\delta_{S_{A_1\\cdots A_k}}\\Phi \\equiv S_{A_1\\cdots A_k}\\Phi=X_{(A_1}\\cdots X_{A_k)_T}+{\\cal O}\\left(\\Phi\\right),\n\\end{equation}\nwhere the additional terms are constrained to have the same homogeneity degree as $\\Phi$.\nThese additional terms can deform the abelian algebra of the free theory into a non-abelian algebra.\nThe commutators between the $J_{AB}$ and the $S_{A_1\\cdots A_k}$ are fixed because $S_{A_1\\cdots A_k}$ is an $\\mathfrak{so}(D+1)$ tensor,\n\\begin{equation} \\label{eq:SJcomm}\n\\left[J_{BC},S_{A_1\\cdots A_{k}}\\right] = \\sum_{i=1}^{k} \\left( \\eta_{BA_i}S_{A_1 \\dots A_{i-1} C A_{i+1}\\dots A_{k}} - \\eta_{CA_i}S_{A_1 \\dots A_{i-1} B A_{i+1}\\dots A_{k}} \\right)\\,.\n\\end{equation}\nThis leaves only the commutators between $S_{A_1\\cdots A_k}$ to be determined. \nThere is only one term consistent with both the symmetries and trace conditions of this commutator, assuming there are no additional generators,\\footnote{Naively, when $k$ is even there could also be terms on the right-hand side of this commutator proportional to $S_{A_1\\ldots A_i B_{i+1} \\ldots B_k}$, but these turn out to be incompatible with the antisymmetry and trace conditions. 
Such terms are allowed for general higher-spin algebras.}\n\\begin{equation} \\label{eq:SScomm}\n[ S_{A_1 \\ldots A_k}, S^{B_1 \\ldots B_k} ] = \\alpha k!^2\\sum_{n=0}^{\\left \\lfloor{\\frac{k-1}{2}}\\right \\rfloor} a_n \\eta_{(A_1 A_2} \\eta^{(B_1 B_2} \\ldots \\eta_{A_{2n-1} A_{2n}} \\eta^{B_{2n-1} B_{2n}} \\delta_{A_{2n+1}}^{\\ B_{2n+1}} \\ldots \\delta_{A_{k-1}}^{\\ B_{k-1}}J_{A_k)}^{ \\ \\ B_k)} .\n\\end{equation}\nThe $a_n$ are functions of $D$ and $k$ that are fixed by the tracelessness conditions and can be determined recursively by\n\\begin{equation}\na_{n+1}= -a_{n} \\frac{(k-2n-1)(k-2n-2)}{2(D+2k-2n-4)(n+1)} \\,,\n\\end{equation} \nwith $a_0=1$. \n\nThis alone is not enough to guarantee the algebra exists, since we also have to check that the Jacobi identities are satisfied. A nontrivial identity is \n\\begin{equation} \\label{eq:jacobi}\n\\left[S_{A(k)}, \\left[S_{B(k)}, S_{C(k)}\\right]\\right]+\\left[S_{B(k)}, \\left[S_{C(k)}, S_{A(k)}\\right]\\right]+\\left[S_{C(k)}, \\left[S_{A(k)}, S_{B(k)}\\right]\\right]=0 \\,,\n\\end{equation}\nwhere we have used the condensed index notation, $A(k) \\equiv A_1\\cdots A_k$.\nUsing \\eqref{eq:SScomm} and \\eqref{eq:SJcomm}, the first term is given schematically by\n\\begin{equation} \\label{eq:jacobi1}\n[S_{A(k)}, [S_{B(k)}, S_{C(k)}]] \\sim \\alpha \\! \\sum_{n=0}^{\\left \\lfloor{\\frac{k-1}{2}}\\right \\rfloor} a_n (\\eta_{BB})^n (\\eta_{CC})^n (\\eta_{BC})^{k-2n-1} \\!\\left(\\eta_{CA} S_{B A(k-1)}-\\eta_{BA} S_{C A(k-1)} \\right),\n\\end{equation}\nwhere each index type is to be separately symmetrized. Permuting the indices gives the other two terms in \\eqref{eq:jacobi}.\nFor $k>2$, the sum in \\eqref{eq:jacobi1} contains terms with $n>0$ that, due to their index structure, cannot cancel against other terms in \\eqref{eq:jacobi}, so the Jacobi identities can only be satisfied if $\\alpha =0$. 
This implies that the $k>2$ abelian algebras cannot be deformed without introducing additional particles, at least in generic dimensions. There could still be algebras with $k>2$ that exist only in specific dimensions due to dimension-dependent identities, {\\it e.g.}, as in \\cite{Manvelyan:2013oua}. Additionally, in $D\\leq 3$ there is the possibility of parity-violating terms in the algebras, but we do not consider such cases.\n\nFor $k\\leq 2$, we can check explicitly that all of the Jacobi identities are satisfied for any value of $\\alpha$, so deformations of the algebra can exist. We next discuss each of these cases.\n\n\\subsubsection{$k=0$ algebra}\n\nWhen $k=0$, the generator $S$ is a scalar and hence always commutes with itself, just as in the free theory. The symmetry algebra is simply $\\frak{so}(D+1)\\oplus \\frak{u}(1)$ and there are no non-abelian extensions without introducing additional generators.\nIt is straightforward to write down theories in (A)dS space realizing this algebra. Any interactions where at least one derivative appears on each field will be invariant, which includes ghost-free theories such as $P(X)$ theories. The tadpole term is also invariant and is a Wess--Zumino term for this shift symmetry. \n\n\n\\subsubsection{$k=1$ algebra}\n\nWhen $k=1$, the commutator \\eqref{eq:SScomm} is given by\n\\begin{equation} \n\\left[S_A,S_B\\right]=\\alpha J_{AB} \\, .\\label{k1commutatoralge}\n\\end{equation}\nIn the abelian theory, $\\alpha=0,$ the algebra is $\\frak{iso}(D+1)$, with $S_A$ playing the role of the translations. \nWe can therefore think of the abelian $k=1$ scalar as the Goldstone field for the symmetry breaking pattern\n\\begin{equation} \\label{eq:k1linearSB}\n\\frak{iso}(D+1)\\longrightarrow \\frak{so}(D+1) \\,.\n\\end{equation}\nIn Section~\\ref{sec:k1int} we describe nonlinear interactions which realize this symmetry algebra.\n\n\nFor $\\alpha\\not=0$, the algebra is $\\frak{so}(D+2)$. 
This can be seen by grouping $J_{AB}$ and $S_A$ into a $(D+2)$-dimensional antisymmetric matrix with $S_A$ along the first row and column.\nWe can therefore think of scalar fields invariant under this deformed algebra as realizing the symmetry breaking\n\\begin{equation} \\label{eq:conformalbreaking}\n\\frak{so}(D+2)\\longrightarrow \\frak{so}(D+1) \\,.\n\\end{equation}\nThe algebra \\eqref{k1commutatoralge} can be realized on a spin-0 field by the ambient space transformation\n\\begin{equation}\n\\delta_{S_A} \\Phi \\equiv S_A\\Phi=X_A+\\alpha\\Phi \\partial_A \\Phi.\\label{k1nonlinetranse}\n\\end{equation}\nIn Section~\\ref{sec:k1int} we construct interactions with second-order equations of motion that are invariant under this transformation and discuss other ways of realizing the symmetry breaking \\eqref{eq:conformalbreaking}. \n\n\n\\subsubsection{$k=2$ algebra}\nWhen $k=2$, the commutator \\eqref{eq:SScomm} is given by\n\\begin{equation}\n[S_{A_1 A_2}, S_{B_1 B_2}] = \\alpha \\left( \\eta_{A_1 B_1} J_{A_2 B_2} + \\eta_{A_2 B_1} J_{A_1 B_2}+ \\eta_{A_1 B_2} J_{A_2 B_1} + \\eta_{A_2 B_2} J_{A_1 B_1} \\right).\n\\label{k2commutatoralge}\n\\end{equation}\nWhen $\\alpha \\not=0$, we can combine the generators into a matrix\n\\begin{equation} \nM_{AB}\\equiv -\\frac{1}{2}J_{AB}\\pm \\frac{i}{2 \\sqrt{\\alpha}}S_{AB},\n\\end{equation}\nwhere the tracelessness of $S_{AB}$ implies that $\\eta^{AB} M_{AB}=0$.\nThe commutators written in terms of $M_{AB}$ are\n\\begin{equation} \\left[M_{AB},M_{CD}\\right]= \\eta_{BC} M_{AD} - \\eta_{AD} M_{CB},\\end{equation}\nwhich are the commutators of $\\frak{sl}(D+1)$. 
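The grouping of $J_{AB}$ and $S_{AB}$ into $\frak{sl}(D+1)$ can be illustrated numerically in the defining representation, where the $J$'s act as antisymmetric matrices and the $S$'s as traceless symmetric matrices; together these span the traceless matrices. The following sketch (our own illustration, for $D=4$) checks that the matrix commutators mirror the schematic structure $[J,J]\sim J$, $[J,S]\sim S$ and $[S,S]\sim J$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # n = D + 1 for D = 4

def comm(A, B):
    return A @ B - B @ A

def random_antisym(n):
    X = rng.normal(size=(n, n))
    return X - X.T                            # generic J in so(n)

def random_traceless_sym(n):
    X = rng.normal(size=(n, n))
    S = X + X.T
    return S - np.trace(S) / n * np.eye(n)    # generic traceless symmetric S

J1, J2 = random_antisym(n), random_antisym(n)
S1, S2 = random_traceless_sym(n), random_traceless_sym(n)

# [J, J] is antisymmetric: the isometries close among themselves.
assert np.allclose(comm(J1, J2), -comm(J1, J2).T)
# [J, S] is traceless symmetric: S transforms as a tensor under the isometries.
JS = comm(J1, S1)
assert np.allclose(JS, JS.T) and abs(np.trace(JS)) < 1e-12
# [S, S] is antisymmetric: the deformed shifts close back onto the isometries.
SS = comm(S1, S2)
assert np.allclose(SS, -SS.T)
```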
\nIn a scalar theory realizing this symmetry algebra, the $J_{AB}$ are linearly realized and the shift symmetries $S_{AB}$ are non-linearly realized, so we may think of the scalar field as the Goldstone field for the symmetry breaking\n\\begin{equation}\n\\frak{sl}(D+1)\\longrightarrow\\, \\frak{so}(D+1).\n\\end{equation}\nThis nonlinear symmetry can be realized on a spin-0 field by the ambient space transformation\n\\begin{equation}\n\\delta_{S_{AB}}\\Phi\\equiv S_{AB}\\Phi = X_{(A} X_{B)_T}+\\alpha \\partial_{(A}\\Phi\\partial_{B)_T}\\Phi, \\label{nonlinek2syme}\n\\end{equation}\nwhich is just the flat space special galileon transformation~\\cite{Hinterbichler:2015pqa}. In Section \\ref{specialgalileonadssec}, we present the unique ghost-free (A)dS theory invariant under the transformation \\eqref{nonlinek2syme}, which is the curved space analogue of the special galileon.\n\n\n\n\n\\subsection{$k=1$ interactions}\n\\label{sec:k1int}\nNow that we have the possible symmetry algebras that a $k=1$ scalar can realize, we look for invariant interactions. In this case, the theories have been considered before in the literature in slightly different contexts.\nThese scalar fields can be thought of as Goldstone degrees of freedom associated with the nonlinearly realized symmetries, and can be constructed in a systematic way using nonlinear realization techniques. In Appendix~\\ref{cosetappendix}, we employ this method to construct interactions for the abelian $k=1$ and $k=2$ scalars.\n\n\n\\subsubsection{Abelian interactions}\n\\label{sec:abeliank1}\nWe first look for a scalar theory that realizes the $k=1$ abelian symmetry algebra. 
In this case, the shift symmetries commute and act on the ambient space field as \n\\begin{equation}\n\\delta\\Phi = S_A X^A.\n\\end{equation}\nBy acting on the scalar with two ambient space derivatives and projecting using the rules outlined in Appendix~\\ref{embbedappendix}, we get the invariant ``curvature'' tensor\n\\begin{equation}\n\\partial_A\\partial_B\\Phi \\leadsto \\Phi^{(1)}_{\\mu\\nu} = \\left(\\nabla_{\\mu } \\nabla_{\\nu} + H^2 g_{\\mu \\nu}\\right) \\phi.\n\\label{eq:k1invariant}\n\\end{equation}\nWe can construct invariant interactions by forming scalars from this tensor.\n\nSince the tensor $\\Phi^{(1)}_{\\mu\\nu}$ has a term with two derivatives, most interactions lead to higher-derivative equations of motion and ghosts. There is, however, a set of $D+1$ terms that are ghost free. These are the (A)dS galileons, which were discovered in~\\cite{Goon:2011uw,Goon:2011qf,Burrage:2011bt} and appear in the PM decoupling limit of massive gravity~\\cite{deRham:2018svs}. They are listed in Eqs.~\\eqref{eq:4AdSGals} and~\\eqref{eq:5AdSGal}. In Appendix \\ref{cosetappendix}, we perform a coset construction of the (A)dS galileons, showing that $D$ of them are constructible from the invariant tensor \\eqref{eq:k1invariant}. The last interaction is a Wess--Zumino term, which cannot be written solely in terms of $\\Phi^{(1)}_{\\mu\\nu}$. Another realization of the symmetry breaking pattern \\eqref{eq:k1linearSB} comes from considering a dS brane in a flat bulk~\\cite{Goon:2011uw,Goon:2011qf}. This theory should be related to the (A)dS galileons by a (possibly nonlocal) field redefinition.\\footnote{This is because both theories realize the same symmetry breaking pattern. It is expected that theories of Goldstone bosons are essentially unique and that all nonlinear realizations of the same symmetries with the same degrees of freedom are equivalent. 
This has been proven in the case of internal symmetries~\\cite{Coleman:1969sm,Callan:1969sn}, but remains conjectural in the case of spacetime symmetries. \n} \n\n\n\n\\subsubsection{Non-abelian interactions}\nWe now consider interactions that are invariant under the non-abelian $k=1$ algebra, $\\mathfrak{so}(D+2)$. This algebra can be realized on a scalar field through the ambient space transformation\n\\begin{equation} \\label{eq:k=1transform}\n\\delta \\Phi = S_A X^A + \\frac{1}{\\Lambda^D}S_A \\Phi \\partial^A \\Phi.\n\\end{equation}\nThis is the same as \\eqref{k1nonlinetranse} but with $\\alpha$ now written in terms of the dimensionful scale $\\Lambda$, which is not to be confused with the cosmological constant. In $D$ dimensions there are $D+1$ ghost-free interactions invariant under \\eqref{eq:k=1transform}, the $n$\\textsuperscript{th} of which can be written as\\footnote{To find these interactions, we started with a general ansatz with second-order equations of motion, constructed with a judicious choice of variables. 
We then imposed order by order the subset of symmetries \\eqref{eq:k=1transform} for a field depending only on the Poincar\\'e coordinates $(\\eta,x^1)$, up to a very high order, and then we resummed the result.}\n\\begin{align}\n\\frac{\\mathcal{L}_n }{\\sqrt{-g}}&= \\sum_{j=0}^{D-1}\\sum_{m=1}^{D-j} \\frac{c_{j, m,n}\\bar{\\phi}^{m-1}}{(\\bar{\\phi}^2-1)^{D\/2+1}} \\left[ (j+2) \\tilde{f}_{j,m}\\left(\\frac{X}{\\bar{\\phi} ^2-1} \\right)-(j+1) \\tilde{f}_{j+1,m-1}\\left(\\frac{X}{\\bar{\\phi} ^2-1} \\right) \\right] \\partial^{\\mu} {\\phi} \\partial^{\\nu} {\\phi} X^{(j)}_{\\mu \\nu}(\\Pi) \\nonumber \\\\\n& + \\Lambda^D V_n( \\bar{\\phi}), \\label{eq:k1L}\n\\end{align}\nwhere\n\\begin{equation}\n\\bar{\\phi} \\equiv -\\frac{iH\\phi}{\\Lambda^{D\/2}}, \n\\quad X \\equiv \\frac{\\partial_{\\mu} \\phi \\partial^{\\mu} \\phi }{ \\Lambda^{D}},\n\\end{equation}\nand\n\\begin{equation}\n\\tilde{f}_{j,m}(x) \\equiv \\, _2F_1\\left(\\frac{j+1}{2},\\frac{j+m+2}{2} ;\\frac{j+3}{2}; x\\right),\n\\end{equation}\n\\begin{equation}\n c_{j, m,n} \\equiv \\frac{ i^{D+m+1} (j+m-1) \\left((-1)^{m+n+j}-1\\right) \\Gamma \\left(\\frac{D+4}{2}\\right) \\Gamma \\left(\\frac{j+m+2}{2}\\right) (2-j-m)_{j-1} }{H^j\\Lambda^{D j\/2} D(D-1) (j+2)! \\, \\Gamma\n \\left(\\frac{j+m}{2}\\right) \\Gamma \\left(\\frac{ j+m-n+3}{2}\\right) \\Gamma \\left(\\frac{D-j-m+n+3}{2}\\right)(2-D)_{j-1}}.\n\\end{equation}\nThe tensors $X^{(j)}_{\\mu\\nu}$ are defined in Appendix \\ref{xtensorappendix} and depend on the matrix $\\Pi_{\\mu\\nu}\\equiv\\nabla_\\mu\\nabla_\\nu\\phi$.\nThe potentials can be written as\n\\begin{equation}\nV_n( \\bar{\\phi}) = -\\int {\\rm d} \\bar{\\phi} \\sum_{m=1}^{D+1} c_{-1,m,n} \\frac{\\bar{\\phi}^{m-1} }{(\\bar{\\phi}^2-1)^{D\/2+1}}.\n\\end{equation}\nThe general Lagrangian is\n\\begin{equation}\n\\mathcal{L} =\\sum_{n=1}^{D+1} a_n \\mathcal{L}_n,\n\\end{equation}\nwhere $a_n$ are real constants. 
To canonically normalize the kinetic term we set $a_2=D$, and to remove the tadpole term we set $a_1=0$.\n\nThe Lagrangians~\\eqref{eq:k1L} are written for a particular choice of field variables in which the symmetry transformation takes a particularly simple form in embedding space. There are, however, alternative choices where the final invariant actions take a simpler form, at the price of complicating the ambient space field transformation. \nIn particular, the symmetry breaking pattern \\eqref{eq:conformalbreaking} has been considered previously in~\\cite{Hinterbichler:2012mv,Creminelli:2012qr} for different reasons. The construction in this case is most simply phrased directly in the physical (A)dS space. The scalar field is taken to transform as\n\\begin{equation} \\label{eq:k1nonlinear}\n\\delta\\phi = \\frac{1}{D}\\nabla_\\mu\\xi^\\mu+\\xi^\\mu\\partial_\\mu\\phi,\n\\end{equation}\nwhere $\\xi^\\mu$ are the conformal Killing vectors of (A)dS space.\\footnote{The explicit form of these symmetry transformations in the flat slicing can be found in~\\cite{Hinterbichler:2012mv}.}\nAlthough this transformation realizes the same algebra as~\\eqref{eq:k=1transform}, it is not of the form of a deformed ambient space polynomial and the invariant interactions differ from~\\eqref{eq:k1L}. \nTo find interactions invariant under \\eqref{eq:k1nonlinear}, we can define an object $\\bar R_{\\mu\\nu}$ that transforms like a tensor under this transformation,\n\\begin{equation}\n\\bar R_{\\mu\\nu} = (D-1)H^2 g_{\\mu\\nu}-(D-2)\\nabla_\\mu\\nabla_\\nu\\phi-g_{\\mu\\nu}\\Box\\phi+(D-2)\\partial_\\mu\\phi\\partial_\\nu\\phi-(D-2) g_{\\mu\\nu}(\\partial\\phi)^2,\n\\end{equation}\nwhere $g_{\\mu\\nu}$ is the (A)dS metric.\nInvariant interactions are then obtained as scalars built out of $\\bar R_{\\mu\\nu}$ and the metric $\\bar g_{\\mu\\nu} = e^{2\\phi} g_{\\mu\\nu}$, for which $\\bar R_{\\mu\\nu}$ is the Ricci curvature. 
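As a consistency check of the last statement, one can verify in the flat-space limit $H\to 0$ that $\bar R_{\mu\nu}$ is indeed the Ricci tensor of $\bar g_{\mu\nu}=e^{2\phi}g_{\mu\nu}$. The sympy sketch below (ours) does this in $D=3$ with flat $g_{\mu\nu}=\delta_{\mu\nu}$ and an arbitrary illustrative profile for $\phi$, computing the Ricci tensor of $\bar g$ from its Christoffel symbols and comparing with the quoted formula at $H=0$:

```python
import sympy as sp

# Check (flat-space limit H -> 0, D = 3): \bar R_{mu nu} quoted above equals
# the Ricci tensor of \bar g = e^{2 phi} delta.  The profile phi is an
# arbitrary illustrative choice, not taken from the text.
x = sp.symbols('x0:3')
phi = x[0]**2 * x[1] + x[2]**2
D = 3

gbar = sp.exp(2 * phi) * sp.eye(D)            # \bar g in Cartesian coordinates
gbar_inv = gbar.inv()

# Christoffel symbols Gamma^l_{m n} of \bar g.
Gamma = [[[sp.Rational(1, 2) * sum(gbar_inv[l, s] * (sp.diff(gbar[s, m], x[n])
            + sp.diff(gbar[s, n], x[m]) - sp.diff(gbar[m, n], x[s]))
            for s in range(D))
           for n in range(D)] for m in range(D)] for l in range(D)]

# Ricci tensor R_{m n} = d_l Gamma^l_{mn} - d_n Gamma^l_{ml} + Gamma.Gamma terms.
Rbar = sp.zeros(D, D)
for m in range(D):
    for n in range(D):
        Rbar[m, n] = sum(sp.diff(Gamma[l][m][n], x[l]) - sp.diff(Gamma[l][m][l], x[n])
                         + sum(Gamma[l][l][s] * Gamma[s][m][n]
                               - Gamma[l][n][s] * Gamma[s][m][l] for s in range(D))
                         for l in range(D))

# The quoted formula with H = 0 and g = delta.
dphi = [sp.diff(phi, xi) for xi in x]
box = sum(sp.diff(phi, xi, 2) for xi in x)
dphi2 = sum(d**2 for d in dphi)
formula = sp.Matrix(D, D, lambda m, n: -(D - 2) * sp.diff(phi, x[m], x[n])
                    - sp.eye(D)[m, n] * box
                    + (D - 2) * dphi[m] * dphi[n]
                    - (D - 2) * sp.eye(D)[m, n] * dphi2)

assert all(sp.simplify(e) == 0 for e in (Rbar - formula))
```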
For example, one invariant interaction is given by the Lagrangian\n\\begin{equation}\n{\\cal L} = \\sqrt{-\\bar g}\\left(-\\bar R+(D-1)(D-2)H^2\\right),\n\\end{equation}\nwhich results in the expected mass $m^2 = -DH^2$. There is also a Wess--Zumino term that cannot be constructed from these covariant building blocks and whose explicit form is given in~\\cite{Hinterbichler:2012mv}, along with the details of the coset construction. \nThere is also a DBI presentation of this symmetry breaking pattern~\\cite{Goon:2011uw,Goon:2011qf}. It seems likely that there are field transformations that link these different forms, as in flat space~\\cite{Bellucci:2002ji, Creminelli:2013ygt}.\n\n\n\\subsection{$k=2$ interactions}\nWe now consider interactions for a $k=2$ scalar. Similar to $k=1$, there exist abelian interactions built from a shift-invariant curvature. We also find a theory that realizes the non-abelian extension of the $k=2$ symmetry algebra, which is a curved space version of the special galileon.\n\n\\subsubsection{Abelian interactions}\nWe first consider theories that nonlinearly realize the abelian symmetry algebra, where the $S_{AB}$ generators commute. \nThe symmetries act on the ambient space scalar as\n\\begin{equation}\n\\delta\\Phi = S_{AB}X^AX^B.\n\\label{eq:abeliank2shifts}\n\\end{equation}\nUsing ambient space, we can construct the following invariant tensors:\n\\begin{align}\n\\partial_A\\partial_B\\partial_C\\Phi & \\leadsto \\Phi^{(2)}_{\\mu\\nu\\rho} = \\left(\\nabla_{(\\mu}\\nabla_\\nu\\nabla_{\\rho)}+4H^2 g_{(\\mu\\nu}\\nabla_{\\rho)}\\right)\\phi,\n\\label{eq:phi2tensor} \\\\\n\\partial_A\\partial^A \\Phi & \\leadsto \\Phi^{(2)} = \\left(\\square+2(D+1)H^2\\right)\\phi.\n\\end{align}\nThese tensors are invariant under the restriction of the shifts~\\eqref{eq:abeliank2shifts} to (A)dS space, so any scalar formed from them and their derivatives will provide an invariant Lagrangian. 
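In the flat-space limit these tensors reduce to $\partial_\mu\partial_\nu\partial_\rho\phi$ and $\Box\phi$, and their invariance under the quadratic shift is then immediate: third derivatives of a quadratic polynomial vanish, and the d'Alembertian picks out only the (removed) trace of $s_{\mu\nu}$. A minimal symbolic sketch of this flat-limit statement, for $D=4$ (our own illustration):

```python
import sympy as sp

# Flat-space check (H -> 0, D = 4): the abelian k = 2 shift
# delta phi = s_{mu nu} x^mu x^nu, with s constant, symmetric and traceless,
# is annihilated by the flat limits of the invariant tensors.
x = sp.symbols('x0:4')
s = sp.Matrix(4, 4, sp.symbols('s0:16'))
s = (s + s.T) / 2                        # symmetric part
s = s - s.trace() * sp.eye(4) / 4        # remove the trace
delta_phi = sum(s[m, n] * x[m] * x[n] for m in range(4) for n in range(4))

# Third derivatives of a quadratic polynomial vanish identically.
assert all(sp.diff(delta_phi, a, b, c) == 0 for a in x for b in x for c in x)
# The d'Alembertian is proportional to the trace of s, which has been removed.
assert sp.expand(sum(sp.diff(delta_phi, a, 2) for a in x)) == 0
```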
There is also the possibility of Wess--Zumino terms which are not directly constructible from these objects. However, as in flat space~\\cite{Hinterbichler:2014cwa}, all interactions invariant under the abelian $k=2$ shift symmetry have higher-order equations of motion. Such interactions may still be of interest, for example, to describe the helicity-0 mode interactions of a massive spin-3 particle in (A)dS space. \n\n\\subsubsection{Non-abelian interactions \\label{specialgalileonadssec}}\n\nWe now consider interactions that are invariant under the non-abelian $k=2$ algebra, $\\mathfrak{sl}(D+1)$, where the scalar transformation is given in ambient space by\n\\begin{equation} \\delta \\Phi=S_{AB}\\left(X^A X^B +{1\\over \\Lambda^{D+2} }\\partial^A\\Phi \\partial^B\\Phi\\right), \\label{nonlinek2syme2}\n\\end{equation}\nwith symmetric traceless $S_{AB}$.\nUsing the same method as for the non-abelian $k=1$ interactions, we find that there is a single invariant interaction with second-order equations of motion,\\footnote{We have not been able to verify symbolically that~\\eqref{fulllagk2nonline} has the full symmetry to all orders, although we are confident it does. For $D=4$, we have shown that it has the full symmetry up to 14$^{\\rm th}$ order in the fields and has the relevant subset of symmetries to all orders when $\\phi = \\phi(\\eta,x^1)$. 
We have also shown that the expression for the symmetry variation of the full Lagrangian vanishes when random integers are substituted for the variables, and so we are confident it is zero, but due to its size we have not managed to simplify it to zero symbolically.}\n\\begin{align} \\frac{ {\\cal L}_{\\rm SG}}{ \\sqrt{-g}}=& \n\\sum_{j=0}^{D-1}{ \\psi^{D-j}+(-1)^j{\\psi^\\ast}^{D-j}\\over i^j \\Lambda^{j\\left(D +2\\right)\/2}\\left|\\psi\\right|^{D+3}2\\,\\Gamma(j+3)} \\left[(j+2)f_j\\left({X\\over \\left|\\psi\\right|^2}\\right)-(j+1)f_{j+1}\\left({X\\over \\left|\\psi\\right|^2}\\right) \\right]\\partial^\\mu\\phi\\partial^\\nu\\phi X^{(j)}_{\\mu\\nu}(\\Pi) \\nonumber\\\\\n& + \\frac{\\Lambda^{D+2}}{ 2 (D+1) H^2}\\left(1-{{\\psi^\\ast}^{D+1}+\\psi^{D+1}\\over 2 \\left|\\psi\\right|^{D+1}}\\right), \\label{fulllagk2nonline}\n\\end{align}\nwhere we have defined\\footnote{We use the definition of ${}_2F_1$ of \\href{http:\/\/functions.wolfram.com\/HypergeometricFunctions\/Hypergeometric2F1}{\\tt functions.wolfram.com\/HypergeometricFunctions\/Hypergeometric2F1}.}\n\\begin{equation} \\label{eq:hypergeo}\nf_j(x) \\equiv {}_2F_1\\left({D+3\\over 2},{j+1\\over 2};{j+3\\over 2};-x\\right), \\qquad \\psi\\equiv 1-2i {H^2\\over \\Lambda^{{D\\over 2}+1}}\\phi\\, , \\qquad X\\equiv {H^2\\over \\Lambda^{D+2}}(\\partial\\phi)^2\\,.\n\\end{equation}\nDespite the appearance of factors of $i$, this Lagrangian is real. Note that in this case we have ignored the tadpole term, which can be obtained from \\eqref{fulllagk2nonline} by putting minus signs in front of $\\psi^{* D-j}$ and $\\psi^{D+1}$. There will also exist invariant interactions with higher-order equations of motion, which are (A)dS versions of the interactions constructed in~\\cite{Novotny:2016jkh}.\n\nThe Lagrangian \\eqref{fulllagk2nonline} depends on two scales, $H$ and $\\Lambda$, and therefore has only one dimensionless parameter, the ratio of these scales. 
Only terms of even order in $\\phi$ are present, so there is also a ${\\mathbb Z}_2$ symmetry $\\phi\\rightarrow -\\phi$. \n\n\nIn the flat space limit, $H\\rightarrow 0$, \\eqref{fulllagk2nonline} reduces to\n\\begin{equation} {\\cal L}_{\\rm SG}\\Big\\rvert_{H=0}=-\\sum_{\\substack{j=0, \\\\ j\\,{\\rm even}}}^{D-1}{1\\over \\Lambda^{j\\left(D+2\\right)\/2}}{(-1)^{j\/2}\\over (j+2)!}\\partial^\\mu\\phi\\partial^\\nu\\phi X^{(j)}_{\\mu\\nu}(\\Pi). \\end{equation}\nUp to a total derivative, this is precisely the Lagrangian for the flat space special galileon of \\cite{Cheung:2014dqa,Hinterbichler:2015pqa,Cheung:2016drk}, \nwhich is invariant under the nonlinear symmetry\n\\begin{equation} \\delta \\phi =s_{\\mu\\nu}\\left(x^\\mu x^\\nu +{1\\over \\Lambda^{D+2} }\\partial^\\mu\\phi\\partial^\\nu\\phi\\right),\n\\end{equation}\nwith symmetric traceless $s_{\\mu\\nu}$~\\cite{Hinterbichler:2015pqa}, as well as the standard galileon and shift symmetries.\nThe Lagrangian \\eqref{fulllagk2nonline} can therefore be considered an $H^2$ deformation of the flat space special galileon.\n\nExpanding in powers of the field, the structure of the Lagrangian~\\eqref{fulllagk2nonline} is\n\\begin{equation} {1\\over \\sqrt{-g}}{\\cal L}_{\\rm SG}=-{1\\over 2}(\\partial\\phi)^2+(D+1)H^2\\phi^2+{1\\over 24\\Lambda^{D+2}}\\left[\\partial^\\mu\\phi\\partial^\\nu\\phi X^{(2)}_{\\mu\\nu}(\\Pi)+{\\cal O}\\left(H^2\\right)\\right]+{\\cal O}\\left(\\phi^6\\right),\n\\end{equation}\nwhich has the correct mass for a $k=2$ scalar. Each power of $\\phi$ comes suppressed with powers of $\\Lambda$, with a tail of lower-derivative terms suppressed by $H$. These tail terms are different from those of the (A)dS galileons and there are interactions at every even order in the field. Thus, unlike the flat space special galileon, which is a particular combination of galileons, the Lagrangian \\eqref{fulllagk2nonline} is {\\it not} a particular combination of (A)dS galileons. 
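The quoted quadratic term can be confirmed symbolically from the potential part of the Lagrangian above. The following sympy sketch (ours, for $D=4$) expands that term in $\phi$ and checks that it reproduces $+(D+1)H^2\phi^2$:

```python
import sympy as sp

# Check, for D = 4, that the potential term of the special galileon Lagrangian
# expands to the quoted quadratic piece +(D+1) H^2 phi^2.
H, Lam, phi = sp.symbols('H Lambda phi', positive=True)
Dval = 4

psi = 1 - 2 * sp.I * H**2 * phi / Lam**sp.Rational(Dval + 2, 2)
psic = sp.conjugate(psi)                  # psi^* for real phi, H, Lambda
mod2 = sp.expand(psi * psic)              # |psi|^2 = 1 + 4 H^4 phi^2 / Lambda^6

potential_term = Lam**(Dval + 2) / (2 * (Dval + 1) * H**2) * (
    1 - (psic**(Dval + 1) + psi**(Dval + 1)) / (2 * mod2**sp.Rational(Dval + 1, 2)))

quadratic = sp.series(potential_term, phi, 0, 4).removeO()
assert sp.simplify(quadratic - (Dval + 1) * H**2 * phi**2) == 0
```

The cubic term vanishes automatically, consistent with the ${\mathbb Z}_2$ symmetry noted above.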
\n\nIn the non-abelian $k=1$ case, we found that there is a formulation of the interactions that is much simpler than the one obtained from the simple ambient space field transformation. Based on this, we expect that there should also exist a choice of field variables where the non-abelian $k=2$ theory takes a much simpler form than~\\eqref{fulllagk2nonline}. It would be interesting to find such a presentation of the theory.\n\nIn any given dimension, the hypergeometric functions in Eq.~\\eqref{eq:hypergeo} can be written in terms of elementary functions: square roots and (in odd dimensions) $\\tanh^{-1}(\\sqrt{x})$. For example, in $D=4$ the Lagrangian \\eqref{fulllagk2nonline} can be written as\n\\begin{align}\n{1\\over \\sqrt{-g}}{\\cal L}_{\\rm SG}=& -\\frac{\\Lambda ^6 }{ H^2}\\frac{ (y^2-8 y+8) \\left(8 X^2-3 y^{3\/2} \\sqrt{X+y}+12 X y-3 X \\sqrt{y} \\sqrt{X+y}+3 y^2\\right)}{15y^3\n (X+y)^{3\/2}} \\nonumber\\\\ \n& -\\frac{\\Lambda ^6 }{ H^2} \\left(\\frac{ 5 (y-4) y+16}{10y^{5\/2}}-{1\\over 10}\\right) \n+\\frac{2 (y-4) \\phi }{15 X y^{5\/2}} \\left(\\frac{\\sqrt{y} (2 X+3 y)}{(X+y)^{3\/2}}-3\\right){H^2\\over \\Lambda ^6}\\partial^\\mu\\phi\\partial^\\nu\\phi X^{(1)}_{\\mu\\nu}(\\Pi) \\nonumber\\\\\n&+\\frac{y-2}{30 X^2 y^2} \\left(2 \\sqrt{y}-\\frac{2 X^2+3 X y+2 y^2}{(X+y)^{3\/2}}\\right){1\\over \\Lambda ^6}\\partial^\\mu\\phi\\partial^\\nu\\phi X^{(2)}_{\\mu\\nu}(\\Pi) \\nonumber\\\\\n&+\\frac{\\phi }{45 X^2 y^{3\/2}} \\left(\\frac{\\sqrt{y} (3 X+2 y)}{(X+y)^{3\/2}}-2\\right){H^2\\over \\Lambda^{12}}\\partial^\\mu\\phi\\partial^\\nu\\phi X^{(3)}_{\\mu\\nu}(\\Pi),\n\\end{align}\nwhere we have defined\n\\begin{equation} y\\equiv 1+4{H^4\\over \\Lambda^6}\\phi^2,\\ \\ \\ X\\equiv {H^2\\over \\Lambda^6}(\\partial\\phi)^2 \\,.\\end{equation}\nThough this Lagrangian may seem somewhat complex, we emphasize that it is entirely fixed by the shift symmetries~\\eqref{nonlinek2syme2}.\n\n\\subsubsection*{Potential}\n\nThe (A)dS special galileon possesses a 
potential whose form is completely fixed by the symmetry. In $D$ dimensions, this potential is given by\n\\begin{equation} V(\\phi)=-{1\\over \\sqrt{-g}}{\\cal L}_{\\rm SG}\\bigg|_{\\partial\\phi=0}= { \\Lambda^{D+2}\\over 2(D+1) H^2}\\left({\\psi^{D+1}+{\\psi^\\ast}^{D+1}\\over 2 \\left|\\psi\\right|^{D+1}} -1\\right),\n\\end{equation}\nwhere we recall that\n\\begin{equation}\n\\psi\\equiv 1-2i H^2\\phi\/ \\Lambda^{(D+2)\/2}.\n\\end{equation}\nWe plot this potential in Figure~\\ref{potentialsfig} for $D=2, \\ldots, 10$ in the dS case $H^2>0$. The ${\\mathbb Z}_2$ symmetry implies that the potential is invariant under interchanging $\\psi \\leftrightarrow \\psi^*$, so the AdS case corresponds to an overall sign flip.\nThe potential is bounded, with the asymptotic values\n\\begin{equation}\nV(\\phi\\to \\pm \\infty) \\simeq - { \\Lambda^{D+2}\\over H^2}\\frac{\\sin \\left(\\frac{\\pi D}{2}\\right)+1}{2 (D+1)}.\n\\end{equation}\nThe number of critical points is $2\\left \\lfloor{{D\\over 2}}\\right \\rfloor +1$, increasing by two every two dimensions, and all the maxima\/minima have the same height. \n\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=potentials.pdf,width=6.5in}\n\\caption{\\small Plots of $H^2V(\\phi)\/\\Lambda^{D+2}$ for the special galileon in dS space in various dimensions. The horizontal axes show the dimensionless combination $H^2\\phi\/ \\Lambda^{(D+2)\/2}$.}\n\\label{potentialsfig}\n\\end{center}\n\\end{figure}\n\nFocusing on the case $D=4$, the potential can be written as\n\\begin{equation} V(\\tilde{\\phi})=-{1\\over \\sqrt{-g}}{\\cal L}_{\\rm SG}\\bigg|_{\\partial\\phi=0}={\\Lambda^6\\over 10 H^2} \\left(\\frac{80 \\tilde{\\phi} ^4-40 \\tilde{\\phi} ^2+1}{ \\left(4 \\tilde{\\phi}^2+1\\right)^{5\/2}}-1 \\right)\\,,\n \\end{equation}\nwhere $\\tilde{\\phi} \\equiv H^2 \\phi\/ \\Lambda^3$.\nThis is plotted in Figure~\\ref{potential4d}. 
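As a sanity check of these expressions, the reduction of the general-$D$ potential to the quoted $D=4$ form, as well as the critical points and critical values given below, can be verified symbolically. The following is a sketch using sympy, with $\tilde\phi$ appearing as `phit`:

```python
import sympy as sp

# Dimensionless field of the D = 4 discussion: phit = H^2 phi / Lambda^3
phit = sp.symbols('phit', real=True)
D = 4
psi = 1 - 2*sp.I*phit

# Numerator of the general-D potential, psi^(D+1) + conj(psi)^(D+1);
# for D = 4 it should reduce to 2*(80 phit^4 - 40 phit^2 + 1), while
# |psi|^(D+1) = (4 phit^2 + 1)^(5/2) reproduces the quoted denominator.
num = sp.expand(psi**(D + 1) + sp.conjugate(psi)**(D + 1))
assert sp.expand(num - 2*(80*phit**4 - 40*phit**2 + 1)) == 0

# D = 4 closed form of the potential, in units of Lambda^6 / H^2
V = ((80*phit**4 - 40*phit**2 + 1) / (4*phit**2 + 1)**sp.Rational(5, 2) - 1) / 10

# Critical values: V = -1/5 at the minima, V = 0 at the maxima
phi_min = sp.sqrt(5 - 2*sp.sqrt(5)) / 2
phi_max = sp.sqrt(5 + 2*sp.sqrt(5)) / 2
assert abs(float(V.subs(phit, phi_min)) + 0.2) < 1e-12
assert float(V.subs(phit, 0)) == 0
assert abs(float(V.subs(phit, phi_max))) < 1e-12

# All five critical points are stationary
dV = sp.diff(V, phit)
for p in (0, phi_min, -phi_min, phi_max, -phi_max):
    assert abs(float(dV.subs(phit, p))) < 1e-10
```

The same numerator identity also fixes the asymptotic value quoted above, since for large $|\tilde\phi|$ only the top power of the numerator survives against $|\psi|^{D+1}$.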
\n\\begin{figure\n\\begin{center}\n\\epsfig{file=potential4d.pdf,width=4.75in}\n\\caption{\\small Potential for the $D=4$ special galileon in dS space.\n}\n\\label{potential4d}\n\\end{center}\n\\end{figure}\nAs $\\tilde{\\phi}\\rightarrow \\pm \\infty$, the potential asymptotes to $-\\Lambda^6\/10H^2$. There are five critical points, two absolute minima and three absolute maxima,\n\\begin{align}\n\\text{Minima: } \\tilde{\\phi}_{\\rm min}& =\\pm \\frac{1}{2} \\sqrt{5-2 \\sqrt{5}}\\ ,\\quad { H^2\\over\\Lambda^6}V(\\tilde{\\phi}_{\\rm min})=-{1\\over 5}\\,, \\\\\n\\text{Maxima: }\\tilde{\\phi}_{\\rm max} &=0, \\ \\pm \\frac{1}{2} \\sqrt{5+2 \\sqrt{5}}\\ ,\\quad V(\\tilde{\\phi}_{\\rm max})=0\\, .\n\\end{align}\n\nExpanding about the critical points, the value of the squared mass relative to the kinetic term is always $-10H^2$, as required by the shift symmetry, which is a nontrivial check of the symmetry of the entire Lagrangian. However, the overall sign of the quadratic action differs around the maxima and minima,\n\\begin{align} {1\\over \\sqrt{-g}}{\\cal L}_{\\rm SG} & \\propto -\\left((\\partial\\varphi)^2-10H^2\\varphi^2\\right)+{\\cal O}\\left(\\varphi^3\\right),\\ \\ \\ \\ \\phi=\\phi_{\\rm max}+\\varphi \\, ,\\\\\n {1\\over \\sqrt{-g}}{\\cal L}_{\\rm SG} & \\propto +\\left((\\partial\\varphi)^2-10H^2\\varphi'^2\\right)+{\\cal O}\\left(\\varphi^3\\right),\\ \\ \\ \\ \\phi=\\phi_{\\rm min}+\\varphi.\n\\end{align}\nAlthough the minima appear stable by considering just the potential, the fluctuations about them have ghostly kinetic terms, so they are actually unstable in dS space. As discussed at the end of Section~\\ref{sec:linearscalars}, the would-be growing mode corresponding to the tachyon instability around the origin can be removed by a symmetry transformation. 
In AdS space, the potential is flipped upside down and there are three stable minima, around which the field has the correct sign kinetic term.\n\n\n\n\\subsection{$k > 2$ interactions}\nWe now comment briefly on interactions for shift symmetries with $k>2$. As discussed in Section~\\ref{algebrassection}, there do not exist deformations of the symmetry algebras when $k>2$ without adding additional generators, except possibly in special dimensions. This implies that interacting theories of a single scalar in generic dimensions must realize the symmetries of the free theory, which act as\n\\begin{equation} \n\\delta\\Phi=S_{A_1\\cdots A_k}X^{A_1}\\cdots X^{A_k}.\n\\label{eq:kg2symms}\n\\end{equation}\nIt is straightforward to construct objects that are invariant under these shifts by acting with $k+1$ derivatives or the d'Alembert operator in ambient space and projecting, \n\\begin{align}\n\\partial_{A_1}\\dots \\partial_{A_{k+1}}\\Phi & \\leadsto \\Phi^{(k)}_{\\mu_1\\cdots\\mu_{k+1}}= \\nabla_{(\\mu_1}\\dots \\nabla_{\\mu_{k+1})}\\phi+{\\cal O}(H^2) {\\rm \\ terms}, \\\\\n\\partial_A \\partial^A \\Phi & \\leadsto \\Phi^{(k)} = \\left(\\square+k(D+k-1)H^2\\right)\\phi.\n\\end{align}\nAny scalar formed from these objects and their derivatives will be an invariant Lagrangian. This captures all possible Lagrangians except for a finite number of Wess--Zumino terms. \nIt is beyond the scope of this work, but it would be interesting to classify such Wess--Zumino terms for these higher shift symmetries. Our expectation is that all interactions invariant under the symmetries~\\eqref{eq:kg2symms} with $k\\geq2$ will have higher-order equations of motion, because there are no known theories for them to reduce to in the flat space limit with lower-order equations of motion. 
We have checked this explicitly for $k=3$.\n\nIt is worth noting that in cases that break dS invariance, either explicitly or spontaneously, higher shift symmetries can be present that act only on the spatial coordinates. Shift symmetries of this kind appear in the context of inflation~\\cite{Hinterbichler:2012nm,Hinterbichler:2013dpa,Hinterbichler:2016pzn}, and have interesting consequences for correlation functions. It would be interesting to systematically classify theories where such spatial shifts appear.\n\n\n\n\n\n\\section{Conclusions} \\label{sec:conclusions}\nIn this paper we have identified special mass values at which massive bosonic fields of all spins in (A)dS space develop shift symmetries that are the analogues of flat space polynomial shift symmetries. We have explained how these shift-symmetric fields are related to PM fields and we have constructed explicit examples of interacting scalar theories preserving the symmetries. These theories generalize many known interesting examples of shift-symmetric theories in flat space.\n\nIn flat space, shift symmetries have proven useful as an organizing principle to classify EFTs. We have taken the first steps toward such a classification in (A)dS space. In particular, we have\n considered interactions for theories with shift symmetries containing a single scalar field. \n In addition to placing known curved-spacetime EFTs in a new context, \nwe have constructed in every dimension a novel interacting scalar theory with a nonlinear quadratic shift symmetry, which is a highly nontrivial generalization of the special galileon to (A)dS space. \nAdditionally, we have argued that---similar to the special galileon in flat space---this theory should have the highest possible shift symmetry while still retaining second-order equations of motion.\nTherefore, the phenomenology of this theory should be extremely interesting. 
It possesses a potential that is completely fixed by its deformed shift symmetry, which may prove to be useful for applications in either the early or late universe.\n\nWe have focused on interactions for a single scalar field, but it is possible that there are higher-spin and\/or multi-field interacting theories that are governed by deformations of the linear shift symmetries of Section \\ref{linesymmsectn}.\nIt would be interesting to construct theories of this type.\nA necessary ingredient for such a theory is a Lie algebra with symmetric or mixed-symmetry generators having up to two rows. One such algebra is the infinite higher-spin algebra underlying Vasiliev's higher-spin theory~\\cite{Eastwood:2002su,Vasiliev:2003ev,Bekaert:2005vh,Joung:2014qya}. When realized as the algebra of shift symmetries, this would correspond to a putative (A)dS theory with interacting $k=0$ massive particles of every integer spin except spin one. For simpler examples, one could use the finite algebras that arise from truncations of PM higher-spin algebras~\\cite{Joung:2015jza}. \nThe generators of these algebras comprise all traceless Young tableaux with an even number of boxes less than or equal to $2N$, for some fixed integer $N$, and at most two rows. The $N=1$ algebra corresponds to $\\mathfrak{sl}(D+1)$, which underlies the (A)dS special galileon constructed here, so an intriguing possibility is that the $N>1$ algebras are also realized by finite interacting theories of massive particles in (A)dS space. For example, the $N=2$ algebra would correspond to an interacting theory containing spin-0 fields with $k=2$ and $k=4$, a $k=2$ spin-1 field, and a $k=0$ spin-2 field. 
Of course, the existence of an algebra is not by itself proof that a theory exists, so more work is required to find realizations of the algebra on fields and to find consistent invariant interactions.\n\nAnother interesting question is whether the interacting shift-symmetric theories have corresponding interacting theories of PM fields, where the algebra of shift symmetries is gauged. The (A)dS galileon corresponds to conformal gravity in this sense, and the (A)dS special galileon would correspond to a theory of a depth-0 spin-3 PM field interacting with gravity. Additionally, in this paper we have focused on bosonic fields described by symmetric tensors, but it would be interesting to extend our constructions to fermionic and mixed-symmetry fields. We also expect that the shift-symmetric fields studied here should play some interesting role in the AdS\/CFT correspondence, where the bulk shift symmetries should have some novel boundary consequences.\n\nFinally, in the same way that\nshift symmetries in flat space imply enhanced soft limits of the $S$-matrix, the shift symmetries on (A)dS space considered here should imply enhanced soft limits for boundary correlation functions. It would be interesting to explore the extent to which the (A)dS theories can be reconstructed from their soft limits.\n\n\n\\vspace{-.2cm}\n\\paragraph{Acknowledgements:} We would like to thank Lasma Alberte, Thomas Basile, Xavier Bekaert, Brando Bellazzini, Miguel Campiglia, Paolo Creminelli, Frederik Denef, Claudia de Rham, Garrett Goon, Maxim Grigoriev, Euihun Joung, Karapet Mkrtchyan, Enrico Pajer, Guilherme Pimentel, Zimo Sun, Massimo Taronna, Andrew Tolley, Sam Wong, and Michael Zlotnikov for helpful conversations and correspondence. KH and JB acknowledge support from DOE grant DE-SC0019143 and Simons Foundation Award Number 658908. RAR is supported by DOE grant DE-SC0011941 and Simons Foundation Award Number 555117. AJ and RAR are supported by NASA grant NNX16AB27G. 
\\section{INTRODUCTION}\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[width=0.46\\textwidth]{heatmap3_small.png}\\caption{Heatmap of lane changes detected in the collected measurements on German highways. Basemap: OpenStreetMap \\cite{OpenStreetMap}}\\label{fig:lcmap}\n\\end{figure}\n\n\n\nIn general, experienced drivers are aware that they behave differently when facing the same traffic situation depending on the respective location. In this context, various local conditions, e.\\,g. the curviness, the width of the road, the roadside structures, and the placement of the signage, play an important role. \n\n\nIn order to navigate safely and comfortably through any traffic situation, automated vehicles need to predict the behavior of surrounding vehicles. Although this is already a challenging task, the effect described above makes it even more challenging. To support such predictions, this article analyzes which local conditions are the most important and to which extent they influence the driving behavior. In our research, we focus on the lane change behavior in highway scenarios.\n\n\\pubidadjcol\n\nFor our analyses, we collect measurement data from a fleet of customer vehicles. Note that, so far, most automotive software functions are developed and tested based on data collected with dedicated test vehicles. This, however, results in several limitations. For instance, test drivers, knowing their routes very well, tend to drive in a special manner. Data collected from real customers therefore might be more representative. Besides, considering customer data offers the chance to cover many more miles in less time. \n\n\n\n\nAfter collection, the data are processed and spatially aggregated using digital maps. As a result, we obtain a map of lane change probabilities. Together with other map attributes, this enables downstream analyses. 
\n\n\nIn detail, we make the following contributions: \n\n\\begin{enumerate}\n\\item A procedure to construct a lane change probability map\n\\item A thorough study and discussion of the impact of three exemplary location-dependent conditions (interchanges, curvature, slope) on the lane change behavior\n\\end{enumerate}\n\nThe remainder of this article is structured as follows: \\autoref{sec:rel_work} discusses related work. \\autoref{sec:prep} outlines the preparation of the map and measurement data. Subsequently, \\autoref{sec:results} presents our empirical investigations and discusses the results obtained. Finally, \\autoref{sec:conclusion} summarizes our contribution and provides an outlook on future work.\n\n\\section{RELATED WORK}\\label{sec:rel_work}\n\nA general overview of motion prediction approaches, and especially deep learning-based ones, is given in \\cite{lefevre2014, mozaffari2020deep}. Besides characterizing and categorizing approaches, \\cite{mozaffari2020deep} shows typical strengths and weaknesses of each group. Among other things, it is mentioned that environment conditions, such as road geometry and traffic rules, can have significant impacts on the driving behavior. Most of the presented approaches are, however, not able to handle such impacts due to their input representation. Most approaches solely rely on the motion history of the vehicle to be predicted as well as the histories of its surrounding vehicles. Other approaches building on raw sensor data or bird's eye views are, on the other hand, computationally expensive. This makes such approaches hard to implement on onboard hardware with limited resources.\n\n\\cite{wirthmueller2020} agrees with \\cite{mozaffari2020deep} about the importance of external conditions for the driving behavior. The authors explicitly mention weather, daytime and traffic density. 
Subsequently, the authors study the influence of the latter on the prediction capability of a known motion prediction system, rather than on the driving behavior itself. The article confirms the influence of varying traffic densities and concludes that such impacts need to be studied in more detail and integrated into motion prediction approaches.\n\n\\cite{wirthmuller2020fleet} describes the development of an architecture concept aiming to enhance behavior predictions through massive online learning using customer fleets. The general idea here is to learn several context-specific prediction models and provide them to all vehicles.\n\n\\cite{imanishi2020model} also tackles the motion prediction problem in diverse environments. To this end, all vehicles of the fleet are permanently connected to a cloud server. The vehicles regularly send information about their location and driving state to the server. If a vehicle needs to predict the behavior of a surrounding object, it requests the prediction from the server. There, a kernel density estimation using all nearby collected measurements is performed. Thus, all location-dependent factors are automatically included in the prediction. Although the approach seems to produce promising results in a small real-world evaluation, it is questionable whether the solution is also efficiently applicable to a larger area. In that case, problems in terms of storage capacity as well as computation time could arise. In addition, pure online prediction techniques suffer from transmission latencies.\n\n\\cite{matute2018longitudinal} uses the curvature of the road as input to plan comfortable trajectories. 
The plain fact that location-dependent features are also included when planning pleasant trajectories additionally emphasizes that experienced drivers adapt to changed local conditions.\n\n\\cite{qi2014location} studies location-dependent lane change behaviors on arterial roads from the viewpoint of traffic flow analytics. To do so, the authors solve the motion prediction task through classical modelling. The established model distinguishes between efficiency-driven and objective-driven lane changes and aggregates the contributions of both motivations. The efficiency-driven part of the model, on the one hand, is primarily based on the motion of surrounding vehicles. The objective-driven part, on the other hand, mostly relies on the location. To correctly parametrize the model, the authors use the well-studied NGSIM data set \\cite{colyar2007us}. Here, the term location-dependent has to be understood as microscopic rather than macroscopic. The location-dependent nature of the approach solely refers to the location within the road segment observed in the NGSIM data set, which basically describes in which lane the vehicle is driving and how far away the next intersection is. Macroscopic effects, such as the differences between regions or cities that our article focuses on, are not taken into account.\n\nThe article presented in \\cite{gonccalves2020change} originates from a completely different community, one interested in so-called concept drifts. Concept drifts in general denote changes in classification problems over time. These can make previously well-working classifiers inadequate. \\cite{gonccalves2020change} deals with the development of a concept drift detector that, in particular, is able to detect changes in the a-priori probabilities of the underlying classes. 
The general idea of a concept drift, even if spatial rather than temporal, can also be translated to our maneuver classification problem.\n\n\\cite{wang2013learning} also tackles concept drifts in imbalanced online learning applications. The approach dynamically detects changes in the class probabilities and adapts its online learner to reflect them.\n\nConcluding our literature study, there exists, to the best of our knowledge, no research work that spatially aggregates lane change probabilities. This fact may be attributed to the enormous effort necessary to collect the underlying measurement data. Consequently, location-dependent factors fostering or dampening lane changes have not yet been studied on a large scale.\n\n\n\\section{DATA AND MAP PREPARATION}\\label{sec:prep}\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[width=0.48\\textwidth]{labeling2.pdf}\\caption{Simplified visualization of the labeling principle, exemplified for a lane change to the left. As shown, all samples within 5 seconds prior to the lane crossing are treated as lane change samples, whereas all other samples are treated as lane following ones.}\\label{fig:labeling}\n\\end{figure}\n\nOur empirical investigations build upon data from a large fleet of customer vehicles. All vehicles of our fleet are wirelessly connected to a backend server. From time to time, the vehicles send a multitude of selected sensor signals to the backend. Instead of single point measurements, a time series lasting up to 200\\,s is collected for each sensor. Prior to data transmission, the communication module anonymizes and encrypts the data. In the backend, all collected measurement data are transformed into an equidistant time representation with a time resolution of 50\\,ms. Besides, only measurements made on German highways are considered. Afterwards, it is possible to detect lane changes carried out in the measurement sequences. 
For the detection, the continuously measured distances between the vehicle and the lane markings can be used. These detections can be localized through the GNSS (global navigation satellite system) positions that are sensed as well. For our further investigations, we pick the collected measurements of one full day. As a whole, this data set contains approximately 1\\,350\\,h of data. At the moment, provisioning data over a longer period is not possible due to technical reasons related to the required massive storage capacity. \n\nUsing the approximately 58\\,000 detected lane changes, a first visual analysis already becomes possible by arranging them geographically as a heatmap (cf. \\autoref{fig:lcmap}). As \\autoref{fig:lcmap} reveals, this analysis, however, is not very informative, as all larger hotspots coincide with congested areas or, more precisely, with areas that are highly frequented by the vehicles of our fleet. Consequently, this result is also not well suited for being used in downstream applications.\n\nTo overcome this, we assess the lane change events in relation to the amount of lane following events. For this purpose, we assign all measurements to the links of a digital map. Links are one form of representing parts of a digital map and can be considered as a sequence of GNSS points. In addition to its geometry, each link can have several properties such as road type, curvature, slope or speed limit. These properties enable further investigations (cf. \\autoref{sec:results}). Moreover, we preprocess the map such that all links have the same length, which makes behavior differences comparable across links. Furthermore, after some first attempts, we shifted from single lane change events to longer measurement segments categorized as lane changes. Each measurement point lying 5 or fewer seconds prior to the crossing of the lane marking is considered as belonging to that lane change. 
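The labeling and the subsequent per-link probability estimation can be sketched as follows. This is a simplified illustration with hypothetical data structures; the actual pipeline additionally involves map matching of the GNSS positions to the equal-length links:

```python
from collections import Counter

DT = 0.05   # equidistant time resolution of the backend data: 50 ms
T_H = 5.0   # labeling window: samples up to 5 s before a lane-marking crossing

def label_sequence(n_samples, crossings):
    """Label the samples of one measurement sequence.

    crossings: list of (sample_index, maneuver) pairs, where maneuver is
    'LCL' or 'LCR' and sample_index marks the detected lane-marking crossing.
    """
    labels = ['FLW'] * n_samples
    window = round(T_H / DT)  # 100 samples = 5 s
    for idx, maneuver in crossings:
        for i in range(max(0, idx - window), min(idx, n_samples)):
            labels[i] = maneuver
    return labels

def link_priors(samples):
    """Estimate per-link maneuver probabilities from (link_id, label) pairs."""
    counts = {}
    for link_id, label in samples:
        counts.setdefault(link_id, Counter())[label] += 1
    return {link: {m: c[m] / sum(c.values()) for m in ('LCL', 'FLW', 'LCR')}
            for link, c in counts.items()}

# Toy sequence: 400 samples (20 s) with one crossing to the left at t = 10 s,
# all map-matched to a single hypothetical link
labels = label_sequence(400, [(200, 'LCL')])
assert labels[99] == 'FLW' and labels[100] == 'LCL' and labels[200] == 'FLW'
priors = link_priors(zip(['link_1'] * 400, labels))
assert priors['link_1'] == {'LCL': 0.25, 'FLW': 0.75, 'LCR': 0.0}
```

All names above are purely illustrative; in particular, real links would additionally carry the map attributes (road type, curvature, slope, speed limit) used in the later analyses.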
Note that this is also common practice in the maneuver classification community (e.\\,g. in \\cite{bahram2016, schlechtriemen2015will, wirthmueller2019}). According to the labeling defined in \\cite{wirthmueller2019}, each measurement point is therefore assigned to one of the three essential maneuver classes for highway driving: lane change to the left (\\textit{LCL}), lane following (\\textit{FLW}), and lane change to the right (\\textit{LCR}). \\autoref{fig:labeling} visualizes this labeling principle. Afterwards, it becomes possible to calculate the probabilities of each of the three maneuver classes for each link. Consequently, the lane change behavior at a certain location can be investigated using all measurements at that position. In particular, using more than a single point per lane change can help to suppress noise biasing the results of later analyses. Note that this is especially important, as lane changes occur orders of magnitude more rarely than lane following behavior does. Moreover, the probabilities estimated per link can be directly used as local a-priori probabilities for maneuver classification approaches as described in \\cite{wirthmueller2019}.\n\n\n\n\n\\begin{figure*}[t!]\n\\centering\\includegraphics[width=0.94\\textwidth]{lcprobs_global3.png}\\caption{Visualization of the calculated lane following probability $P_{FLW}$ per 200\\,m long link. Probabilities are illustrated in shades of red, where dark red corresponds to a large probability. Viewed from a large distance, links that are close together might overlap. In addition, the links are drawn in order of descending lane following probability. Larger homogeneously colored areas indicate the most interesting locations. 
Basemap: OpenStreetMap \\cite{OpenStreetMap}}\\label{fig:globalmap}\n\\end{figure*}\n\n\\begin{figure*}[t!]\n\\centering\\includegraphics[width=0.96\\textwidth]{kr_stutt1.pdf}\\caption{Microscopic view on the lane following probabilities $P_{FLW}$ per 200\\,m long link at a highway merger (upper example) and a highway divider (lower example) respectively at the highway interchange Stuttgart (second left highlighting in \\autoref{fig:globalmap}). Basemap: OpenStreetMap \\cite{OpenStreetMap}}\\label{fig:krstutt}\n\\end{figure*}\n\nIn \\cite{wirthmueller2019}, a maneuver classifier is trained using a balanced data set. This means that the proportion of all maneuver classes in the data set is equal, whereas in reality lane changes occur much rarer compared to lane following. Using a balanced data set instead of the unbalanced one enables the machine learning model (in \\cite{wirthmueller2019} a multilayer perceptron) to learn the differences between the maneuver classes much easier. Through the extreme over-proportion of lane following events, it would otherwise be favorable for the classifier to assign all samples to that class in order to achieve a good classification performance. However, the class probabilities produced by a classifier trained that way do not correspond to the actual ones. Hence, \\cite{wirthmueller2019} suggests weighting the estimated probabilities with the a-priori probabilities of the maneuver classes. At this point it would also be possible to utilize location-dependent rather than global a-priori probabilities. In \\autoref{sec:rel_work} we pointed out that pure online prediction approaches are suffering from transmission latencies. In contrast to such approaches location-specific a-priori probabilities are comparably static and can thus be transmitted for larger regions. 
Thus, short offline phases as well as transmission times do not cause problems.\n\nNote that location-specific a-priori estimates are strongly influenced by the prediction horizon $T_H$ used, which we set to 5\\,s according to \\cite{wirthmueller2019}. A longer horizon results in higher lane change probabilities, whereas a shorter horizon increases the probabilities for lane following. However, the absolute values are not relevant for analyzing location-based differences in the driving behavior. A more important aspect is whether a specific location, geometry or other location-specific properties result in more or fewer lane changes to one or both neighboring lanes. To get an idea of what constitutes more or fewer lane changes than usual, we again refer to \\cite{wirthmueller2019}, where the following overall a-priori probabilities were derived from a large data set: \\textit{LCL}: 0.03, \\textit{FLW}: 0.94, \\textit{LCR}: 0.03.\n\nAs most vehicles of our fleet are located in the area around Stuttgart, Germany, we restrict the data to highways in that area in order to base our analyses on a sufficient amount of data per link. Besides, we apply a threshold (i.\\,e., 10 measurement points per meter) on the amount of data per link. This ensures that the estimated probabilities are less subject to measurement noise. In some very rare cases, the latter procedure results in a few missing links within the considered region. Nevertheless, the data quality can be further improved by increasing the amount of data. This would also help to expand the analytics area. Finally, the resulting data set contains more than 8\\,600 lane change maneuvers. \n\n\\section{INVESTIGATIONS AND RESULTS}\\label{sec:results}\n\nAlready a first look at the map produced after the described preprocessing steps (cf. \\autoref{fig:globalmap}) reveals areas which are obviously of special interest. These include highway interchanges, curves and slopes. 
In the following sections, \\autoref{subsec:exits} - \\autoref{subsec:slopes}, we study the respective effects in detail. Besides these identified local conditions, there might be more causing similar effects. As producing a comprehensive collection is not possible in practice, we restrict our upcoming investigations to the three given local conditions. \\autoref{subsec:discussion} closes the section with an overall discussion of our examinations. \n\n\\subsection{Highway Interchanges}\\label{subsec:exits}\n\nA closer look at the lane change probabilities at highway interchanges reveals an unsurprising behavior. \\autoref{fig:krstutt} provides examples of more detailed views of highway mergers and dividers. Ahead of dividers and behind mergers, the probability for following the current lane drops. Simultaneously, the probability of lane changes to the right increases ahead of the dividers. This can be traced back to vehicles leaving the current highway. Likewise, merging vehicles result in increased lane change probabilities to the left behind highway mergers.\n\n\\begin{figure*}[t!]\n\\centering\\includegraphics[width=0.98\\textwidth]{gaertringen_200.png}\\caption{Microscopic view of the lane following probabilities $P_{FLW}$ per 200\\,m long link in a curve close to Gärtringen. Basemap: OpenStreetMap \\cite{OpenStreetMap}}\\label{fig:curve1}\n\\end{figure*}\n\nFurther, it is noteworthy that in the highway divider example in \\autoref{fig:krstutt} the lane change probability to the right (not explicitly visualized) already increases significantly ahead of the actual location of the divider. This effect can be reproduced at some other dividers as well. Apparently, drivers tend to get in lane early. This may be attributed to the specific characteristics of these highway dividers. Possibly, the departing lanes are very long or the signage is set up very early. Note that such local conditions are very hard to cover with digital maps as well. 
Besides, the probability to leave the highway can vary widely from one divider to another depending on typical traffic flows. So, it is easy to imagine that each location is very individual.\n\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[width=0.48\\textwidth]{bend.pdf}\\caption{Calculation of the curvature equivalent $bend$.}\\label{fig:bend}\n\\end{figure} \n\n\\subsection{Curves}\\label{subsec:curves}\n\nTo enable investigations concerning curves, we first calculate a new feature called $bend$ for each link. This value is proportional to the average curvature of the link, but can be calculated more efficiently according to \\autoref{eq:bend}: \n\n\n\\begin{equation}\nbend = \\frac{d_{SL}}{l_L}\n\\label{eq:bend}\n\\end{equation}\n\n\n\n$l_L$ denotes the length of the link $L$. $d_{SL}$ indicates the maximum distance between the link and the secant $S$ connecting its start and end point. \\autoref{fig:bend} visualizes this. To distinguish between right and left turns, we select start and end points in such a way that right turns yield negative $bend$ values and left turns positive ones. This ensures that the bend values are in accordance with the vehicle coordinate system specified in the ISO norm 8855 \\cite{iso8855} as well. \n\n\nNote that the $bend$ values are strongly influenced by the map discretization used. For our purpose, a link length of 200\\,m has shown the best results. Decreasing the link lengths below 200\\,m results in less stable probabilities comprising more noise. By contrast, links longer than 200\\,m are not able to represent curvatures appropriately. Within the selected region, the determined $bend$ values range from -0.33 to 0.14. Obviously, road segments showing especially large bends occur much more rarely than slightly curved or straight ones. 
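For a link given as a polyline in a local metric frame, such a curvature proxy can be computed as sketched below. This is an illustration under the convention of a maximal secant distance normalized by the link length, with left turns positive; the function and variable names are our own, and real links would first require projecting the GNSS points to a metric frame:

```python
import math

def bend(points):
    """Curvature proxy of a link polyline [(x, y), ...] in meters.

    Maximal distance between the polyline and the secant through its
    endpoints, normalized by the link length. Left turns give positive
    values, right turns negative ones; reversing the traversal direction
    flips the sign, matching the driving-direction dependence noted in
    the text.
    """
    (x0, y0), (x1, y1) = points[0], points[-1]
    sx, sy = x1 - x0, y1 - y0
    secant_len = math.hypot(sx, sy)
    # Signed distance of each vertex from the secant; a left-turning arc
    # bulges to the right of its secant direction, giving positive values.
    dists = [((px - x0) * sy - (py - y0) * sx) / secant_len
             for px, py in points]
    d_max = max(dists, key=abs)
    link_len = sum(math.hypot(bx - ax, by - ay)
                   for (ax, ay), (bx, by) in zip(points, points[1:]))
    return d_max / link_len

# A 200 m link on a gentle left-hand arc of radius 1000 m:
# sagitta ~ R*(1 - cos(0.1)) ~ 5 m, so bend ~ 5 m / 200 m = 0.025
R = 1000.0
arc = [(R * math.sin(i * 0.01), R * (1 - math.cos(i * 0.01))) for i in range(21)]
assert abs(bend(arc) - 0.025) < 1e-3
assert bend([(x, -y) for x, y in arc]) < 0               # mirrored arc = right turn
assert bend([(float(i), 0.0) for i in range(5)]) == 0.0  # straight link
```

For a circular arc of fixed length $l$ and curvature $\kappa$, the secant distance is approximately $\kappa l^2/8$, so this ratio is indeed proportional to the average curvature for equal-length links and yields magnitudes of the order quoted in the text.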
The largest share of bends ranges between $\\pm$0.07.\n\nIn order to quantify the impact of curves on the lane change behavior, \\autoref{fig:bend_vs_prob} shows the median probability for lane following $\\widetilde{P_{FLW}}$ at links within certain $bend$ ranges. Road segments with very large bends ($|.|$\\,$>$\\,$0.07$) were excluded due to the low data densities and the resulting noise level. From \\autoref{fig:bend_vs_prob} one can conclude that larger absolute $bend$ values result in larger lane following probabilities. Accordingly, curves seem to discourage drivers from changing lanes. However, note that the observable differences are rather small in absolute terms.\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[width=0.48\\textwidth]{bend_200_vs_pflw_small2.pdf}\\caption{Median lane following probability under certain $bend$ values of 200\\,m long links. Error bars denote the standard error of the measurement (SEM).}\\label{fig:bend_vs_prob}\n\\end{figure}\n\n\n\\footnotetext[1]{sign depends on the driving direction}\n\nTaking a look at some specific locations substantiates the existence of the described effect. \\autoref{fig:curve1} shows an example of a rather strongly curved road segment with $bend$ values ranging from $\\pm$0.045\\footnotemark[1] to $\\pm$0.071\\footnotemark[1] in the links belonging to the curve. The annotated lane following probabilities indicate that drivers tend to perform lane changes ahead of or behind the respective curve. It has to be mentioned, though, that a particularity, i.\\,e. one of just a few left-hand highway exits in Germany, is located close to that curve. This should, however, make lane changes even more probable.\n\n\\begin{figure*}[t!]\n\\centering\\includegraphics[width=0.98\\textwidth]{curve2.png}\\caption{Microscopic view of the lane following probabilities $P_{FLW}$ per 200\\,m long link in a curve close to Pforzheim. 
Basemap: OpenStreetMap \cite{OpenStreetMap}}\label{fig:curve2}\n\end{figure*}\n\n\autoref{fig:curve2} shows another microscopic view of a curve, where the lane change-dampening impact can be observed especially if the curve is driven through as a right turn. Within the shown curve, the $bend$ values are nearly constant around $\pm$0.030\footnotemark[2]. Note that the probability of lane changes to the left seems to decrease particularly within the right turn. This can probably be explained by the fact that drivers who perform lane changes to the left in a right turn may get the feeling of being pushed out of the curve. Another possible explanation could be the reduced visibility of vehicles on the outer adjacent lane.\n\nIn addition, other publications \cite{wirthmueller2019, dang2017time} argue that lane changes to the right are mostly motivated by the intention to leave the highway (objective-driven, cf. \cite{qi2014location}), whereas lane changes to the left are mostly attributed to the intention of overtaking slower leading vehicles (efficiency-driven, cf. \cite{qi2014location}). This means lane changes to the right might be more necessary to reach the destination. In turn, this might explain why the effect seems to be stronger for lane changes to the right than for lane changes to the left.\n\nFurther note that the latter effect is consistent with another conclusion one may draw from \autoref{fig:bend_vs_prob}. Comparing the two bars corresponding to the strongest right curves (leftmost bar) and the strongest left curves (rightmost bar) reveals that right curves seem to have a stronger lane change-dampening effect than left curves.\n\nIn conclusion, many indications seem to support our intuition that curves have a lane change-dampening effect. 
The superposition of various effects, however, makes it difficult to prove this conclusively.\n\n\begin{figure}[t!]\n\centering\includegraphics[width=0.48\textwidth]{albaufstieg.jpg}\caption{Picture taken at the ascending part of the road at `Drackensteiner Hang'. Image source: \href{http:\/\/www.autobahn-bilder.de\/inlines\/a8.htm}{http:\/\/www.autobahn-bilder.de\/inlines\/a8.htm}}\label{fig:drackenstein}\n\end{figure}\n\n\footnotetext[2]{sign depends on the driving direction}\n\n\n\n\subsection{Slopes}\label{subsec:slopes}\nSlopes constitute another spatial particularity of interest in the context of lane change behavior. Compared to curves and highway interchanges, however, slopes cannot be easily identified on a navigation map; instead, one has to rely on expert or local knowledge to identify such correlations at all.\n\nOur attention was drawn to this potential correlation while taking a look at the rightmost highlighting in \autoref{fig:globalmap}. This location is known as the `Albaufstieg' at the so-called `Drackensteiner Hang' \cite{Wikipedia}. The latter is an approximately 6\,km long inclination with a quite constant slope around 5\,\%. Due to the particular topology of this area, the roads for the two directions do not run in parallel. \autoref{fig:drackenstein} shows a picture taken at the ascending part of that road. As can be seen, the road course at this location is not only very steep, but also hard to survey due to the additional curvature and narrowing road boundaries. Our lane change investigation reveals that exactly at that location hardly any lane changes have been performed. \n\nThe corresponding downhill part of the road, in turn, shows only a slightly reduced lane change probability. 
In summary, this special location supports our impression that slopes, and especially uphill parts, could have a lane change-dampening effect similar to that of curves.\n\n\begin{figure}[t!]\n\centering\includegraphics[width=0.48\textwidth]{slope_200_vs_pflw_small2.pdf}\caption{Median lane following probability under certain $slope$ values of 200\,m long links. Error bars denote the standard error of the mean (SEM).}\label{fig:slope_vs_prob}\n\end{figure}\n\n\begin{figure*}[t!]\n\centering\includegraphics[width=0.93\textwidth]{karlsruhe.png}\caption{Microscopic view of the lane following probabilities $P_{FLW}$ per 200\,m long link at a location close to Karlsruhe.}\label{fig:karlsruhe}\n\end{figure*}\n\nTo investigate this in more detail, \autoref{fig:slope_vs_prob} shows the median lane following probability $\widetilde{P_{FLW}}$ at links (with 200\,m length) under certain $slope$ values. The illustration is equivalent to that for the curvatures in \autoref{fig:bend_vs_prob}. The shown slopes are also in accordance with the vehicle coordinate system defined by ISO norm 8855 \cite{iso8855}. Positive slope values indicate ascents, whereas negative values indicate descents. Most slopes on highways range between $\pm$7\,\%. To the best of our knowledge, steeper roads constitute an absolute exception, at least on highways in Central Europe. Even values around 5\,\% enduring over several kilometers, as in the case of the `Drackensteiner Hang', are very rare.\n\nA more detailed view of \autoref{fig:slope_vs_prob} reveals that it is not possible to detect an unambiguous trend supporting our intuition that slopes have a lane change-dampening effect. Explanations for this phenomenon may be manifold. Obviously, once more, several effects probably superimpose each other. A good example of such a superposition can be found at a location close to Karlsruhe, shown in \autoref{fig:karlsruhe}. 
At this location the lane change probabilities are increased even though the road course is rather steep, with slope values ranging from $\pm$4.5\,\%\footnotemark[3] to $\pm$7.3\,\%\footnotemark[3]. These lane changes can, however, be attributed to the nearby highway interchange. Simply removing the links at this location from the examination in \autoref{fig:slope_vs_prob} has a significant effect: it corrects the lane following probabilities in the bins corresponding to the largest slopes notably upwards.\n\n\nIn summary, our overall impression is that slopes can have a lane change-dampening effect, but only in combination with other conditions. For instance, slopes in combination with curves can make locations hard to survey, which can decrease drivers' willingness to change lanes. On the other hand, slopes can also lead to opposite effects at other locations, e.\,g. if poorly motorized vehicles and trucks are slowed down by ascents. This could provide additional motivation for other traffic participants to change lanes in order to overtake the slower vehicles. \n\n\subsection{Discussion}\label{subsec:discussion}\n\nThe presented examinations clearly emphasize that location-dependent effects on the lane change behavior definitely exist but are hard to investigate and quantify. Of course, this is also attributable to the superposition of multiple effects. Nevertheless, the investigations have revealed the following tendencies:\n\n\begin{itemize}\n\item As expected, highway mergers and dividers can increase the likelihood of lane changes.\n\item Curved road segments can have lane change-dampening effects.\n\item Slopes can affect lane change probabilities in both directions depending on other simultaneous conditions. \n\end{itemize}\n\nHowever, it is extremely challenging to identify which of the considered conditions are important at a certain location and to what extent they superimpose each other. 
Moreover, there are obviously many more factors than the three described here that impact lane change probabilities. For instance, the time of day, whether it is a weekend or a workday, current weather conditions, or even the season-dependent position of the sun have the potential to influence lane change probabilities.\n\nOn the one hand, generalizing in order to predict lane change probabilities from environment conditions is obviously a very hard task, as there are so many factors and interdependencies. On the other hand, we do not actually see the necessity of employing such a generalized model. At least in Germany, the highway road network is comparatively stable and, with an overall length of about 13\,000\,km, rather manageable. Instead of error-prone and complex generalizations, we suggest building up a lane change probability map containing lane change probabilities for all map links. This straightforward enumeration becomes possible based on data from a broad customer fleet. New `unseen' highway links can quickly be added to and updated in the atlas of lane change probabilities.\n\n\n\n\n \footnotetext[3]{sign depends on the driving direction}\n\n\n\n\section{SUMMARY AND OUTLOOK}\label{sec:conclusion}\n\nThis article showed how to develop a lane change probability map using measurement data collected with a fleet of customer vehicles. These data are spatially aggregated using digital maps. This enabled analyses revealing the impact of three exemplary location-dependent conditions on the lane change behavior. More precisely, the effects of highway interchanges, curvatures and slopes have been thoroughly investigated and discussed. Although the results show clear tendencies, they nevertheless demonstrate that especially the superposition of several factors makes it hard or even impossible to estimate location-specific lane change probabilities using simple models. 
Thus, we suggest dynamically constructing and maintaining such a lane change map.\n\nFuture work will focus on providing and integrating the information of such a lane change probability map to enhance onboard predictions of surrounding traffic participants' behavior. For this purpose, it is also reasonable to collect data over longer time horizons. We are currently integrating our processing chain into a streaming layer, so that only final results rather than raw sensor measurements will be provisioned. This will solve the described difficulties concerning data provisioning and will enable us to increase our data basis by orders of magnitude as well as to enlarge the analytics area. Thus, more advanced analytics and statistical tests become possible, as this increases, for example, the amount of strongly curved or ascending road segments. \n\n\n\n\n\section*{Acknowledgment}\n\nMap data copyrighted OpenStreetMap contributors and available from \href{https:\/\/www.openstreetmap.org}{https:\/\/www.openstreetmap.org}\n\n \n \n \n \n \n\n\balance\n\n\bibliographystyle{ieeetr}\n\n\n\section{INTRODUCTION}\n\n\begin{figure}[t!]\n\centering\includegraphics[width=0.46\textwidth]{heatmap3_small.png}\caption{Heatmap of lane changes detected in the collected measurements on German highways. Basemap: OpenStreetMap \cite{OpenStreetMap}}\label{fig:lcmap}\n\end{figure}\n\n\n\nIn general, experienced drivers are aware that they behave differently when facing the same traffic situation depending on the respective location. In this context, various local conditions, such as the curviness, the width of the road, the roadside structures or the placement of the signage, play an important role. \n\n\nIn order to navigate through any traffic situation safely and comfortably for the passengers, automated vehicles need to predict the behavior of surrounding vehicles. Although this is already a challenging task, the effect described above makes it even more challenging. 
In order to support such predictions, this article aims to analyze which local conditions are the most important and to what extent they influence the driving behavior. In our research, we focus on the lane change behavior in highway scenarios.\n\n\pubidadjcol\n\nFor our analyses, we collect measurement data from a fleet of customer vehicles. Note that, so far, most automotive software functions have been developed and tested based on data collected with dedicated test vehicles. This, however, results in several limitations. For instance, test drivers, knowing their routes very well, tend to drive in a special manner. Data collected from real customers therefore might be more representative. Besides, considering customer data offers the chance to cover many more miles in less time. \n\n\n\n\nAfter collecting the data, they are processed and spatially aggregated using digital maps. As a result, we obtain a map of lane change probabilities. Together with other map attributes, this enables downstream analyses. \n\n\nIn detail, we make the following contributions: \n\n\begin{enumerate}\n\item A procedure to construct a lane change probability map\n\item A thorough study and discussion of the impact of three exemplary location-dependent conditions (interchanges, curvature, slope) on the lane change behavior\n\end{enumerate}\n\nThe remainder of this article is structured as follows: \autoref{sec:rel_work} discusses related work. \autoref{sec:prep} outlines the preparation of the map and measurement data. Subsequently, \autoref{sec:results} presents our empirical investigations and discusses the results obtained. Finally, \autoref{sec:conclusion} summarizes our contribution and provides an outlook on future work.\n\n\section{RELATED WORK}\label{sec:rel_work}\n\nA general overview of motion prediction approaches, and especially deep learning-based ones, is given in \cite{lefevre2014, mozaffari2020deep}. 
Besides characterizing and categorizing approaches, \cite{mozaffari2020deep} shows typical strengths and weaknesses of each group. Among other things, it is mentioned that environment conditions such as road geometry and traffic rules can have significant impacts on the driving behavior. Most of the presented approaches are, however, not able to handle such impacts due to their input representation. For the latter, most approaches solely rely on the motion history of the vehicle to be predicted as well as the histories of its surrounding vehicles. Other approaches building on raw sensor data or bird's eye views, on the other hand, are computationally expensive. This makes such approaches hard to implement on onboard hardware with limited resources.\n\n\cite{wirthmueller2020} agrees with \cite{mozaffari2020deep} about the importance of external conditions for the driving behavior. The authors explicitly mention weather, time of day and traffic density. Subsequently, the latter's influence on the prediction capability of a known motion prediction system, rather than on the driving behavior itself, is studied. The article confirms the influence of varying traffic densities and concludes that such impacts need to be studied in more detail and integrated into motion prediction approaches.\n\n\cite{wirthmuller2020fleet} describes the development of an architecture concept aiming to enhance behavior predictions through massive online learning using customer fleets. The general idea here is to learn several context-specific prediction models and provide them to all vehicles.\n\n\cite{imanishi2020model} also tackles the motion prediction problem in diverse environments. To this end, all vehicles of the fleet are permanently connected to a cloud server. The vehicles regularly send information about their location and driving state to the server. If a vehicle needs to predict the behavior of a surrounding object, it requests the prediction from the server. 
There, a kernel density estimation using all nearby collected measurements is performed. Thus, all location-dependent factors are automatically included in the prediction. Although the approach seems to produce promising results in a small real-world evaluation, it is questionable whether the solution is also efficiently applicable to a larger area. In that case, problems in terms of storage capacity as well as computation time could arise. In addition, pure online prediction techniques suffer from transmission times.\n\n\cite{matute2018longitudinal} uses the curvature of the road as input to plan comfortable trajectories. The plain fact that location-dependent features are also included while planning pleasant trajectories additionally emphasizes that experienced drivers adapt to changed local conditions.\n\n\cite{qi2014location} studies location-dependent lane change behaviors on arterial roads from the viewpoint of traffic flow analytics. To do so, the authors solve the motion prediction task through classical modelling. The established model distinguishes between efficiency-driven and objective-driven lane changes and aggregates the contributions of both motivations. The efficiency-driven model part, on the one hand, is primarily based on the motion of surrounding vehicles. The objective-driven part, on the other hand, mostly relies on the location. To correctly parametrize the model, the authors use the well-studied NGSIM data set \cite{colyar2007us}. Here, the term location-dependent has to be understood as microscopic rather than macroscopic. The location-dependent nature of the approach solely refers to the location within the road segment observed in the NGSIM data set, which basically describes in which lane the vehicle is driving and how far away the next intersection is. 
Macroscopic effects, such as the differences between regions or cities that our article focuses on, are not taken into account.\n\nThe article presented in \cite{gonccalves2020change} originates from a completely different community, which is interested in so-called concept drifts. Concept drifts in general denote changes in classification problems over time, which can make previously well-working classifiers inadequate. \cite{gonccalves2020change} deals with the development of a concept drift detector that in particular is able to detect changes in the a-priori probabilities of the underlying classes. The general idea of a concept drift, even if spatial rather than temporal, can also be translated to our maneuver classification problem.\n\n\cite{wang2013learning} also tackles concept drifts in imbalanced online learning applications. The approach dynamically detects changes in the class probabilities and adapts its online learner in order to reflect them.\n\nConcluding our literature study, there exists, to the best of our knowledge, no research work that spatially aggregates lane change probabilities. This fact may be attributed to the enormous effort necessary to collect the underlying measurement data. Consequently, location-dependent factors fostering or dampening lane changes have not yet been analyzed on a large scale.\n\n\section{DATA AND MAP PREPARATION}\label{sec:prep}\n\n\begin{figure}[t!]\n\centering\includegraphics[width=0.48\textwidth]{labeling2.pdf}\caption{Simplified visualization of the labeling principle, exemplified by a lane change to the left. As shown, all samples up to 5 seconds prior to the lane crossing are treated as lane change samples, whereas all other samples are treated as lane following ones.}\label{fig:labeling}\n\end{figure}\n\nOur empirical investigations build upon data from a large fleet of customer vehicles. All vehicles of our fleet are wirelessly connected to a backend server. 
From time to time, the vehicles send a multitude of selected sensor signals to the backend. Instead of single point measurements for each sensor, a time series lasting up to 200\,s is collected. Prior to data transmission, the communication module anonymizes and encrypts the data. In the backend, all collected measurement data are transformed into an equidistant time representation with a resolution of 50\,ms. Besides, only measurements made on German highways are considered. Afterwards, lane changes carried out in the measurement sequences can be detected using the continuously measured distances between the vehicle and the lane markings. These detections can be localized through the GNSS (global navigation satellite system) positions that are sensed as well. For our further investigations, we pick the collected measurements of one full day. As a whole, this data set contains approximately 1\,350\,h of data. At the moment, provisioning data over a longer period is not possible due to technical reasons related to the massive storage capacity required. \n\nUsing the approximately 58\,000 detected lane changes, a first visual analysis already becomes possible by arranging them geographically as a heatmap (cf. \autoref{fig:lcmap}). As \autoref{fig:lcmap} reveals, this investigation, however, is not very informative, as all larger hotspots simply coincide with congested areas or, more precisely, with areas that are highly frequented by the vehicles of our fleet. Consequently, this result is also not well suited for downstream applications.\n\nTo overcome this, we assess the lane change events in relation to the amount of lane following events. For this purpose, we assign all measurements to the links of a digital map. Links are one form of representing parts of a digital map and can be considered as a sequence of GNSS points. 
In addition to its geometry, each link can have several properties such as road type, curvature, slope or speed limit. These properties enable further investigations (cf. \autoref{sec:results}). Moreover, we preprocess the map such that all links have the same length, in order to be able to compare behavior differences over various links. Furthermore, after some first attempts, we shifted from single lane change events to longer measurement parts categorized as lane changes. Each measurement point lying 5 or fewer seconds prior to the crossing of the lane marking is considered as belonging to that lane change. Note that this is also common practice in the maneuver classification community (e.\,g. in \cite{bahram2016, schlechtriemen2015will, wirthmueller2019}). According to the labeling defined in \cite{wirthmueller2019}, each measurement point is therefore assigned to one of the three essential maneuver classes for highway driving: lane change to the left \textit{LCL}, lane following \textit{FLW} and lane change to the right \textit{LCR}. \autoref{fig:labeling} visualizes this labeling principle. Afterwards, it becomes possible to calculate the probability of each of the three maneuver classes for each link. Consequently, the lane change behavior at a certain location can be investigated together with all measurements at that position. In particular, using more than a single point per lane change can help to suppress noise biasing the results of later analyses. Note that this is especially important, as lane changes occur orders of magnitude more rarely than lane following behavior does. 
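The labeling and per-link aggregation described above can be sketched as follows; this is a simplified illustration under assumed data structures (lists of timestamped samples and detected crossing events), not the production processing chain:

```python
from collections import Counter

T_H = 5.0  # seconds prior to a lane crossing that count as lane change samples

def label_samples(samples, crossings):
    """samples: list of (timestamp, link_id); crossings: list of
    (timestamp, direction) with direction "LCL" or "LCR" at the moment
    the lane marking is crossed. Returns (link_id, label) pairs."""
    labeled = []
    for t, link in samples:
        label = "FLW"
        for tc, direction in crossings:
            if 0.0 <= tc - t <= T_H:  # sample lies up to 5 s before a crossing
                label = direction
                break
        labeled.append((link, label))
    return labeled

def link_probabilities(labeled):
    """Relative frequencies of LCL / FLW / LCR per link."""
    counts = {}
    for link, label in labeled:
        counts.setdefault(link, Counter())[label] += 1
    return {link: {cls: c[cls] / sum(c.values()) for cls in ("LCL", "FLW", "LCR")}
            for link, c in counts.items()}
```

The per-link dictionaries returned by `link_probabilities` correspond to the maneuver class probabilities discussed in the text.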
Moreover, the probabilities estimated per link can be directly used as local a-priori probabilities for maneuver classification approaches as described in \cite{wirthmueller2019}.\n\n\n\n\n\begin{figure*}[t!]\n\centering\includegraphics[width=0.94\textwidth]{lcprobs_global3.png}\caption{Visualization of the calculated lane following probability $P_{FLW}$ per 200\,m long link. Probabilities are illustrated in shades of red, where dark red corresponds to a large probability. From a large distance, links close together might overlap. In addition, the links are visualized according to descending lane following probabilities. Larger homogeneously colored areas indicate the most interesting locations. Basemap: OpenStreetMap \cite{OpenStreetMap}}\label{fig:globalmap}\n\end{figure*}\n\n\begin{figure*}[t!]\n\centering\includegraphics[width=0.96\textwidth]{kr_stutt1.pdf}\caption{Microscopic view of the lane following probabilities $P_{FLW}$ per 200\,m long link at a highway merger (upper example) and a highway divider (lower example), respectively, at the highway interchange Stuttgart (second left highlighting in \autoref{fig:globalmap}). Basemap: OpenStreetMap \cite{OpenStreetMap}}\label{fig:krstutt}\n\end{figure*}\n\nIn \cite{wirthmueller2019}, a maneuver classifier is trained using a balanced data set. This means that the proportion of all maneuver classes in the data set is equal, whereas in reality lane changes occur much more rarely than lane following. Using a balanced data set instead of the unbalanced one enables the machine learning model (in \cite{wirthmueller2019} a multilayer perceptron) to learn the differences between the maneuver classes much more easily. Due to the extreme over-representation of lane following events, it would otherwise be favorable for the classifier to assign all samples to that class in order to achieve a good classification performance. 
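A classifier trained on such a balanced set implicitly assumes uniform class priors. As a sketch of the prior correction idea (our simplified illustration, not the exact procedure of \cite{wirthmueller2019}), its outputs can be rescaled with the true, possibly link-specific, a-priori probabilities:

```python
# Sketch: rescaling the outputs of a balanced-trained classifier by the
# true class priors; the uniform training prior cancels in the normalization.
def reweight(balanced_probs, priors):
    """balanced_probs, priors: dicts mapping class name to probability."""
    unnorm = {c: balanced_probs[c] * priors[c] for c in balanced_probs}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}
```

For a completely uninformative classifier output of $1/3$ per class, the corrected probabilities simply equal the priors, as one would expect.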
However, the class probabilities produced by a classifier trained that way do not correspond to the actual ones. Hence, \cite{wirthmueller2019} suggests weighting the estimated probabilities with the a-priori probabilities of the maneuver classes. At this point, it would also be possible to utilize location-dependent rather than global a-priori probabilities. In \autoref{sec:rel_work}, we pointed out that pure online prediction approaches suffer from transmission latencies. In contrast to such approaches, location-specific a-priori probabilities are comparatively static and can thus be transmitted for larger regions. Thus, short offline phases as well as transmission times do not cause problems.\n\nNote that location-specific a-priori estimates are strongly influenced by the prediction horizon $T_H$ used, which we set to 5\,s according to \cite{wirthmueller2019}. A longer horizon results in higher lane change probabilities, whereas a shorter horizon increases the probability of lane following. However, the absolute values are not relevant for analyzing location-based differences in the driving behavior. A more important aspect is whether a specific location, geometry or other location-specific properties result in more or fewer lane changes to one or both neighboring lanes. To get an idea of what constitutes more or fewer lane changes than usual, we again refer to \cite{wirthmueller2019}, where the following overall a-priori probabilities were derived from a large data set: \textit{LCL}: 0.03, \textit{FLW}: 0.94, \textit{LCR}: 0.03.\n\nAs most vehicles of our fleet are located in the area around Stuttgart, Germany, we restrict the data to highways in that area in order to base our analyses on a sufficient amount of data per link. Besides, we apply a threshold (i.\,e., 10 measurement points per meter) on the amount of data per link. In some very rare cases, the latter procedure results in a few missing links within the considered region. 
This ensures that the estimated probabilities are less subject to measurement noise. Nevertheless, the data quality can be further improved by increasing the amount of data. This would also help to expand the analytics area. Finally, the resulting data set contains more than 8\,600 lane change maneuvers. \n\n\section{INVESTIGATIONS AND RESULTS}\label{sec:results}\n\nAlready a first look at the map produced after the described preprocessing steps (cf. \autoref{fig:globalmap}) reveals areas which are obviously of special interest. These include highway interchanges, curves and slopes. In the following sections, \autoref{subsec:exits} to \autoref{subsec:slopes}, we study the respective effects in detail. Besides these identified local conditions, there might be even more. As producing a comprehensive collection is, however, not possible in practice, we restrict our upcoming investigations to the three given local conditions. \autoref{subsec:discussion} closes the section with an overall discussion of our examinations. \n\n\subsection{Highway Interchanges}\label{subsec:exits}\n\nA closer look at the lane change probabilities at highway interchanges reveals an unsurprising behavior. \autoref{fig:krstutt} provides examples of more detailed views of highway mergers and dividers. Ahead of dividers and behind mergers, the probability of following the current lane drops. Simultaneously, the probability of lane changes to the right increases ahead of the dividers. This can be traced back to vehicles leaving the current highway. Likewise, merging vehicles result in increased lane change probabilities to the left behind highway mergers.\n\n\begin{figure*}[t!]\n\centering\includegraphics[width=0.98\textwidth]{gaertringen_200.png}\caption{Microscopic view of the lane following probabilities $P_{FLW}$ per 200\,m long link in a curve close to Gärtringen. 
Basemap: OpenStreetMap \cite{OpenStreetMap}}\label{fig:curve1}\n\end{figure*}\n\nFurther, it is noteworthy that in the highway divider example in \autoref{fig:krstutt}, the lane change probability to the right (not explicitly visualized) increases significantly already ahead of the actual location of the divider. This effect can be reproduced at some other dividers as well. Apparently, drivers tend to get in lane early. This may be attributed to the specific characteristics of these highway dividers. Possibly, the departing lanes are very long or the signage is set up very early. Note that such local conditions are very hard to cover with digital maps as well. Besides, the probability of leaving the highway can vary widely from one divider to another depending on typical traffic flows. So, it is easy to imagine that each location is very individual.\n\n\n\begin{figure}[t!]\n\centering\includegraphics[width=0.48\textwidth]{bend.pdf}\caption{Calculation of the curvature equivalent $bend$.}\label{fig:bend}\n\end{figure} \n\n\subsection{Curves}\label{subsec:curves}\n\nTo enable investigations concerning curves, we first calculate a new feature called $bend$ for each link. This value is proportional to the average curvature of the link, but can be calculated more efficiently according to \autoref{eq:bend}: \n\n\n\begin{equation}\nbend = \frac{d_{SL}}{l_L}\n\label{eq:bend}\n\end{equation}\n\n\n\n$l_L$ denotes the length of the link $L$. $d_{SL}$ indicates the maximum distance between the link and the secant $S$ connecting its start and end point. \autoref{fig:bend} visualizes this. To distinguish between right and left turns, we select start and end points in such a way that right turns yield negative $bend$ values and left turns positive ones. This ensures that the bend values are in accordance with the vehicle coordinate system specified in the ISO norm 8855 \cite{iso8855} as well. 
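As a brief consistency check (our own derivation, not part of the original analysis), the $bend$ value is indeed proportional to the average curvature for links of fixed length: if a link of length $l_L$ follows a circular arc of radius $R$, the maximum distance to its secant is the sagitta,
\[
d_{SL}=R\left(1-\cos\frac{l_L}{2R}\right)\approx\frac{l_L^{2}}{8R},
\qquad
bend=\frac{d_{SL}}{l_L}\approx\frac{l_L}{8R}=\frac{l_L}{8}\,\kappa,
\]
with curvature $\kappa=1/R$. A value of $|bend|=0.07$ at $l_L=200$\,m thus corresponds to a curve radius of roughly $200/(8\cdot 0.07)\approx 360$\,m, a plausible order of magnitude for a sharp highway curve.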
\n\n\nNote that the $bend$ values are strongly influenced by the map discretization used. For our purpose, a link length of 200\,m has shown the best results. Decreasing the link lengths below 200\,m results in less stable probabilities containing more noise. By contrast, links longer than 200\,m are not able to represent curvatures appropriately. Within the selected region, the determined $bend$ values range from $-0.33$ to $0.14$. Obviously, road segments showing especially large bends occur much more rarely than slightly curved or straight ones. The largest share of bends ranges between $\pm$0.07.\n\nIn order to quantify the impact of curves on the lane change behavior, \autoref{fig:bend_vs_prob} shows the median probability for lane following $\widetilde{P_{FLW}}$ at links within certain $bend$ ranges. Road segments with very large $bends$ ($|.|$\,$>$\,$0.07$) were excluded due to the low data densities and the resulting noise level. From \autoref{fig:bend_vs_prob} one can conclude that larger bends result in larger lane following probabilities. Accordingly, curves seem to discourage drivers from changing lanes. However, note that the observable differences are rather small in absolute terms.\n\n\begin{figure}[t!]\n\centering\includegraphics[width=0.48\textwidth]{bend_200_vs_pflw_small2.pdf}\caption{Median lane following probability under certain $bend$ values of 200\,m long links. Error bars denote the standard error of the mean (SEM).}\label{fig:bend_vs_prob}\n\end{figure}\n\n\n\footnotetext[1]{sign depends on the driving direction}\n\nTaking a look at some specific locations substantiates the existence of the described effect. \autoref{fig:curve1} shows an example of a rather strongly curved road segment with $bend$ values ranging from $\pm$0.045\footnotemark[1] to $\pm$0.071\footnotemark[1] in the links belonging to the curve. 
The annotated lane following probabilities indicate that drivers tend to perform lane changes ahead of or behind the respective curve. It has to be mentioned, though, that a particularity is located close to that curve, i.\\,e. one of just a few left-hand highway exits in Germany. This should, however, make lane changes even more probable.\n\n\\begin{figure*}[t!]\n\\centering\\includegraphics[width=0.98\\textwidth]{curve2.png}\\caption{Microscopic view on the lane following probabilities $P_{FLW}$ per 200\\,m long link in a curve close to Pforzheim. Basemap: OpenStreetMap \\cite{OpenStreetMap}}\\label{fig:curve2}\n\\end{figure*}\n\n\\autoref{fig:curve2} shows another microscopic view of a curve, where the lane change-dampening impact can be observed especially if the curve is driven through as a right turn. Within the shown curve, the $bend$ values are nearly constant around $\\pm$0.030\\footnotemark[2]. Note that the probability of lane changes to the left seems to decrease particularly within the right turn. This can probably be explained by the fact that drivers who perform lane changes to the left in a right turn may get the feeling of being pushed out of the curve. Another possible explanation could be the reduced visibility of vehicles on the outer adjacent lane.\n\n\n\n\n\nIn addition, other publications \\cite{wirthmueller2019, dang2017time} argue that lane changes to the right are mostly motivated by the intention to leave the highway (objective-driven, cf. \\cite{qi2014location}), whereas lane changes to the left are mostly attributed to the intention of overtaking slower leading vehicles (efficiency-driven, cf. \\cite{qi2014location}). This means lane changes to the right might be more necessary to reach the destination. In turn, this might explain why the effect seems to be stronger here than for lane changes to the left.\n\nFurther note that the latter effect complies with another conclusion one may draw from \\autoref{fig:bend_vs_prob}. 
Comparing the two bars corresponding to the strongest right curves (leftmost bar) and left curves (rightmost bar) reveals that right curves seem to have a stronger lane change-dampening effect than left curves.\n\nIn conclusion, many indications support our intuition that curves have a lane change-dampening effect. The superposition of various effects, however, makes it difficult to prove this rigorously.\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[width=0.48\\textwidth]{albaufstieg.jpg}\\caption{Picture taken at the uphill part of the road at `Drackensteiner Hang'. Image source:\\href{http:\/\/www.autobahn-bilder.de\/inlines\/a8.htm}{http:\/\/www.autobahn-bilder.de\/inlines\/a8.htm}}\\label{fig:drackenstein}\n\\end{figure}\n\n\\footnotetext[2]{sign depends on the driving direction}\n\n\n\n\\subsection{Slopes}\\label{subsec:slopes}\nSlopes constitute another spatial particularity of interest in the context of lane change behavior. Compared to curves and highway interchanges, however, slopes cannot be easily identified on a navigation map; instead, one has to rely on expert or local knowledge to identify such correlations at all.\n\nOur attention was drawn to this potential correlation while taking a look at the rightmost highlighting in \\autoref{fig:globalmap}. This location is known as `Albaufstieg' at the so-called `Drackensteiner Hang' \\cite{Wikipedia}. The latter is an approximately 6\\,km long inclination with a quite constant slope of around 5\\,\\%. Due to the particular topology of this area, the roads for the two directions do not run in parallel. \\autoref{fig:drackenstein} shows a picture taken at the uphill part of that road. As can be seen, the road course at this location is not only very steep, but also hard to survey due to the additional curvature and narrowing road limits. Our lane change investigation reveals that hardly any lane changes are performed exactly at that location. 
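The binned evaluation behind \\autoref{fig:bend_vs_prob}, and analogously behind the slope analysis below, can be sketched as follows. This is an illustrative pure-Python sketch; the function name, the bin edges and the use of the standard error of the mean as the SEM estimator are our own assumptions.

```python
import math
import statistics

def binned_median_sem(values, probs, edges):
    """Median lane following probability and standard error per bin.

    values: per-link feature (bend or slope); probs: per-link P_FLW;
    edges: ascending bin edges. Returns one (median, sem, n) tuple per
    bin; empty bins yield (None, None, 0).
    """
    bins = [[] for _ in range(len(edges) - 1)]
    for v, p in zip(values, probs):
        for i, (lo, hi) in enumerate(zip(edges, edges[1:])):
            if lo <= v < hi:
                bins[i].append(p)
                break
    out = []
    for samples in bins:
        if not samples:
            out.append((None, None, 0))       # no links in this bin
        elif len(samples) == 1:
            out.append((samples[0], 0.0, 1))  # single link: no spread
        else:
            sem = statistics.stdev(samples) / math.sqrt(len(samples))
            out.append((statistics.median(samples), sem, len(samples)))
    return out
```

Excluding sparsely populated outer bins, as done for the strongest bends and slopes, then simply amounts to restricting the list of edges.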
\n\nThe corresponding downhill part of the road, in turn, shows only a slightly reduced lane change probability. In summary, this special location supports our impression that slopes, and especially uphill parts, could have a lane change-dampening effect similar to that of curves.\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[width=0.48\\textwidth]{slope_200_vs_pflw_small2.pdf}\\caption{Median lane following probability under certain $slope$ values of 200\\,m long links. Error bars denote the standard error of the measurement (SEM).}\\label{fig:slope_vs_prob}\n\\end{figure}\n\n\\begin{figure*}[t!]\n\\centering\\includegraphics[width=0.93\\textwidth]{karlsruhe.png}\\caption{Microscopic view on the lane following probabilities $P_{FLW}$ per 200\\,m long link at a location close to Karlsruhe.}\\label{fig:karlsruhe}\n\\end{figure*}\n\nTo investigate this in more detail, \\autoref{fig:slope_vs_prob} shows the median lane following probability $\\widetilde{P_{FLW}}$ at links (with 200\\,m length) under certain $slope$ values. The illustration is equivalent to that for the curvatures in \\autoref{fig:bend_vs_prob}. The shown slopes are also in accordance with the vehicle coordinate system defined by ISO norm 8855 \\cite{iso8855}. Positive slope values indicate ascents, whereas negative values indicate descents. Most slopes on highways range between $\\pm$7\\,\\%. To the best of our knowledge, steeper roads constitute an absolute exception, at least on highways in Central Europe. Even values around 5\\,\\% enduring over several kilometers, as in the case of the `Drackensteiner Hang', are very rare.\n\n\n\n\n\nA more detailed view of \\autoref{fig:slope_vs_prob} reveals that it is not possible to detect an unambiguous trend supporting our intuition that slopes have a lane change-dampening effect. Explanations for this may be manifold. Once more, several effects probably superimpose each other. 
A good example of such a superposition can be found at a location close to Karlsruhe, shown in \\autoref{fig:karlsruhe}. At this location the lane change probabilities are increased even though the road course is rather steep, with slope values ranging from $\\pm$4.5\\,\\%\\footnotemark[3] to $\\pm$7.3\\,\\%\\footnotemark[3]. These lane changes are, however, attributed to the nearby highway interchange. Merely removing the links at this location from the examination in \\autoref{fig:slope_vs_prob} has a significant effect: it corrects the lane following probabilities in the bins corresponding to the largest slopes notably upwards.\n\n\nIn summary, our overall impression is that slopes can contribute to dampening lane changes, but only in combination with other conditions. For instance, slopes in combination with curves can make locations hard to survey, which can decrease the willingness to change lanes. On the other hand, slopes can also lead to opposite effects at other locations, e.\\,g. if poorly motorized vehicles and trucks are slowed down by ascents. This could provide additional motivation to other traffic participants to change lanes in order to overtake the slower vehicles. \n\n\\subsection{Discussion}\\label{subsec:discussion}\n\nThe presented examinations clearly emphasize that location-dependent effects on the lane change behavior definitely exist but are hard to investigate and quantify. This is, of course, partly due to the superposition of multiple effects. Nevertheless, the investigations have revealed the following tendencies:\n\n\\begin{itemize}\n\\item As expected, highway mergers and dividers can foster the likelihood of lane changes.\n\\item Curved road segments can have lane change-dampening effects.\n\\item Slopes can affect lane change probabilities in both directions depending on other simultaneous conditions. 
\n\\end{itemize}\n\nHowever, it is extremely challenging to identify which of the considered conditions are important at a certain location and to what extent they superimpose each other. Moreover, there are obviously many more factors than the three described here that impact lane change probabilities. For instance, the time of day, whether it is a weekend or a workday, current weather conditions or even the season-dependent position of the sun have the potential to influence lane change probabilities.\n\nOn the one hand, generalizing in order to predict lane change probabilities from environmental conditions is obviously a very hard task, as there are so many factors and interdependencies. On the other hand, we actually do not see the necessity to employ such a generalized model. At least in Germany, the highway road network is comparably stable and, with an overall length of about 13\\,000\\,km, rather manageable. Instead of error-prone and complex generalizations, we suggest building up a lane change probability map containing lane change probabilities for all map links. This straightforward enumeration becomes possible based on data from a broad customer fleet. New, `unseen' highway links will quickly be added to and updated in this atlas of lane change probabilities.\n\n\n\n\n \\footnotetext[3]{sign depends on the driving direction}\n\n\n\n\\section{SUMMARY AND OUTLOOK}\\label{sec:conclusion}\n\nThis article showed how to develop a lane change probability map using measurement data collected with a fleet of customer vehicles. These data were spatially aggregated using digital maps. This enabled analyses revealing the impact of three exemplary location-dependent conditions on the lane change behavior. More precisely, the effects of highway interchanges, curvatures and slopes have been thoroughly investigated and discussed. 
Although the results show clear tendencies, they nevertheless demonstrate that especially the superposition of several factors makes it hard or even impossible to estimate location-specific lane change probabilities using simple models. Thus, we suggest dynamically constructing and maintaining such a lane change map.\n\nFuture work will focus on providing and integrating the information from such a lane change probability map to enhance onboard predictions of the behavior of surrounding traffic participants. For this purpose, it is also reasonable to collect data over longer time horizons. We are currently integrating our processing chain into a streaming layer. Thus, only final results rather than raw sensor measurements will be provisioned. This will solve the described difficulties concerning data provisioning and will enable us to increase our data basis by an order of magnitude as well as to enlarge the analytics area. Thus, more advanced analytics and statistical tests become possible, as this increases, for example, the amount of strongly curved or ascending road segments. \n\n\n\n\n\\section*{Acknowledgment}\n\nMap data copyrighted OpenStreetMap contributors and available from \\href{https:\/\/www.openstreetmap.org}{https:\/\/www.openstreetmap.org}\n\n \n \n \n \n \n\n\\balance\n\n\\bibliographystyle{ieeetr}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAtomic Bose-Einstein condensates (BECs) have received a great deal of interest\nsince they were first produced a decade ago\n\\cite{anderson95,bradley95,davis95}. They can exhibit various topological\nexcitations, such as vortices and solitons. The dynamics of solitons in\nelongated BECs \\cite{stringari_book} is the atom-optics version of the\nnonlinear propagation of light pulses in optical fibres \\cite{kivshar98}. 
The\nBEC offers a remarkable freedom in terms of controlling the physical parameters\nsuch as dimensionality and even the sign of the strength of the atom-atom\ninteraction \\cite{stringari_book}.\n\nSolitons in BECs can be of both dark and bright type. Dark solitons are formed\nin BECs with repulsive interaction between the atoms \\cite{stringari_book}. For\ncompletely dark solitons the condensate wavefunction is zero at the centre and\nchanges its sign when crossing the central point, i.e.\\ the condensate\nwave-function has an infinitely steep $\\pi$ phase slip at the centre\n\\cite{stringari_book}. Bright solitons, on the other hand, are formed in\nBECs with attractive interaction between the atoms. The wave-function of the BEC\nis then localised at the centre \\cite{stringari_book} and goes to zero further\naway from this point. Dark solitons, which manifest themselves as a\ndensity minimum moving with a constant speed against a uniform background\ndensity, as well as bright solitons, which are shape-preserving wave packets,\nhave both been experimentally realised\n\\cite{burger99,denschlag00,khaykovich02,strecker02}. The dynamics of solitons\nin BECs has been extensively studied. This has included investigations of the\nstability properties \\cite{muryshev99}, as well as soliton dynamics in\ninhomogeneous clouds \\cite{busch00}, in multicomponent BECs\n\\cite{ohberg01,busch01} and in supersonic flow \\cite{el06}. Solitons can be\ncreated in various ways with a variable degree of controllability, e.g., by\ncolliding clouds of BEC \\cite{reinhardt97,scott98,brazhnyi03} or by engineering\nthe density \\cite{dutton01,ginsberg05}.\n\nTraditionally dark solitons in BECs are created using phase imprinting\n\\cite{burger99,denschlag00,dobrek99,carr01,biao02}, where a part of the\ncondensate cloud is illuminated by a far detuned laser pulse in order to induce\na sharp $\\pi$ phase slip in the wave function. 
The subsequent dynamics can\nindeed develop solitons \\cite{burger99,denschlag00}. There are, however, some\nrather severe drawbacks with such a method of phase engineering. The resolution\nof the required phase slip is naturally restricted by the diffraction limit,\ni.e.\\ the width of the phase slip should be larger than an optical wave-length.\nFurthermore the phase imprinting does not produce the density minimum\ncharacteristic of dark solitons in the region of the phase change. Hence\ncompletely dark stationary solitons are difficult to achieve, which\nconsequently results in so-called grey, moving solitons with a shallow density\ndip.\n\nIt is of significant interest to be able to create slowly moving, or even\ncompletely stationary solitons in order to test, for instance, their scattering\nproperties. In such collisions the shapes of the solitons are preserved; in\naddition, a relative spatial shift is expected. This spatial shift, however, can\nonly be detected for extremely slow solitons due to the inherent logarithmic\ndependence of the spatial shift on the relative velocity between the solitons\n\\cite{zakharov73,burger02}. The standard phase imprinting also inevitably\ncreates phonons in the trapped cloud because the constructed initial state is\nnot the exact soliton solution, largely due to the missing density notch\n\\cite{burger99,denschlag00}. 
This leads to a hole in\nthe atomic density, the width of which is only limited by the intensity ratios\nbetween the incident laser beams due to the geometric nature \\cite{ruseckas05}\nof the process. The formation of the hole is accompanied by a step-like\n(infinitely sharp) $\\pi$ phase slip in the atomic wave-function when crossing\nthe zero-point. The method is particularly useful for creating multicomponent\n(vector) solitons of the dark-bright form as well as the dark-dark combination.\nIn addition, it is possible to create, in a controllable way, two or more slowly\nmoving dark solitons close to each other for studying their collisional\nproperties.\n\n\\section{Formulation}\n\\subsection{Outline of the proposed setup}\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=270, width=0.70\\textwidth]{fig_sol1.eps}\n\\caption{a) The level scheme for the three laser beams $\\Omega_i$ ($i=1,2,3$).\nb) The sequence of laser beams being swept through the BEC involves a\npreparation stage $\\Omega_2\\rightarrow\\Omega_1$ and a final stage\n$\\Omega_1\\rightarrow\\Omega_2$ which engineers the phase and density of the BEC\nto produce a soliton.}\n\\label{fig1}\n\\end{figure}\n\nConsider a cigar-shaped atomic BEC elongated in the $z$-direction. To create\nsolitons in the BEC, we propose to sweep three incident laser beams across the\ncondensate. The laser beams interact with the condensate atoms in a tripod\nconfiguration \\cite{unanyan98,ruseckas05}, i.e.\\ the atoms are characterized by\nthree ground states $|1\\rangle$, $|2\\rangle$, $|3\\rangle$ and an excited state\n$|0\\rangle$. The $j$-th laser resonantly drives the atomic transition between\nthe ground state $|j\\rangle$ and the excited state $|0\\rangle$, see\nFig.~\\ref{fig1}a. Initially the atoms forming the BEC are prepared in the\nhyperfine ground state $|1\\rangle$. 
Subsequently the lasers are swept through\nthe BEC in the $x$-direction, i.e.\\ perpendicular to the longitudinal axis $z$\nof the condensate.\n\nThe sweeping process consists of two stages depicted in Fig.~\\ref{fig1}b. In the\nfirst stage the lasers $1$ and $2$ are applied in a counter-intuitive sequence\nto transfer adiabatically the atoms from the ground state $|1\\rangle$ to\nanother ground state $|2\\rangle$. If an additional laser $3$ is on during the\nfirst stage, a partial transfer of atoms from the ground state $|1\\rangle$ to\n$|2\\rangle$ is possible \\cite{unanyan98}. In that case a coherent superposition\nof states $|1\\rangle$ and $|2\\rangle$ is created after completing the first\nstage. In the second stage, the lasers $1$, $2$ and $3$ are applied once again\nto transfer atoms from the state $|2\\rangle$ back to the state $|1\\rangle$ and\nfrom the state $|1\\rangle$ to the state $|2\\rangle$. If the amplitude of one of\nthe lasers $\\Omega_{1}$ or $\\Omega_{2}$ changes sign at $z=z_{0}$, the\nBEC picks up a $\\pi$ phase shift at this point after the sweeping, and a\nsoliton can be formed. This is the case e.g.\\ if one of the beams is a\nfirst-order Hermite-Gaussian beam centered at $z=0$. 
In fact the atoms would experience absorption\nin the vicinity of $z=z_{0}$ if the support laser $3$ were missing during the\nsecond stage.\n\nIt should be mentioned that there are similar previous proposals for creating\nvortices in a BEC via two-laser Raman processes involving the transfer of\nan optical vortex to the atoms \\cite{marzlin97,nandi04,andersen06}. In these\nschemes the lasers are far detuned from the single-photon resonance to avoid\nabsorption at the vortex core. In our scheme the lasers are in exact\nsingle-photon resonance, so the use of the third (support) laser is essential\nto avoid losses. An advantage of the resonant scheme is that an efficient\nand complete population transfer is possible between the hyperfine ground\nstates, whereas in the non-resonant case only a fraction of the population can be\ntransferred \\cite{andersen06}.\n\n\\subsection{Hamiltonian for a tripod atom}\n\nLet us now provide a quantitative description of our scheme. The $j$-th laser\nbeam is characterised by the complex Rabi frequency\n$\\tilde{\\Omega}_{j}=\\Omega_{j}\\exp(i\\mathbf{k}_{j}\\cdot\\mathbf{r}+iS_{j})$,\nwith $j=1,2,3$, where $\\Omega_{j}$ is the real amplitude, the phase being\ncomprised of the local phase $\\mathbf{k}_{j}\\cdot\\mathbf{r}$ as well as the\nglobal (distance-independent) phase $S_{j}$. In what follows, the Rabi\nfrequencies $\\Omega_{2}$ and $\\Omega_{3}$ are considered to be positive:\n$\\Omega_{2}>0$, $\\Omega_{3}>0$. Yet, the Rabi frequency $\\Omega_{1}$ is allowed\nto be negative. 
This makes it possible to include an additional $\\pi$ phase\nshift in the spatial profile of the first beam when crossing the zero-point at\n$z=z_{0}$.\n\nThe electronic Hamiltonian of a tripod atom reads in the interaction\nrepresentation:\n\\begin{equation}\n\\hat{H}_{e}=-\\hbar(\\tilde{\\Omega}_{1}|0\\rangle\\langle1|\n+\\tilde{\\Omega}_{2}|0\\rangle\\langle2|+\\tilde{\\Omega}_{3}|0\\rangle\\langle3|)\n+\\mathrm{H.c.}\n\\label{H}\n\\end{equation}\nThe tripod atoms have two degenerate dark states $|D_{1}\\rangle$ and\n$|D_{2}\\rangle$ of zero eigen-energy ($\\hat{H}_{e}|D_{n}\\rangle=0$) containing\nno excited-state contribution \\cite{unanyan98,ruseckas05},\n\\begin{eqnarray}\n|D_{1}\\rangle & = & \\frac{1}{\\sqrt{1+\\zeta^{2}}}\n\\left(|1\\rangle^{\\prime}-\\zeta|2\\rangle^{\\prime}\\right)\\label{D1}\\\\\n|D_{2}\\rangle & = & \\frac{1}{\\sqrt{1+\\zeta^{2}}}\\left(\\xi_{3}\n\\left(\\zeta|1\\rangle^{\\prime}+|2\\rangle^{\\prime}\\right)\n-\\xi_{2}(1+\\zeta^{2})|3\\rangle^{\\prime}\\right)\\,,\n\\label{D2}\n\\end{eqnarray}\nwhere $|j\\rangle^{\\prime}=|j\\rangle\\exp(i(\\mathbf{k}_{3}-\\mathbf{k}_{j})\n\\cdot\\mathbf{r}+i(S_{3}-S_{j}))$ (with $j=1,2,3$) are the modified atomic\nstate-vectors accommodating the phases of the incident laser fields,\n$\\zeta=\\Omega_{1}\/\\Omega_{2}$ is the ratio between the Rabi frequencies of the\nfirst and second fields, and $\\xi_{j}$ are the normalised Rabi frequencies\n($j=1,2,3$),\n\\begin{equation}\n\\xi_{j}=\\frac{\\Omega_{j}}{\\Omega},\\qquad\n\\Omega=\\sqrt{\\Omega_{1}^{2}+\\Omega_{2}^{2}+\\Omega_{3}^{2}}\n\\label{ksi-j}\n\\end{equation}\nwith $\\xi_{3}>0$ and $-\\infty<\\zeta<+\\infty$. 
The atomic dark states\n$|D_{1}\\rangle$ and $|D_{2}\\rangle$ depend on the centre of mass coordinate\n$\\mathbf{r}$ through the spatial dependence of the Rabi frequencies\n$\\Omega_{j}$ and state-vectors $|j\\rangle^{\\prime}$.\n\n\\subsection{General equations of motion}\n\nThe full atomic state-vector of a multicomponent BEC is\n$|\\Phi(\\mathbf{r},t)\\rangle=\\sum_{j=0}^{3}|j\\rangle\\Psi_{j}(\\mathbf{r},t)$,\nwhere the constituent wave functions $\\Psi_{j}(\\mathbf{r},t)$ describe the\ntranslational motion of the BEC in the internal state $|j\\rangle$ of the tripod\nscheme. The wave functions $\\Psi_{j}(\\mathbf{r},t)$ obey a multicomponent\nGross-Pitaevski equation of the form\n\\begin{equation}\ni\\hbar\\frac{\\partial}{\\partial t}|\\Phi(\\mathbf{r},t)\\rangle=\n\\left[-\\frac{\\hbar^{2}}{2M}\\nabla^{2}+\\hat{H}_{e}\n+\\hat{V}\\right]|\\Phi(\\mathbf{r},t)\\rangle ,\n\\label{eq:GP-Eq}\n\\end{equation}\nwhere $\\hat{H}_{e}$ from Eq.~(\\ref{H}) describes the light-induced transitions\nbetween the different internal states of atoms. The diagonal operator\n\\begin{equation}\n\\hat{V}=\\sum_{j=0}^{3}\\Big(V_{j}+\\sum_{l=0}^{3}g_{jl}|\\Psi_{l}|^{2}\\Big)|j\\rangle\\langle\nj|\\,\n\\label{eq:V}\n\\end{equation}\naccommodates the trapping potential $V_{j}(\\mathbf{r})$ for the $j$-th internal\nstate, as well as the nonlinear interaction between the components $j$ and $l$\ncharacterised by the strength $g_{jl}=4\\pi\\hbar^{2}a_{jl}\/M$, with $a_{jl}$\nbeing the corresponding scattering length. 
The full atomic state-vector can then be\nexpanded as: $|\\Phi(\\mathbf{r},t)\\rangle=\\sum_{n=1}^{2}\n\\Psi_{n}^{(D)}(\\mathbf{r},t)|D_{n}(\\mathbf{r},t)\\rangle$, where a composite\nwavefunction $\\Psi_{n}^{(D)}(\\mathbf{r},t)$ describes the translational motion of\nan atom in the dark state $|D_{n}(\\mathbf{r},t)\\rangle$. The atomic centre of\nmass motion is thus represented by a two-component wave-function\n\\begin{equation}\n\\Psi=\\left(\\begin{array}{c}\n\\Psi_{1}^{(D)}\\\\\n\\Psi_{2}^{(D)}\n\\end{array}\\right)\\,,\n\\label{multicomponent}\n\\end{equation}\nobeying the following equation of motion \\cite{ruseckas05}:\n\\begin{equation}\ni\\hbar\\frac{\\partial}{\\partial t}\\Psi=\n\\left[\\frac{1}{2M}(-i\\hbar\\nabla-\\mathbf{A})^{2}\n+V(\\mathbf{r})+\\phi-\\beta\\right]\\Psi\\,,\n\\label{Eq-dark}\n\\end{equation}\nwhere the effective vector potential $\\mathbf{A}$ and the matrix $\\beta$ are\n$2\\times2$ matrices appearing due to the spatial and temporal dependence of\nthe dark states: $\\mathbf{A}_{n,m}=i\\hbar\\langle D_{n}(\\mathbf{r},t)|\\nabla\nD_{m}(\\mathbf{r},t)\\rangle$ and $\\beta_{n,m}=i\\hbar\\langle\nD_{n}(\\mathbf{r},t)|\\partial\/\\partial t D_{m}(\\mathbf{r},t)\\rangle$. The\nformer $\\mathbf{A}$ is known as the Mead-Berry connection\n\\cite{berry84,mead91}, whereas the latter matrix $\\beta$ is responsible for the\ngeometric phase \\cite{wilczek84}. The $2\\times2$ matrix $\\phi$ is the effective\ntrapping potential (explicitly presented in Ref.~\\cite{ruseckas05}) appearing\ndue to the spatial dependence of the dark states. 
Assuming that all three beams\nco-propagate ($\\mathbf{k}_1 \\approx \\mathbf{k}_2 \\approx \\mathbf{k}_3 $), the\neffective vector potential \\cite{ruseckas05} reduces to\n\\begin{equation}\n\\mathbf{A}=\\hbar\\frac{\\xi_{3}}{1+\\zeta^{2}}\\nabla\\zeta\\left(\n\\begin{array}{cc}\n0 & i\\\\\n-i & 0\n\\end{array}\n\\right)\\,\n\\label{A-tripod}\n\\end{equation}\nLastly, the $2\\times2$ matrix $V$ originating from the operator $\\hat{V}$,\nEq.~(\\ref{eq:V}), accommodates the trapping potential for the dark states\n\\cite{ruseckas05} as well as the atom-atom coupling.\n\n\\subsection{Time-evolution during the sweeping}\n\nSuppose the incident laser beams are swept through a trapped BEC along the $x$\naxis with a velocity $\\mathbf{v}$, as shown in Fig.~\\ref{fig1}b. This can be\ndone either by shifting in the transversal ($x$) direction the laser beams\npropagating along the $y$ axis or by applying a set of laser pulses of the\nappropriate shape and sequence propagating in the $x$ direction. In the latter\ncase, the sweeping velocity $v$ will coincide with the speed of light. In both\ncases the adiabatic dark states depend on time in the following way:\n$|D_{n}(\\mathbf{r},t)\\rangle \\equiv|D_{n}(\\mathbf{r}')\\rangle$, where\n$\\mathbf{r}'=(x',y,z)\\equiv(x-vt,y,z)$ is the atomic coordinate in the frame of\nthe moving laser fields. Let us assume that the time,\n$\\tau_{\\mathrm{sweep}}=d\/v$, it takes to sweep the laser beams through a BEC of\nthe width $d$, is small compared to the time associated with the BEC chemical\npotential $\\tau_{\\mu}=\\hbar\/\\mu$ which is typically of the order of\n$10^{-5}\\,\\mathrm{s}$. In that case one can neglect the dynamics of the atomic\ncentre of mass during the sweeping. 
Consequently the time evolution of the\nmulticomponent wave-function during the sweeping is governed by the matrix-term\n$\\beta=-vA_{x}$ featured in Eq.~(\\ref{Eq-dark}), giving\n\\begin{equation}\ni\\hbar\\partial_t\\Psi=vA_{x}\\Psi\\,,\n\\label{eq:fast}\n\\end{equation}\nwhere $A_{x}$ is the effective vector potential along the sweeping direction.\n\nIn passing we note that the subsequent time evolution of the BEC after the\ntwo-stage sweeping will be described by the general Gross-Pitaevski equation\n(\\ref{eq:GP-Eq}) with the light fields off ($\\hat{H}_{e}=0$), as we shall do in\nSection~\\ref{sect4}.\n\nReturning to Eq.~(\\ref{eq:fast}), since $vA_{x}$ commutes with itself at\ndifferent times, one can relate the wave-function $\\Psi(t)$ at a final time\n$t=t_{f}$ to the one at the initial time $t=t_{i}$ as\n\\begin{equation}\n\\Psi(\\mathbf{r},t_{f})=\\exp\\left(-i\\Theta\\right)\\Psi(\\mathbf{r},t_{i})\\,,\n\\label{solution-formal1}\n\\end{equation}\nwhere the exponent $\\Theta$ is a $2\\times2$ Hermitian matrix\n\\begin{equation}\n\\Theta=\\frac{1}{\\hbar}\\int_{t_{i}}^{t_{f}}A_{x}(\\mathbf{r}-\\mathbf{v}t)vdt\n=\\frac{1}{\\hbar}\\int_{x_{f}}^{x_{i}}A_{x}(\\mathbf{r'})\ndx'\\,,\n\\label{Q-A}\n\\end{equation}\nand the integration is over the sweeping path $\\mathbf{r}'=(x-vt,y,z)$ from\n$x_{f}=x-vt_{f}$ to $x_{i}=x-vt_{i}$. In most cases of interest the initial and\nfinal times can be considered to be sufficiently remote, so that the spatial\nintegration can be taken from $x_{f}=-\\infty$ to $x_{i}=+\\infty$.\n\n\\subsubsection{The first stage}\n\nLet us now analyze the proposed two-stage setup in more detail. In the first\nstage both Rabi frequencies $\\Omega_{1}$ and $\\Omega_{2}$ are positive. The\nlasers $1$ and $2$ are applied in a counterintuitive order (see\nFig.~\\ref{fig1}b), where the ratio $\\zeta=\\Omega_{1}\/\\Omega_{2}$ changes from\n$\\zeta(t'_{i})=0$ to $\\zeta(t'_{f})=+\\infty$. 
On the other hand, the laser $3$\nis dominant for both the initial and final times where\n$\\xi_{3}=\\Omega_{3}\/\\Omega=1$. Initially the BEC has the wave-function\n$\\Psi(\\mathbf{r})$ and is in the internal atomic ground state $|1\\rangle$ which\ncoincides with the first dark state at the initial time $t'_{i}$,\ni.e.\\ $|D_{1}(\\mathbf{r},t'_{i})\\rangle=|1\\rangle$. The full initial atomic\nstate-vector is therefore\n$|\\Phi(\\mathbf{r},t'_{i})\\rangle=\\Psi(\\mathbf{r})|D_{1}(\\mathbf{r},t'_{i})\\rangle$.\nThis provides the following initial condition for the multicomponent\nwave-function:\n\\begin{equation}\n\\Psi(\\mathbf{r},t'_{i})=\\left(\n\\begin{array}{c}\n\\Psi(\\mathbf{r})\\\\\n0\n\\end{array}\\right)\\,.\n\\label{solution-initial}\n\\end{equation}\nEquations (\\ref{A-tripod}) and (\\ref{solution-formal1})--(\\ref{solution-initial})\nyield the multicomponent wave-function after the first stage\n\\begin{equation}\n\\Psi(\\mathbf{r},t'_{f})=\\Psi(\\mathbf{r})\\left(\n\\begin{array}{c}\n\\cos\\beta\\\\\n-\\sin\\beta\n\\end{array}\\right)\\,,\n\\label{solution-specific}\n\\end{equation}\nwhere\n\\begin{equation}\n\\beta=\\int_{-\\infty}^{+\\infty}\\xi_{3}\\frac{\\partial\\arctan\\zeta}{\\partial\nx'}dx'\\,\n\\label{alpha-z}\n\\end{equation}\nis the mixing angle between the dark states acquired in the first stage.\n\nSuppose we have the following laser beams. The second beam $\\Omega_{2}$ is the\nGaussian beam characterised by a waist $\\sigma_{z}$ in the $z$ direction. The\nbeam is centered at $x'=\\bar{x}+\\Delta$ in the sweeping direction and at $z=0$\nin the $z$ direction,\n\\begin{equation}\n\\Omega_{2}=Ae^{-z^{2}\/\\sigma_{z}^{2}-(x'-\\bar{x}-\\Delta)^{2}\/\\sigma_{x}^{2}}.\n\\label{Omega-2}\n\\end{equation}\nThe first beam $\\Omega_{1}$ is characterised by the same amplitude $A$, the\nsame waist $\\sigma_{z}$ and the same width $\\sigma_{x}$. 
Yet it is centered at\n$x'=\\bar{x}-\\Delta$ in the sweeping direction, where $2\\Delta$ is the separation\nbetween the two beams. The beam waists should be of the order of the condensate\nlength (or larger) in the $z$-direction, so that the whole condensate is\nilluminated by the beams.\n\nThe third beam is considered to change little along the sweeping direction $x$.\nFurthermore it has the same width $\\sigma_{z}$ in the $z$-direction as the\nfirst two beams\n\\begin{equation}\n\\Omega_{3}=A\\kappa e^{-z^{2}\/\\sigma_{z}^{2}}.\n\\label{Omega-3}\n\\end{equation}\n\nThe first stage is aimed at creating a superposition of states $|1\\rangle$ and\n$|2\\rangle$. Since we take all the beams to be the Gaussian beams characterized\nby the same widths $\\sigma_{z}$, the Rabi frequency ratios $\\Omega_2\/\\Omega_1$\nand $\\Omega_3\/\\Omega_1$ have no $z$-dependence. As a result the acquired\nmixing angle $\\beta$ has no $z$-dependence, i.e.\\ it is uniform along the BEC.\nThe magnitude of $\\beta$ depends on the relative intensity of the third laser.\nIf the third laser is weak ($\\xi_{3}=\\Omega_{3}\/\\Omega\\rightarrow0$ at the\ncrossing point where $\\zeta=\\Omega_{1}\/\\Omega_{2}=1$), the mixing between the\nstates $|1\\rangle$ and $|2\\rangle$ is small: $\\beta\\ll 1$. On the other hand,\nif the Rabi frequency $\\Omega_{3}$ is comparable with $\\Omega_{1}$ and $\\Omega_{2}$\nat the crossing point where $\\zeta=\\Omega_{1}\/\\Omega_{2}=1$, the mixing can be\nclose to its maximum: $\\beta\\approx\\pi\/4$. In this way, one can control the\nmixing angle by changing the intensity of the third beam, as one can see from\nFig.~\\ref{fig-beta}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.70\\textwidth]{fig_beta.eps}\n\\caption{Dependence of the mixing angle $\\beta$ on the relative amplitude of\nthe third beam $\\kappa$. 
The spatial separation between the first and the\nsecond beams is taken to be $2\\Delta=1.2\\sigma_x$ where $\\sigma_x$ is the width\nof the beams in the sweeping direction.}\n\\label{fig-beta}\n\\end{figure}\n\n\\subsubsection{The second stage}\n\nIn the second stage the Rabi frequency $\\Omega_{1}$ can be either positive or\nnegative depending on the transversal coordinate $z$. The laser $1$ is now\napplied first, so that the ratio $\\zeta=\\Omega_{1}\/\\Omega_{2}$ changes from\n$\\zeta(t_{i})=\\pm\\infty$ to $\\zeta(t_{f})=0$ in the second stage. Again the\nthird laser dominates for the initial and final times: $\\Omega_{3}\/\\Omega=1$.\nThe second stage takes place immediately after completing the first stage, so\nthe multicomponent wave-function of the first stage (\\ref{solution-specific})\nserves as the initial condition for the second stage.\n\nEquations (\\ref{solution-formal1}), (\\ref{Q-A}) and (\\ref{solution-specific})\ntogether with (\\ref{D1}) and (\\ref{D2}) yield the total state vector after the\nsecond stage:\n\\begin{eqnarray}\n|\\Phi(\\mathbf{r},t_{f})\\rangle & = & |1\\rangle\\Psi(\\mathbf{r})\n\\left(\\sin\\gamma\\cos\\beta-e^{i\\nu_{12}}\\cos\\gamma\\sin\\beta\\right)\\nonumber\\\\\n& & -|2\\rangle\\Psi(\\mathbf{r})\\left(\\cos\\gamma\\cos\\beta\n+e^{i\\nu_{12}}\\sin\\gamma\\sin\\beta\\right)\\:,\n\\label{state-vector-final-bare-basis1}\n\\end{eqnarray}\nwhere $\\nu_{12}=S_1-S_2+S_{2}'-S_{1}'$ is the phase mismatch between the Rabi\nfrequencies $\\Omega_{1}$ and $\\Omega_2$ in the first and second stages. The\nresulting mixing angle acquired in the second stage is\n\\begin{equation}\n\\gamma\\equiv \\gamma_z=\\int_{-\\infty}^{+\\infty}(1-\\xi_{3})\n\\frac{\\partial\\arctan\\zeta}{\\partial x'}dx'\\,.\n\\label{gamma-z}\n\\end{equation}\nIf the first and second lasers are weak ($\\Omega_{3}\/\\Omega\\rightarrow1$ at the\ncrossing point where $\\zeta=\\Omega_{1}\/\\Omega_{2}=1$), the mixing angle is\nsmall: $\\gamma_{z}\\ll 1$. 
On the other hand, if the first and second lasers are\nstrong at this point, we have $\\gamma_{z}\\rightarrow\\mp\\pi\/2$. The change in\nsign of $\\gamma_{z}$ will introduce a phase shift which is needed to create\nsolitons.\n\nIn the second stage the first beam $\\Omega_{1}$ is a first-order (in the $z$\ndirection) Hermite-Gaussian beam centered at $z=0$ and\n$x'=\\tilde{x}+\\tilde{\\Delta}$\n\\begin{equation}\n\\Omega_{1}=A\\frac{z}{B}e^{-z^{2}\/\\sigma_{z}^{2}\n-(x'-\\tilde{x}-\\tilde{\\Delta})^{2}\/\\sigma_{x}^{2}},\n\\label{Omega-1}\n\\end{equation}\nwhere $z=\\pm B$ is the distance at which $\\Omega_{1}=\\pm\\Omega_{2}$ for\n$x'=\\tilde{x}$. In most cases of interest the distance $B$ is much smaller than\nthe waist of the beams: $B\\ll\\sigma_z$. The second beam $\\Omega_{2}$ is an\nordinary Gaussian beam centered at $z=0$ along the BEC and\n$x'=\\tilde{x}-\\tilde{\\Delta}$ in the sweeping direction\n\\begin{equation}\n\\Omega_{2}=Ae^{-z^{2}\/\\sigma_{z}^{2}-(x'-\\tilde{x}+\\tilde{\\Delta})^{2}\/\\sigma_{x}^{2}},\n\\end{equation}\nwhere $2\\tilde{\\Delta}$ is the separation between the two beams. 
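The mixing angle of Eq.~(\ref{gamma-z}) can be evaluated numerically for these beam profiles. The sketch below (Python; the parameter values are illustrative, taken from the figure captions, $2\tilde{\Delta}/\sigma_x=1.2$, $B/\sigma_z=0.1$, $\kappa=0.1$, while $A=1$ and $\tilde{x}=0$ are arbitrary choices) integrates $(1-\xi_{3})\,d(\arctan\zeta)$ along the sweeping coordinate:

```python
import numpy as np

# Illustrative parameters (assumed): 2*Delta/sigma_x = 1.2, B/sigma_z = 0.1,
# kappa = 0.1 as in the figure captions; A = 1 and x_tilde = 0 are arbitrary.
A, kappa = 1.0, 0.1
sigma_x = sigma_z = 1.0
Delta, B, x_t = 0.6 * sigma_x, 0.1 * sigma_z, 0.0
xp = np.linspace(-8.0, 8.0, 20001)          # sweeping coordinate x'

def gamma_z(z):
    """Mixing angle of Eq. (gamma-z) for a fixed transverse coordinate z."""
    env = np.exp(-z**2 / sigma_z**2)
    O1 = A * (z / B) * env * np.exp(-(xp - x_t - Delta)**2 / sigma_x**2)
    O2 = A * env * np.exp(-(xp - x_t + Delta)**2 / sigma_x**2)
    O3 = A * kappa * env
    xi3 = O3 / np.sqrt(O1**2 + O2**2 + O3**2)  # xi_3 = Omega_3 / Omega
    theta = np.arctan2(O1, O2)                 # arctan(zeta), zeta = O1/O2
    w = 1.0 - xi3                              # weight (1 - xi_3)
    # trapezoidal accumulation of (1 - xi_3) d(arctan zeta)
    return float(np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(theta)))

g_plus, g_minus = gamma_z(+0.05 * sigma_z), gamma_z(-0.05 * sigma_z)
```

The two calls return values of equal magnitude and opposite sign, confirming that $\gamma_{z}$ is antisymmetric in $z$ and reaches a sizable fraction of $\pi/2$ when the first two lasers dominate at the crossing, which is the origin of the $\pi$ phase slip discussed below.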
The ratio\nbetween the Rabi frequencies then reads\n\\begin{equation}\n\\zeta=\\frac{\\Omega_{1}}{\\Omega_{2}}=\\frac{z}{B}\ne^{4\\tilde{\\Delta}(x'-\\tilde{x})\/\\sigma_{x}^{2}}.\n\\label{zeta}\n\\end{equation}\nEquation~(\\ref{zeta}) provides the following limiting cases:\n\\begin{equation}\n\\zeta\\equiv\\zeta(z,x')=\\left\\{\n\\begin{array}{cc}\n0, & \\mathrm{for}\\quad x'\\rightarrow+\\infty\\,,\\\\\n\\pm\\infty,& \\mathrm{for}\\quad x'\\rightarrow-\\infty\\,.\n\\end{array}\\right.\n\\label{zeta-lim}\n\\end{equation}\nFinally, let us determine the crossing point where $\\Omega_{1}=\\Omega_{2}$.\nUsing Eq.~(\\ref{zeta}), the condition $|\\zeta|=1$ yields the crossing point\n$x'=x'_{\\mathrm{cr}}$ for a fixed $z$ coordinate:\n\\begin{equation}\nx'_{\\mathrm{cr}}=\\tilde{x}+\\frac{\\sigma_{x}^{2}}{4\\tilde{\\Delta}}\\ln\\frac{z}{B}\\,.\n\\label{y-cr}\n\\end{equation}\nSpecifically, if $z=B$, the crossing point is: $x'_{\\mathrm{cr}}=\\tilde{x}$.\nSince $B\\ll\\sigma_{z}$, the Rabi frequencies at $z=B$ and $x'=\\tilde{x}$ are:\n\\begin{equation}\n\\Omega_{1}=\\Omega_{2}\\approx Ae^{-\\tilde{\\Delta}^{2}\/\\sigma_{x}^{2}}.\n\\label{Omega-1-2-crossing}\n\\end{equation}\n\nIn the next subsection we shall analyse in more detail the multicomponent\nwave-function after completing the second stage.\n\n\\subsection{Multicomponent wave-function after the sweeping}\n\nSuppose that there is no phase mismatch between the lasers of the first and\nsecond stages: $\\nu_{12}=0$. In that case\nEq.~(\\ref{state-vector-final-bare-basis1}) yields\n\\begin{equation}\n|\\Phi(\\mathbf{r},t_{f})\\rangle=\n\\Psi(\\mathbf{r})[-\\sin(\\gamma_{z}-\\beta)|1\\rangle\n+\\cos(\\gamma_{z}-\\beta)|2\\rangle]\\,.\n\\label{state-vector-final-bare-basis2b}\n\\end{equation}\nIf $\\beta=0$, the second component is populated after the first stage. 
After\nthe whole sweeping the state-vector then takes the form\n\\begin{equation}\n|\\Phi(\\mathbf{r},t_{f})\\rangle=\\Psi(\\mathbf{r})[-\\sin\\gamma_{z}|1\\rangle\n+\\cos\\gamma_{z}|2\\rangle]\\,.\n\\label{state-vector-final--nu=0-beta=pi\/2}\n\\end{equation}\nIn this case the first component changes sign at $z=z_{0}$ where the Rabi\nfrequency $\\Omega_{1}$ or $\\Omega_{2}$ (and hence $\\gamma_{z}$) crosses the\nzero-point. On the other hand, the second component is maximal at this point\nand symmetrically decays to zero away from this point. Such a multicomponent\nwave-function has a shape close to that of a soliton of the dark-bright form\n(see Fig.~\\ref{fig-dark-bright}). This will indeed lead to the formation of\nsuch a soliton, as we shall see from the analysis of the subsequent time-evolution\npresented in the next Section.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.70\\textwidth]{fig_dark-bright.eps}\n\\caption{Multicomponent wave-function after completing the second stage in the\ncase where the second component is populated after the first stage ($\\beta=0$)\nand there is no phase mismatch between the lasers of the first and second\nstages ($\\nu_{12}=0$). The second and third laser beams are taken to be\nGaussian beams with equal widths $\\sigma_{z}$. The first laser beam is a\nfirst-order Hermite-Gaussian beam with the same width $\\sigma_{z}$. The parameters\nused are $2\\tilde{\\Delta}\/\\sigma_x = 1.2$, $B\/\\sigma_z=0.1$ and $\\kappa=0.1$.\nThe wave function of the first (second) component is plotted in a solid\n(dashed) line.}\n\\label{fig-dark-bright}\n\\end{figure}\n\n\nOn the other hand, $\\beta=\\pi\/4$ corresponds to the case where both components\nare initially populated with equal probabilities. 
Thus we have after the\nsweeping:\n\\begin{equation}\n|\\Phi(\\mathbf{r},t_{f})\\rangle=\n-\\Psi(\\mathbf{r})[-\\sin(\\gamma_{z}-\\pi\/4)|1\\rangle\n+\\sin(\\gamma_{z}+\\pi\/4)|2\\rangle]\\,.\n\\label{state-vector-final-bare-basis3}\n\\end{equation}\nIn that case \\emph{both components} of the wave-function acquire a $\\pi$\n\\emph{phase shift} in the vicinity of $z=z_{0}$ where $\\Omega_{1}=0$, as one can\nsee clearly in Fig.~\\ref{fig-dark-dark}. Note that the zero-points of each\ncomponent are slightly shifted with respect to each other. This makes it\npossible to produce two-component dark-dark solitons oscillating around each\nother, as we shall see in the following Section.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.70\\textwidth]{fig_dark-dark.eps}\n\\caption{Multicomponent wave-function after completing the second stage in the\ncase where both components are initially populated after the first stage\n($\\beta=\\pi\/4$) and there is no phase mismatch between the lasers of the first\nand second stages ($\\nu_{12}=0$). The second and third laser beams are taken to\nbe Gaussian beams with equal widths $\\sigma_{z}$. The first laser beam is\na first-order Hermite-Gaussian beam with the same width $\\sigma_{z}$. The\nparameters used are $2\\tilde{\\Delta}\/\\sigma_x = 1.2$, $B\/\\sigma_z=0.1$ and\n$\\kappa=0.1$. 
The wave function of the first (second) component is plotted in a\nsolid (dashed) line.}\n\\label{fig-dark-dark}\n\\end{figure}\n\nIf $\\beta=\\pi\/4$, yet there is a $\\pi\/2$ phase mismatch ($\\nu_{12}=\\pi\/2$),\nEq.~(\\ref{state-vector-final-bare-basis1}) reduces to\n\\begin{equation}\n|\\Phi(\\mathbf{r},t_{f})\\rangle=\n-\\Psi(\\mathbf{r})e^{i\\gamma_{z}}\\frac{1}{\\sqrt{2}}\n\\left[|2\\rangle-i|1\\rangle\\right]\\,.\n\\label{state-vector-final-bare-basis2e}\n\\end{equation}\nIn that case both components are characterised by the same spatial modulation\n$\\exp\\left(i\\gamma_{z}\\right)$ and have a relative phase $\\pi\/2$ after the\nsweeping. Therefore both components initially have the same velocity\ndistribution proportional to $\\nabla\\gamma_{z}$. Furthermore, there is no hole\nin the atomic density of either component after the sweeping, similar to the case\nof the phase imprinting techniques.\n\nIn this way, the creation of solitons can be controlled by changing the mixing\nangle $\\beta$ and the phase mismatch $\\nu_{12}$.\n\n\\section{Subsequent dynamics and soliton formation}\n\n\\label{sect4}\nThe optical preparation of the initial state of the two-component Bose-Einstein\ncondensate described in the previous section is fast compared to any\ncharacteristic dynamics in the Bose-Einstein condensate. This is the case if\nthe time $\\tau_{\\mathrm{sweep}}=d\/v$ it takes to sweep the laser beams through\na BEC of width $d$ is small compared to the time associated with the BEC\nchemical potential $\\tau_{\\mu}=\\hbar\/\\mu$, which is typically of the order of\n$10^{-5}\\,\\mathrm{s}$. 
With the prepared initial state and for sufficiently\nlow temperatures we can therefore describe the subsequent dynamics using a\ntwo-component Gross-Pitaevskii equation \\cite{ohberg01}\n\\begin{eqnarray}\ni\\hbar\\frac{\\partial}{\\partial t} \\Psi_1&=&[-\\frac{\\hbar^2}{2m}\\nabla^2+V(z)\n+g_{11}|\\Psi_1|^2+g_{12}|\\Psi_2|^2]\\Psi_1\\label{gp1}\\\\\ni\\hbar\\frac{\\partial}{\\partial t} \\Psi_2&=&[-\\frac{\\hbar^2}{2m}\\nabla^2+V(z)\n+g_{22}|\\Psi_2|^2+g_{12}|\\Psi_1|^2]\\Psi_2.\\label{gp2}\n\\end{eqnarray}\nThe external potential is here chosen to be quadratic in the $z$-direction,\n\\begin{equation}\nV(z)=\\frac{1}{2} m\\omega^2 z^2,\n\\end{equation}\nwhere $\\omega$ is the trap frequency and $m$ the atomic mass. The two-body\ninteractions are described by \n\\begin{equation}\ng_{ij}=\\frac{4\\pi\\hbar^2a_{ij}}{mS}, \\quad i,j=\\{1,2\\} \\label{gij}\n\\end{equation}\nwhere the scattering lengths $a_{ij}$ represent the intra- and inter-component\ncollisional interactions between the atoms in the states $1$ and $2$. In\nEq.~(\\ref{gij}) we have introduced the effective cross-section $S$ of the\nelongated cloud. Strictly speaking, the elongated Bose-Einstein condensate is\nthree dimensional. If, however, the transversal trapping is sufficiently strong,\nthe dynamics can be considered effectively one dimensional, as in\nEqs.~(\\ref{gp1}) and (\\ref{gp2}). This requires that the corresponding\ntransversal ground state energy is much larger than the chemical potential of\nthe condensate. We choose the normalisation as $\\int dz|\\Psi_i(z)|^2=N_i$,\nwhere $N_i$ is the particle number in condensate $i$ $(i=1,2)$.\n\nWith the initial states from the previous section we can simulate the dynamics\nof the Bose-Einstein condensate. We consider a condensate with\n$g_{11}:g_{12}:g_{22}=1.0:0.97:1.03$ where $g_{12}N_1=286$ and $N_1=N_2$. The\nunit of length is $\\sqrt{\\frac{\\hbar}{m\\omega}}$ and time is in units of\n$\\omega^{-1}$. 
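The coupled equations above can be integrated with a standard split-step Fourier scheme. The sketch below works in the stated trap units ($\hbar=m=\omega=1$); the initial state is an assumed smooth model of the imprinted dark-bright configuration (a $\tanh$ profile for $\gamma_{z}$ under a Thomas-Fermi-like envelope), not the exact optically prepared state:

```python
import numpy as np

# Split-step Fourier sketch of the two-component GPE (gp1)-(gp2) in trap
# units (hbar = m = omega = 1). Couplings follow the text:
# g11:g12:g22 = 1.0:0.97:1.03 with g12*N1 = 286 (normalization only sketched).
Lbox, n, dt = 40.0, 512, 1e-3
z = np.linspace(-Lbox / 2, Lbox / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=Lbox / n)
V = 0.5 * z**2
g12 = 286.0
g11, g22 = g12 / 0.97, 1.03 * g12 / 0.97
kin = np.exp(-0.25j * dt * k**2)           # half-step kinetic propagator

def step(p1, p2):
    """One symmetric split step of duration dt."""
    p1 = np.fft.ifft(kin * np.fft.fft(p1))
    p2 = np.fft.ifft(kin * np.fft.fft(p2))
    # nonlinear + potential phase, evaluated with both densities at mid-step
    p1, p2 = (p1 * np.exp(-1j * dt * (V + g11 * np.abs(p1)**2
                                        + g12 * np.abs(p2)**2)),
              p2 * np.exp(-1j * dt * (V + g22 * np.abs(p2)**2
                                        + g12 * np.abs(p1)**2)))
    p1 = np.fft.ifft(kin * np.fft.fft(p1))
    p2 = np.fft.ifft(kin * np.fft.fft(p2))
    return p1, p2

# Assumed smooth model of the imprinted dark-bright state (beta = 0, nu12 = 0):
# psi1 ~ -sin(gamma_z), psi2 ~ cos(gamma_z), Thomas-Fermi-like envelope.
gamma = 0.5 * np.pi * np.tanh(z / 0.5)
env = np.sqrt(np.maximum(1.0 - (z / 15.0)**2, 0.0))
p1 = (-np.sin(gamma) * env).astype(complex)
p2 = (np.cos(gamma) * env).astype(complex)

dz = Lbox / n
norm0 = float(np.sum(np.abs(p1)**2 + np.abs(p2)**2) * dz)
for _ in range(500):
    p1, p2 = step(p1, p2)
norm1 = float(np.sum(np.abs(p1)**2 + np.abs(p2)**2) * dz)
```

Each factor of the splitting is a pure phase, so the total norm $\int dz\,(|\Psi_{1}|^{2}+|\Psi_{2}|^{2})$ is conserved to machine precision, a useful sanity check on the integrator.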
In figure \\ref{beta0} we show the dark-bright soliton dynamics\nwhose initial state is prepared by choosing $\\beta=0$ and $\\nu_{12}=0$. The\ntwo-component system, which has one dark soliton in component $1$ and a bright\nsoliton in component $2$, is stable, i.e.\\ the solitons are stationary. This\nshows that the initial state is indeed close to the exact soliton solution. If\nthe initial state is prepared with $\\beta=\\pi\/4$ and $\\nu_{12}=0$, on the other\nhand, the dynamics is strikingly different, see figure \\ref{betapi4nu0}. In\nthis case we create two dark solitons with opposite phase gradients, hence\nthere is an oscillatory motion, sometimes referred to as a soliton molecule.\nSuch a bound state is only stable if the soliton velocities are low\n\\cite{ohberg01}, which is indeed the case here. Alternatively, with\n$\\beta=\\pi\/4$ and $\\nu_{12}=\\pi\/2$, the solitons move in unison as shown in figure\n\\ref{betapi4nupi2}. The large oscillatory motion appearing in\nFig.~\\ref{betapi4nupi2} stems from the fact that the condensate density is not\nhomogeneous, hence the solitons experience an effective trap \\cite{busch00}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig_gp_b0nu0_a.eps}\n\\includegraphics[width=0.45\\textwidth]{fig_gp_b0nu0_b.eps}\n\\caption{The dark-bright soliton. The two figures show the one dimensional\ndensity as a function of time for components $1$ and $2$. The lighter (darker) colours \ncorrespond to higher (lower) atomic densities.}\n\\label{beta0}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig_gp_bpi4nu0_a.eps}\n\\includegraphics[width=0.45\\textwidth]{fig_gp_bpi4nu0_b.eps}\n\\caption{The bound state dark-dark soliton. For sufficiently low initial\nsoliton velocities the two dark solitons perform an oscillatory motion around\neach other. The figures show the one dimensional atomic density as a function of time\nfor components $1$ and $2$. 
The lighter (darker) colours correspond to higher (lower) densities.}\n\\label{betapi4nu0}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig_gp_bpi4nupi2_a.eps}\n\\includegraphics[width=0.45\\textwidth]{fig_gp_bpi4nupi2_b.eps}\n\\caption{The co-propagating dark-dark solitons. If the initial phase gradients\nof the two soliton solutions are chosen to be the same, the dark solitons\npropagate in unison. The two figures show the one dimensional density as a\nfunction of time for components $1$ and $2$. The lighter (darker) colours \ncorrespond to higher (lower) atomic densities.}\n\\label{betapi4nupi2}\n\\end{figure}\n\n\\section{Conclusions}\n\nIn summary, we have proposed a new method of creating solitons in elongated\nBose-Einstein condensates (BECs) by sweeping three laser beams through the BEC.\nIf one of the beams is the first-order (TEM$_{10}$) Hermite-Gaussian mode, its\namplitude has a transversal $\\pi$ phase slip which will be transferred to the\natoms, thus creating a soliton. Using this method it is possible to circumvent\nthe restriction set by the diffraction limit. The method allows one to create\nmulticomponent (vector) solitons of the dark-bright form as well as the\ndark-dark combination. In addition, it is possible to create in a controllable\nway two or more slowly moving dark solitons close to each other for studying\ntheir collisional properties. For this the first beam $\\Omega_1$ should represent\na superposition of the zero and second order Hermite-Gaussian modes in the\nsecond stage. The soliton collisions will be considered in more detail\nelsewhere.\n\n\\subsection*{Acknowledgements}\n\nThis work was supported by the Alexander-von-Humboldt foundation through the\ninstitutional collaborative grant between the University of Kaiserslautern and\nthe Institute of Theoretical Physics and Astronomy of Vilnius University. \nP.\\\"O. 
acknowledges support from the EPSRC and the Royal Society of Edinburgh.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nInspired by Pattern Theory \\cite{mumford2010pattern}, we attempt to model three important and pervasive patterns in natural signals: \\textit{sparse discreteness}, \\textit{low dimensional manifold structure} and \\textit{hierarchical composition}. Each of these concepts has been individually explored in previous studies. For example, sparse coding \\cite{olshausen1996emergence,olshausen1997sparse} and ICA \\cite{bell1995information,hyvarinen2000independent} can learn sparse and discrete elements that make up natural signals. Manifold learning \\cite{tenenbaum2000global,roweis2000nonlinear,maaten2008visualizing,belkin2002laplacian} was proposed to model and visualize low-dimensional continuous transforms such as smooth 3D rotations or translations of a single discrete element. Deformable, compositional models \\cite{wu2010learning,felzenszwalb2010object} allow for a hierarchical composition of components into a more abstract representation. We seek to model these three patterns jointly as they are almost always entangled in real-world signals and their disentangling poses an unsolved challenge. \n\nIn this paper, we introduce an interpretable, generative and unsupervised learning model, the \\textit{sparse manifold transform} (SMT), which has the potential to untangle all three patterns simultaneously and explicitly. The SMT consists of two stages: dimensionality expansion using sparse coding followed by contraction using manifold embedding. Our SMT implementation is, to our knowledge, the first model to bridge sparse coding and manifold learning. 
Furthermore, an SMT layer can be stacked to produce an unsupervised hierarchical learning network.\n\n{\\em The primary contribution of this paper is to establish a theoretical framework for the SMT by reconciling and combining the formulations and concepts from sparse coding and manifold learning.} In the following sections we point out connections between three important unsupervised learning methods: sparse coding, local linear embedding and slow feature analysis. We then develop a single framework that utilizes insights from each method to describe our model. Although we focus here on the application to image data, the concepts are general and may be applied to other types of data such as audio signals and text. All experiments performed on natural scenes used the same dataset, described in Supplement \\ref{sup:video_experiments}.\n\n\\subsection{Sparse coding}\nSparse coding attempts to approximate a data vector, $x\\in{\\rm I\\!R}^n$, as a sparse superposition of dictionary elements $\\phi_i$:\n\\vspace{-0.05in}\n\\begin{equation}\\label{eqn:linear}\n x = \\Phi\\, \\alpha + \\epsilon\n\\end{equation}\nwhere $\\Phi\\in{\\rm I\\!R}^{n\\times m}$ is a matrix with columns $\\phi_i$, $\\alpha\\in{\\rm I\\!R}^m$ is a sparse vector of coefficients and $\\epsilon$ is a vector containing independent Gaussian noise samples, which are assumed to be small relative to $x$. Typically $m>n$ so that the representation is {\\em overcomplete}.\nFor a given dictionary, $\\Phi$, the sparse code, $\\alpha$, of a data vector, $x$, can be computed in an online fashion by minimizing an energy function composed of a quadratic penalty on reconstruction error plus an L1 sparseness penalty on $\\alpha$ (see Supplement \\ref{sup:online_steps}). \nThe dictionary itself is adapted to the statistics of the data so as to maximize the sparsity of $\\alpha$. The resulting dictionary often provides important insights about the structure of the data. 
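For concreteness, the sparse inference step can be sketched with a generic proximal-gradient (ISTA) solver; this is one standard way to minimize the quadratic-plus-L1 energy, not necessarily the online procedure described in the supplement:

```python
import numpy as np

def ista(x, Phi, lam=0.05, n_iter=300):
    """Minimize 0.5*||x - Phi a||^2 + lam*||a||_1 by proximal gradient."""
    L = np.linalg.norm(Phi, 2)**2          # Lipschitz constant of the gradient
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        a = a - Phi.T @ (Phi @ a - x) / L  # gradient step on the quadratic term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
n, m = 16, 64                              # overcomplete: m > n
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm dictionary elements
a_true = np.zeros(m)
a_true[[3, 17, 42]] = [1.2, -0.8, 0.5]     # a 3-sparse ground truth
x = Phi @ a_true + 0.01 * rng.standard_normal(n)
a = ista(x, Phi)
```

With an overcomplete random dictionary, the recovered coefficient vector is sparse and reconstructs $x$ up to the L1-induced bias and the noise floor.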
\nFor natural images, the dictionary elements become `Gabor-like'---i.e., spatially localized, oriented and bandpass---and form a tiling over different locations, orientations and scales due to the natural transformations of objects in the world. \n\n\nThe sparse code of an image provides a representation that makes explicit the structure contained in the image. However the dictionary is typically {\\em unordered}, and so the sparse code will lose the topological organization that was inherent in the image. \nThe pioneering works of \\citet{hyvarinen2000emergence,hyvarinen2001topographic} and \\citet{osindero2006topographic} addressed this problem by specifying a fixed 2D topology over the dictionary elements that groups them according to the co-occurrence statistics of their coefficients. Other works learn the group structure from a statistical approach \\cite{lyu2008nonlinear, balle2015density, koster2010two}, but do not make explicit the underlying topological structure. Some previous topological approaches \\cite{lee2003nonlinear,de2004topological,carlsson2008local} used non-parametric methods to reveal the low-dimensional geometrical structure in local image patches, which motivated us to look for the connection between sparse coding and geometry. {\\em From this line of inquiry, we have developed what we believe to be the first mathematical formulation for learning the general geometric embedding of dictionary elements when trained on natural scenes}.\n\nAnother observation motivating this work is that the representation computed using overcomplete sparse coding can exhibit large variability for time-varying inputs that themselves have low variability from frame to frame \\cite{rozell2008sparse}. \nWhile some amount of variability is to be expected as image features move across different dictionary elements, the variation can appear unstructured without information about the topological relationship of the dictionary. 
In section \\ref{sec:smt} and section \\ref{sec:results}, we show that considering the joint spatio-temporal regularity in natural scenes can allow us to learn the dictionary's group structure and produce a representation with smooth variability from frame to frame (Figure \\ref{fig:coefficient_smoothness}).\n\n\\subsection{Manifold Learning}\n\\label{subsec:manifold}\nIn manifold learning, one assumes that the data occupy a low-dimensional, smooth manifold embedded in the high-dimensional signal space. A smooth manifold is locally equivalent to a Euclidean space and therefore each of the data points can be linearly reconstructed by using the neighboring data points. The Locally Linear Embedding (LLE) algorithm \\cite{roweis2000nonlinear} first finds the neighbors of each data point in the whole dataset and then reconstructs each data point linearly from its neighbors. It then embeds the dataset into a low-dimensional Euclidean space by solving a generalized eigendecomposition problem. \n\nThe first step of LLE has the same linear formulation as sparse coding \\eqref{eqn:linear}, with $\\Phi$ being the whole dataset rather than a learned dictionary, i.e., $\\Phi=X$, where $X$ is the data matrix. The coefficients, $\\alpha$, correspond to the linear interpolation weights used to reconstruct a datapoint, $x$, from its $K$-nearest neighbors, resulting in a $K$-sparse code. \n(In other work~\\cite{elhamifar2011sparse}, $\\alpha$ is inferred by sparse approximation, which provides better separation between manifolds nearby in the same space.) \nImportantly, once the embedding of the dataset $X\\rightarrow Y$ is computed, the embedding of a new point $x^{\\textsc{new}}\\rightarrow y^{\\textsc{new}}$ is obtained by a simple linear projection of its sparse coefficients. That is, if $\\alpha^{\\textsc{new}}$ is the $K$-sparse code of $x^{\\textsc{new}}$, then $y^{\\textsc{new}}=Y\\,\\alpha^{\\textsc{new}}$. 
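The embedding of a new point just described can be made concrete on a toy dataset (points on a circle, with the known angle standing in for the learned embedding $Y$; the dataset and all parameter choices here are purely illustrative):

```python
import numpy as np

def lle_weights(x, X, K=3, reg=1e-3):
    """K-sparse LLE interpolation weights w: x ~ X[:, idx] @ w, sum(w) = 1."""
    d2 = np.sum((X - x[:, None])**2, axis=0)
    idx = np.argsort(d2)[:K]                 # K nearest neighbors of x
    Z = X[:, idx] - x[:, None]               # neighbors centered on x
    C = Z.T @ Z
    C += reg * np.trace(C) * np.eye(K)       # regularize for stability
    w = np.linalg.solve(C, np.ones(K))
    return idx, w / w.sum()

# Toy data: points on a circle, a 1-D manifold embedded in R^2. The known
# angle t plays the role of the learned embedding Y (illustration only).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.2, 2 * np.pi - 0.2, 300))
X = np.vstack([np.cos(t), np.sin(t)])
Y = t[None, :]

x_new = np.array([np.cos(1.0), np.sin(1.0)])  # a held-out point at angle 1.0
idx, w = lle_weights(x_new, X)
y_new = Y[:, idx] @ w                         # y_new = Y @ alpha_new
```

The K-sparse weights sum to one, and the simple linear projection $y^{\textsc{new}}=Y\,\alpha^{\textsc{new}}$ lands at the correct manifold coordinate of the held-out point.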
Viewed this way, {\\em the dictionary may be thought of as a discrete sampling of a continuous manifold, and the sparse code of a data point provides the interpolation coefficients for determining its coordinates on the manifold}. However, using the entire dataset as the dictionary is cumbersome and inefficient in practice.\n\n\n\n\n\nSeveral authors \\cite{de2004sparse,silva2006selecting,vladymyrov2013locally} have realized that it is unnecessary to use the whole dataset as a dictionary. A random subset of the data or a set of cluster centers can be good enough to preserve the manifold structure, making learning more efficient. Going forward, we refer to these as {\\em landmarks}. In Locally Linear Landmarks (LLL)~\\cite{vladymyrov2013locally}, the authors compute two linear interpolations for each data point $x$:\n\\begin{eqnarray}\n x & = & \\Phi_{\\textsc{lm}}\\, \\alpha + n \n \\label{eqn:dictionary_neighbors}\\\\\n x & = & \\Phi_{\\textsc{data}}\\, \\gamma + n^{\\prime} \n \\label{eqn:data_neighbors}\n\\end{eqnarray}\nwhere $\\Phi_{\\textsc{lm}}$ is a dictionary of landmarks and $\\Phi_{\\textsc{data}}$ is a dictionary composed of the whole dataset. As in LLE, $\\alpha$ and $\\gamma$ are coefficient vectors inferred using KNN solvers (where the $\\gamma$ coefficient corresponding to $x$ is forced to be $0$). We can substitute the solutions to equation \\eqref{eqn:dictionary_neighbors} into $\\Phi_{\\textsc{data}}$, giving $\\Phi_{\\textsc{data}} \\approx \\Phi_{\\textsc{lm}}A$, where the $j\\textsuperscript{th}$ column of the matrix $A$ is a unique vector $\\alpha_{j}$. 
This leads to an interpolation relationship:\n\\vspace{-0.05in}\n\\begin{equation}\\label{eqn:data_interpolation}\n\\Phi_{\\textsc{lm}}\\alpha \\approx \\Phi_{\\textsc{lm}}\\, A\\, \\gamma\n\\end{equation}\nThe authors sought to embed the landmarks into a low dimensional Euclidean space using an embedding matrix, $P_{\\textsc{lm}}$, such that the interpolation relationship in equation \\eqref{eqn:data_interpolation} still holds: \n\\begin{equation}\\label{eqn:manifold_interpolation}\n P_{\\textsc{lm}} \\alpha \\approx P_{\\textsc{lm}}\\, A\\, \\gamma\n\\end{equation}\nwhere we use the same $\\alpha$ and $\\gamma$ vectors that allowed for equality in equations \\eqref{eqn:dictionary_neighbors} and \\eqref{eqn:data_neighbors}. $P_{\\textsc{lm}}$ is an embedding matrix for $\\Phi_{\\textsc{lm}}$ such that each of the columns of $P_{\\textsc{lm}}$ represents an embedding of a landmark. $P_{\\textsc{lm}}$ can be derived by solving a generalized eigendecomposition problem \\cite{vladymyrov2013locally}. \n\nThe similarity between equation \\eqref{eqn:linear} and equation \\eqref{eqn:dictionary_neighbors} provides an intuition to bring sparse coding and manifold learning closer together. However, LLL still has a difficulty in that it requires a nearest neighbor search.\nWe posit that temporal information provides a more natural and efficient solution.\n\n\\subsection{Slow~Feature~Analysis~(SFA)}\n\nThe general idea of imposing a `slowness prior' was initially proposed by \\cite{foldiak1991learning} and \\cite{wiskott2002slow} to extract invariant or slowly varying features from temporal sequences rather than using static orderless data points. 
While it is still common practice in both sparse coding and manifold learning to collect data in an orderless fashion, other work has used time-series data to learn spatiotemporal representations \\cite{van1998independent,olshausen2003learning,hyvarinen2003bubbles} or to disentangle form and motion~\\cite{berkes2009structured,cadieu2012learning,denton2017unsupervised}. Specifically, the combination of topography and temporal coherence in \\cite{hyvarinen2003bubbles} provides a strong motivation for this work. \n\nHere, we utilize {\\em temporal adjacency} to determine the nearest neighbors in the embedding space (eq.~\\ref{eqn:data_neighbors}) by specifically minimizing the second-order temporal derivative, implying that video sequences form {\\em linear trajectories} in the manifold embedding space. A similar approach was recently used by \\cite{goroshin2015learning} to linearize transformations in natural video. This is a variation of `slowness' that makes the connection to manifold learning more explicit. It also connects to the ideas of manifold flattening~\\cite{dicarlo2007untangling} or straightening~\\cite{henaff2018perceptual}, which are hypothesized to underlie perceptual representations in the brain.\n\n\n\\section{Functional Embedding: A Sensing Perspective}\n\\label{sec:functional}\nThe SMT framework differs from the classical manifold learning approach in that it relies on the concept of {\\em functional embedding} as opposed to embedding individual data points. We explain this concept here before turning to the sparse manifold transform in section~\\ref{sec:smt}. 
\n\nIn classical manifold learning~\\cite{huo2007survey}, for a $m$-dimensional compact manifold, it is typical to solve a generalized eigenvalue decomposition problem and preserve the $2\\textsuperscript{nd}$ to the $(d+1)\\textsuperscript{th}$ trailing eigenvectors as the embedding matrix $P_{\\textsc{c}}\\in{\\rm I\\!R}^{d\\times N}$, where $d$ is as small as possible (parsimonious) such that the embedding preserves the topology of the manifold (usually, $m\\leq d \\le 2m$ due to the strong Whitney embedding theorem\\cite{lee2012introduction}) and $N$ is the number of data points or landmarks to embed. It is conventional to view the columns of an embedding matrix, $P_{\\textsc{c}}$, as an embedding to an Euclidean space, which is (at least approximately) topologically-equivalent to the data manifold. Each of the rows of $P_{\\textsc{c}}$ is treated as a coordinate of the underlying manifold. One may think of a point on the manifold as a single, constant-amplitude delta function with the manifold as its domain. Classical manifold embedding turns a non-linear transformation (i.e., a moving delta function on the manifold) in the original signal space into a simple linear interpolation in the embedding space. This approach is effective for visualizing data in a low-dimensional space and compactly representing the underlying geometry, but less effective when the underlying function is not a single delta function. \n\nIn this work we seek to move beyond the single delta-function assumption, because natural images are not well described as a single point on a continuous manifold of fixed dimensionality. For any reasonably sized image region (e.g., a $16\\times16$ pixel image patch), there could be multiple edges moving in different directions, or the edge of one occluding surface may move over another, or the overall appearance may change as features enter or exit the region. 
Such changes will cause the manifold dimensionality to vary substantially, so that the signal structure is no longer well-characterized as a manifold. \n\nWe propose instead to think of any given image patch as consisting of $h$ discrete components simultaneously moving over the same underlying manifold - i.e., as $h$ delta functions, or {\\em an $h$-sparse function} on the smooth manifold. This idea is illustrated in figure~\\ref{fig:smt_concept}. First, let us organize the Gabor-like dictionary learned from natural scenes on a 4-dimensional manifold according to the position $(x,y)$, orientation $(\\theta)$ and scale $(\\sigma)$ of each dictionary element $\\phi_i$. Any given Gabor function corresponds to a point with coordinates $(x,y,\\theta,\\sigma)$ on this manifold, and so the learned dictionary as a whole may be conceptualized as a discrete tiling of the manifold. Then, the $k$-sparse code of an image, $\\alpha$, can be viewed as a set of $k$ delta functions on this manifold (illustrated as black arrows in figure~\\ref{fig:smt_concept}C). Hyv\\\"arinen has pointed out that when the dictionary is topologically organized in a similar manner, the active coefficients $\\alpha_i$ tend to form clusters, or ``bubbles,'' over this domain~\\cite{hyvarinen2003bubbles}. Each of these clusters may be thought of as linearly approximating a ``virtual Gabor\" at the center of the cluster (illustrated as red arrows in figure~\\ref{fig:smt_concept}C), effectively performing a flexible ``steering'' of the dictionary to describe discrete components in the image, similar to steerable filters \\cite{freeman1991design,simoncelli1992shiftable,simoncelli1995steerable,perona1995deformable}. Assuming there are $h$ such clusters, then the $k$-sparse code of the image can be thought of as a discrete approximation of an underlying $h$-sparse function defined on the continuous manifold domain, where $h$ is generally greater than 1 but less than $k$. 
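The ``steering'' just described can be illustrated in one dimension with a hypothetical dictionary of shifted bumps sampling a translation manifold: a component at an off-lattice position is captured by a small cluster of active coefficients whose center of mass recovers its position on the manifold:

```python
import numpy as np

# Hypothetical dictionary: bumps of width 0.6 sampling a 1-D translation
# manifold at landmark positions spaced 0.5 apart (all values illustrative).
grid = np.linspace(-5, 5, 1001)
centers = np.arange(-4.0, 4.5, 0.5)
Phi = np.column_stack(
    [np.exp(-(grid - c)**2 / (2 * 0.6**2)) for c in centers])

x_true = 1.23                                 # off-lattice component position
x = np.exp(-(grid - x_true)**2 / (2 * 0.6**2))

# k-sparse approximation: activate only the two landmarks flanking x_true
k_idx = np.argsort(np.abs(centers - x_true))[:2]
a, *_ = np.linalg.lstsq(Phi[:, k_idx], x, rcond=None)

# The active cluster "steers" the dictionary: its center of mass recovers
# the position of the underlying h-sparse (here h = 1) component.
x_hat = float(centers[k_idx] @ a / a.sum())
```

Even with only two active landmarks, the interpolated position closely matches the true off-lattice position, which is the sense in which a cluster of coefficients approximates a ``virtual'' dictionary element.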
\n\n\n\\begin{figure}[!ht]\n\\vskip -0.05in\n\\begin{center}\n\\centerline{\\includegraphics[width=\\textwidth]{figures\/smt_concept.pdf}}\n\\caption{Dictionary elements learned from natural signals with sparse coding may be conceptualized as landmarks on a smooth manifold. A) A function defined on $\\mathbb{R}^2$ (e.g. a gray-scale natural image) and one local component from its reconstruction are represented by the black and red curves, respectively. B) The signal is encoded using sparse inference with a learned dictionary, $\\Phi$, resulting in a $k$-sparse vector (also a function) $\\alpha$, which is defined on an orderless discrete set $\\{1, \\cdots, N\\}$. C) $\\alpha$ can be viewed as a discrete $k$-sparse approximation to the true $h$-sparse function, $\\alpha_{\\textsc{true}}(M)$, defined on the smooth manifold ($k=8$ and $h=3$ in this example). Each dictionary element in $\\Phi$ corresponds to a landmark (black dot) on the smooth manifold, $M$. Red arrows indicate the underlying $h$-sparse function, while black arrows indicate the $k$ non-zero coefficients of $\\Phi$ used to interpolate the red arrows. D) Since $\\Phi$ only contains a finite number of landmarks, we must interpolate (i.e. ``steer'') among a few dictionary elements to reconstruct each of the true image components.}\n\\label{fig:smt_concept}\n\\end{center}\n\\vskip -0.2in\n\\end{figure}\n\nAn $h$-sparse function would not be recoverable from the $d$-dimensional projection employed in the classical approach because the embedding is premised on there being only a single delta function on the manifold. Hence the inverse will not be uniquely defined. Here we utilize a more general \\textit{functional} embedding concept that allows for better recovery capacity. 
A functional embedding of the landmarks takes the $f$ trailing eigenvectors of the generalized eigendecomposition solution as the embedding matrix $P\\in{\\rm I\\!R}^{f\\times N}$, where $f$ is larger than $d$ such that the $h$-sparse function can be recovered from the linear projection. Empirically\\footnote{This choice is inspired by results from compressive sensing~\\cite{donoho2006compressed}, though here $h$ is different from $k$.} we use $f=O(h\\log(N))$.\n\nTo illustrate the distinction between the classical view of a data manifold and the additional properties gained by a functional embedding, let us consider a simple example of a function over the 2D unit disc. Assume we are given 300 landmarks on this disc as a dictionary $\\Phi_{\\textsc{lm}}\\in{\\rm I\\!R}^{2\\times300}$. We then generate many short sequences of a point $x$ moving along a straight line on the unit disc, with random starting locations and velocities. At each time, $t$, we use a nearest-neighbor (KNN) solver to find a local linear interpolation of the point's location from the landmarks, that is, $x_t=\\Phi_{\\textsc{lm}}\\,\\alpha_t$, with $\\alpha_t\\in{\\rm I\\!R}^{300}$ and ${\\alpha_t}\\succeq 0$ (the choice of sparse solver does not impact the demonstration). Now we seek to find an embedding matrix, $P$, which projects the $\\alpha_t$ into an $f$-dimensional space via $\\beta_t=P\\,\\alpha_t$ such that the trajectories in $\\beta_t$ are as straight as possible, thus reflecting their true underlying geometry. This is achieved by performing an optimization that minimizes the second temporal derivative of $\\beta_t$, as specified in equation \\eqref{opt:smt} below.\n\nFigure \\ref{fig:unit_disc}A shows the rows of $P$ resulting from this optimization using $f=21$. Interestingly, they resemble Zernike polynomials on the unit disc. We can think of these as functionals that ``sense'' sparse functions on the underlying manifold. 
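The unit-disc setup just described can be reproduced in a few lines. As a simplifying assumption, the nonnegative local interpolation below uses inverse-distance weights over the nearest landmarks rather than a true sparse solver (the text notes that the choice of solver does not affect the demonstration); the landmark layout and trajectory parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# 300 landmarks on the unit disc (the dictionary Phi_lm).
r = np.sqrt(rng.uniform(0, 1, 300))
phi = rng.uniform(0, 2 * np.pi, 300)
landmarks = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)  # (300, 2)

def interpolate(x, k=4):
    """Nonnegative local interpolation weights for point x from its k nearest landmarks."""
    d = np.linalg.norm(landmarks - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-9)          # inverse-distance weights (assumed scheme)
    alpha = np.zeros(len(landmarks))
    alpha[nn] = w / w.sum()           # convex combination: alpha >= 0, sums to 1
    return alpha

# One short straight-line trajectory on the disc, encoded as a sequence alpha_t.
start, vel = np.array([-0.4, 0.1]), np.array([0.05, 0.02])
A = np.stack([interpolate(start + t * vel) for t in range(10)], axis=1)  # (300, 10)
```

Stacking many such trajectories column-wise yields the coefficient matrix $A$ used to optimize $P$.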
Each row $p^{\\prime}_i\\in{\\rm I\\!R}^{300}$ (here the prime sign denotes a row of the matrix $P$) projects a discrete $k$-sparse approximation $\\alpha$ of the underlying $h$-sparse function to a real number, $\\beta_{i}$. We define the full set of these linear projections $\\beta=P\\,\\alpha$ as a ``manifold sensing'' of $\\alpha$. \n\nWhen there is only a single delta function on the manifold, the second and third rows of $P$, which form simple linear ramp functions in two orthogonal directions, are sufficient to fully represent its position. These two rows would constitute $P_\\textsc{c}\\in{\\rm I\\!R}^{2\\times300}$ as an embedding solution in the classical manifold learning approach, since a unit disc is diffeomorphic to ${\\rm I\\!R}^{2}$ and can be embedded in a 2-dimensional space. The resulting embedding $\\beta_2,\\beta_3$ closely resembles the 2D unit disc manifold and allows for recovery of a one-sparse function, as shown in Figure~\\ref{fig:unit_disc}B. \n\n\\begin{figure}[!ht]\n\\vskip -0.05in\n\\begin{center}\n\\centerline{\\includegraphics[width=\\columnwidth]{figures\/embedding.pdf}}\n\\caption{Demonstration of functional embedding on the unit disc. \nA) The rows of $P$, visualized here on the ground-truth unit disc. Each disc shows the weights in a row of $P$ by coloring the landmarks according to the corresponding value in that row of $P$. The color scale for each row is individually normalized to emphasize its structure. The pyramidal arrangement of rows is chosen to highlight their strong resemblance to the Zernike polynomials.\nB) (Top) The classic manifold embedding perspective allows for low-dimensional data visualization using $P_\\textsc{c}$, which in this case is given by the second and third rows of $P$ (shown in the dashed box in panel A). Each blue dot shows the 2D projection of a landmark using $P_\\textsc{c}$. Boundary effects cause the landmarks to cluster toward the perimeter. 
(Bottom) A 1-sparse function is recoverable when projected to the embedding space by $P_{\\textsc{c}}$. \nC) (Top) A 4-sparse function (red arrows) and its discrete $k$-sparse approximation, $\\alpha$ (black arrows), on the unit disc. (Bottom) The recovery, $\\alpha_{\\textsc{rec}}$ (black arrows), is computed by solving the optimization problem in equation \\eqref{opt:inversion}. The estimate of the underlying function (red arrows) was computed by taking a normalized local mean of the recovered $k$-sparse approximations for visualization purposes.}\n\\label{fig:unit_disc}\n\\end{center}\n\\vskip -0.25in\n\\end{figure}\n\nRecovering more than a one-sparse function requires using additional rows of $P$ with higher spatial frequencies on the manifold, which together provide higher sensing capacity. Figure \\ref{fig:unit_disc}C demonstrates recovery of an underlying 4-sparse function on the manifold using all 21 functionals, from $p^{\\prime}_1$ to $p^{\\prime}_{21}$. From this representation, we can recover an estimate of $\\alpha$ with positive-only sparse inference:\n\\begin{equation}\\label{opt:inversion}\n\\begin{aligned}\n \\alpha_{\\textsc{rec}} = g(\\beta) \\equiv \\underset{\\alpha}{\\text{argmin}} \\| \\beta - P\\,\\alpha \\|_2^2 + \\lambda z^{T}\\alpha,\\;\\;\\text{s.t.}\\;\\alpha\\succeq 0,\n\\end{aligned}\n\\end{equation}\nwhere $z = \\left[\\|p_{1}\\|_{2},\\cdots,\\|p_{N}\\|_{2}\\right]^{T}$ and $p_j \\in{\\rm I\\!R}^{21}$ is the $j\\textsuperscript{th}$ column of $P$. Note that although \n$\\alpha_{\\textsc{rec}}$ is not an exact recovery of $\\alpha$, \nthe 4-sparse structure is still well preserved, up to a local shift in the locations of the delta functions. 
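A minimal sketch of the recovery $g(\\beta)$ in equation \\eqref{opt:inversion}, using plain projected gradient descent as a stand-in for whichever nonnegative sparse solver is actually used; the matrices and the 4-sparse code here are synthetic.

```python
import numpy as np

def recover(P, beta, lam=0.1, n_iter=500):
    """Projected gradient for: argmin_a ||beta - P a||^2 + lam * z^T a, s.t. a >= 0."""
    z = np.linalg.norm(P, axis=0)            # z_j = ||p_j||_2 (column norms of P)
    step = 0.5 / np.linalg.norm(P, 2) ** 2   # 1/L for the quadratic term
    a = np.zeros(P.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * P.T @ (P @ a - beta) + lam * z
        a = np.maximum(0.0, a - step * grad)  # gradient step + projection onto a >= 0
    return a

rng = np.random.default_rng(1)
P = rng.standard_normal((21, 300))           # f = 21 functionals, N = 300 landmarks
alpha = np.zeros(300)
alpha[[5, 40, 200, 250]] = 1.0               # an underlying 4-sparse code
beta = P @ alpha                             # manifold sensing
alpha_rec = recover(P, beta, lam=0.01)
```

As in the figure, `alpha_rec` need not match `alpha` entry-for-entry; the claim is only that the sparse structure is approximately preserved.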
We conjecture that, for image signals, this will lead to a recovery that is perceptually similar to the original.\n\n\nThe functional embedding concept can be generalized beyond functionals defined on a single manifold and will still apply when the underlying geometrical domain is a union of several different manifolds. A thorough analysis of the capacity of this sensing method is beyond the scope of this paper, although we recognize it as an interesting research topic for model-based compressive sensing.\n\n\n\\section{The Sparse Manifold Transform}\n\\label{sec:smt}\nThe Sparse Manifold Transform (SMT) consists of a non-linear sparse coding expansion followed by a linear manifold sensing compression (dimension reduction). \nThe manifold sensing step acts to linearly pool the sparse codes, $\\alpha$, with a matrix, $P$, that is learned using the functional embedding concept (sec.~\\ref{sec:functional}) in order to straighten trajectories arising from video (or other dynamical) data. \n\nThe SMT framework makes three basic assumptions: \n\\begin{enumerate}\n\\item The dictionary $\\Phi$ learned by sparse coding has an organization that is a discrete sampling of a low-dimensional, smooth manifold, $M$ (Fig. \\ref{fig:smt_concept}). \n\\item The resulting sparse code $\\alpha$ is a discrete $k$-sparse approximation of an underlying $h$-sparse function defined on $M$. There exists a functional manifold embedding, $\\tau:\\Phi \\hookrightarrow P$, that maps each of the dictionary elements to a new vector, $p_j=\\tau(\\phi_j)$, where $p_j$ is the $j\\textsuperscript{th}$ column of $P$, such that both the topology of $M$ and the structure of the $h$-sparse function are preserved. \n\\item A continuous temporal transformation in the input (e.g., from natural movies) leads to a linear flow on $M$ and also in the geometrical embedding space. 
\n\\end{enumerate}\n\nIn an image, the elements of the underlying $h$-sparse function correspond to discrete components such as edges, corners, blobs, or other features that are undergoing some simple set of transformations. Since there are only a finite number of learned dictionary elements tiling the underlying manifold, they must cooperate (or `steer') to represent each of these components as they appear along a continuum. \n\nThe desired property of linear flow in the geometric embedding space may be stated mathematically as\n\\vspace{-0.05in}\n\\begin{equation}\n P\\alpha_t \\approx \\tfrac{1}{2}P\\alpha_{t-1} + \\tfrac{1}{2}P\\alpha_{t+1},\n\\label{eqn:flow}\n\\end{equation}\nwhere $\\alpha_t$ denotes the sparse coefficient vector at time $t$. Here we exploit the temporal continuity inherent in the data to avoid the otherwise cumbersome nearest-neighbor search required by LLE or LLL. The embedding matrix $P$ satisfying \\eqref{eqn:flow} may be derived by minimizing an objective function that encourages the second-order temporal derivative of $P\\,\\alpha$ to be zero:\n\\begin{equation}\n \\underset{P}{\\text{min}}\\, \\| PAD \\|_F^2,\\ \\text{s.t.}\\ P V P^T = I,\n\\label{opt:smt}\n\\end{equation}\nwhere \n$A$ is the coefficient matrix whose columns are the coefficient vectors, $\\alpha_t$, in temporal order, and $D$ is the second-order differential operator matrix, with $D_{t-1,t} = -0.5, D_{t,t} = 1, D_{t+1,t} = -0.5$ and $D_{\\tau,t}=0$ otherwise. \n$V$ is a positive-definite matrix for normalization, $I$ is the identity matrix and $\\|\\bullet\\|_F$ indicates the matrix Frobenius norm. We choose $V$ to be the covariance matrix of $\\alpha$, and thus the optimization constraint makes the rows of $P$ orthogonal in the whitened sparse coefficient space. 
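A compact sketch of solving equation \\eqref{opt:smt} analytically: whiten by $V^{-1/2}$, take the $f$ trailing eigenvectors, and map back so that $P$ is $f\\times N$ and satisfies $PVP^T=I$. The boundary-trimmed form of $D$ and the small ridge added to $V$ for numerical stability are assumptions of this sketch, not details given in the text.

```python
import numpy as np

def smt_embedding(A, f, eps=1e-8):
    """Solve min ||P A D||_F^2 s.t. P V P^T = I for P (f x N).
    A: (N, T) matrix of sparse codes in temporal order."""
    N, T = A.shape
    # Second-order difference operator; interior columns only (boundary trimmed).
    D = np.zeros((T, T - 2))
    for t in range(T - 2):
        D[t, t], D[t + 1, t], D[t + 2, t] = -0.5, 1.0, -0.5
    V = np.cov(A) + eps * np.eye(N)                 # covariance of alpha, regularized
    evalV, evecV = np.linalg.eigh(V)
    Vmh = evecV @ np.diag(evalV ** -0.5) @ evecV.T  # V^{-1/2}
    M = Vmh @ A @ D @ D.T @ A.T @ Vmh               # whitened objective matrix
    w, U = np.linalg.eigh(M)                        # eigenvalues in ascending order
    return U[:, :f].T @ Vmh                         # f trailing eigenvectors, mapped back

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 200))   # toy sizes: N = 30 codes over T = 200 frames
P = smt_embedding(A, f=5)
V = np.cov(A) + 1e-8 * np.eye(30)
```

The returned rows are orthonormal in the whitened coefficient space, which is exactly the constraint $PVP^T=I$.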
Note that this formulation is qualitatively similar to applying slow feature analysis (SFA) to the sparse coefficients, but using the second-order derivative instead of the first-order derivative.\n\nThe solution to this generalized eigendecomposition problem is given~\\cite{vladymyrov2013locally} by $P = U^TV^{-\\frac{1}{2}}$, where $U$ is the matrix whose columns are the $f$ trailing eigenvectors (i.e. eigenvectors with the smallest eigenvalues) of the matrix $V^{-\\frac{1}{2}}ADD^TA^TV^{-\\frac{1}{2}}$. Some drawbacks of this analytic solution are that: 1) it imposes an unnecessary ordering among the embedding dimensions, 2) the learned functionals tend to be global, with support as large as the whole manifold, and 3) the solution is not online and does not allow other constraints to be imposed. To address these issues, we slightly modify the formulation with a sparse regularization term on $P$ and develop an online stochastic gradient descent (SGD) solution, which is detailed in Supplement~\\ref{sup:onlinesgd}.\n\nTo summarize, the SMT is performed on an input signal $x$ by first computing a higher-dimensional representation $\\alpha$ via sparse inference with a learned dictionary, $\\Phi$, and then computing a contracted code by sensing a manifold representation, $\\beta = P\\,\\alpha$, with a learned pooling matrix, $P$.\n\n\\section{Results}\n\\label{sec:results}\n\n{\\bf Straightening of video sequences.} We applied the SMT optimization procedure to sequences of whitened $20\\times20$ pixel image patches extracted from natural videos. We first learned a $10\\times$ overcomplete spatial dictionary $\\Phi\\in{\\rm I\\!R}^{400\\times4000}$ and coded each frame $x_t$ as a 4000-dimensional sparse coefficient vector $\\alpha_t$. We then derived an embedding matrix $P\\in{\\rm I\\!R}^{200\\times4000}$ by solving equation~\\ref{opt:smt}. 
Figure \\ref{fig:coefficient_smoothness} shows that while the sparse code $\\alpha_t$ exhibits high variability from frame to frame, the embedded representation $\\beta_t=P\\,\\alpha_t$ changes in a more linear or smooth manner. It should be emphasized that finding such a smooth linear projection (embedding) is highly non-trivial, and is possible if and only if the sparse codes change in a locally linear manner in response to smooth transformations in the image. If the sparse code were to change in an erratic or random manner under these transformations, any linear projection would be non-smooth in time. Furthermore, we show that this embedding does not constitute a trivial temporal smoothing, as we can recover a good approximation of the image sequence via $\\hat{x}_t=\\Phi\\,g(\\beta_t)$, where $g(\\beta)$ is the inverse embedding function defined in \\eqref{opt:inversion}. We can also use the functional embedding to regularize sparse inference, as detailed in Supplement \\ref{sup:embedding_regularization}, which further increases the smoothness of both $\\alpha$ and $\\beta$. \n\n\\begin{figure}[!ht]\n\\vskip -0.05in\n\\begin{center}\n\\centerline{\\includegraphics[width=\\columnwidth]{figures\/smt_act_img.png}}\n\\end{center}\n\\vskip -0.25in\n\\caption{SMT encoding of an 80-frame image sequence. A) Rescaled activations for 80 randomly selected $\\alpha$ units. Each row depicts the temporal sequence of a different unit. B) The activity of 80 randomly selected $\\beta$ units. 
C) Frame samples from the 90fps video input (top) and reconstructions computed from the $\\alpha_{\\textsc{rec}}$ recovered from the sequence of $\\beta$ values (bottom).}\n\\label{fig:coefficient_smoothness}\n\\end{figure}\n\n{\\bf Affinity Groups and Dictionary Topology.} \nOnce a functional embedding is learned for the dictionary elements, we can compute the cosine similarity between their embedding vectors, $\\cos(p_j, p_k) = \\frac{p_j^T p_k}{\\|p_j\\|_2\\|p_k\\|_2}$, to find the neighbors, or affinity group, of each dictionary element in the embedding space. In Figure \\ref{fig:groups}A we show the affinity groups for a set of randomly sampled elements from the overcomplete dictionary learned from natural videos. As one can see, the topology of the embedding learned from the SMT reflects the structural similarity of the dictionary elements according to the properties of position, orientation, and scale. Figure \\ref{fig:groups}B shows that the nearest neighbors of each dictionary element in the embedding space are more `semantically similar' than the nearest neighbors of the element in the pixel space. To measure the similarity, we choose the top 500 best-fitting dictionary elements and compute their lengths and orientations. For each of these elements, we find the top 9 nearest neighbors in both the embedding space and in pixel space and then compute the average difference in length ($\\Delta$ Length) and orientation ($\\Delta$ Angle). The results confirm that the embedding space succeeds in grouping dictionary elements according to their structural similarity, presumably due to the continuous geometric transformations occurring in image sequences.\n\n\\begin{figure}[h]\n\\vskip -0.05in\n\\begin{center}\n\\centerline{\\includegraphics[width=\\columnwidth]{figures\/affinity_groups_similarity_plot.pdf}}\n\\vspace{-0.05in}\n\\caption{\nA) Affinity groups learned using the SMT reveal the topological ordering of a sparse coding dictionary. 
Each box depicts as a needle plot the affinity group of a randomly selected dictionary element and its top 40 affinity neighbors. The length, position, and orientation of each needle reflect those properties of the dictionary element in the affinity group (see Supplement \\ref{sup:figure_details} for details). The color shade indicates the normalized strength of the cosine similarity between the dictionary elements. B) The properties of length and orientation (angle) are more similar among nearest neighbors in the embedding space ({\\em E}) as compared to the pixel space ({\\em P}).}\n\\label{fig:groups}\n\\end{center}\n\\vskip -0.2in\n\\end{figure}\n\nComputing the cosine similarity can be thought of as a hypersphere normalization on the embedding matrix $P$. In other words, if the embedding is normalized to be approximately on a hypersphere, the cosine similarity is almost equivalent to the Gramian matrix, $P^T P$. Taking this perspective, the learned geometric embedding and affinity groups can explain the dictionary grouping results shown in previous work \\cite{hosoya2016learning}. In that work, the layer 1 outputs are pooled by an affinity matrix given by $P=E^TE$, where $E$ is the eigenvector matrix computed from the correlations among layer 1 outputs. This PCA-based method can be considered an embedding that uses only spatial correlation information, while the SMT model uses both spatial correlation and temporal interpolation information.\n\n{\\bf Hierarchical Composition.}\nAn SMT layer is composed of two sublayers: a sparse coding sublayer that models sparse discreteness, and a manifold embedding sublayer that models simple geometrical transforms. It is possible to stack multiple SMT layers to form a hierarchical architecture, which addresses the third pattern from Mumford's theory: hierarchical composition. It also provides a way to progressively flatten image manifolds, as proposed by DiCarlo \\& Cox~\\cite{dicarlo2007untangling}. 
Here we demonstrate this process with a two-layer SMT model (Figure~\\ref{fig:hierarchy}A) and we visualize the learned representations. The network is trained in a layer-by-layer fashion on a natural video dataset as above.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\centerline{\\includegraphics[width=\\columnwidth]{figures\/hierarchy.pdf}}\n\\caption{SMT layers can be stacked to learn a hierarchical representation. A) The network architecture. Each layer contains a sparse coding sublayer (red) and a manifold sensing sublayer (green). B) Example dictionary element groups for $\\Phi^{(1)}$ (left) and $\\Phi^{(2)}$ (right). C) Each row shows an example of interpolation by combining layer 3 dictionary elements. From left to right, the first two columns are visualizations of two different layer-3 dictionary elements, each obtained by setting a single element of $\\alpha^{(3)}$ to one and the rest to zero. The third column is an image generated by setting both elements of $\\alpha^{(3)}$ to $0.5$ simultaneously. The fourth column is a linear interpolation in image space between the first two images, for comparison. D) Information is approximately preserved at higher layers. From left to right: The input image and the reconstructions from $\\alpha^{(1)}$, $\\alpha^{(2)}$ and $\\alpha^{(3)}$, respectively. The rows in C) and D) are unique examples. See section \\ref{sec:functional} for visualization details.}\n\\label{fig:hierarchy}\n\\end{center}\n\\vskip -0.3in\n\\end{figure}\n\nWe can produce reconstructions and dictionary visualizations from any layer by repeatedly using the inverse operator, $g(\\beta)$. Formally, we define $\\alpha_{\\textsc{rec}}^{(l)} = g^{(l)}(\\beta^{(l)})$, where $l$ is the layer number. For example, the inverse transform from $\\alpha^{(2)}$ to the image space will be $x_{\\textsc{rec}} = C\\Phi^{(1)}g^{(1)}(\\Phi^{(2)}\\alpha^{(2)})$, where $C$ is an unwhitening matrix. 
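The composition of the two-layer inverse can be sketched as follows, with random arrays standing in for the learned $\\Phi^{(1)}$, $P^{(1)}$, and $\\Phi^{(2)}$ (all shapes hypothetical) and the unwhitening matrix $C$ taken to be the identity; the inverse operator is the same nonnegative recovery used for $g(\\beta)$, implemented here as a simple projected-gradient stand-in.

```python
import numpy as np

def g(P, beta, lam=0.01, n_iter=300):
    """Approximate inverse of manifold sensing: nonnegative recovery of alpha
    from beta = P alpha, via projected gradient descent (a stand-in solver)."""
    z = np.linalg.norm(P, axis=0)
    step = 0.5 / np.linalg.norm(P, 2) ** 2
    a = np.zeros(P.shape[1])
    for _ in range(n_iter):
        a = np.maximum(0.0, a - step * (2.0 * P.T @ (P @ a - beta) + lam * z))
    return a

rng = np.random.default_rng(0)
Phi1 = rng.standard_normal((400, 1000))   # layer-1 dictionary (hypothetical shape)
P1 = rng.standard_normal((100, 1000))     # layer-1 pooling matrix
Phi2 = rng.standard_normal((100, 300))    # layer-2 dictionary over the beta^(1) space
alpha2 = np.zeros(300)
alpha2[17] = 1.0                          # 1-hot code: visualize one layer-2 element
x_rec = Phi1 @ g(P1, Phi2 @ alpha2)       # x_rec = C Phi^(1) g^(1)(Phi^(2) alpha^(2)), C = I
```

With trained matrices in place of the random stand-ins, the same composition produces the reconstructions and element visualizations shown in Figure B-D.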
We can use this inverse transform to visualize any single dictionary element by setting $\\alpha^{(l)}$ to a $1$-hot vector. Using this method of visualization, Figure~\\ref{fig:hierarchy}B shows a comparison of some of the dictionary elements learned at layers 1 and 2. We can see that lower-layer elements combine to form more global and abstract dictionary elements in higher layers; e.g., layer-2 units tend to be more curved, and many of them are corners, textures, or larger blobs. \n\nAnother important property that emerges at higher levels of the network is that dictionary elements are steerable over a larger range, since they are learned from progressively more linearized representations. To demonstrate this, we trained a three-layer network and performed linear interpolation between two third-layer dictionary elements, resulting in a non-linear interpolation in the image space that shifts features far beyond what simple linear interpolation in the image space would accomplish (Figure \\ref{fig:hierarchy}C). A thorough visualization of the dictionary elements and groups is provided in Supplement~\\ref{sup:dict_visualization}.\n\n\\section{Discussion}\n\\label{sec:discussion}\nA key new perspective introduced in this work is to view both the signals (such as images) and their sparse representations as functions defined on a manifold domain. A gray-scale image is a function defined on a 2D plane, tiled by pixels. Here we propose that the dictionary elements should be viewed as the new `pixels' and their coefficients as the corresponding new `pixel values'. The pooling functions can be viewed as low-pass filters defined on this new manifold domain. 
This perspective is strongly connected to recent developments in both signal processing on irregular domains~\\cite{shuman2013emerging} and geometric deep learning~\\cite{bronstein2017geometric}.\n\nPrevious approaches have learned the group structure of dictionary elements mainly from a statistical perspective~\\cite{hyvarinen2000emergence,hyvarinen2001topographic,osindero2006topographic,koster2010two,lyu2008nonlinear,malo2006v1}. Additional unsupervised learning models~\\cite{shan2013efficient,paiton2016deconvolutional,le2012building,zeiler2011adaptive} combine sparse discreteness with hierarchical structure, but do not explicitly model the low-dimensional manifold structure of inputs. Our contribution here is to approach the problem from a geometric perspective to learn a topological embedding of the dictionary elements.\n\nThe functional embedding framework provides a new perspective on the pooling functions commonly used in convnets. In particular, it provides a principled framework for learning the pooling operators at each stage of representation based on the underlying geometry of the data, rather than having them imposed {\\em a priori} as a 2D topology, as was done previously to learn linearized representations from video~\\cite{goroshin2015learning}. This could facilitate the learning of higher-order invariances, as well as equivariant representations~\\cite{sabour2017capsules}, at higher stages. In addition, since the pooling is approximately invertible due to the underlying sparsity, it is possible to have bidirectional flow of information between stages of representation to allow for hierarchical inference~\\cite{lee2003hierarchical}. The invertibility of the SMT is due to the underlying sparsity of the signal, and is related to prior works on the invertibility of deep networks~\\cite{gilbert2017towards, bruna2013signal, zeiler2014visualizing,dosovitskiy2016inverting}. 
Understanding this relationship may bring further insights to these models.\n\n \\subsubsection*{Acknowledgments}\nWe thank Joan Bruna, Fritz Sommer, Ryan Zarcone, Alex Anderson, Brian Cheung and Charles Frye for many fruitful discussions; Karl Zipser for sharing computing resources; Eero Simoncelli and Chris Rozell for pointing us to some valuable references. This work is supported by NSF-IIS-1718991, NSF-DGE-1106400, and NIH\/NEI T32 EY007043.\n\n\n\\part*{Supplementary~Material}\n\n\\setcounter{figure}{0} \n\\renewcommand{\\figurename}{Supplementary Figure}\n\n\\section{Sparse Coding}\n\\label{sup:online_steps}\nIn the traditional sparse coding approach \\cite{olshausen1996emergence}, the coefficient vector $\\alpha$ is computed for each input image by solving the optimization problem:\n\n\\begin{equation}\n\\min\\limits_{\\alpha} \\tfrac{1}{2} \\| x - \\Phi \\alpha \\|_{2}^{2} + \\lambda \\|\\alpha\\|_{1},\n\\label{sup:sparse_energy_1}\n\\end{equation}\n\nwhere $\\lambda$ is a sparsity trade-off penalty. \n\nIn this paper we learn a 10--20 times overcomplete dictionary \\cite{olshausen2013highly} and impose the additional restriction of positive-only coefficients ($\\alpha \\succeq 0$). Empirically, we find that at this level of overcompleteness the interpolation behavior of the dictionary is close to locally linear. Therefore, steering the elements can be accomplished by local neighborhood interpolation. High overcompleteness with a positive-only constraint makes the relative geometry of the dictionary elements more explicit. 
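The solver used for this inference is not specified here; an ISTA-style iteration is one standard choice and is sketched below, with the positive-only variant obtained by a one-sided shrinkage. Dictionary sizes are illustrative.

```python
import numpy as np

def ista(x, Phi, lam=0.1, n_iter=200, positive=False):
    """ISTA for min_a 0.5 ||x - Phi a||_2^2 + lam ||a||_1 (optionally with a >= 0)."""
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        a = a - (Phi.T @ (Phi @ a - x)) / L    # gradient step on the quadratic term
        if positive:
            a = np.maximum(0.0, a - lam / L)   # one-sided shrinkage (a >= 0)
        else:
            a = np.sign(a) * np.maximum(0.0, np.abs(a) - lam / L)  # soft threshold
    return a

rng = np.random.default_rng(0)
Phi = rng.standard_normal((400, 4000))
Phi /= np.linalg.norm(Phi, axis=0)             # unit-norm dictionary elements
a_true = np.zeros(4000)
a_true[[3, 77, 2048]] = 1.0
x = Phi @ a_true
a_hat = ista(x, Phi, lam=0.05, n_iter=100, positive=True)
```

Setting `positive=True` corresponds to the positive-only constraint imposed on the coefficients.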
The positive-only constraint gives us our modified optimization:\n\n\\begin{equation}\n\\min\\limits_{\\alpha} \\tfrac{1}{2} \\| x - \\Phi\\alpha \\|_{2}^{2} + \\lambda \\|\\alpha\\|_{1},\\ \\text{s.t.}\\ \\alpha \\succeq 0.\n\\label{sup:sparse_energy_2}\n\\end{equation}\n\n\\section{Sparse Coefficient Inference with Manifold~Embedding~Regularization}\n\\label{sup:embedding_regularization}\nFunctional manifold embedding can be used to regularize sparse inference for video sequences. We assume that in the embedding space the representation follows a locally linear trajectory. Then we can change the formulation from equation \\ref{sup:sparse_energy_2} to the following:\n\n\\vspace{-0.05in}\n\\begin{equation}\\label{sup:sparsecoding_smt_regularized}\n\t\\begin{aligned}\n \t&\\underset{\\alpha_t}{\\text{min}} & & \\| x_t - \\Phi \\alpha_t \\|_2^2 + \\lambda\\|\\alpha_t\\|_1 + \\gamma_0\\|P(\\alpha_t - \\widetilde{\\alpha_t})\\|_2^2 \\\\\n &\\text{s.t.}\n & & \\alpha_t \\succeq 0,\\ \\widetilde{\\alpha_t} = \\alpha_{t-1} + (\\alpha_{t-1} - \\alpha_{t-2}),\n \\end{aligned}\n\\end{equation}\n\nwhere $\\widetilde{\\alpha_t}$ is the causal linear prediction from the previous two steps, $\\alpha_{t-1}$ and $\\alpha_{t-2}$. Our preliminary experiments indicate that this can significantly increase the linearity of the embedding $\\beta = P\\alpha$.\n\n\\section{Online SGD Solution}\n\\label{sup:onlinesgd}\nSome minor drawbacks of the analytic generalized eigendecomposition solution are that: 1) it imposes an unnecessary ordering among the embedding dimensions, 2) the learned functionals tend to be global, with support as large as the whole manifold, and 3) the solution is not online and does not allow other constraints to be imposed. 
To address these issues, we slightly modify the formulation of equation \\ref{opt:smt} with a sparse regularization term on $P$ and develop an online stochastic gradient descent (SGD) solution:\n\n\\vspace{-0.05in}\n\\begin{equation}\\label{sup:smt_regularized}\n \\underset{P}{\\text{min}} \\ \\gamma_0 \\| PAD \\|_F^2 + \\gamma_1 \\|PW\\|_1,\\ \\text{s.t.}\\ P V P^T = I,\n\\end{equation}\n\nwhere $\\gamma_0$ and $\\gamma_1$ are both positive parameters, $W=\\mbox{\\rm diag}(\\langle \\alpha \\rangle)$, $\\|\\bullet\\|_1$ is the $L_{1,1}$ norm, and $V$ is the covariance matrix of $\\alpha$. The sparse regularization term encourages the functionals to be localized on the manifold. \n\nThe iterative update rule for solving equation (\\ref{sup:smt_regularized}) to learn the manifold embedding consists of: \n\\begin{enumerate}\n\\item One step along the whitened gradient computed on a mini-batch: $P\\mathrel{+}=-2\\gamma_0 \\eta_P PADD^TA^TV^{-1}$, where ${V^{-1}}$ serves to whiten the gradient. $\\eta_P$ is the learning rate of $P$. \n\n\\item Weight regularization: $P = \\text{Shrinkage}_{\\langle\\alpha\\rangle,\\gamma_1}(P)$, which shrinks each entry in the $j\\textsuperscript{th}$ column of $P$ by $\\gamma_1\\langle\\alpha_j\\rangle$. \n\n\\item Parallel orthogonalization: $P \\leftarrow (PVP^T)^{-\\frac{1}{2}}P$, which is a slight variation of an orthogonalization method introduced in \\cite{karhunen1997class}.\n\\end{enumerate}\n\n\\section{Dataset and Preprocessing}\n\\label{sup:video_experiments}\nThe natural videos were captured with a head-mounted GoPro 5 session camera at 90fps. The video dataset we used contains about 53 minutes (about 300,000 frames) of 1080p video recorded while the cameraperson walks on a college campus. For each video frame, we select the center $1024\\times1024$ section and down-sample it to $128\\times128$ by using a steerable pyramid \\cite{simoncelli1995steerable} to decrease the range of motion in order to avoid temporal aliasing. 
For each $128\\times128$ video frame, we then apply an approximate whitening in the Fourier domain: 1) take a Fourier transform, 2) modulate the amplitude by a whitening mask function $w(u,v)$, and 3) take an inverse Fourier transform. Since the Fourier amplitude map of natural images in general follows a $1\/f$ law~\\cite{field1987relations}, the whitening mask function is chosen to be the product $w(u,v) = w_1(u,v)w_2(u,v)$, where $w_1(u,v) = r$ is a linear ramp function of the frequency radius $r(u,v) = (u^2+v^2)^{\\frac{1}{2}}$ and $w_2$ is a low-pass windowing function in the frequency domain, $w_2(u,v) = e^{-{(\\frac{r}{r_0})}^4}$, with $r_0=48$. The resulting amplitude modulation function is shown in Supplementary Figure \\ref{sup:fourier_whitening}. For a more thorough discussion of this method, see \\cite{atick1990towards, atick1992does, olshausen1997sparse}. We found that the exact parameters for whitening were not critical. We also implemented a ZCA version and found that the results were qualitatively similar.\n\n\\begin{figure}[ht]\n\\vskip -0.05in\n\\begin{center}\n\\centerline{\\includegraphics[width=1.0\\columnwidth]{figures\/Fourier_Whitening.pdf}}\n\\end{center}\n\\vskip -0.3in\n\\caption{The whitening mask function $w(u,v)$. Since $w$ is radially symmetric, we show its value as a function of the frequency radius $r$ as a 1D curve. As we are using $128\\times128$ images, the frequency radius shown runs from 0 to 64, the Nyquist frequency.}\n\\label{sup:fourier_whitening}\n\\end{figure}\n\n\n\\section{Needle Plot Details}\n\\label{sup:figure_details}\nFigure \\ref{fig:groups} utilized a visualization technique that depicts each dictionary element as a needle whose position, orientation and length indicate those properties of the element. The parameters of the needles were computed from a combination of the analytic signal envelope and the Fourier transform of each dictionary element. 
The envelope was fitted with a 2D Gaussian, and its mean was used to determine the center location of the dictionary element. The primary eigenvector of the Gaussian covariance matrix was used to determine the spatial extent of the primary axis of variation, which we indicate by the bar length. The orientation was computed from the peak in the Fourier amplitude map of the dictionary element. Supplementary Figure \\ref{sup:basis_fitting} gives an illustration of the fitting process. A subset of elements were not well summarized by this methodology because they do not fit a standard Gaussian profile, as can be seen in the last row of Supplementary Figure \\ref{sup:basis_fitting}. However, the majority appear to be well characterized by this method. Each box in Figure \\ref{fig:groups} of the main text indicates a single affinity group. The color indicates the normalized similarity between the dictionary elements in the embedding space.\n\n\\begin{figure}[ht]\n\\vskip -0.05in\n\\begin{center}\n\\centerline{\\includegraphics[width=\\columnwidth]{figures\/function_fitting.png}}\n\\end{center}\n\\vskip -0.3in\n\\caption{Needle fitting procedure.}\n\\label{sup:basis_fitting}\n\\end{figure}\n\n\\section{Dictionary Visualization}\n\\label{sup:dict_visualization}\nHere we provide a more complete visualization of the dictionary elements in the two-layer SMT model. In Supplementary Figure \\ref{sup:dict_groups}, we show 19 representative dictionary elements and their top 5 affinity group neighbors in both layer 1 (Supplementary Figure \\ref{sup:dict_groups}A) and layer 2 (Supplementary Figure \\ref{sup:dict_groups}B). For each row, we choose a representative dictionary element (on the left side) and compute its cosine similarity to the rest of the dictionary elements in the embedding space. The 5 closest elements in the embedding space are shown to its right. Each row can thus be treated as a small affinity group. 
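This affinity-group computation amounts to ranking the columns of $P$ by cosine similarity; a minimal sketch, with a random matrix standing in for the learned $P$:

```python
import numpy as np

def affinity_neighbors(P, j, top=5):
    """Top-`top` affinity neighbors of dictionary element j: the columns of P
    with the highest cosine similarity to column j (excluding j itself)."""
    Pn = P / np.linalg.norm(P, axis=0, keepdims=True)  # unit-norm columns
    sims = Pn.T @ Pn[:, j]                             # cosine similarities to p_j
    order = np.argsort(-sims)                          # descending similarity
    return [k for k in order if k != j][:top], sims

rng = np.random.default_rng(0)
P = rng.standard_normal((200, 4000))    # stand-in for the learned embedding matrix
neighbors, sims = affinity_neighbors(P, j=42)
```

With the learned embedding in place of the random stand-in, `neighbors` gives the elements shown to the right of each representative element in the supplementary figure.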
Figures \\ref{sup:ly1_sample} and \\ref{sup:ly2_sample} show a large random sample of the dictionary elements in layers 1 and 2, respectively.\n\n\\begin{figure}[ht]\n\\vskip -0.05in\n\\begin{center}\n\\centerline{\\includegraphics[width=1.0\\columnwidth]{figures\/supp_groups.pdf}}\n\\end{center}\n\\vskip -0.3in\n\\caption{Representative dictionary elements and their top 5 affinity group neighbors in the embedding space. A) Dictionary elements from layer 1. B) Dictionary elements from layer 2. We encourage the reader to compare the affinity neighbors to the results found in \\cite{zeiler2014visualizing}.}\n\\label{sup:dict_groups}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\vskip -0.05in\n\\begin{center}\n\\centerline{\\includegraphics[width=1.3\\columnwidth]{figures\/supp_ly1_sample.pdf}}\n\\end{center}\n\\vskip -0.3in\n\\caption{A random sample of 1200 dictionary elements from layer 1 (out of 4000 units). Details are better discerned by zooming in on a monitor.}\n\\label{sup:ly1_sample}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\vskip -0.05in\n\\begin{center}\n\\centerline{\\includegraphics[width=1.3\\columnwidth]{figures\/supp_ly2_sample.pdf}}\n\\end{center}\n\\vskip -0.3in\n\\caption{A random sample of 1200 dictionary elements from layer 2 (out of 3000 units). Details are better discerned by zooming in on a monitor.}\n\\label{sup:ly2_sample}\n\\end{figure}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nEarly detection of ovarian cancer is important since clinical symptoms sometimes do not appear until the late stage of the disease. This leads to difficulties in treating the patient. Using the antigen CA125 significantly improves the quality of diagnosis. However, CA125 becomes less reliable at early stages and sometimes becomes elevated too late to be of use. 
Our goal is to investigate whether existing methods of online prediction can improve the quality of the detection of the disease and to demonstrate that the information contained in mass spectra is useful for ovarian cancer diagnosis in the early stages of the disease. By a \\emph{combination} of CA125 and a peak intensity we mean a decision rule of the form\n\\begin{equation*}\nu(v,w,p) = v \\ln C + w\\ln I_p,\n\\end{equation*}\nwhere $C$ is the level of CA125, $I_p$ is the intensity of the $p$-th peak, and $v,w$ are taken from the sets described below.\n\nWe consider prediction in \\emph{triplets}:\neach case sample is accompanied by two samples from healthy individuals,\n\\emph{matched controls},\nwhich are chosen to be as close as possible to the case sample\nwith respect to attributes such as age, storage conditions, and serum processing.\nIn each triplet of samples from different individuals we detect the one sample which we predict as cancer. This framework was first described in \\cite{Gammerman2008}. The authors analyze an ovarian cancer data set and show that the information contained in mass-spectrometry peaks can help to provide more precise\nand reliable predictions of the diseased patient than the CA125 criterion by itself\nsome months before the moment of diagnosis. In this paper we use the same framework and set of decision rules (CA125 combined with peak intensity) to derive an algorithm which performs better\nin some sense than any of these rules.\n\nFor our research we use a different, more recent ovarian cancer data set \\cite{Menon2005} processed by the authors of \\cite{Devetyarov2009}, with a larger number of items than in \\cite{Gammerman2008}. We combine the decision rules proposed in \\cite{Devetyarov2009} by using an online prediction algorithm\\footnote[1]{A survey of online prediction can be found in \\cite{CesaBianchi2006}.} and thus get our own decision rule. 
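As a concrete illustration, the decision rule above together with the choice of the most likely case in a triplet can be sketched in a few lines; the numerical values in the test are made up, and this is a sketch rather than the authors' code:

```python
import math

def u(v, w, C, I_p):
    """Decision value u(v, w, p) = v*ln(C) + w*ln(I_p) for one sample,
    where C is the CA125 level and I_p the intensity of the p-th peak."""
    return v * math.log(C) + w * math.log(I_p)

def predict_case(v, w, triplet):
    """Predict which sample in a triplet is the case: the sample with the
    largest decision value gets weight 1 under the maximum rule.
    `triplet` is a list of (CA125, peak-intensity) pairs."""
    scores = [u(v, w, C, I) for (C, I) in triplet]
    return scores.index(max(scores))
```

For example, with $v = 1$, $w = 0$ the rule reduces to the CA125 criterion, while $v = 0$ ranks samples by peak intensity alone.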
In\nthis paper we use a combining algorithm described in \\cite{VovkPEABG}, because it allows us\nto output a probability measure on a given triplet and has the best theoretical guarantees for this type of prediction. In order to estimate classification accuracy, we convert probability predictions\ninto strict predictions by the \\emph{maximum rule}: we assign weight 1 to the labels with maximum predicted probability, weight 0 to the labels of the other samples, and then normalize the assigned weights.\n\nWe show that our algorithm gives more reliable predictions\nthan the vast majority of particular combinations\n(in fact, more thorough experiments, not described here,\nshow that it outperforms all particular combinations).\nIt performs well at different stages of the disease.\nMoreover, when testing the hypothesis that CA125 and peaks\ndo not contain useful information for the prediction of the disease\nat its early stages,\nour algorithm gives better $p$-values\nthan the algorithm which chooses the best combination;\nin addition, our algorithm requires fewer adjustments.\n\nOur paper is organized as follows. In Section~\\ref{sec:frame} we describe the methods we use to give predictions. Section~\\ref{sec:data} gives a short description of the\ndata set on which we work.\nWe present our experiments and results in Section~\\ref{sec:experiments},\nseparated into a description of the probability prediction algorithm in Subsection~\\ref{ssec:alltr} and detection at different stages before diagnosis in Subsection~\\ref{ssec:early}.\nSection~\\ref{sec:conclusion} concludes the paper.\n\n\\section{Online prediction framework and Aggregating Algorithm}\\label{sec:frame}\nThe mathematical framework used in this paper is called prediction with expert advice. In this framework different experts predict a sequence of events step by step. Those that make errors suffer a loss defined by a chosen loss function. 
The goal of an online prediction algorithm is to combine the experts' predictions in such a way that at each step the algorithm's cumulative loss is close to the cumulative loss of the best expert. Unlike statistical learning theory, online prediction does not impose any restrictions on the data generating process.\n\nA game of prediction consists of three components: the space of outcomes $\\Omega$, the space of predictions $\\Gamma$, and the loss function $\\lambda: \\Omega \\times \\Gamma \\to \\mathbb{R}$, which measures the quality of predictions. In our experiments we are interested in the \\emph{Brier game} \\cite{Brier1950}, since it is widely used in probability forecasting.\n\nLet $\\Omega$ be a finite and non-empty set,\n$\\Gamma:=\\PPP(\\Omega)$ be the set of all probability measures on $\\Omega$.\nThe Brier loss function is defined by\n\\begin{equation}\n \\lambda(\\omega,\\gamma)\n =\n \\sum_{o\\in\\Omega}\n \\left(\n \\gamma\\{o\\} - \\delta_{\\omega}\\{o\\}\n \\right)^2.\n\\end{equation}\nHere $\\gamma \\in \\Gamma$ and $\\delta_{\\omega}\\in\\PPP(\\Omega)$ is the probability measure\nconcentrated at $\\omega$:\n$\\delta_{\\omega}\\{\\omega\\}=1$\nand $\\delta_{\\omega}\\{o\\}=0$ for $o\\ne\\omega$.\nFor example, if $\\Omega=\\{1,2,3\\}$, $\\omega=1$,\n$\\gamma\\{1\\}=1\/2$, $\\gamma\\{2\\}=1\/4$, and $\\gamma\\{3\\}=1\/4$, then\n$\\lambda(\\omega,\\gamma)=(1\/2-1)^2+(1\/4-0)^2+(1\/4-0)^2=3\/8$.\n\nThe game of prediction is being played repeatedly\nby a learner that has access to decisions made by a pool of experts,\nwhich leads to the following prediction protocol:\n\\makeatletter\n \\renewcommand{\\ALG@name}{Protocol}\n\\makeatother\n\\begin{algorithm}[H]\n \\caption{Prediction with expert advice}\n \\label{prot:PEA}\n \\begin{algorithmic}\n \\STATE $L_0:=0$.\n \\STATE $L_0^k:=0$, $k=1,\\ldots,K$.\n \\FOR{$N=1,2,\\dots$}\n \\STATE Expert $k$ announces $\\gamma_N^k\\in\\Gamma$, $k=1,\\ldots,K$.\n \\STATE Learner announces $\\gamma_N\\in\\Gamma$.\n 
\\STATE Reality announces $\\omega_N\\in\\Omega$.\n \\STATE $L_N:=L_{N-1}+\\lambda(\\omega_N,\\gamma_N)$.\n \\STATE $L_N^k:=L_{N-1}^k+\\lambda(\\omega_N,\\gamma_N^k)$, $k=1,\\ldots,K$.\n \\ENDFOR\n \\end{algorithmic}\n\\end{algorithm}\n\\makeatletter\n \\renewcommand{\\ALG@name}{Algorithm}\n\\makeatother\nHere $L_N$ is the cumulative loss of the learner at a time step $N$, and $L_N^k$ is the cumulative loss of the $k$th expert at this step. There are many well-developed algorithms for the learner; probably the best known are the Weighted Average Algorithm \\cite{Kivinen1999}, the Strong Aggregating Algorithm \\cite{VovkAS,VovkGofPEA}, the Weak Aggregating Algorithm \\cite{KalnishkanWAAWM}, the Hedge Algorithm \\cite{Freund1997}, and Tracking the Best Expert \\cite{HerbsterTBE}. The basic idea behind these algorithms is to assign weights to the experts and then use their predictions in accordance with these weights in a way that minimizes the learner's loss. The weights of the experts are updated at each step, which allows a prediction algorithm to adapt to the sequence of outcomes.\n\nThe Strong Aggregating Algorithm, henceforth called the Aggregating Algorithm or the AA, has the strongest theoretical guarantees for some games with a\n``sufficiently convex'' loss function, although in practice its accuracy may not always be the best. We use the Aggregating\nAlgorithm for the experiments described in this paper, but one can use other\nonline algorithms to give probability forecasts. In the case of the Brier game\nwith more than two outcomes, only the AA and the Weighted Average Algorithm have\ntheoretical bounds for their losses, derived in the extended arXiv version of \\cite{VovkPEABG}.\nThe Aggregating Algorithm has a parameter $\\eta$, the learning rate. 
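The Brier loss defined above, and the AA's prediction step for the Brier game (a generalized prediction followed by a water-filling substitution rule), can be sketched as follows; a minimal sketch, not a tuned implementation:

```python
import numpy as np

def brier_loss(omega, gamma, outcomes):
    """Brier loss: sum over o of (gamma{o} - delta_omega{o})^2."""
    return sum((gamma[o] - (1.0 if o == omega else 0.0)) ** 2
               for o in outcomes)

def aa_predict(weights, expert_preds, outcomes, eta=1.0):
    """One prediction step of the Aggregating Algorithm for the Brier
    game: form the generalized prediction G(omega) from the experts'
    weighted losses, then substitute gamma{omega} = (s - G(omega))^+ / 2,
    where s solves sum_omega (s - G(omega))^+ = 2 (water filling)."""
    G = {o: -np.log(sum(w * np.exp(-eta * brier_loss(o, g, outcomes))
                        for w, g in zip(weights, expert_preds))) / eta
         for o in outcomes}
    g_sorted = sorted(G.values())
    n = len(g_sorted)
    for m in range(n, 0, -1):              # try including the m smallest G's
        s = (2 + sum(g_sorted[:m])) / m
        if s > g_sorted[m - 1] and (m == n or s <= g_sorted[m]):
            break
    return {o: max(s - G[o], 0.0) / 2 for o in outcomes}
```

With a single expert the substitution returns that expert's prediction unchanged, which is a convenient sanity check.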
It is proved that for the Brier game the best theoretical guarantees are obtained for $\\eta = 1$.\nThe theoretical bound for its cumulative loss at a prediction\nstep $N$ is\n\\begin{equation}\nL_N(\\mathrm{AA}) \\le L_N^k + \\ln K\n\\end{equation}\nfor any expert $k$, where the number of experts equals $K$. The way it makes\npredictions is described as Algorithm~\\ref{alg:SAA}.\n\n\\addtocounter{algorithm}{-1}\n\\begin{algorithm}[ht]\n \\caption{Strong aggregating algorithm for the Brier game} \\label{alg:SAA}\n \\begin{algorithmic}\n \\STATE $w_0^k:=1$, $k=1,\\ldots,K$.\n \\FOR{$N=1,2,\\dots$}\n \\STATE Read the Experts' predictions $\\gamma^k_N$, $k=1,\\ldots,K$.\n \\STATE Set $G_N(\\omega):=-\\frac{1}{\\eta}\\ln\\sum_{k=1}^K w^k_{N-1} e^{-\\eta\\lambda(\\omega,\\gamma_N^k)}$, $\\omega\\in\\Omega$.\n \\STATE Solve $\\sum_{\\omega\\in\\Omega}(s-G_N(\\omega))^+=2$ in $s\\in\\mathbb{R}$.\n \\STATE Set $\\gamma_N\\{\\omega\\}:=(s-G_N(\\omega))^+\/2$, $\\omega\\in\\Omega$.\n \\STATE Output prediction $\\gamma_N\\in\\PPP(\\Omega)$.\n \\STATE Read observation $\\omega_N$.\n \\STATE $w_N^k:=w_{N-1}^ke^{-\\eta\\lambda(\\omega_N,\\gamma_N^k)}$.\n \\ENDFOR\n \\end{algorithmic}\n\\end{algorithm}\n\\bigskip\n\\section{Data set}\\label{sec:data}\nWe are working with a data set \\cite{Devetyarov2009}\nthat was collected over a period of 7 years\nand contains samples from patients with the disease (referred to as \\emph{cases})\nand from patients who remained healthy over this period,\ncalled\n\\emph{controls}.\nA description of the collection process is not a goal of this\npaper, so we do not discuss it in detail.\nA more detailed description of the data set and the peak extraction procedures\ncan be found in \\cite{Menon2005} and \\cite{Devetyarov2009}.\nThis paper develops further the analysis performed in \\cite{Devetyarov2009}.\n\nWe consider prediction in \\emph{triplets}.\nThere are 881 samples in total: 295 cases and 586 matched controls. There are up to 5 samples for each of the cases. 
Information for each\nsample contains the value of CA125, the time to diagnosis, the intensities of 67 mass-spectrometry peaks, and other attributes. The time to diagnosis is the time interval measured\nin months between the date when the measurement was taken and the date\nwhen ovarian cancer was diagnosed, or the date of operation. Peaks are ordered by their\nfrequency, or the percentage of samples having a non-aligned peak. We have 67\npeaks of frequency more than 33\\%. For classification purposes we exclude cases\nwith only one matched control and cases lacking suitable information. As a\nresult, we have 179 triplets containing 358 control samples and 179 case samples taken from\n104 individuals. Each triplet is assigned a \\emph{time-to-diagnosis}, defined as the time\nto the moment of diagnosis of the case sample in this triplet.\n\n\\section{Experiments}\\label{sec:experiments}\nThis section describes two experiments.\nThe first is a study of probability prediction of\novarian cancer.\nThe second checks that our results are not accidental by calculating $p$-values.\n\n\\subsection{Probability prediction of ovarian cancer}\\label{ssec:alltr}\nThe aim of this experiment is to demonstrate how we give probability predictions\nfor samples in a triplet and to compare them to predictions using CA125 only.\nThe outcome of each event can be represented as a vector $(1, 0, 0)$, $(0, 1, 0)$, or $(0, 0, 1)$.\nThe prediction of CA125 is represented as a vector $(a_1, a_2, a_3)$, obtained by applying the maximum rule to the CA125 levels.\n\nWe use the following procedure to construct other predictors combining\nCA125 and peak intensities. For each patient we calculate the values\n\\begin{equation}\\label{eq:pred}\nu(v,w,p) = v \\ln C + w\\ln I_p,\n\\end{equation}\nwhere $C$ is the level of CA125, $I_p$ is the intensity of the $p$-th peak, $p=1,\\ldots,67$,\n$v \\in \\{0,1\\}$, and $w \\in \\{-2,-1,-1\/2,0,1\/2,1,2\\}$. 
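The count of distinct experts is easy to verify in code: under the maximum rule, for $v=0$ only the sign of $w$ matters, and for $w=0$ the peak is irrelevant ($u=\ln C$ for every $p$). A sketch:

```python
# Enumerate the distinct experts (v, w, p).  Rules inducing the same
# ordering of u-values are identical under the maximum rule: for v = 0
# only the sign of w matters, and for w = 0 the peak p is irrelevant.
peaks = range(1, 68)                                        # 67 peaks
nonzero_w = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]

experts = [(1, w, p) for w in nonzero_w for p in peaks]     # v = 1, w != 0: 402
experts += [(0, s, p) for s in (-1.0, 1.0) for p in peaks]  # v = 0: 134
experts.append((1, 0.0, None))                              # CA125 alone: 1

assert len(experts) == 537
```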
The total number of different\ncombinations, or experts, is 537: $402 = 6\\times67$ for $v = 1, w \\ne 0$, $134 = 2\\times67$ for\n$v = 0$, and $1$ for $v = 1, w = 0$. The authors of \\cite{Devetyarov2009} show how such combinations can\npredict cancer well up to 15 months before diagnosis.\n\nFor online prediction purposes we sort all the triplets by the date of measurement of the case sample. At each step we give the probability of being\ndiseased for each person in the triplet, i.e.\\ numbers $p_1, p_2, p_3 \\ge 0$ with $p_1+p_2+p_3 = 1$.\nWe choose the uniform initial distribution on the experts and the theoretically optimal value $\\eta=1$ of the Aggregating Algorithm's parameter. The evolution of the cumulative Brier loss of all the experts minus the cumulative loss of our algorithm over all 179 triplets is presented in Figure~\\ref{Fig:Prob}. Clearly, the line for the AA is zero,\nsince we subtract its loss from itself. Experts whose line is below zero are\nbetter than the AA; experts whose line is above zero are worse. The $x$-axis\npresents the triplets in chronological order.\n\\begin{figure}\n\\includegraphics[width=0.47\\textwidth]{ProbabAA.eps} \\hfill\n\\includegraphics[width=0.47\\textwidth]{StrictAA.eps} \\\\\n\\parbox[t]{0.47\\textwidth}{\\caption{Cumulative loss of probability predictions of the Aggregating Algorithm and other predictors over all the triplets.}\\label{Fig:Prob}} \\hfill\n\\parbox[t]{0.47\\textwidth}{\\caption{Cumulative loss of strict predictions of the categorical AA and other predictors over all the triplets.}\\label{Fig:Strict}}\n\\end{figure}\nWe can see from Figure~\\ref{Fig:Prob} that the\nAggregating Algorithm predicts better than most experts in our class after about\n54 triplets, in particular better than CA125. At the end the AA is better than all the\nexperts. The group of lines clustered at the top of the graph, separated from the\nmain group, corresponds to experts which do not include CA125. 
They make relatively many\nmistakes, especially at late stages of the disease, and accumulate a large loss. This\nshows that the probability predictions of the AA are more precise than the predictions\nof the experts interpreted as probability predictions. Moreover, we can be sure that the loss of the Aggregating Algorithm will never be much worse than the loss of the best expert, since there is a\ntheoretical bound for it \\cite{VovkPEABG}.\n\nOne could say this comparison is not fair, because we\nallow the experts to give only strict predictions, while our algorithm is more flexible, so\nits Brier loss is not so large. On the other hand, it is not trivial to find experts\nwhich make probability predictions, i.e.\\ to convert CA125 into probabilities of the\ndisease for each sample in a triplet, so this approach presents one way to\ngenerate them.\n\nIn order to make a stricter comparison, we allow the AA to make only strict predictions and use the maximum rule to convert its probability predictions into strict predictions. We will further refer to this algorithm as the \\emph{categorical AA}. If we calculate the Brier loss, we\nget Figure~\\ref{Fig:Strict}. We can see that the categorical AA still beats CA125 at the end in the case where\nit gives strict predictions. The final performance is the performance on the whole\ndata set. 
In this case the loss of the categorical AA is larger than the loss of some predictors.\nIt is useful to know the specific combinations which perform well in this experiment.\nAt the last step the best performance is achieved by the combinations\n\\begin{align} \\label{eq:bestcomb1}\n&\\ln C - \\ln I_{3}, \\ln C - \\frac{1}{2}\\ln I_{3},\\\\\n&\\ln C - \\ln I_{2}, \\ln C - \\frac{1}{2}\\ln I_{7}.\\notag\n\\end{align}\nThey are followed by combinations with peaks 50, 2, 7, 1, 34, and 47.\n\n\\subsection{Prediction at different stages of the disease}\\label{ssec:early}\nOur second experiment aims to investigate whether it is possible to predict\nbetter than CA125 at early stages of the disease. In this experiment we follow the approach proposed in \\cite{Devetyarov2009}. We consider 6-month time intervals with starting\npoints $t = 0, 1,\\ldots,16$ months before diagnosis. We will show further that our\npredictions are not reliable for earlier stages. For each period we select only those\ntriplets from the corresponding time interval, the latest for each case patient if\nthere is more than one. We denote the set of triplets for the interval starting at $t$ of length $\\theta$\nby $S_{t,\\theta}$. We use $\\theta = 6$.\n\nIn this experiment we do not use a uniform initial weight distribution on the\nexperts for the Aggregating Algorithm. 
Instead, we assume that the importance of a peak\ndecreases as its number increases in accordance with a power law,\nand that different combinations including the same peak have the same importance.\nThis makes sense because the peaks are\nsorted by their frequency in the data set, so peaks further down\nthe list are less frequent and important for fewer people.\nOur specific weighting scheme is that\nthe combinations with peak 1 have initial weight $1 = d^0$,\nthe combinations with peak 2 have initial weight $d^{-1}$, etc.\nWe choose the coefficient of this distribution empirically,\n$d = 1.2$, and the AA parameter $\\eta = 0.65$.\nThe number of errors is calculated as half of the Brier loss,\nwhich corresponds to counting errors in the case where predictions are strict.\nFigure~\\ref{Fig:Err} shows the fraction of erroneous predictions made by different algorithms\nover different time periods. It presents values for CA125, for the Aggregating\nAlgorithm, and for the best single combination of the form \\eqref{eq:pred}. We\nalso include the fractions of erroneous predictions for the three best combinations \\eqref{eq:bestcomb1}, since peaks 2 and 3\nwere noted in \\cite{Devetyarov2009} to perform well.\n\n\\begin{figure}\n\\includegraphics[width=0.47\\textwidth]{Errors.eps} \\hfill\n\\includegraphics[width=0.47\\textwidth]{pvalues.eps} \\\\\n\\parbox[t]{0.47\\textwidth}{\\caption{Fraction of erroneous predictions of different predictors over different time periods.}\\label{Fig:Err}} \\hfill\n\\parbox[t]{0.47\\textwidth}{\\caption{The logarithm of $p$-values for different algorithms.}\\label{Fig:pvalues}}\n\\end{figure}\n\nThis figure shows that the performance of the Aggregating Algorithm is at least as good as the performance of CA125 at all stages before diagnosis. For\nthe period 9--13 months the combination $\\ln C - \\ln I_3$ performs better than the AA,\nbut at the late stages 0--8 months it performs worse. Other combinations are even\nworse. 
Thus we can say that instead of choosing one particular combination, we should use the Aggregating Algorithm to mix all the combinations. This allows\nus to predict well at some stages of the disease.\n\nThe choice of the coefficients for the AA requires us\nto check that our results are not accidental.\nSince the amount of data we have\ndoes not allow us to carry out a reliable cross-validation procedure,\nwe follow the approach to calculating $p$-values proposed in \\cite{Gammerman2008}.\nThis approach was applied to the combinations \\eqref{eq:pred} in \\cite{Devetyarov2009}.\nFor each stage of the disease,\nwe test the null hypothesis that peak intensities and CA125\ndo not carry any information relevant for predicting the labels.\nExcept for the earliest stages,\nwe show that either this hypothesis is violated or some very unlikely event has happened.\n\nWe calculate $p$-values for testing the null hypothesis.\nA $p$-value can be defined as the value taken by a function $p$ satisfying\n\\begin{equation*}\n\\text{Probability}(p \\le \\delta) \\le \\delta\n\\end{equation*}\nfor all $\\delta\\in(0,1)$ under the null hypothesis.\nTo calculate $p$-values we choose the test statistic $T$ described below,\napply it to our data, and get the value $T_0$.\nThen we calculate the probability of the event $T \\le T_0$ under the null hypothesis.\n\nLet $\\tau$ be a triplet in $S_{t,6}$ and $\\text{err}(\\tau, d, \\eta)$ be the half loss of the categorical AA with parameter $\\eta$ and initial power-law distribution with parameter $d$ on\nthe triplet $\\tau$.\nThen the half loss in each time interval $[t,t+6]$ is $\\text{Err}(S_{t,6}, d, \\eta) =\n\\sum_{\\tau \\in S_{t,6}} \\text{err}(\\tau, d, \\eta)$, where $S_{t,6}$ is the set of triplets for the time interval $[t,t+6]$. Let us\nassume that the AA with parameters $d = 1.2$ and $\\eta = 0.65$\nmakes $N_t$ errors on the triplets from $S_{t,6}$. 
We randomly reassign the labels in the triplets.\nThen for each $t$ we calculate the minimum number of errors $E$ made by the AA by the rule\n\\begin{equation*}\nE = \\min_{d \\in D, \\eta \\in R} \\text{Err}(S_{t,6}, d, \\eta).\n\\end{equation*}\nHere $D = \\{1.1, 1.2,\\ldots,2.0\\}$ and $R = \\{0.1, 0.15, 0.2,\\ldots,1.0\\}$, so we consider different values for all the parameters of the algorithm. This number is our test statistic. The $p$-value is calculated by the Monte-Carlo procedure stated as Algorithm~\\ref{alg:pvalue}.\n\\begin{algorithm}[ht]\n \\caption{$p$-value calculation}\\label{alg:pvalue}\n \\begin{algorithmic}\n \\STATE \\textbf{Input:} $t$, time to diagnosis.\n \\STATE \\textbf{Input:} $N=10^4$, number of trials.\n \\STATE $E_0 := \\min_{d \\in D, \\eta \\in R} \\text{Err}(S_{t,6}, d, \\eta)$\n \\STATE $Q := 0$\n \\FOR{$j=1,\\ldots,N$}\n \\STATE Assign a case label to a randomly chosen sample in each triplet in $S_{t,6}$.\n \\STATE Calculate $E = \\min_{d \\in D, \\eta \\in R} \\text{Err}(S_{t,6},d,\\eta)$ for this data set.\n \\IF{$E \\le E_0$}\n \\STATE $Q := Q+1$\n \\ENDIF\n \\ENDFOR\n \\STATE \\textbf{Output:} $\\frac{Q+1}{N+1}$ as the $p$-value.\n \\end{algorithmic}\n\\end{algorithm}\n\n\\bigskip\n\nThe logarithms of the $p$-values for different algorithms\nare presented in Figure~\\ref{Fig:pvalues}.\nIt includes the values for the AA, the values taken from \\cite{Devetyarov2009} for CA125 only, and the $p$-values for the algorithm described in \\cite{Devetyarov2009}, which chooses the combination\nwith the best performance and the most frequent peak for each permutation of labels.\nThe figure also includes the $p$-values for the algorithm which chooses the best combination with one particular peak, 2 or 3.\n\nAs we can see, our algorithm has small $p$-values, comparable with or even\nsmaller than the $p$-values for the other algorithms. 
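A generic Monte-Carlo $p$-value computation in the style of Algorithm 2 can be sketched as follows; here `err_fn` stands in for the minimized error count $\min_{d,\eta}\text{Err}(S_{t,6},d,\eta)$ and is an assumption of the sketch (any test statistic can be plugged in):

```python
import random

def monte_carlo_p_value(triplets, err_fn, n_trials=10_000, seed=0):
    """Monte-Carlo p-value in the style of Algorithm 2.  `err_fn` maps a
    list of triplets to the test statistic (in the paper: the minimum
    number of errors over the parameter grid); each trial re-assigns the
    case label uniformly at random within every triplet by shuffling it."""
    rng = random.Random(seed)
    e0 = err_fn(triplets)
    q = 0
    for _ in range(n_trials):
        relabeled = []
        for triplet in triplets:
            t = list(triplet)
            rng.shuffle(t)            # random case position in the triplet
            relabeled.append(t)
        if err_fn(relabeled) <= e0:
            q += 1
    return (q + 1) / (n_trials + 1)   # the (Q+1)/(N+1) estimate
```

The $(Q+1)/(N+1)$ form keeps the estimate strictly positive and valid as a $p$-value even when no trial beats the observed statistic.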
However, our algorithm has fewer adjustments: it does not even choose a peak at each step, but mixes all the peaks in the same manner, and it does not choose the best parameters\nseparately for every time interval, but fixes them once for all the time periods. The\nprecise values of the errors and $p$-values are presented in Table~\\ref{tab:erpval}. The subscript $e$\ndenotes the half loss of a given algorithm, and the subscript $p$ denotes the $p$-values\nof a given algorithm. The $\\text{Min}_e$ column shows the minimum number of errors made\nby one of the combinations, the $p$ column shows the $p$-values for the method which chooses\nthe best combination for the current time period (see \\cite{Devetyarov2009}), $C^3_{1,e}$ shows the number of errors\nfor the combination $\\ln C -\\ln I_3$, $C^3_{2,e}$ shows the number of errors for the combination $\\ln C - \\frac{1}{2}\\ln I_3$, and $C^2_{e}$ shows the number of errors for the combination $\\ln C -\\ln I_2$.\nColumns $3_p$ and $2_p$ contain the $p$-values for peaks 3 and 2, respectively.\n\n\\begin{table}[h]\n\\caption{Number of errors and $p$-values for different algorithms}\\label{tab:erpval}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n$t$ & $|S_{t,6}|$ & $\\text{CA125}_e$ & $\\text{CA125}_p$ & $\\text{AA}_e$ & $\\text{AA}_p$ & $\\text{Min}_e$ & $p$ & $C_{1,e}^3$ & $C_{2,e}^3$ & $3_p$ & $C^2_e$ & $2_p$\\\\\n\\hline\n0\t&\t68\t&\t2\t&\t0.0001\t&\t2\t&\t0.0001\t&\t1\t&\t0.0001\t&\t3\t&\t2\t&\t0.0001\t&\t3\t&\t0.0001\t \\\\\n1\t&\t56\t&\t4\t&\t0.0001\t&\t4\t&\t0.0001\t&\t2\t&\t0.0001\t&\t5\t&\t4\t&\t0.0001\t&\t5\t&\t0.0001\t \\\\\n2\t&\t47\t&\t6\t&\t0.0001\t&\t5\t&\t0.0001\t&\t3\t&\t0.0001\t&\t7\t&\t5\t&\t0.0001\t&\t6\t&\t0.0001\t \\\\\n3\t&\t36\t&\t8\t&\t0.0001\t&\t8\t&\t0.0001\t&\t4\t&\t0.0001\t&\t9\t&\t7\t&\t0.0001\t&\t8\t&\t0.0001\t \\\\\n4\t&\t27\t&\t7\t&\t0.0001\t&\t7\t&\t0.0001\t&\t4\t&\t0.0001\t&\t8\t&\t6\t&\t0.0001\t&\t7\t&\t0.0001\t 
\\\\\n5\t&\t23\t&\t7\t&\t0.0008\t&\t5\t&\t0.0006\t&\t4\t&\t0.0006\t&\t7\t&\t6\t&\t0.0007\t&\t6\t&\t0.0004\t \\\\\n6\t&\t20\t&\t6\t&\t0.0010\t&\t5\t&\t0.0004\t&\t4\t&\t0.0028\t&\t6\t&\t7\t&\t0.0046\t&\t5\t&\t0.0010\t \\\\\n7\t&\t17\t&\t6\t&\t0.0071\t&\t4\t&\t0.0006\t&\t4\t&\t0.0141\t&\t5\t&\t6\t&\t0.0098\t&\t4\t&\t0.0017\t \\\\\n8\t&\t17\t&\t5\t&\t0.0021\t&\t3\t&\t0.0003\t&\t3\t&\t0.0019\t&\t4\t&\t5\t&\t0.0020\t&\t4\t&\t0.0020\t \\\\\n9\t&\t20\t&\t7\t&\t0.0042\t&\t6\t&\t0.0009\t&\t5\t&\t0.0076\t&\t5\t&\t6\t&\t0.0009\t&\t5\t&\t0.0010\t \\\\\n10\t&\t28\t&\t14\t&\t0.0503\t&\t7\t&\t0.0001\t&\t6\t&\t0.0003\t&\t6\t&\t8\t&\t0.0001\t&\t8\t&\t0.0001\t \\\\\n11\t&\t28\t&\t15\t&\t0.1028\t&\t9\t&\t0.0006\t&\t8\t&\t0.0042\t&\t8\t&\t9\t&\t0.0004\t&\t11\t&\t0.0008\t \\\\\n12\t&\t28\t&\t17\t&\t0.3164\t&\t11\t&\t0.0120\t&\t10\t&\t0.0585\t&\t10\t&\t11\t&\t0.0049\t&\t13\t&\t0.0033\t \\\\\n13\t&\t30\t&\t16\t&\t0.0895\t&\t10\t&\t0.0011\t&\t10\t&\t0.0168\t&\t10\t&\t11\t&\t0.0015\t&\t13\t&\t0.0007\t \\\\\n14\t&\t25\t&\t16\t&\t0.4661\t&\t10\t&\t0.0070\t&\t8\t&\t0.0304\t&\t10\t&\t11\t&\t0.0301\t&\t11\t&\t0.0015\t \\\\\n15\t&\t20\t&\t13\t&\t0.5211\t&\t8\t&\t0.0124\t&\t6\t&\t0.0464\t&\t8\t&\t9\t&\t0.0577\t&\t9\t&\t0.0022\t \\\\\n16\t&\t10\t&\t6\t&\t0.4406\t&\t6\t&\t0.6708\t&\t2\t&\t0.4101\t&\t6\t&\t6\t&\t0.5979\t&\t6\t&\t0.5165\t \\\\\n\\hline\n\\end{tabular}\n\\end{table}\nIn practice, one often chooses a suitable significance level for their particular task. 
If\nwe choose it at 5\\%, then we can see from the table that the CA125 classification is\nsignificant up to 9 months in advance of diagnosis (the $p$-values are less than 5\\%).\nAt the same time, the results for the peak combinations and for the AA are significant\nup to 15 months.\n\\section{Conclusion}\\label{sec:conclusion}\nOur results show that the CA125 criterion, which is the current standard for the\ndetection of ovarian cancer, can be outperformed, especially at early stages.\nWe have proposed a way to give probability predictions for the disease and\nshown that predicting this way we suffer less loss than other predictors based on\ncombinations of CA125 and peak intensities.\nWe performed another experiment to investigate the performance of our algorithm\nat different stages before diagnosis.\nWe found that the Aggregating Algorithm we use to mix combinations predicts better\nthan almost any particular combination.\nTo check that our results are not accidental, we calculated $p$-values under the null hypothesis that the peaks and CA125 do not give any\ninformation about the disease at a particular time before the diagnosis. Using our test statistic we get small $p$-values.\nThey show that this hypothesis can be rejected at the standard significance level of 5\\% for all time periods later than 16 months before diagnosis. Our test statistic produces $p$-values that are never worse than the $p$-values produced by the statistic proposed in \\cite{Devetyarov2009}. There are no other papers dealing with our data set. Other approaches to probability prediction of ovarian cancer using the CA125 criterion, based on the Risk of Ovarian Cancer algorithm (see \\cite{Skates2003}), require multiple statistical assumptions about the data and a much larger data set. 
Thus they are not comparable in our setting.\n\nAn interesting direction of future research is to consider the prediction of the probability of the disease for an individual patient, rather than grouping patients artificially into triplets.\n\n\\section{Acknowledgments}\\label{sec:acknowledgments}\nWe would like to thank Mike Waterfield, Ali Tiss, Celia Smith, Rainer Cramer, Alex Gentry-Maharaj, Rachel Hallett, Stephane Camuzeaux, Jeremy Ford, John Timms, Usha Menon, and Ian Jacobs for providing the data set and for useful discussions of the experiments and results. This work has been supported by EPSRC grant EP\/F002998\/1 ``Practical Competitive Prediction'', EU FP7 grant ``OPTM Biomarkers'', MRC grant G0301107 ``Proteomic Analysis of the Human Serum Proteome'', ASPIDA grant ``Development of new methods of conformal prediction with application to medical diagnosis'' from the Cyprus Research Promotion Foundation, Veterinary Laboratories Agency of DEFRA grant ``Development and application of machine learning algorithms for the analysis of complex veterinary data'', and EPSRC grant EP\/E000053\/1 ``Machine Learning for Resource Management in Next-Generation Optical Networks''.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}