\\section{Unconventional d.c. and optical conductivities}\n\\label{sec:intro}\n\nIt is an observed fact that several chemically diverse families of unconventional materials exhibit a low temperature resistivity that is linear in temperature, when tuned to what appear to be quantum critical points.\nIllustrative recent experimental data can be found for cuprates, pnictides, heavy fermions, ruthenates and organic superconductors -- see \\cite{Sachdev:2011cs} for an overview and references. While there may well be more than one explanation for this behavior, the universality of the temperature dependence of the resistivity seen in the phase diagrams of these materials strongly motivates the search for robust physical mechanisms that can reproduce the observations.\n\nIn this paper we show, in the context of the holographic correspondence, that the structure of superradiant instabilities of extremal black hole horizons leads universally to a linear in temperature resistivity when these systems are tuned to the quantum critical point mediating the instability. This mechanism will be described in some generality. We go on to illustrate the mechanism in a particular model that captures additional features of several of the unconventional materials of interest: the resistivity will be due to scattering off modes that are becoming unstable, and which are supported at nonzero momentum. One can take these modes to model the spin density wave and charge density wave instabilities appearing in the phase diagrams of \\cite{Sachdev:2011cs}.\n\nThat the resistivity is linear rather than, say, quadratic in temperature is only half of the mystery. 
Unlike conventional metals, many of the materials of interest fail to exhibit resistivity saturation as the temperature is increased. The resistivity increases unabated with temperature through the Mott-Ioffe-Regel limit. The materials are consequently known as bad metals \\cite{Emery:1995zz}. This fact becomes particularly confusing when considered in conjunction with the dependence of the in-plane (frequency dependent) optical conductivity on temperature. In conventional metals, the optical conductivity exhibits a Drude peak that broadens as the temperature is increased. Resistivity saturation occurs when the spectral weight is smeared out over the whole of the bandwidth and the Drude peak effectively disappears \\cite{hussey}. In several classes of unconventional materials that do not exhibit saturation, the Drude peak is accompanied by an extended tail that falls off slowly at large frequencies up to the bandwidth scale \\cite{basov}. At the would-be saturation temperature, the Drude peak is observed to melt into this broader feature \\cite{hussey, basov}. The d.c.$\\,$resistivity, however, continues to increase linearly in temperature with the same slope, irrespective of whether or not there is an associated Drude peak! The conundrum is served: Does the linear in temperature resistivity originate in Drude-like (i.e. momentum-relaxing) scattering or not?\n\nThe answer to this question is again likely to be material dependent. The picture of resistivity saturation outlined above may not be correct. Nonetheless, the characterization of bad metals as metals that do not exhibit a zero frequency collective mode encourages the notion that the resistivity might be controlled by quantum critical physics, presumably responsible for the extended tail in the spectral density, rather than be sensitive to the mechanism of momentum relaxation. 
Such a picture is likely to have trouble with the fact that at lower temperatures a very sharp Drude peak is observed on top of the broader feature, see e.g. \\cite{boris} for measurements in optimally doped YBCO. The mechanism of linear resistivity presented in this paper will be quantum critical in nature, and we will assume that the Drude peak has been swamped by the critical degrees of freedom.\n\nFigure \\ref{fig:optical} below illustrates the above discussion.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width = \\textwidth]{OpticalConductivity.pdf}\\caption{Theorist's schematic view of the optical conductivity in bad metals at lower temperatures (left) and higher temperatures (right). As the temperature is raised, spectral weight is shifted from the Drude peak into the broad tail and to interband energy scales. The linear in temperature d.c.$\\,$resistivity does not notice the melting of the Drude peak.\n\\label{fig:optical}}\n\\end{center}\n\\end{figure}\n\n\\section{Extremal horizons and BKT transitions}\n\\label{sec:bkt}\n\nIn quantum theories with a holographically dual description, the strongly interacting physics is described by classical gravitational dynamics in a dual spacetime with one extra spatial dimension. The extra dimension geometrically implements the renormalization group flow. In particular, the far interior of the spacetime describes the far IR, or lowest energy scales, of the dual quantum system \\cite{Hartnoll:2011fn}. In many holographic models placed at a finite charge density, $\\langle J^t \\rangle \\neq 0$, the far IR geometry is characterized by an emergent scaling symmetry that indicates a power law specific heat at low temperatures, and hence the presence of gapless degrees of freedom \\cite{Hartnoll:2011fn}. 
Working in two spatial dimensions for concreteness, the general scale invariant form of the metric is \\cite{Kachru:2008yh}\n\\begin{equation}\\label{eq:scalingIR}\nds^2_{\\text{IR}} = L^2_{\\text{IR}} \\left(\\frac{- dt^2 + d\\r^2}{\\r^2} + \\frac{dx^2 + dy^2}{\\r^{2\/z}} \\right) \\,.\n\\end{equation}\nThis spacetime admits the scaling symmetry $\\{t,\\r\\} \\to \\lambda \\{t,\\r\\}, \\vec x \\to \\lambda^{1\/z} \\vec x$. Therefore $z$ is the dynamical critical exponent. For simplicity we will not discuss here the more general class of metrics describing hyperscaling violation \\cite{Charmousis:2010zz, Huijse:2011ef}, although our considerations apply to those spacetimes also. Holographic IR scaling spacetimes typically arise over a finite range of parameter space and therefore describe critical phases.\n\nIn a scaling geometry of the form (\\ref{eq:scalingIR}), all the operators ${\\mathcal{O}}$ in the low energy theory have an energy scaling dimension $\\Delta$. This dimension determines the spectral weight (imaginary part of the retarded Green's function) in the regime $\\w \\ll T \\ll \\mu$ to be\n\\begin{equation}\\label{eq:im}\n\\lim_{\\w \\to 0}\\frac{1}{\\w} \\text{Im} \\, G_{{\\mathcal{O}} {\\mathcal{O}}}^R(\\w,T) \\sim T^{2 \\Delta - 2 - 2\/z}\\,.\n\\end{equation}\nHere the chemical potential $\\mu$ has been used to indicate the UV scale. Physics well below that scale is captured by the low energy spacetime (\\ref{eq:scalingIR}) and is amenable to dimensional analysis. We also used the fact that the spectral weight is odd in frequency for bosonic operators. In general, the above spectral weight is evaluated at zero momentum, $k=0$. A slight generalization is possible in the case of $z = \\infty$. In this case, space does not scale and so the momentum $k$ is dimensionless and can be nonzero. 
The scaling dimension becomes\nmomentum dependent: $\\Delta(k)$.\n\nAccording to the basic holographic dictionary \\cite{Witten:1998qj, Gubser:1998bc}, each operator ${\\mathcal{O}}$ is dual to a field $\\phi$ in the IR spacetime. For simplicity, to start with, consider the case in which $\\phi$ is a scalar field with mass $m$ that is not coupled to other fields at a linearized level. By solving the bulk wave equation in the geometry (\\ref{eq:scalingIR}) and reading off the scaling behavior from the solution as $\\rho \\to 0$, one\nimmediately finds\n\\begin{equation}\\label{eq:delta}\n\\Delta = \\frac{2 + z}{2 z} + \\nu \\equiv \\frac{2 + z}{2 z} + \\sqrt{L^2_\\text{IR} m^2 + \\frac{(2+z)^2}{4 z^2}} \\,.\n\\end{equation}\nThe parameter $\\nu$ introduced here will appear repeatedly below.\nIn the case of $z = \\infty$, the mass $m^2$ can be momentum dependent. In this expression we see that if the mass squared satisfies the generalized Breitenlohner-Freedman bound\n\\begin{equation}\\label{eq:bf}\nL^2_\\text{IR} m^2 > - \\frac{(2+z)^2}{4 z^2} \\,,\n\\end{equation}\nthen the scaling dimension $\\Delta$ is real.\n\nThe quantity $L^2_\\text{IR} m^2$ can be tuned by varying UV parameters. In particular, we can imagine tuning these parameters such that the mass squared drops below the bound (\\ref{eq:bf}), and the scaling dimension becomes complex. It is well understood by now that this triggers an interesting quantum phase transition in which the operator ${\\mathcal{O}}$ condenses once the mass squared of $\\phi$ becomes too negative. 
We will return to the nature of the transition very shortly, but first we notice that precisely at the critical point, where the square root in (\\ref{eq:delta}) vanishes and $\\nu = 0$, then from (\\ref{eq:im}) and (\\ref{eq:delta}) we have\n\\begin{equation}\\label{eq:critical}\n\\lim_{\\w \\to 0}\\frac{1}{\\w} \\text{Im} \\, G_{{\\mathcal{O}} {\\mathcal{O}}}^R(\\w) \\sim \\frac{1}{T} \\,.\n\\end{equation}\nThis expression, which has been previously emphasized in \\cite{Iqbal:2011aj, Vegh:2011aa}, will be at the heart of the linear resistivity above a quantum critical point that we will discuss in the following section. Previous interest in this expression is due to the fact that it resembles the spectral weight of the bosonic mode underlying the marginal Fermi liquid phenomenology of the cuprates \\cite{Varma:1989zz}. It is not quite the same, however, because in general (\\ref{eq:critical}) only holds at $k=0$. Even in the case of $z = \\infty$, because the dimension $\\Delta$ is $k$ dependent -- this is why these systems were termed semi-locally critical in \\cite{Iqbal:2011in}, rather than fully locally critical -- the full spectral density will have a nontrivial $k$ dependence, and (\\ref{eq:critical}) will only hold for one value of $k$ at a time. The absence of a $k$ dependence is crucial for the marginal Fermi liquid, as the mode is coupled to a Fermi surface, which has spectral weight at a nonzero $k=k_F$ that is set by UV dynamics. In contrast, our framework will operate entirely at the level of currents, which control the holographic charge dynamics at leading classical order in the bulk, and will not involve explicit discussion of Fermi surfaces.\n\nThe association of complex IR scaling dimensions to instabilities was first made in the context of holographic superconductors \\cite{Hartnoll:2008kx, Denef:2009tp, Gubser:2008pf}. 
In cases where the operator ${\\mathcal{O}}$ carries a charge, the instability can be understood as a cousin of the superradiant instabilities of charged black holes, driven by pair production of quanta near the horizon and leading to a discharging of the black hole \\cite{Hartnoll:2011fn}.\nIt was later realized that the quantum phase transition mediating this instability was of Berezinskii-Kosterlitz-Thouless (BKT) type. For instance, when the mass squared is just below the bound (\\ref{eq:bf}), the temperature below which the instability occurs scales like\n\\begin{equation}\\label{eq:tc}\nT_c \\sim \\mu \\, e^{- \\pi\/\\sqrt{-\\nu^2}} \\,.\n\\end{equation}\nSimilar exponential hierarchies control quantities such as the condensate just below the critical mass squared.\nSuch zero temperature BKT transitions were first discussed in \\cite{Kaplan:2009kr}.\nUnlike the conventional BKT transition, they can occur in any dimension and are not tied to an interpretation of vortex unbinding, but rather describe the merger of a UV and IR fixed point. These transitions were subsequently noted to be rather generic in holographic settings \\cite{Jensen:2010ga, Iqbal:2010eh} and to admit a `semi-holographic' description \\cite{Jensen:2011af, Iqbal:2011aj}, in which the only role of holography is to provide a critical IR sector in which the transition occurs \\cite{Faulkner:2010tq}. Our discussion in the following section will be essentially semi-holographic in nature, being independent of most details of the UV region of the bulk geometry. In \\S \\ref{sec:broader} of the discussion section we will take a further step back from holography and comment on the validity of the result we have just described for general BKT transitions.\n\nThere are two key points we would like the reader to take from the above. 
Firstly, that given a `critical phase' with an IR scaling symmetry described by the metric (\\ref{eq:scalingIR}) one can induce a quantum phase transition by tuning the dimension of an operator to become complex. Secondly, at the corresponding quantum critical point, the spectral density of this operator has the temperature dependence (\\ref{eq:critical}). This last statement holds at $k=0$ for finite $z$, and at some specific $k_\\star$ when $z = \\infty$.\n\n\\section{Mechanism of linear resistivity}\n\\label{sec:mechanism}\n\nThe d.c.$\\,$conductivity is given by\n\\begin{equation}\\label{eq:sigma}\n\\sigma = \\lim_{\\w \\to 0} \\frac{1}{\\w} \\, \\text{Im} \\, G^R_{J^x J^x}(\\w,T) \\,.\n\\end{equation}\nAt a nonzero charge density $\\langle J^t \\rangle$ and if momentum is conserved, this quantity is problematic because, in addition to the contribution (\\ref{eq:sigma}), there is a delta function in the dissipative conductivity at $\\w = 0$. In order to relax momentum over experimental timescales, the charge carriers must either be parametrically diluted or must interact with parametrically heavier degrees of freedom \\cite{Hartnoll:2012rj}. The result is the broadening of the delta function into a Drude peak. The contribution of the Drude peak to the d.c. conductivity is intimately connected to the mechanism by which momentum is relaxed. For instance, umklapp scattering in a Fermi liquid gives rise to the celebrated $T^2$ dependence of the resistivity. Instead, we would like the d.c.$\\,$conductivity to be dominated by the universal quantum critical dynamics underlying the spectral weight (\\ref{eq:critical}). This can happen if the Drude peak contribution is swamped by the zero frequency limit of an extended tail that arises from scattering off critical modes. 
As we discussed in \\S \\ref{sec:intro} above, this may be the case in bad metallic regimes.\n\nIt is sometimes asserted that the presence of a Drude peak is synonymous with a quasiparticle description of transport. This is not quite correct. The essential requirement for a Drude peak is a hierarchy of time scales, whereby the momentum relaxation timescale is much longer than any other timescale in the system. This is particularly clear in hydrodynamic or memory matrix approaches, e.g. \\cite{Hartnoll:2007ih, Hartnoll:2012rj}. In the presence of such a hierarchy, strongly correlated systems without a quasiparticle description will still exhibit a Drude peak. Conversely, \nthe absence of a Drude peak simply requires that momentum is being dumped by the charge carriers at a rate comparable to all other interactions. In such circumstances, the spectral weight from the delta function is transferred into the critical tail or indeed to interband energy scales. Strong interactions are presumably important here to maintain a metallic character and avoid localization \\cite{Emery:1995zz}. In the concrete model we consider in the following section, we assume that such a process is occurring in our system, without disrupting the momentum-conserving interactions that we consider, so that in effect we can ignore the delta function contribution to the conductivity. More generally, we can imagine that momentum-relaxing processes are already part of the critical system that is undergoing the quantum critical BKT transition.\n\nIf $J^x$ itself were the operator undergoing the BKT transition, then combining (\\ref{eq:critical}) and (\\ref{eq:sigma}) would directly give a linear in temperature resistivity at the critical point. Such a phase transition would correspond to the spontaneous generation of a uniform current and presumably requires spontaneous symmetry breaking of the global $U(1)$ symmetry. We are not aware of holographic, or other, models where this occurs. 
However, it is easy for the IR critical behavior (\\ref{eq:critical}) to get communicated to the current operator via operators that are irrelevant from the IR quantum critical point of view.\n\nThe IR scaling geometry (\\ref{eq:scalingIR}) is generically deformed by irrelevant operators that drive a renormalization group flow up towards the finite density UV fixed point or cutoff. In the IR scaling regime, before these irrelevant operators kick in, we can imagine diagonalizing the equations of motion for a generic perturbation of the background to obtain decoupled gauge invariant fields $\\Phi_I$. Near the boundary ($\\rho \\to 0$) of the IR geometry, these satisfy\n\\begin{equation}\\label{eq:GIR}\n\\Phi^I(\\rho) \\sim c^I \\Big( \\rho^{1+2\/z-\\Delta_I} + \\rho^{\\Delta_I} \\, {\\mathcal G}^R_I(\\w,T) \\Big) \\,.\n\\end{equation}\nHere ${\\mathcal G}^R_I(\\w,T)$ is the IR Green's function, the $c^I$ are constants, and $\\Delta_I$ is the IR dimension of the operator. We have allowed a temperature $T \\ll \\mu$ that only affects the IR spacetime. One of these dimensions $\\Delta_I$ is assumed to undergo a BKT transition of the form described in the previous section.\n\nAway from the IR geometry, the irrelevant operators will typically couple these perturbations. However, in the regime of interest $\\w,T \\ll \\mu$, this coupling does not introduce any additional non-analytic temperature or frequency dependence. In appendix \\ref{eq:matching} we show that a generalization of the usual matching procedure of e.g.\n\\cite{Faulkner:2011tm} implies that to leading order at low frequencies and temperatures\\footnote{To be precise, as explained in the appendix, equation (\\ref{eq:imgsum}) holds when the $\\nu_I$'s are real, that is, on the stable side of the quantum critical point. 
When some of the $\\nu_I$'s are imaginary, the general expression has a complicated logarithmic dependence on $T$ and $\\w$.}\n\\begin{equation}\\label{eq:imgsum}\n\\text{Im} \\, G^R_{J^x J^x}(\\w,T) = \\sum_I d^I \\, \\text{Im} \\, {\\mathcal G}^R_I(\\w,T) \\,.\n\\end{equation}\nHere the real coefficients $d^I$ will generically be nonzero if the irrelevant operators mix the IR mode $\\Phi_I$ with the gauge field $A_x$. This is by no means automatic; the model of the following sections will achieve the required mixing by combining several interesting ingredients. In more familiar condensed matter language, we can think of the direct coupling in (\\ref{eq:imgsum}) between the current and a fluctuating order parameter as a cousin of the Aslamazov-Larkin process \\cite{al}, in this case mediated by irrelevant operators. The key fact is that we couple the fluctuating field directly to the current, not going via e.g. a fermionic self-energy. It is now immediate that if one of these IR operators undergoes a BKT transition, then the spectral weight (\\ref{eq:critical}), plugged into the matching formula (\\ref{eq:imgsum}), leads to a linear in temperature resistivity at the critical point\n\\begin{equation}\\label{eq:linearT}\nr = \\frac{1}{\\sigma} \\sim T \\,.\n\\end{equation}\nNote that, according to (\\ref{eq:im}) and (\\ref{eq:delta}), the contribution from a general IR operator ${\\mathcal{O}}_I$ to the conductivity is\n\\begin{equation}\\label{eq:2nu}\n\\sigma \\sim T^{-1 + 2 \\nu_I} \\,.\n\\end{equation}\nRecall that $\\nu$ was defined in (\\ref{eq:delta}). Assuming we are in a stable phase, we see that the contributions of all the other operators with $\\nu_I > 0$ are subleading in the sum (\\ref{eq:imgsum}) compared to the critical operator with $\\nu = 0$. The above discussion is depicted in figure \\ref{fig:phased} below.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[height=270pt]{PhaseDiagram.pdf}\\caption{Schematic phase diagram. 
The BKT quantum phase transition occurs at the boundary of a quantum critical phase when the scaling dimension of an operator becomes complex, signaling a condensation instability. If the unstable operator is coupled to the current via irrelevant operators, then above the critical point the quantum critical contribution to the resistivity is linear in temperature. Close to the critical point, the mode becoming unstable has a strong effect on the d.c.$\\,$and optical conductivities.\n\\label{fig:phased}}\n\\end{center}\n\\end{figure}\n\nWhile (\\ref{eq:2nu}) allows the critical operator to dominate at the critical point, away from the critical point, on the disordered side, it implies that the exponent of the resistivity will be $1 - 2 \\nu < 1$. Here $\\nu$ is the exponent of the critical operator away from the critical point. This is in contrast to experiments in all the unconventional materials of interest, which show that when detuned from criticality, the exponent of the resistivity increases towards the Fermi liquid $T^2$ behavior \\cite{Sachdev:2011cs}. To capture this behavior we would need to increase the scope of our model. The simplest way to do this would be to combine the critical tail contribution we are discussing with a more conventional Fermi liquid Drude peak contribution. It may be that by coupling the critical operator to the Fermi liquid, along the marginal Fermi liquid lines of \\cite{Iqbal:2011aj, Vegh:2011aa} or otherwise, one can remove the $T^2$ contribution to the resistivity in the critical region of the phase diagram. For instance, very schematically, a form like\n\\begin{equation}\\label{eq:fudged}\n\\sigma \\sim \\nu \\, T^{-2} + T^{-1 + 2 \\nu} \\,,\n\\end{equation}\nwould start to look closer to the data.\n\nComplications with the Drude peak are removed if one looks at the optical conductivity. 
The scaling arguments above now imply that the dissipative conductivity scales like\n\\begin{equation}\\label{eq:sw}\n\\sigma \\sim \\w^{-1 + 2 \\nu} \\,.\n\\end{equation}\nThis gives the marginal Fermi liquid form\n\\begin{equation}\\label{eq:1w}\n\\sigma \\sim \\frac{1}{\\w} \\,,\n\\end{equation}\nat the critical point. This may be compatible with observations in e.g. YBCO \\cite{Varma:1989zz} and LSCO \\cite{lsco}. We discuss the optical conductivity in more detail below. It would be interesting to measure in detail the doping dependence of the power law tail of the optical conductivity in these materials, to assess the plausibility of a dependence like (\\ref{eq:sw}) close to the critical point. As we noted above, the expressions (\\ref{eq:sw}) and (\\ref{eq:2nu}) are strictly applicable only on the $\\nu > 0$ side of the quantum phase transition. It is a feature of these models that the d.c.$\\,$and optical conductivities have asymmetric behavior on opposite sides of the quantum critical point.\n\nThere may seem to be a tension in the fact that $J^x$ does not acquire a vacuum expectation value and yet its correlator couples at the lowest frequencies to the unstable mode according to (\\ref{eq:imgsum}). We will verify explicitly in our concrete model below that these two statements are compatible.\n\nThe mechanism of linear in temperature resistivity we have described is universal in the sense that it only depends on the onset of a holographic BKT quantum phase transition at the boundary of a quantum critical phase, combined with the presence of irrelevant operators that couple the IR critical operator to the current. The mechanism is independent of the details of the theory undergoing the transition and also of the UV completion of the critical theory. Beyond the tuning to the quantum critical point, no additional specification of dimensions of operators or dynamical critical exponents is necessary. 
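The mechanism can be compressed into one line by combining the formula (\\ref{eq:sigma}) for the conductivity with the matching result (\\ref{eq:imgsum}) and the scaling (\\ref{eq:2nu}) of the individual IR contributions:\n\\begin{equation}\n\\sigma = \\lim_{\\w \\to 0} \\frac{1}{\\w} \\, \\text{Im} \\, G^R_{J^x J^x}(\\w,T) \\sim \\sum_I d^I \\, T^{-1+2\\nu_I} \\;\\longrightarrow\\; \\frac{1}{T} \\quad (\\nu \\to 0) \\,,\n\\end{equation}\nso that $r = 1\/\\sigma \\sim T$ at the critical point.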
In the remainder of this paper we describe a specific holographic realization of the scenario that we have just outlined.\n\n\\section{Lattices and finite wavevector instabilities}\n\\label{sec:lattice}\n\nA concrete holographic model realizing the mechanism outlined above can be achieved by combining three interesting ingredients that have been the focus of recent discussion:\n\n\\begin{itemize}\n\n\\item Vectorial instabilities occurring at finite wavevector $k_\\star > 0$ \\cite{Nakamura:2009tf, Donos:2011bh, Donos:2011qt}.\n\n\\item A (semi-) locally quantum critical sector in which all momenta are critical\n\\cite{Sachdev:2010um, McGreevy:2010zz, Iqbal:2011in}.\n\n\\item A lattice that is irrelevant in the IR scaling regime \\cite{Hartnoll:2012rj, Horowitz:2012ky, Liu:2012tr}.\n\n\\end{itemize}\n\n\\noindent The essential idea is that the irrelevant coupling to the lattice will communicate the finite wavevector instability to the $k=0$ electrical current. The instability at $k_\\star$ can have critical scaling because of the local quantum criticality.\n\nThe most important effect of a lattice is to broaden the delta function in the conductivity \\cite{Hartnoll:2012rj} into a Drude peak. Impurities will also achieve this effect \\cite{Hartnoll:2008hs}. Nonetheless, many of the interesting materials are believed to be very clean, and indeed, the Fermi liquid $T^2$ resistivity observed over large ranges of temperatures in these materials, away from the critical regions, presumably originates from umklapp scattering off the lattice rather than impurities plus interactions \\cite{maslov}. As we have stressed repeatedly, however, in this work we are not interested in the Drude peak contribution to the conductivity. 
The role of the lattice for us will be to mix modes with different wavevectors.\n\nWe will consider the following 3+1 dimensional bulk model \\cite{Donos:2011bh}\n\\begin{align}\\label{lagra1}\n\\mathcal{L}=&{\\textstyle{\\frac12}} R\\,\\ast 1-{\\textstyle{\\frac12}} \\,\\ast d\\varphi\\wedge d\\varphi-V\\left(\\varphi\\right) \\ast 1- {\\textstyle{\\frac12}} \\,\\tau\\left(\\varphi\\right) F\\wedge\\ast F-{\\textstyle{\\frac12}} \\,\\vartheta\\left(\\varphi\\right) F\\wedge F\\, ,\n\\end{align}\nwhere $F=dA$. This Lagrangian describes Einstein-Maxwell theory coupled to a pseudoscalar field with both dilatonic and axionic couplings to the field strength. The corresponding equations of motion are given by\n\\begin{align}\\label{eomi}\n&R_{\\mu\\nu}=\\partial_\\mu \\varphi\\partial_\\nu \\varphi+g_{\\mu\\nu}\\,V-\\tau\\,\\left(\\frac{1}{4}g_{\\mu\\nu}\\,F_{\\lambda\\rho}F^{\\lambda\\rho} -F_{\\mu\\rho}F_{\\nu}{}^{\\rho}\\right) \\,, \\nonumber \\\\\n&d\\left(\\tau\\ast F+\\vartheta F\\right)=0 \\,, \\nonumber \\\\\n&d\\ast d\\varphi+V'*1+\\frac{1}{2}\\tau'\\,F\\wedge\\ast F+\\frac{1}{2}\\vartheta'\\,F\\wedge F=0\\, .\n\\end{align}\nWe will assume that the three functions $V$, $\\tau$ and $\\vartheta$ have the following expansions\n\\begin{align}\\label{expans}\nV=-6+\\frac{1}{2}m_{s}^{2}\\,\\varphi^{2}+\\dots,\\qquad\n\\tau=1-\\frac{n}{12}\\,\\varphi^{2}+\\dots,\\qquad\n\\vartheta=\\frac{c_{1}}{2\\sqrt{3}}\\,\\varphi+\\dots\\, .\n\\end{align}\nIt is sufficient to know the action for the pseudoscalar to quadratic order, as it will only appear as a perturbation of the background. In (\\ref{expans}) we have furthermore set the asymptotic $AdS_4$ radius to $L^2 = 1\/2$. 
In the Lagrangian (\\ref{lagra1}) we have already set both the Newton and Maxwell constants to unity.\nAll of these quantities can be scaled out of the equations of motion in this model.\n\nThe key feature of the theory (\\ref{lagra1}), to be described immediately below, is that it can develop vectorial instabilities, which involve the electric current. Closely related to this fact is that the instabilities of interest occur at nonzero wavevector $k_\\star$, avoiding the spontaneous generation of a homogeneous current, as we would expect. By subsequently introducing a lattice that becomes irrelevant in the IR and does not disrupt the critical phase, we will communicate the effects of this instability to the spectral weight of the current operator, leading to strong temperature and frequency dependence in the conductivity.\n\n\\subsection{IR spectrum of excitations and instability}\n\\label{sec:irspectrum}\n\nThe equations of motion (\\ref{eomi}) admit the following $AdS_{2}\\times \\mathbb{R}^{2}$ black hole solution\n\\begin{equation}\\label{eq:AdS2_bh}\nds_{4}^{2} = -f\\,dt^{2}+\\frac{1}{12} \\left(\\frac{dr^{2}}{f}+dx^{2}+dy^{2}\\right) \\,, \\qquad A=\\left(r-r_{+}\\right)\\,dt \\,, \\qquad\n\\varphi=0 \\,.\n\\end{equation}\nHere the emblackening factor is\n\\begin{equation}\nf=r^{2}-r_{+}^{2} \\,.\n\\end{equation}\nThe horizon of the black hole \\eqref{eq:AdS2_bh} is at $r=r_{+}$ and the temperature is $T = \\sqrt{3}\\,r_{+}\/\\pi$.\nThe boundary of $AdS_2$ is at $r \\to \\infty$. This will serve as our near horizon scaling geometry, as per our discussion in \\S \\ref{sec:bkt} above, which we have further placed at a finite temperature. That the effects of the temperature are restricted to the IR scaling geometry requires that $T \\ll \\mu$, the UV scale. The scaling regime described by $AdS_{2}\\times \\mathbb{R}^{2}$ has dynamical critical exponent $z=\\infty$ and an associated ground state entropy density. 
This ground state entropy has tainted the reputation of $AdS_{2}\\times \\mathbb{R}^{2}$ as a ubiquitous tool in the applied holography effort. (Semi) local quantum criticality with $z=\\infty$ is in fact compatible with a vanishing ground state entropy density if hyperscaling is violated \\cite{Hartnoll:2012wm}. Holographic scaling geometries with $z=\\infty$ are the only currently known holographic duals (away from the probe limit) that share at leading bulk classical order the basic property of Fermi liquids of having spectral weight at low energies but finite momentum. It seems conceivable that $AdS_{2}\\times \\mathbb{R}^{2}$ may have the last laugh.\n\nAs usual, perturbations about the background can be decomposed into transverse and longitudinal sectors that decouple from each other. The finite wavevector instability occurs in the transverse channel. We can write the transverse perturbations around the exact solution \\eqref{eq:AdS2_bh} as\n\\begin{align}\\label{fluctuations}\n\\delta g_{ty}=h_{ty}\\left(t,r\\right)\\sin\\left(kx\\right) \\,, \\qquad \\delta g_{xy}=h_{xy}\\left(t,r\\right)\\cos\\left(kx\\right) \\,, \\nonumber \\\\\n\\delta A_{y}=a\\left(t,r\\right)\\sin\\left(kx\\right) \\,, \\qquad \\qquad \\delta \\varphi= w\\left(t,r\\right)\\cos\\left(kx\\right) \\,.\n\\end{align}\nThe equations of motion \\eqref{eomi} then yield the system of linear coupled equations\n\\begin{align}\n-k\\,\\partial_{t}h_{xy}+k^{2}\\,h_{ty}-f \\left(2\\,\\partial_{r}a+\\partial_{r}^{2}h_{ty}\\right)=0 \\,, \\nonumber \\\\\n2\\,\\partial_{t}a+12kf\\,\\partial_{r}h_{xy}+\\partial_{t}\\partial_{r}h_{ty}=0 \\,, \\nonumber \\\\\n-f^{-1}\\,\\partial_{t}^{2}h_{xy}+12\\,\\partial_{r}\\left(f\\partial_{r}h_{xy} \\right)+kf^{-1}\\,\\partial_{t}h_{ty}=0 \\,, \\nonumber \\\\\n-f^{-1}\\,\\partial_{t}^{2}a+12\\,\\partial_{r}\\left(f\\partial_{r}a \\right)-12k^{2}\\,a+c_{1}k\\,w+12\\,\\partial_{r}h_{ty}=0 \\,, \\nonumber 
\\\n-f^{-1}\\,\\partial_{t}^{2}w+12\\,\\partial_{r}\\left(f\\partial_{r}w \\right)-\\left(12k^{2}+m_{s}^{2}+n\\right)\\,w+12c_{1}k\\,a=0 \\,.\n\\end{align}\nIntroducing the new field $\\phi_{xy}$ through\n\\begin{equation}\n\\partial_{t}\\phi_{xy}=f\\,\\partial_{r}h_{xy} \\,,\n\\end{equation}\nthe system of equations above can be recast as the linear system\n\\begin{equation}\\label{eq:system}\n\\left(\\Box_{2}-M^{2}\\right)\\mathbf{v}=0 \\,,\n\\end{equation}\nwith ${\\bf v}=(\\phi_{xy},a,w)$ and the mass matrix\n\\begin{equation}\\label{mmfirstmod}\nM^{2}=\\left(\\begin{array}{ccc}12\\, k^{2} & 2\\,k & 0 \\\\144\\,k & 24+12\\,k^{2} & -c_{1}k \\\\0 & -12\\,c_{1}k & 12\\,k^{2}+m_{s}^{2} + n\\end{array}\\right)\\, .\n\\end{equation}\nNote that the Maxwell field fluctuation decouples from the remainder at zero wavenumber, $k=0$.\nThe Laplacian appearing in equation \\eqref{eq:system} is with respect to the two dimensional metric\n\\begin{equation}\nds_{2}^{2}=-f\\,dt^{2}+\\frac{dr^{2}}{12\\,f} \\,.\n\\end{equation}\n\nThe linearly coupled system of equations \\eqref{eq:system} can be diagonalized to yield three independent modes\n\\begin{eqnarray}\n\\lefteqn{\\left(\\Box_{2}-\\mu_{i}^{2}\\right)\\,g_{i}=0}\\nonumber \\\\\n&& \\Rightarrow \\qquad \\partial_{r}^{2}g_{i}+\\frac{f^{\\prime}}{f}\\,\\partial_{r}g_{i}-\\left(\\frac{1}{12\\,f^{2}}\\partial_{t}^{2}+\\frac{1}{12\\,f}\\mu_{i}^{2} \\right)g_{i}=0 \\,,\n\\end{eqnarray}\nwith $\\mu_{i}^{2}$ being the three eigenvalues of the mass matrix \\eqref{mmfirstmod}. 
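The eigenvalues $\mu_{i}^{2}$ of the mass matrix \eqref{mmfirstmod} can be obtained numerically at any wavenumber. The following is a minimal Python sketch of this step; the function names and the parameter values in the usage note are illustrative conventions of ours, not quantities fixed by the text.

```python
import numpy as np

def mass_matrix(k, ms2, n, c1):
    """Mass matrix M^2 for the coupled transverse modes (phi_xy, a, w).
    ms2, n and c1 are the couplings appearing in the expansions of
    V, tau and vartheta."""
    return np.array([
        [12.0 * k**2,    2.0 * k,             0.0],
        [144.0 * k,      24.0 + 12.0 * k**2, -c1 * k],
        [0.0,           -12.0 * c1 * k,       12.0 * k**2 + ms2 + n],
    ])

def mu2(k, ms2, n, c1):
    """The three eigenvalues mu_i^2, sorted. Although M^2 is not
    symmetric, it is tridiagonal with non-negative products of paired
    off-diagonal entries, so its eigenvalues are real."""
    return np.sort(np.linalg.eigvals(mass_matrix(k, ms2, n, c1)).real)
```

At $k=0$ the matrix is diagonal and the eigenvalues reduce to $\{0,\,24,\,m_{s}^{2}+n\}$, making the decoupling of the Maxwell fluctuation at zero wavenumber explicit.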
If we go to frequency space by writing $g_{i}(t,r)=e^{-\imath\omega t} u_{i}(r)$, the solution with infalling boundary conditions at the horizon \cite{Son:2002sd, Hartnoll:2009sz} is then\n\begin{equation}\nu_{i}(r)=\left(\frac{2}{r_{+}}\right)^{\nu_{i}}\,\Gamma\left(a_{i} \right)\Gamma\left(1+\nu_{i}\right)f(r){}^{-\imath \frac{\omega}{4\sqrt{3}r_{+}}}r^{-a_{i}}\,{}_{2}F_{1}\left(\frac{a_{i}}{2},\frac{a_{i}+1}{2},1-\nu_{i};\frac{r_{+}^{2}}{r^{2}} \right)-\left(\nu_{i}\leftrightarrow -\nu_{i}\right) \,,\n\end{equation}\nwhere $a_{i}=\frac{1}{2}-\imath \frac{\omega}{2\sqrt{3}r_{+}}-\nu_{i}$ and $\nu_{i}=\frac{1}{2}\sqrt{1+\frac{1}{3}\mu_{i}^{2}}$.\nBy expanding the above solution near the boundary of $AdS_{2}$, i.e. as $r \to \infty$, we can read off the $AdS_{2}$ retarded Green's functions in the usual way (e.g. \cite{Faulkner:2011tm}) to obtain\footnote{The case $\nu_i = 0$, which will be of particular interest below, should properly be treated independently, with the solution to the wave equation expressed in terms of a Legendre function. Correct results are obtained by continuation of the general results to $\nu_i \to 0$. In particular, the spectral density $\text{Im}\, G^R \sim \omega\/T$ at $\nu_i = 0$.}\n\begin{equation}\n\mathcal{G}^R_{i}\left(\omega, T \right)=-\left(\frac{\pi}{2\sqrt{3}} T \right)^{2\nu_{i}}\frac{\Gamma\left(1-\nu_{i}\right)\Gamma\left(\frac{1}{2}-\imath\frac{\omega}{2\pi T}+\nu_{i}\right)}{\Gamma\left(1+\nu_{i}\right)\Gamma\left(\frac{1}{2}-\imath\frac{\omega}{2\pi T}-\nu_{i}\right)} \,. \label{eq:gads2}\n\end{equation}\nThese have the expected form that is determined by the $SL(2,{{\Bbb R}})$ symmetry of $AdS_{2}$. Again, see for instance \cite{Faulkner:2011tm}.
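As a quick numerical illustration of this $SL(2,{{\Bbb R}})$-covariant structure (our own check, with an arbitrary real exponent $\nu$), the snippet below evaluates (\ref{eq:gads2}) and verifies the scaling $\mathcal{G}^R(\lambda\omega,\lambda T)=\lambda^{2\nu}\mathcal{G}^R(\omega,T)$, which holds because $\mathcal{G}^R$ depends on $\omega$ and $T$ only through the ratio $\omega\/T$ and the overall prefactor $T^{2\nu}$.

```python
import numpy as np
from scipy.special import gamma  # the Gamma function, defined for complex arguments

def G_R(w, T, nu):
    # AdS2 retarded Green's function of eq. (eq:gads2)
    z = 0.5 - 1j*w/(2*np.pi*T)
    prefactor = -(np.pi*T/(2*np.sqrt(3)))**(2*nu)
    return prefactor * gamma(1 - nu)*gamma(z + nu) / (gamma(1 + nu)*gamma(z - nu))
```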
These are the IR Green's functions that will appear in the matching formula (\\ref{eq:imgsum}).\nThe perturbation includes the transverse current mode $\\d A_y(x)$, at nonzero momentum $k$. Below we will couple this mode to the homogeneous current using a lattice.\n\nFor small $\\omega$ we have\n\\begin{eqnarray}\n\\lefteqn{\\mathcal{G}^R_{i}\\left(\\omega\\right)=-\\left(\\frac{\\pi}{2\\sqrt{3}} T \\right)^{2\\nu_{i}}\\frac{\\Gamma\\left(1-\\nu_{i}\\right)\\Gamma\\left(\\frac{1}{2}+\\nu_{i}\\right)}{\\Gamma\\left(1+\\nu_{i}\\right)\\Gamma\\left(\\frac{1}{2}-\\nu_{i}\\right)}}\\nonumber \\\\\n&& +\\imath\\omega \\frac{1}{2}\\,\\left(\\frac{\\pi}{2\\sqrt{3}} \\right)^{2\\nu_{i}}\\,T^{2\\nu_{i}-1}\\frac{\\Gamma\\left(\\frac{1}{2}+\\nu_{i} \\right)\\Gamma\\left(1-\\nu_{i}\\right)}{\\Gamma\\left(\\frac{1}{2}-\\nu_{i}\\right)\\Gamma\\left(1+\\nu_{i}\\right)}\\,\\tan\\pi\\nu_{i} \\,.\n\\end{eqnarray}\nIf $\\nu_i$ is real, then taking the imaginary part we recover the scaling with temperature that we anticipated in (\\ref{eq:im}) above, using the expression (\\ref{eq:delta}) for $\\Delta$ in terms of $\\nu$.\nIn general, the eigenvalues of the matrix \\eqref{mmfirstmod} are slightly complicated functions of the wavenumber $k$, but are easily found numerically.\n\nTo illustrate the features of the spectrum of our theory, consider the special case where $m_{s}^{2}+n=0$.\n(We will in fact consider a different case for the numerics below.) 
In that case we have\n\begin{align}\n\nu_{1}=&\frac{1}{2}\sqrt{1+4\,k^{2}} \,, \nonumber \\\n\nu_{2}=&\frac{1}{2}\sqrt{5+4\,k^{2}-\frac{\sqrt{12}}{3}\sqrt{12+\left(24+c_{1}^{2}\right)k^{2}}} \,, \nonumber \\\n\nu_{3}=&\frac{1}{2}\sqrt{5+4\,k^{2}+\frac{\sqrt{12}}{3}\sqrt{12+\left(24+c_{1}^{2}\right)k^{2}}} \,.\label{eq:nus}\n\end{align}\nFrom the expressions above we can see that $\nu_{2}^{2}$ has two minima at\n\begin{equation}\nk^{\pm}_\text{min} = \pm\frac{1}{2\sqrt{12}}\,\frac{c_{1}\sqrt{48+c_{1}^{2}}}{\sqrt{24+c_{1}^{2}}} \,,\n\end{equation}\nat which\n\begin{equation}\n\nu_{2\,\text{min}} = \frac{1}{4}\sqrt{12-\frac{c_{1}^{2}}{3}-\frac{192}{24+c_{1}^{2}}} \,.\n\end{equation}\nFor $c_{1}>2\sqrt{6}$ we see that there is a range of $k$ for which $\nu_{2}$ is imaginary.\nThese are the finite wavenumber instabilities that we shall use. For our purposes, it does not matter whether or not the range of unstable momenta extends down to $k=0$. What is important is that there are finite wavenumber instabilities involving the Maxwell field $\d A_y$.\n\nFor the set of valid occupancy measures $\caD \triangleq \{ \rho_\pi : \pi \in \Pi \}$, \lm{lm:occupancy} shows the one-to-one correspondence between $\Pi$ and $\caD$:\n\n\begin{lemma}[Theorem 2 of \cite{syed2008apprenticeship}]\label{lm:occupancy}\n If $\rho \in \caD$, then $\rho$ is the occupancy measure for $\pi_\rho(a|s) \triangleq \frac{\rho(s,a)}{\sum_{a'}\rho(s,a')}$, and $\pi_\rho$ is the only policy whose occupancy measure is $\rho$.\n\end{lemma}\n\vspace{-2pt}\n\nThus, the occupancy measure can be used as an alternative in the general IL objective shown in \eq{eq:il}.
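In the tabular setting this correspondence is easy to verify explicitly. The sketch below (a toy MDP of our own choosing) computes the occupancy measure by solving the linear visitation equations, and then recovers the policy via $\pi_\rho(a|s)=\rho(s,a)\/\sum_{a'}\rho(s,a')$ as in \lm{lm:occupancy}.

```python
import numpy as np

def occupancy(pi, P, mu0, gamma):
    # rho(s,a) = d(s) * pi(a|s), where the discounted state visitation d
    # solves d = mu0 + gamma * P_pi^T d (unnormalized convention)
    S = P.shape[0]
    P_pi = np.einsum('sa,sat->st', pi, P)             # state kernel under pi
    d = np.linalg.solve(np.eye(S) - gamma*P_pi.T, mu0)
    return d[:, None] * pi

def policy_from_occupancy(rho):
    # Lemma (lm:occupancy): pi_rho(a|s) = rho(s,a) / sum_a' rho(s,a')
    return rho / rho.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)  # transition kernel
mu0 = np.ones(S)/S                                            # initial distribution
pi = rng.random((S, A)); pi /= pi.sum(axis=1, keepdims=True)  # a random policy

rho = occupancy(pi, P, mu0, gamma)
```

Recovering the policy from $\rho$ returns exactly the $\pi$ we started from, in line with the uniqueness statement of the lemma.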
Applying KL divergence as the distance metric, we could construct the IL objective as minimizing the reverse KL divergence $\kld$ between the agent's occupancy measure and the expert's:\n\begin{equation}\label{eq:kl-il}\n \pi^* = \argmin_{\pi} \kld ( \rho_\pi \| \rho_{{\pi_E}})~.\n\end{equation}\n\begin{proposition}\label{prop:kl-il}\n The optimal solution for the reverse KL divergence objective of IL shown in \eq{eq:kl-il} is that $\pi^*={\pi_E}$.\n\end{proposition}\n\vspace{-6pt}\n\begin{proof}\n The solution of \eq{eq:kl-il} is unique since $\kld$ is convex in $\rho_\pi$, and it achieves the optimal value iff $\rho_\pi = \rho_{{\pi_E}}$. According to \lm{lm:occupancy}, we can recover the policy ${\pi_E}$ if we can recover the occupancy measure of the expert policy.\n Thus the optimal solution is that $\pi^* = {\pi_E}$.\n\end{proof}\nModeling the normalized occupancy measure as a Boltzmann distribution, the density can be represented by an EBM as\n\begin{equation}\label{eq:ebm}\n (1-\gamma)\rho_\pi(s,a) = \frac{1}{Z}\exp(-E(s,a))~,\n\end{equation}\nwhich leads to the following proposition.\n\begin{proposition}\label{prop:eb-il}\n The reverse KL divergence objective of IL \eq{eq:kl-il} is equivalent to the following Energy-Based Imitation Learning (EBIL) objective \eq{eq:eb-il}:\n\iffalse\nThus we take \eq{eq:ebm} into \eq{eq:kl-il} for both policy $\pi$ and ${\pi_E}$, we get that: \n\begin{equation}\label{eq:kl-induce}\n \begin{aligned}\n \kld ( \rho_\pi \| \rho_{{\pi_E}}) &= \sum_{s,a} \rho_\pi(s,a) \log \frac{\rho_\pi(s,a)}{\rho_{\pi_E}(s,a)}\\\n &= \sum_{s,a}\rho_\pi\left( \log{\rho_\pi(s,a)} - \log{\frac{e^{-E_{\pi_E}(s,a)}}{Z'}} \right )\\\n &= \sum_{s,a}\rho_\pi\left(E_{\pi_E}(s,a)+\log{\rho_\pi(s,a)} + \log{Z'}\right )\\\n &= \bbE_\pi \left[E_{\pi_E}(s,a)\right ]+\sum_{s,a}\rho_\pi\log{\rho_\pi(s,a)} + \text{const}\\\n &= 
\\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ]+\\sum_{s,a}\\rho_\\pi\\log{\\rho_\\pi(s,a)} -\\log{\\sum_{s,a'}\\rho(s,a')}+ \\text{const}\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ]+\\sum_{s,a}\\rho_\\pi\\log{\\left [ \\rho_\\pi(s,a)\/\\sum_{a'}\\rho(s,a')\\right ]}+ \\text{const}\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ] - \\overline{H}\\left (\\rho_\\pi(s,a) \\right ) + \\text{const}\n \n \n \n \n \n ~,\n \\end{aligned}\n\\end{equation}\nwhere $\\overline{H}=-\\sum_{s,a}\\rho_\\pi\\log{\\rho_\\pi(s,a)\/\\sum_{a'}\\rho(s,a')}$ is the entropy of the occupancy measure, $E_{\\pi_E}$ is the EBM of policy ${\\pi_E}$ and $Z'$ is its partition function.\n\\begin{lemma}[Lemma 3 of \\cite{ho2016generative}]\n $\\overline{H}$ is strictly concave, and for all $\\pi \\in \\Pi$ and $\\rho \\in \\caD$, we have $H(\\pi) = \\overline{H}(\\rho_\\pi)$ and $\\overline{H}(\\rho)=H(\\pi_\\rho)$.\n\\end{lemma}\nThus, \\eq{eq:kl-il} in the end lead to the objective function of Energy-Based Imitation Learning (EBIL) as:\n\\fi\n\\begin{equation}\\label{eq:eb-il}\n\\pi^* = \\argmax_{\\pi} \\bbE_\\pi \\left[-E_{\\pi_E}(s,a)\\right ] + H(\\pi)~.\n\\end{equation}\n\\end{proposition}\n\\vspace{-6pt}\nThe proof of \\prop{prop:eb-il} is in \\ap{ap:eb-il}. A key observation is that \\eq{eq:eb-il} provides exactly the same form as the objective of MaxEnt RL shown in \\eq{eq:maxent-rl}, and thus we can just estimate the energy of expert data as a surrogate reward function without involving the intractable partition function $Z$\\footnote{One may notice that according to \\lm{lm:occupancy}, we can simply recover the expert policy directly through $\\pi^*=\\exp(-E(s,a))\/\\sum_{a'}{\\exp{(-E(s,a'))}}$. However, this may be hard to generalize to continuous or high-dimensional action space in practice. For more discussions one can refer to \\se{ap:ebil-sac}.}. 
Generally, we can choose the reward function as $r(s,a)=h(-E_{\pi_E}(s,a))$ where $h$ is a monotonically increasing linear function, so that the objective of \eq{eq:eb-il} still holds. \n\nTherefore, EBIL, which learns with MaxEnt RL using the expert energy function as the reward, aims to minimize the reverse KL divergence between the agent's occupancy measure and the expert's. It is worth noting that if we remove the entropy term to construct a standard RL objective, then it will collapse into minimizing the cross entropy of the occupancy measure rather than the KL divergence.\n\n\subsection{Expert Energy Estimation with Demonstrations}\n\nAs described above, our reward function is determined by $E_{{\pi_E}}(s,a)$, a learned energy function of the occupancy measure. In this section, we elaborate on how to estimate $E_{{\pi_E}}(s,a)$ from expert demonstrations. \nSpecifically, in this paper, we leverage Deep Energy Estimator Networks (DEEN)~\cite{saremi2018deep}, a scalable and efficient algorithm to directly estimate the energy of the expert's occupancy measure via a differentiable framework without involving the intractable estimation of the normalizing constant. Then we can take the estimated energy as the surrogate reward to train the agent policy.\n\nFormally, let the random variable $X=g(s,a)\sim g(\rho_{\pi_E}(s,a))$. \nLet the random variable $Y$ be the noisy observation of $X$ such that $y\sim x+N(0, \sigma^2I)$, \textit{e.g.}, $y$ is derived from samples $x$ by adding white Gaussian noise $\xi\sim N(0, \sigma^2I)$.
\nThe empirical Bayes least squares estimator, \textit{i.e.}, the optimal denoising function $g(y)$ for the white Gaussian noise, is solved as\n\begin{equation}\n\begin{aligned}\label{eq:denoise}\n g(y) = y + \sigma^2\nabla_y\log{p(y)}~.\n\end{aligned}\n\end{equation}\nSuch a function can be trained with a feedforward neural network $\hat{g}$ by denoising each sample $y_i \in Y$ to recover $x_i \in X$, which can be implemented with a Denoising Autoencoder (DAE)~\cite{vincent2008extracting}. Then we can use the DAE $\hat{g}$ to approximate the score function $\nabla_y \log p(y)$ of the corrupted distribution by $\nabla_y \log p(y) \propto \hat{g}(y) - y$~\cite{robbins1955empirical,miyasawa1961empirical,raphan2011least}. However, one should note that the DAE does not provide the energy function but instead approximates the score function -- the gradient of $\log \rho_\pi(s,a)$ -- which cannot be directly adopted as the reward function.\n\nThus, in order to estimate the EBM of the expert's state-action pairs provided through demonstration data, we explicitly parameterize the energy function $E_\theta(y)$ with a neural network. As shown in \cite{saremi2018deep}, such a network, called DEEN, can be trained by minimizing the following objective:\n\begin{equation}\label{eq:deen}\n \argmin_{\theta} \sum_{x_i\in X, y_i\in Y} \left \| x_i - y_i + \sigma^2\frac{\partial E_\theta(y=y_i)}{\partial y} \right \|^2~,\n\end{equation}\nwhich enforces the score-function relation $\nabla_y \log p(y) = -\partial E_\theta(y)\/\partial y$ implied by \eq{eq:denoise}. It is worth noting that the EBM estimates the energy of the noisy samples. This can be seen as a Parzen window estimation of $p(x)$ with variance $\sigma^2$ as the smoothing parameter~\cite{saremi2019neural,vincent2011connection}. A minor caveat is that \eq{eq:deen} requires the samples (state-action pairs) to be continuous so that the gradient can be accurately computed.
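To make \eq{eq:deen} concrete, here is a minimal numpy sketch (a toy of our own, not the paper's network): we fit a deliberately simple quadratic energy $E_\theta(y)=(y-\theta)^2\/2$, whose derivative is available in closed form, by gradient descent on the DEEN loss over noisy one-dimensional samples; the learned $\theta$ settles near the data mean, so low energy marks the region the data occupies.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.5
x = rng.normal(2.0, 0.3, size=2000)            # clean "expert" samples
y = x + rng.normal(0.0, sigma, size=x.shape)   # noisy observations

theta = 0.0   # parameter of the toy energy E(y) = (y - theta)^2 / 2

def dE_dy(y, theta):
    # derivative of E(y) = (y - theta)^2 / 2 with respect to y
    return y - theta

lr = 0.5
for _ in range(500):
    # DEEN residual: x - y + sigma^2 * dE/dy, squared and averaged
    r = x - y + sigma**2 * dE_dy(y, theta)
    grad = np.mean(2.0 * r * (-sigma**2))      # d(loss)/d(theta)
    theta -= lr * grad
```

In the paper's setting $E_\theta$ is a neural network and the samples are state-action pairs, but the loss and its gradient have exactly this structure.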
Actually, EBIL can be easily extended to discrete spaces by using other energy estimation methods, \textit{e.g.}, Noise Contrastive Estimation~\cite{gutmann2010noise}. In this work, we concentrate on the continuous space and leave the discrete version for future work.\n\nIn practice, we learn the EBM of expert data from off-line demonstrations and construct the reward function, which remains fixed throughout training to help the agent learn its policy with a normal RL procedure. Specifically, we construct the surrogate reward function $\hat{r}(s,a)$ as follows:\n\begin{equation}\label{eq:reward}\n \hat{r}(s,a) = h(-E_{{\pi_E}}(s,a))~,\n\end{equation}\nwhere $h(x)$ is a monotonically increasing linear function, which can be specified for different environments.\nFormally, the overall EBIL algorithm is presented in \alg{alg:EBIL} of \ap{ap:algo}.\n\section{Discussions}\n\label{sec:discussion}\n\nIn this paper, we propose to estimate the energy function from expert demonstrations directly, then regard it as the surrogate reward function to force the agent to learn a good policy that can match the occupancy measure of the expert. Interestingly, MaxEnt IRL can be seen as a special implementation of EBMs, which models the expert demonstrations as a Boltzmann distribution of the cost \/ reward function, and tries to extract the optimal policy from it. In this section, we theoretically clarify the relation between EBIL and MaxEnt IRL, and show that EBIL actually can be seen as a simplified and efficient solution for MaxEnt IRL.\n\nIRL aims to recover a cost or reward, under which the set of demonstrations is near-optimal. However, the solution is underdetermined. To that end, MaxEnt IRL resolves the reward ambiguity problem in normal IRL by employing the principle of maximum entropy~\cite{ziebart2008maximum,ziebart2010modeling,bloem2014infinite}, which introduces probabilistic models to explain suboptimal behaviors as noise.
More specifically, MaxEnt IRL models the paths in demonstrations using a Boltzmann distribution, where the energy is given by the unknown reward function $\hat{r}^*$:\n\begin{equation}\n\label{eq:maxent-irl}\n p_{\hat{r}^*}(\tau) = \frac{1}{Z}\exp{(\hat{r}^*(\tau))}~,\n\end{equation}\nwhere $\tau=\{ s_0, a_0, \cdots, s_T, a_T \}$ is a trajectory of state-action pairs and $T$ is its length; $\hat{r}^*(\tau)=\sum_{t}\hat{r}^*(s_t,a_t)$ is the reward function, under which the expert demonstrations are optimal; $Z$ is the partition function.\footnote{Note that \eq{eq:maxent-irl} is formulated under the deterministic MDP setting. A general form for stochastic MDPs is derived in \cite{ziebart2008maximum,ziebart2010modeling} yet admits a similar analysis: the probability of a trajectory is decomposed as the product of conditional probabilities of the states $s_t$, which can factor out of all likelihood ratios since they are not affected by the reward function.} Similar to other EBMs, the parameters of the reward function are optimized to maximize the likelihood of expert trajectories.\n\nUnder this model, ultimately we hope that our policy generates trajectories with probability increasing exponentially in the return, and we can obtain the desired optimal trajectories with the highest likelihood. Following previous work, we focus on maximum causal entropy IRL \cite{ziebart2008maximum,ziebart2010modeling}, which aims to maximize the entropy of the distribution over paths under feature-matching constraints, which can be regarded as matching the reward.
Formally, maximum causal entropy IRL can be represented as the following optimization problem~\cite{ho2016generative}:\n\begin{equation}\label{eq:irl-cas}\n\begin{aligned}\n\hat{r}^* &= \argmax_{\hat{r}} \bbE_{{\pi_E}}\left [\hat{r}(s,a) \right ] - \left ( \max_{\pi\in\Pi} \bbE_{\pi}\left [\hat{r}(s,a)\right ] + H(\pi) \right ) ~,\n\end{aligned}\n\end{equation}\nwhere $H(\pi) := \mathbb{E}_\pi [-\log \pi (a|s)]$ is the causal entropy~\cite{bloem2014infinite} of the policy $\pi$.\n\n\subsection{EBIL is a Dual Problem of MaxEnt IRL}\n\nNow we first illustrate that EBIL (\eq{eq:kl-il}) is a dual of the above MaxEnt IRL problem. The proof sketch is to show that EBIL is an instance of the occupancy measure matching problem, which has been proven to be a dual problem of MaxEnt IRL.\n\n\begin{lemma}\label{lm:dual}\nIRL is a dual of the following occupancy measure matching problem and the induced optimal policy is the primal optimum which matches the expert's occupancy measure $\rho_{\pi_E}$:\n\begin{equation}\label{eq:irl-dual}\n \min_{\rho_\pi\in\caD} -\overline{H}(\rho_\pi) \text{ subject to } \rho_\pi(s,a) = \rho_{\pi_E}(s,a) \text{ } \forall s \in \caS, a \in \caA~.\n\end{equation}\n\end{lemma}\n\n\begin{proposition}\label{prop:ebil-dual}\n The EBIL objective \eq{eq:kl-il} is an instance of the occupancy measure matching problem \eq{eq:irl-dual}.\n\end{proposition}\n\nThe proof of \lm{lm:dual} can be found in Section 3 of \cite{ho2016generative}, and the proof of \prop{prop:ebil-dual} is shown in \ap{ap:ebil-dual}. Combining \lm{lm:dual} and \prop{prop:ebil-dual}, we have that EBIL is a dual problem of MaxEnt IRL. \n\n\subsection{EBIL is More Efficient than IRL}\nNow we are going to illustrate why EBIL is a more efficient solution.\nRecall that the intuition of MaxEnt IRL is to regard the demonstrations as a Boltzmann distribution of the reward function, and then induce the optimal policy.
Suppose that we have already recovered the optimal reward function $\hat{r}^*$; then the optimal policy is induced by the following practical forward MaxEnt RL procedure:\n\begin{equation}\label{eq:rl-hr}\n \pi^* = \argmax_{\pi} \bbE_\pi\left [\hat{r}^*(s,a)\right ]+ H(\pi)~. \n\end{equation}\nNow we further demonstrate that EBIL seeks the same policy, but through a simplified and more efficient training procedure.\n\n\begin{proposition}\label{prop:tau-kl}\n Let $\tau$ and $\tau_E$ denote trajectories sampled by the agent and the expert respectively, and suppose we have the optimal reward function $\hat{r}^*$; then the policy $\pi^*$ induced by $\hat{r}^*$ is the optimal solution to the following optimization problem: \n \begin{equation}\label{eq:tau-kl}\n \begin{aligned}\n \min_\pi \kld(p(\tau)\|p(\tau_E))~.\n \end{aligned}\n \end{equation}\n\end{proposition}\n\n\begin{proposition}\label{prop:kl-kl}\n The optimization problem \eq{eq:tau-kl} is equivalent to the KL divergence IL objective \eq{eq:kl-il} and the EBIL objective \eq{eq:eb-il}.\n\end{proposition}\n\nThe proofs of these two propositions are in \ap{ap:tau-kl} and \ap{ap:kl-kl} respectively. These two propositions together reveal that the optimal policy obtained from EBIL is the same as the one from MaxEnt IRL when the optimal reward function is recovered. However, the latter is indirect and much more computationally expensive. Fundamentally, as shown in \eq{eq:irl-cas}, IRL methods in fact aim to recover the optimal reward function (which serves as an EBM) by maximum likelihood on the expert trajectories instead of directly learning the policy, and cannot avoid estimating the non-trivial partition function $Z$, which is always hard, especially in high-dimensional spaces. The SoTA IRL method, AIRL~\cite{fu2017learning}, as we show in \se{sec:one-domain} and \ap{ap:airl}, actually does not recover the energy.
By contrast, EBIL is essentially a fixed-reward-based method that seeks to learn the policy guided by the pre-estimated energy of the expert's occupancy measure, which is much more efficient and allows more flexible and general choices for training the expert's EBM.\n\nAs a supplementary statement, \cite{finn2016connection} reveals the close relationship among Guided Cost Learning~\cite{finn2016guided} (a sample-based algorithm of MaxEnt IRL), GAIL and EBMs. Besides, \cite{ghasemipour2019divergence} also presents a unified perspective on previous IL algorithms and discusses a divergence minimization view similar to ours shown in \se{subsec:EBIL}. Therefore, all of these methods show connections among each other.\n\section{Related Work}\n\nInstead of alternately updating the policy and the reward function as in IRL and GAIL, many recent works in IL aim to learn a fixed reward function directly from expert demonstrations and then apply a normal reinforcement learning procedure with that reward function. Although this idea can be found inherently in the earlier GMMIL work~\cite{kim2018imitation}, which utilizes the maximum mean discrepancy as the distance metric to guide training,\nit was recently formalized by Random Expert Distillation (RED) \cite{wang2019red}, which employs the idea of Random Network Distillation\n\cite{burda2018rnd} to estimate the support of the expert policy, and computes the reward function from the loss of fitting a random neural network.
\nIn addition, Disagreement-Regularized Imitation Learning \n\cite{brantley2020disagreementregularized} constructs the reward function using the disagreement in the predictions of an ensemble of policies trained on the demonstration data, which is optimized together with a supervised behavioral cloning cost.\nInstead of using a learned reward function, Soft-Q Imitation Learning\n\cite{reddy2019softqil} applies fixed constant rewards, setting positive rewards for the expert state-actions and zero rewards for all others, and is optimized with the off-policy SQL algorithm~\cite{haarnoja2018soft}.\n\nOur EBIL relies highly on EBMs, which have played an important role in a wide range of tasks including image modeling, trajectory modeling and continual online learning \cite{Du2019generalization}. \nThanks to these appealing features, EBMs have been introduced into the RL literature in several ways, for instance, parameterized as value functions~\cite{sallans2004valueEBM}, employed in the actor-critic framework~\cite{heess2012acEBMs}, applied to MaxEnt RL~\cite{haarnoja2017reinforcement} and model-based RL regularization~\cite{boney2019regularizing}. \nHowever, EBMs are often difficult to train due to the partition function \cite{finn2016connection}. Nevertheless, recent works have tackled the problem of training large-scale EBMs on high-dimensional data, such as DEEN~\cite{saremi2018deep}, which is applied in our implementation. Beyond DEEN, there remain plenty of choices for efficiently training EBMs \cite{gutmann2012noise,Du2019generalization,nijkamp2019learn}. \n\section{Experiments}\n\n\subsection{Synthetic Task}\n\label{sec:one-domain}\n\n\begin{wrapfigure}{r}{0.4\textwidth}\n\vspace{-7pt}\n\includegraphics[width=1\linewidth]{figs\/KL-crop.pdf}\n\vspace{-2pt}\n\caption{The KL divergence between the agent's trajectories and the expert's during the learning procedure, which indicates that EBIL is much more stable than the other methods.
The blue dashed line represents the converged result of EBIL.}\n\vspace{-14pt}\n\label{fig:kl-div}\n\end{wrapfigure}\n\nIn the synthetic task, we want to evaluate the qualitative performance of different IL methods by displaying the heat map of the learned reward and sampled trajectories. \nAs analyzed in Section~\ref{sec:ebil} and \ref{sec:discussion}, EBIL is capable of guiding the agent to recover the expert policy and correspondingly generate high-quality trajectories. To demonstrate this point, we evaluate EBIL along with its counterparts (GAIL~\cite{ho2016generative}, AIRL~\cite{fu2017learning} and RED~\cite{wang2019red}) on a synthetic environment where the agent tries to move in a one-dimensional space. \nSpecifically, the state space is $[ -0.5, 10.5 ]$ and the action space is $[-1, 1]$. The environment initializes the state at $0$, and we set the expert policy as a static rule-based policy: $\pi_E = {\mathcal{N}}(0.25, 0.06)$ when the state $s\in [ -0.5, 5 )$, and $\pi_E = {\mathcal{N}}(0.75, 0.06)$ when $s\in [ 5, 10.5 ]$. The sampled expert demonstration contains $40$ trajectories with up to $30$ timesteps in each one. \nFor all methods, we choose SAC \cite{haarnoja2018soft} as the learning algorithm.\n\nWe plot the KL divergence between the agent's and the expert's trajectories during the training procedure in \fig{fig:kl-div} and visualize the final estimated rewards with corresponding induced trajectories in \fig{fig:heat_maps}.\nAs illustrated in \fig{fig:ebil_heat} and \fig{fig:kl-div}, the reward estimated by EBIL successfully captures the likelihood of the expert trajectories, and the induced policy quickly converges to the expert policy. By contrast, GAIL requires a noisy adversarial process to correct the policy.
As a result, although GAIL achieves comparable final performance to EBIL~(\fig{fig:gail_heat}), it suffers from slow, unstable training as shown in \fig{fig:kl-div} and assigns arbitrary rewards in some regions of the state-action space.\nIn addition, as suggested in \fig{fig:airl_heat} and \fig{fig:red_heat} respectively, under this simple one-dimensional domain, AIRL does not in fact recover the energy as meaningful rewards, and RED suffers from a diverged reward and fails to imitate the expert.\n\n\n\n\begin{figure*}[!t]\n\centering\n\subfigure[Expert Trajectories]{\n\begin{minipage}[b]{0.28\linewidth}\n\label{fig:expert_heat}\n\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/expert-crop.pdf}\n\vspace{-0.8pt}\n\end{minipage}\n}\n\hspace{-10pt}\n\subfigure[EBIL]{\n\begin{minipage}[b]{0.155\linewidth}\n\label{fig:ebil_heat}\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/ebil_reward_heat_40-crop.pdf}\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/ebil_agent_heat_40_667-crop.pdf}\n\end{minipage}\n}\n\subfigure[GAIL]{\n\begin{minipage}[b]{0.155\linewidth}\n\label{fig:gail_heat}\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/gail_heat_40_19890-crop.pdf}\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/gail_agent_heat_40_15000-crop.pdf}\n\end{minipage}\n}\n\subfigure[AIRL]{\n\begin{minipage}[b]{0.155\linewidth}\n\label{fig:airl_heat}\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/airl_heat_40_16000_1-crop.pdf}\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/airl_agent_heat_40_16000-crop.pdf}\n\end{minipage}\n}\n\subfigure[RED]{\n\begin{minipage}[b]{0.155\linewidth}\n\label{fig:red_heat}\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/rnd_heat_40_499-crop.pdf}\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/rnd_agent_heat_40_19999-crop.pdf}\n\end{minipage}\n}\n\vspace{-7pt}\n\caption{Heat maps of the expert trajectories (leftmost), the (final)
\textit{estimated rewards} recovered by different methods (top) and the corresponding \textit{induced policy} (bottom). The horizontal and the vertical axis denote the \textit{state space} and the \textit{action space} respectively. The red dotted line represents the position where the agent should change its policy. \nIt is worth noting that EBIL and RED both learn fixed reward functions, while GAIL and AIRL iteratively update the reward signals. We do not compare BC since it learns the policy via supervised learning without recovering any reward signals.}\n\vspace{-20pt}\n\label{fig:heat_maps}\n\end{figure*}\n\n\n\n\subsection{Mujoco Tasks}\n\nWe further test our method on five continuous control Mujoco benchmark environments: Humanoid, Hopper, Walker2d, Swimmer and InvertedDoublePendulum. In these experiments, we again compare EBIL against GAIL, AIRL, GMMIL and RED, where we employ Trust Region Policy Optimization (TRPO)~\cite{schulman2015trust} as the learning algorithm in the implementation for all evaluated methods. Expert agents are trained with the OpenAI Baselines version~\cite{dhariwal2016openai} of Proximal Policy Optimization (PPO)~\cite{schulman2017proximal}. Furthermore, we sample 4 trajectories with the trained expert policy, as \cite{ho2016generative,wang2019red} do.\nThe training curves along with more experiments are in \ap{ap:exp}, showing the training stability of the algorithm.\n\nAs shown in \tb{tab:mujoco}, EBIL achieves the best or comparable performance among all environments, indicating that the energy serves as an excellent fixed reward function and efficiently guides the agent in imitation learning. It is worth noting that we do not apply BC initialization for any task.
We surprisingly find that we are unsuccessful with AIRL on several environments even after extensive tuning, similar to the results in \\cite{liu2019state}.\n\n\\begin{table*}[t]\n\\vskip 0.15in\n\\caption{Comparison for different methods of the episodic true rewards on 5 continuous control benchmarks. The means and the standard deviations are evaluated over 50 runs.}\n\\vspace{-5pt}\n\\begin{center}\n\\begin{scriptsize}\n\\begin{tabular}{cccccc}\n\\toprule\n& Humanoid & Hopper & Walker2d & Swimmer & InvertedDoublePendulum\\\\\n\\midrule\nRandom & 100.38 $\\pm$ 28.25 & 14.21 $\\pm$ 11.20 & 0.18 $\\pm$ 4.35 & 0.89 $\\pm$ 10.96 & 49.57 $\\pm$ 16.88 \\\\\n\\hline\nBC & 178.74 $\\pm$ 55.88 & 28.04 $\\pm$ 2.73 & 312.04 $\\pm$ 83.83 & 5.93 $\\pm$ 16.77 & 138.81 $\\pm$ 39.99\\\\\nGAIL & 145.84 $\\pm$ 7.01 & 459.33 $\\pm$ 216.79 & 278.93 $\\pm$ 36.82 & 23.79 $\\pm$ 21.84 & 122.71 $\\pm$ 71.36 \\\\\nAIRL & 286.63 $\\pm$ 6.05 & 126.92 $\\pm$ 62.39 & 215.79 $\\pm$ 23.04 & -13.44 $\\pm$ 2.69 & 76.78 $\\pm$ 19.63 \\\\\nGMMIL & 416.83 $\\pm$ 59.46 & 1000.87 $\\pm$ 0.87 & 1585.91 $\\pm$ 575.72 & -0.73 $\\pm$ 3.28 & 4244.63 $\\pm$ 3228.14\\\\\nRED & 140.23 $\\pm$ 19.10 & 641.08 $\\pm$ 2.24 & 641.13 $\\pm$ 2.75 & -3.55 $\\pm$ 5.05 & 6400.19 $\\pm$ 4302.03 \\\\\n\\textbf{EBIL (Ours)} & \\textbf{472.22 $\\pm$ 107.72} & \\textbf{1040.99 $\\pm$ 0.53} & \\textbf{2334.55 $\\pm$ 633.91} & \\textbf{58.09 $\\pm$ 2.03} & \\textbf{8988.37$\\pm$ 1812.76}\\\\\n\\hline\nExpert (PPO) & 1515.36 $\\pm$ 683.59 & 1407.36 $\\pm$ 176.91 & 2637.27 $\\pm$ 1757.72 & 122.09 $\\pm$ 2.60 & 6129.10 $\\pm$ 3491.47 \\\\\n\\bottomrule\n\\end{tabular}\n\\label{tab:mujoco}\n\\end{scriptsize}\n\\end{center}\n\\vspace{-15pt}\n\\end{table*}\n\\section{Conclusion}\n\nIn this paper, we propose Energy-Based Imitation Learning (EBIL), which shows that it is feasible to compute a fixed reward function via directly estimating the expert's energy to help agents learn from the demonstrations. 
We further theoretically discuss the connections of our method with Maximum Entropy Inverse Reinforcement Learning (MaxEnt IRL) and reveal that EBIL is a dual problem of MaxEnt IRL that provides a simplified and efficient solution. We empirically show the comparable or better performance of EBIL against SoTA Imitation Learning (IL) algorithms in multiple tasks. Future work can focus on different energy estimation methods for expert demonstrations and on exploring more properties of EBMs in IL.\n\n\clearpage\n\section{Broader Impact}\n\nFor potential positive impacts, EBIL simplifies the learning procedure and increases the efficiency of IL, which can be applied to practical decision-making problems where agents are required to imitate demonstrated behaviors, such as robotics and autonomous driving. However, negative consequences also exist, since advances in automation led by IL may lead to workers engaged in repetitive tasks being displaced by robots. After all, teaching machines with large amounts of expert demonstrations must be much cheaper than hiring hundreds of teachers and skillful employees. 
\n\\section{Algorithm}\n\\label{ap:algo}\n\\begin{algorithm}[!h]\n \\caption{Energy-Based Imitation Learning}\n \\label{alg:EBIL}\n \\begin{algorithmic}[1]\n \\STATE {\\bfseries Input:} Expert demonstration data $\\tau_E=\\{ (s_i, a_i) \\}_{i=1}^{N}$, parameterized energy-based model $E_{\\phi}$, parameterized policy $\\pi_{\\theta}$;\n \n \\FOR{$k = 0, 1, 2, \\dotsc$}\n \\STATE Optimize $\\phi$ with the objective in \\eq{eq:deen}.\n \\ENDFOR\n \n \\STATE Compute the surrogate reward function $\\hat{r}$ via \\eq{eq:reward}.\n \n \\FOR{$k = 0, 1, 2, \\dotsc$}\n \\STATE Update $\\theta$ with a normal RL procedure using the surrogate reward function $\\hat{r}$.\n \\ENDFOR\n \n \\RETURN {$\\pi$}\n \\end{algorithmic}\n\\end{algorithm}\n\n\\section{Proofs}\n\n\\subsection{Proof of \\protect\\prop{prop:eb-il}}\n\\label{ap:eb-il}\nBefore showing the equivalence between the reverse KL divergence objective and the EBIL objective, we first present the following lemma.\n\\begin{lemma}[Lemma 3 of \\cite{ho2016generative}]\n $\\overline{H}$ is strictly concave, and for all $\\pi \\in \\Pi$ and $\\rho \\in \\caD$, we have $H(\\pi) = \\overline{H}(\\rho_\\pi)$ and $\\overline{H}(\\rho)=H(\\pi_\\rho)$~,\nwhere $\\overline{H}\\left (\\rho \\right )=-\\sum_{s,a}\\rho(s,a)\\log{\\left [\\rho(s,a)\/\\sum_{a'}\\rho(s,a')\\right ]}$ is the entropy of the occupancy measure $\\rho$.\n\\end{lemma}\n\n\\begin{proof}[Proof of \\protect\\prop{prop:eb-il}]\n\nSubstituting \\eq{eq:ebm} into \\eq{eq:kl-il} for the expert policy ${\\pi_E}$, one obtains: \n\\begin{equation}\\label{eq:kl-induce}\n \\begin{aligned}\n \\kld ( \\rho_\\pi \\| \\rho_{{\\pi_E}}) &= \\sum_{s,a} \\rho_\\pi(s,a) \\log \\frac{\\rho_\\pi(s,a)}{\\rho_{\\pi_E}(s,a)}\\\\\n &= \\sum_{s,a}\\rho_\\pi(s,a)\\left( \\log{\\rho_\\pi(s,a)} - \\log{\\frac{e^{-E_{\\pi_E}(s,a)}}{(1-\\gamma)Z'}} \\right )\\\\\n &= \\sum_{s,a}\\rho_\\pi(s,a)\\left(E_{\\pi_E}(s,a)+\\log{\\rho_\\pi(s,a)} + \\log{(1-\\gamma)Z'}\\right )\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right 
]+\\sum_{s,a}\\rho_\\pi(s,a)\\log{\\rho_\\pi(s,a)} + \\text{const}\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ]+\\sum_{s,a}\\rho_\\pi(s,a)\\left(\\log{\\rho_\\pi(s,a)} -\\log{\\sum_{a'}\\rho_\\pi(s,a')} + \\log{\\sum_{a'}\\rho_\\pi(s,a')}\\right) + \\text{const}\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ]+\\sum_{s,a}\\rho_\\pi(s,a)\\log{\\left [ \\rho_\\pi(s,a)\/\\sum_{a'}\\rho_\\pi(s,a')\\right ]}+ \\text{const}\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ] - \\overline{H}\\left (\\rho_\\pi \\right ) + \\text{const}\n \\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ] - H(\\pi) + \\text{const}\n ~,\n \\end{aligned}\n\\end{equation}\nwhere $E_{\\pi_E}$ is the EBM of policy ${\\pi_E}$ and $Z'$ is its partition function. Therefore \\eq{eq:kl-il} leads to the objective function of EBIL \\eq{eq:eb-il}:\n\\begin{equation}\\label{eq:kl-eq-ebil}\n \\begin{aligned}\n \\argmin_{\\pi} \\kld ( \\rho_\\pi \\| \\rho_{{\\pi_E}}) = \\argmax_{\\pi} \\bbE_\\pi \\left[-E_{\\pi_E}(s,a)\\right ] + H(\\pi)\n \\end{aligned}\n\\end{equation}\n\\end{proof}\n\n\n\\subsection{Proof of \\protect\\prop{prop:ebil-dual}}\n\\label{ap:ebil-dual}\n\n\\begin{proof}[Proof of \\protect\\prop{prop:ebil-dual}]\n As in \\cite{ho2016generative}, motivated by \\eq{eq:rl-irl}, we relax \\eq{eq:irl-dual} into the following form:\n \\begin{equation}\n \\min_{\\pi} d_\\psi(\\rho_\\pi,\\rho_{\\pi_E})- H(\\pi)~,\n \\end{equation}\n where we modify the IRL regularizer $\\psi$ so that $d_\\psi(\\rho_\\pi,\\rho_{\\pi_E}) \\triangleq \\psi^*(\\rho_\\pi - \\rho_{\\pi_E})$ is a smooth distance metric that penalizes differences between the occupancy measures.\n By choosing $\\psi= \\bbE_{\\pi_E} [-1-\\log(r(s, a))+r(s, a)]$, we obtain $d_\\psi(\\rho_\\pi,\\rho_{\\pi_E})=D_{\\text{KL}}(\\rho_\\pi\\|\\rho_{\\pi_E})$\\footnote{Full derivations can be found in Appendix D of \\protect\\cite{ghasemipour2019divergence}, where we replace $c(s,a)$ with $-r(s,a)$.}. 
Thus we have:\n \\begin{equation}\\label{eq:kl-h}\n \\min_{\\pi} D_{\\text{KL}}(\\rho_\\pi\\|\\rho_{\\pi_E})- H(\\pi)~.\n \\end{equation}\n Referring to \\eq{eq:kl-induce}, we have:\n \\begin{equation}\n D_{\\text{KL}}(\\rho_\\pi\\|\\rho_{\\pi_E}) = \\bbE_\\pi\\left[E_{\\pi_E}(s,a)\\right ] - H(\\pi) + \\text{const}~.\n \\end{equation}\n Thus, we can rewrite \\eq{eq:kl-h} as the following optimization problem:\n \\begin{equation}\\label{eq:eb-il-2}\n \\pi^* = \\argmax_{\\pi} \\bbE_\\pi \\left[-E_{\\pi_E}(s,a)\\right ] + 2H(\\pi)~,\n \\end{equation}\n which leads to the EBIL objective \\eq{eq:eb-il} with the temperature hyperparameter $\\alpha=2$.\n This indicates that EBIL is a dual problem of the MaxEnt IRL problem.\n\\end{proof}\n\n\\subsection{Proof of \\protect\\prop{prop:tau-kl}}\n\\label{ap:tau-kl}\n\n\\begin{proof}[Proof of \\protect\\prop{prop:tau-kl}]\nSuppose we have recovered the optimal reward function $\\hat{r}^*$; then we can reduce the reverse KL divergence between the two trajectory distributions to a forward MaxEnt RL objective.\n\nBy the chain rule of probability, the induced trajectory distribution $p(\\tau)$ is given by\n\\begin{equation}\n p(\\tau) = p(s_0)\\prod_{t=0}^{T}P(s_{t+1}|s_t,a_t)\\pi(a_t|s_t)~.\n\\end{equation}\n\nSuppose the desired expert trajectory distribution $p(\\tau_E)$ is given by\n\\begin{equation}\n\\begin{aligned}\np(\\tau_E) &\\propto p(s_0)\\prod_{t=0}^{T}P(s_{t+1}|s_t,a_t) \\exp(\\hat{r}^*(\\tau))\\\\\n&= p(s_0)\\prod_{t=0}^{T}P(s_{t+1}|s_t,a_t) \\exp(\\sum_{t=0}^{T}\\hat{r}^*(s_t, a_t))~.\n\\end{aligned}\n\\end{equation}\nWe now show that minimizing the following KL divergence is equivalent to a forward MaxEnt RL procedure given the optimal reward $\\hat{r}^*$:\n\\begin{equation}\\label{eq:tau-kl-induce-1}\n\\begin{aligned}\n \\kld(p(\\tau)\\|p(\\tau_E))&= \\sum_{\\tau\\sim\\pi} p(\\tau) \\log \\frac{p(\\tau)}{p(\\tau_E)}\\\\\n &= \\sum_{\\tau\\sim\\pi}p(\\tau)\\left( \\log{p(\\tau)} - \\log{p(\\tau_E)} \\right )\\\\\n &= 
\\bbE_{\\tau\\sim\\pi} \\left[ \\log p(s_0) + \\sum_{t=0}^T \\left (\\log P(s_{t+1}|s_t,a_t) + \\log \\pi(a_t|s_t)\\right ) - \\right. \\\\\n &~~~~~\\left. \\log p(s_0) - \\sum_{t=0}^T \\left (\\log P(s_{t+1}|s_t,a_t) + \\hat{r}^*(s_t, a_t)\\right ) \\right] + \\text{const}\\\\\n &= \\bbE_{\\tau\\sim\\pi} \\left[ \\sum_{t=0}^T \\left(\\log \\pi(a_t|s_t) - \\hat{r}^*(s_t, a_t)\\right) \\right] + \\text{const} \\\\\n &= -\\sum_{t=0}^T \\bbE_{(s_t, a_t) \\sim \\rho(s_t, a_t)} [\\hat{r}^*(s_t,a_t) - \\log \\pi(a_t|s_t)] + \\text{const}~,\n\\end{aligned}\n\\end{equation}\nwhere the constant accounts for the normalization of $p(\\tau_E)$.\n\nApproximating the finite-horizon sum $\\sum_{t=0}^T \\bbE_{(s_t, a_t)}$ by the infinite-horizon occupancy expectation $\\bbE_{\\pi}$, we have\n\\begin{equation}\\label{eq:tau-kl-induce-2}\n\\begin{aligned}\n \\kld(p(\\tau)\\|p(\\tau_E))\n &\\approx -\\bbE_{(s, a) \\sim \\rho(s, a)} [\\hat{r}^*(s,a) - \\log \\pi(a|s)] + \\text{const}\\\\\n &= -\\bbE_{\\pi} [\\hat{r}^*(s,a)] + \\bbE_{\\pi}[\\log \\pi(a|s)] + \\text{const}\\\\\n &= -\\bbE_{\\pi} [\\hat{r}^*(s,a)] - H(\\pi) + \\text{const}~.\n\\end{aligned}\n\\end{equation}\n\nThus the objective \\eq{eq:tau-kl} is equivalent to the following optimization problem:\n\\begin{equation}\n \\max_{\\pi} \\bbE_\\pi\\left [\\hat{r}^*(s,a)\\right ]+ H(\\pi)~, \n\\end{equation}\nwhich is exactly the objective of a forward MaxEnt RL procedure (\\eq{eq:rl-hr}). 
This indicates that when MaxEnt IRL recovers the optimal reward function, forward RL, which leads to the optimal (expert) policy, is equivalent to minimizing the reverse KL divergence between the trajectories sampled by the agent and by the expert.\n\\end{proof}\n\n\n\\subsection{Proof of \\protect\\prop{prop:kl-kl}}\n\\label{ap:kl-kl}\n\\begin{proof}[Proof of \\protect\\prop{prop:kl-kl}]\n From the derivation \\eq{eq:tau-kl-induce-2}, we know that the optimization problem \\eq{eq:tau-kl} is equivalent to a forward MaxEnt RL problem \\eq{eq:rl-hr}. Letting $r^*(s,a)=-E(s,a)$, we exactly recover the EBIL objective \\eq{eq:eb-il}. \n Note that the reverse KL divergence IL objective \\eq{eq:kl-il} can also be derived into the EBIL objective \\eq{eq:eb-il} following \\eq{eq:kl-induce}. Thus \\eq{eq:tau-kl} is equivalent to both the reverse KL divergence IL objective \\eq{eq:kl-il} and the EBIL objective \\eq{eq:eb-il}.\n\\end{proof}\n\n\\section{Experiments}\n\\label{ap:exp}\n\\subsection{Hyperparameters}\n\nWe show the hyperparameters for both DEEN training and policy training on different tasks in \\tb{tab:hyperparameters}. Specifically, we use MLPs for both the DEEN network and the policy network.\n\n\\begin{table*}[htbp]\n\\caption{Important hyperparameters used in our experiments}\n\\label{tab:hyperparameters}\n\\centering\n\\resizebox{0.7\\textwidth}{!}{\n\\begin{tabular}{llcccccc}\n\\toprule\n& Hyperparameter & One-D. & Human. & Hop. & Walk. & Swim. & Invert. 
\\\\\n\\midrule\n\\multirow{4}{*}{Policy} & Hidden layers & 3 & 3 & 3 & 3 & 3 & 3 \\\\\n& Hidden Size & 200 & 200 & 200 & 200 & 200 & 200 \\\\\n& Iterations & 6000 & 6000 & 6000 & 6000 & 6000 & 6000 \\\\\n& Batch Size & 32 & 32 & 32 & 32 & 32 & 32 \\\\\n\\midrule\n\\multirow{6}{*}{DEEN} & Hidden layers & 3 & 3 & 3 & 4 & 3 & 3 \\\\\n& Hidden size & 200 & 200 & 200 & 200 & 200 & 200 \\\\\n& Epochs & 3000 & 3000 & 6000 & 500 & 1900 & 500 \\\\\n& Batch Size & 32 & 32 & 32 & 32 & 32 & 32 \\\\\n& Noise Scale $\\sigma$ & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\\\\n& Reward Scale $\\alpha$ & 1 & 1 & 5 & 1 & 1 & 1000 \\\\\n\\bottomrule\n\\end{tabular}}\n\\end{table*}\n\n\\subsection{Synthetic Task Training Procedure}\n\nWe present more training snapshots of the synthetic task in this section. \n\nWe analyze the learned behaviors during the training procedure of the synthetic task, as illustrated by the visitation heatmaps in \\fig{fig:train-heat}. For each method, we show four training stages from different training iterations. These figures provide more evidence that although GAIL can finally achieve good results, EBIL provides fast and stable training. By contrast, GMMIL and RED fail to achieve effective results during the whole training time.\\footnote{To better understand how these methods learn reward signals, we also visualize the changes of the estimated rewards during the training procedures. Videos can be seen at \\url{https:\/\/www.dropbox.com\/s\/0mrsoqyu040crdo\/video.zip?dl=0}.}\n\n\\begin{figure*}[htb]\n\\centering\n\\includegraphics[width=0.90\\linewidth]{figs\/policy-iter.pdf}\n\n\\caption{The induced policy during the policy training procedure. In each figure the horizontal axis denotes the \\textit{state space}, and the vertical axis represents the \\textit{action space}. From top to bottom, the methods are EBIL, GAIL, AIRL and RED, and each row shows four training stages. The color bar is the same as \\fig{fig:heat_maps}. 
The brighter the yellow color, the higher the visitation frequency.}\n\\vspace{-15pt}\n\\label{fig:train-heat}\n\\end{figure*}\n\n\\subsection{Training Curves}\n\\label{ap:curves-mujoco}\n\nWe plot the episode-averaged return during the training procedure in \\fig{fig:curves-mujoco}, where EBIL provides effective guidance that helps the agent learn good policies while remaining stable on all environments, in comparison to the other methods. In our experiments, we find that the training procedure of GAIL suffers from instability.\n\n\\begin{figure*}[h!]\n\\centering\n\\includegraphics[width=1\\linewidth]{.\/figs\/curves-mujoco.pdf}\n\n\\vspace{-3pt}\n\\caption{Training curves of GAIL, GMMIL, RED and EBIL on different continuous control benchmarking tasks, where the solid curves depict the mean and the shaded areas indicate the standard deviation. Each iteration contains 1024 timesteps.}\n\\vspace{-15pt}\n\\label{fig:curves-mujoco}\n\\end{figure*}\n\n\n\\subsection{Energy Evaluation}\n\\label{curves-energy}\n\nSince the loss function of DEEN cannot be directly used as an indicator of the quality of the learned energy network, we propose to evaluate the average energy value on expert trajectories and on random trajectories for different tasks. As shown in \\fig{fig:curves-energy}, DEEN finally converges in all experiments, clearly distinguishing the expert data. It is worth noting that in our experiments, with a well-trained energy network, agents may still find it hard to learn the expert policy in some environments. \nWe regard this as the ``sparse reward signals'' problem, as discussed in \\se{sec:one-domain}. By contrast, sometimes a ``half-trained'' model may provide smoother rewards, and can help the agent learn more efficiently. 
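As a rough, self-contained illustration of the energy-estimation step evaluated above, the following toy sketch mimics a DEEN-style denoising objective with an analytic quadratic energy $E_\mu(y)=\|y-\mu\|^2\/2$ standing in for the actual network (the Gaussian ``expert'' data, the quadratic energy and all numeric values are assumptions of this example, not our implementation):

```python
import numpy as np

def deen_loss(mu, x, sigma, rng):
    """Denoising-style loss: corrupt x with Gaussian noise, then penalize the
    mismatch between the injected noise and the scaled energy gradient at the
    noisy point. The toy energy E(y) = ||y - mu||^2 / 2 has the analytic
    gradient (y - mu), so no autodiff is needed in this sketch."""
    y = x + sigma * rng.normal(size=x.shape)   # noisy copies of the data
    grad_e = y - mu                            # analytic gradient of E at y
    residual = x - y + sigma ** 2 * grad_e
    return float(np.mean(np.sum(residual ** 2, axis=-1)))

rng = np.random.default_rng(0)
sigma = 0.1                                          # noise scale, as in the hyperparameter table
x = rng.normal(loc=1.0, scale=0.2, size=(1024, 2))   # toy "expert" samples near mu = 1
```

A well-fitted energy (here, $\mu$ at the data mean) yields a smaller loss than a badly fitted one, which is the sense in which the evaluation curves above separate expert from random data.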
A similar phenomenon also occurs when training the discriminator in GANs.\n\nWe further analyze the training performance with energy models from different epochs in the ablation study in \\se{sec:abl-study}.\n\n\\begin{figure*}[h!]\n\n\\centering\n\\includegraphics[width=1\\linewidth]{.\/figs\/curves-energy.pdf}\n\n\\vspace{-33pt}\n\\caption{Energy evaluation curves on different mujoco tasks, where the red line represents the average energy estimate on expert data and the blue line on random trajectories, with 100 trajectories each. Note that lower energy values correspond to higher rewards. The minimum value is -1 and the maximum is 1, since we use \\textit{tanh} for the last layer of the DEEN network.}\n\\vspace{-15pt}\n\\label{fig:curves-energy}\n\\end{figure*}\n\n\n\\subsection{Ablation Study}\n\\label{sec:abl-study}\n\nTo further understand which energy model is better for learning a good policy, we conduct an ablation study on energy models trained for different numbers of epochs. The results are illustrated in \\fig{fig:ablation-energy}, which verify our intuition that a ``half-trained'' model can provide smoother rewards that alleviate the ``sparse reward'' problem and are therefore better for imitating the expert policy.\n\n\\begin{figure*}[h!]\n\\centering\n\\includegraphics[width=0.45\\linewidth]{.\/figs\/ablation-energy-hopper}\n\\includegraphics[width=0.45\\linewidth]{.\/figs\/ablation-energy-walker}\n\\vspace{-4pt}\n\\caption{The average episode rewards evaluated on Hopper-v2 and Walker2d-v2 by agents that are learned with energy models from different training epochs.}\n\\vspace{-15pt}\n\\label{fig:ablation-energy}\n\\end{figure*}\n\n\\section{Further Discussions}\n\n\\subsection{Surrogate Reward Functions}\n\nAs discussed in \\cite{kostrikov2018discriminator}, the choice of reward function is highly related to the properties of the task. 
Positive rewards may achieve better performance in ``surviving''-style environments, and negative ones may be advantageous in ``per-step-penalty''-style environments. Both choices are common in imitation learning works based on GAIL, which can use either $\\log(D)$ or $-\\log(1-D)$, where\n$D\\in [0,1]$ is the output of the discriminator, determined by the final ``\\textit{sigmoid}'' layer. In our work, we choose ``\\textit{tanh}'' as the final layer of the energy network, which constrains the energy to the range $[-1,1]$. In order to adapt to different environments while preserving the good properties of the energy, we can apply a monotonically increasing affine function $h$ to the energy outputs as the surrogate reward function, i.e., a translation or scaling of the negated energy. In all of our tasks, the raw energy signal alone does not perform particularly well, and thus we choose different $h$ for these tasks.\n\nIn the one-dimensional domain experiment, we choose the following surrogate reward function:\n\\begin{equation}\n \\hat{r}(s,a)=h(x)=x+1~,\n\\end{equation}\nwhere $\\hat{r} \\in [0, 2]$ and $x=-E(s,a)$ is the negated energy. Thus, the non-expert state-action pairs will get close-to-zero rewards, while the expert's get close-to-two rewards at each step.\n\nIn mujoco tasks, we choose the surrogate reward function\n\\begin{equation}\n \\hat{r}(s,a)=h(x)=(x+1)\/2~,\n\\end{equation}\nwhere $x=-E(s,a)$ is the negated energy. Note that we construct this reward function to obtain a normalized reward $\\hat{r} \\in [0, 1]$, so that the non-expert state-action pairs will gain near-zero rewards while the expert's get close-to-one rewards at each step, given that the output range of the energy is $[-1,1]$. 
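The two affine transforms above can be sketched in a few lines (a minimal sketch: the `energy` callables below are made-up stand-ins for a trained DEEN network with \textit{tanh} output, and are assumptions of this example):

```python
def surrogate_reward(energy, s, a, mode="mujoco"):
    """Map a tanh-bounded energy E(s, a) in [-1, 1] to a surrogate reward.

    mode="one_d":  r = h(x) = x + 1,       so r lies in [0, 2].
    mode="mujoco": r = h(x) = (x + 1) / 2, so r lies in [0, 1].
    """
    x = -energy(s, a)  # negated energy, also in [-1, 1]
    if mode == "one_d":
        return x + 1.0
    return (x + 1.0) / 2.0

# Made-up energy values: expert pairs sit near E = -1, random pairs near E = +1.
expert_energy = lambda s, a: -0.95
random_energy = lambda s, a: 0.9
```

With these stand-ins, expert pairs receive rewards near the top of each range and non-expert pairs near zero, matching the discussion above.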
In our experiments, rewards of a similar form to those of the one-dimensional synthetic environment also work well.\n\n\\subsection{AIRL Does Not Recover the Energy}\n\\label{ap:airl}\n\nAdversarial Inverse Reinforcement Learning (AIRL)~\\cite{fu2017learning} is a SoTA IRL method that applies an adversarial architecture similar to that of GAIL to solve the IRL problem. Formally, AIRL constructs the discriminator as \n\\begin{equation}\\label{eq:airl-dis}\nD(s,a) = \\frac{\\exp(f(s,a))}{\\exp(f(s,a)) + \\pi(a|s)}~.\n\\end{equation}\nThis is motivated by the earlier GCL-GAN work~\\cite{finn2016connection}, which proposes that one can train GCL with a GAN by formulating the discriminator as\n\\begin{equation}\\label{eq:gcl-dis}\nD(\\tau) = \\frac{\\frac{1}{Z}\\exp(c(\\tau))}{\\frac{1}{Z}\\exp(c(\\tau)) + \\pi(\\tau)}~,\n\\end{equation}\nwhere $\\tau$ denotes the trajectory. AIRL uses a surrogate reward\n\\begin{equation}\\label{eq:airl-reward}\n\\begin{aligned}\nr(s,a) &= \\log D(s,a) - \\log(1-D(s,a))\\\\\n&= f(s,a)-\\log{\\pi(a|s)}~,\n\\end{aligned}\n\\end{equation}\nwhich can be seen as an entropy-regularized reward function.\n\nHowever, the difference between \\eq{eq:airl-dis} and \\eq{eq:gcl-dis} indicates that AIRL does not actually recover the expert's energy, since it omits the partition function $Z$, which is important for estimating the energy. Also, the learning signal that drives the agent to learn a good policy is not a pure reward term but contains an entropy term itself. 
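The identity in \eq{eq:airl-reward} is easy to check numerically: with $D = \exp(f)\/(\exp(f)+\pi)$, the surrogate reward $\log D - \log(1-D)$ reduces to $f - \log\pi$, and the partition function never enters. A minimal sketch (the numeric values of $f$ and $\pi(a|s)$ are made up):

```python
import math

def airl_reward(f_sa, pi_a_given_s):
    """AIRL surrogate reward computed through the discriminator
    D = e^f / (e^f + pi), as in the equations above."""
    d = math.exp(f_sa) / (math.exp(f_sa) + pi_a_given_s)
    return math.log(d) - math.log(1.0 - d)

# log D - log(1 - D) simplifies to f - log(pi): the reward carries an entropy
# term (-log pi) and no trace of the partition function Z.
f, pi = 0.7, 0.3
assert abs(airl_reward(f, pi) - (f - math.log(pi))) < 1e-9
```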
We visualize the different reward choices ($f(s,a)$ or $f(s,a)-\\log{\\pi(a|s)}$) in \\fig{fig:airlreward} as a comparison, which verifies our intuition that AIRL in fact does not recover the expert's energy as EBIL does.\n\n\\begin{figure*}[!t]\n\\centering\n\\subfigure[Reward as $f(s,a)-\\log{\\pi(a|s)}$]{\n\\begin{minipage}[b]{0.33\\linewidth}\n\\label{fig:logreward}\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/airl_heat_40_16000_1-crop.pdf}\n\\end{minipage}\n}\n\\subfigure[Reward as $f(s,a)$]{\n\\begin{minipage}[b]{0.393\\linewidth}\n\\label{fig:freward}\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/airl_heat_40_16000_0_bar-crop.pdf}\n\\end{minipage}\n}\n\\vspace{-7pt}\n\\caption{Heat maps of the different estimated rewards recovered by AIRL.}\n\\label{fig:airlreward}\n\\end{figure*}\n\n\\subsection{Discussions of MaxEnt RL Methods}\n\\label{ap:ebil-sac}\n\nSoft Q-Learning (SQL)~\\cite{haarnoja2017reinforcement} and Soft Actor-Critic (SAC)~\\cite{haarnoja2018soft} are two main approaches to MaxEnt RL; in particular, they use a general energy-based policy of the form\n\\begin{equation}\\label{eq:softq}\n \\pi(a_t|s_t) \\propto \\exp{(-E(s_t, a_t))}~.\n\\end{equation}\nTo connect the policy with soft versions of the value and Q functions, they set the energy model $E(s_t,a_t) = -\\frac{1}{\\alpha}Q_{\\text{soft}}(s_t,a_t)$, where $\\alpha$ is the temperature parameter, so that the policy can be represented with the Q function and assigns the highest probability to the action with the highest Q value; this essentially provides a soft version of the greedy policy. 
Thus, one can optimize the soft Q function to obtain the optimal policy by minimizing the expected KL divergence:\n\n\\begin{equation}\\label{eq:softac}\n J(\\pi) = \\bbE_{s\\sim \\rho^s_{\\caD}}\\left[\\kld\\left( \\pi(\\cdot|s) \\big \\| \\frac{\\exp{(Q(s,\\cdot))}}{Z(s)} \\right) \\right ]~,\n\\end{equation}\nwhere $\\rho^s_{\\caD}$ is the distribution of previously sampled states, e.g., from a replay buffer. Therefore, the second argument of the KL divergence can in fact be regarded as the target, or the reference, for the policy.\n\nConsidering the KL divergence as the distance metric in the general objective of IL shown in \\eq{eq:il}, we get:\n\\begin{equation}\n\\begin{aligned}\n \\pi^* &= \\argmin_\\pi \\mathbb{E}_{\\pi} \\left [\\kld \\left( \\pi(\\cdot|s) \\big \\| {\\pi_E}(\\cdot|s) \\right) \\right ]~.\n\\end{aligned}\n\\end{equation}\nIf we choose to model the expert policy using the energy form of \\eq{eq:softq}, then we get:\n\\begin{equation}\n\\begin{aligned}\\label{eq:softil}\n \\pi^* &= \\argmin_\\pi \\mathbb{E}_{\\pi}\\left [\\kld \\left( \\pi(\\cdot|s) \\big \\| \\frac{\\exp{(-E_{\\pi_E}(s,\\cdot))}}{Z} \\right) \\right ]~.\n\\end{aligned}\n\\end{equation}\n\n\\begin{proposition}\n\\label{prop:ebil-sac}\n The IL objective shown in \\eq{eq:softil} is equivalent to the EBIL objective shown in \\eq{eq:eb-il}.\n\\end{proposition}\n\\begin{proof}\n Since \\eq{eq:eb-il} is equivalent to \\eq{eq:kl-il}, it has the optimal solution $\\pi^*={\\pi_E}$ according to \\prop{prop:kl-il}. Also, it is easy to see that \\eq{eq:softil} has the same optimal solution $\\pi^*={\\pi_E}$.\n\\end{proof}\nThus, \\prop{prop:ebil-sac} reveals the relation between MaxEnt RL and EBIL. Specifically, EBIL employs the energy model learned from expert demonstrations as the target policy. 
The difference is that MaxEnt RL methods use the Q function to play the role of the energy function, construct it as the target policy, and iteratively update the Q function and the policy, while EBIL directly utilizes the energy function to model the expert occupancy measure and to construct the target policy. \n\nAs a result, it makes sense to directly optimize the policy by taking the energy model as the target policy instead of using it as a reward function, which leads to the optimal solution:\n\n\\begin{equation}\n \\begin{aligned}\n \\pi^*(a|s)&= \\frac{\\rho_{{\\pi_E}}(s,a)}{\\sum_{a'}\\rho_{{\\pi_E}}(s,a')}\\\\\n &=\\frac{\\frac{1}{Z}\\exp(-E_{{\\pi_E}}(s,a))}{\\frac{1}{Z}\\sum_{a'}\\exp{(-E_{{\\pi_E}}(s,a'))}}\\\\\n &=\\frac{\\exp(-E_{{\\pi_E}}(s,a))}{\\sum_{a'}\\exp{(-E_{{\\pi_E}}(s,a'))}}~.\n \\end{aligned}\n\\end{equation}\nTherefore, the optimal solution can also be obtained by estimating the energy function and summing it over the action space, which may be intractable for high-dimensional or continuous action spaces, but is feasible in simple scenarios.
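For a small discrete action space, the closed-form optimal policy above is simply a softmax over negated energies, since the partition constant $Z$ cancels; a minimal sketch (the energy table is made up):

```python
import numpy as np

def optimal_policy(energies):
    """pi*(a|s) = exp(-E(s,a)) / sum_a' exp(-E(s,a')) at a single state s,
    for a finite action set with one energy value per action."""
    logits = -np.asarray(energies, dtype=float)
    logits -= logits.max()          # subtract the max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Lower energy means higher action probability; shifting all energies by a
# constant (i.e., rescaling Z) leaves the policy unchanged.
pi_star = optimal_policy([-1.0, 0.0, 1.0])
```

The explicit sum over actions is exactly the step that becomes intractable for high-dimensional or continuous action spaces.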
\n\nSet $v_j=\\|\\partial_j\\|$\n and define $V_{j} \\in C^{\\infty}(M)$, $1\\leq j\\leq n$, by \n $I\\!I(\\partial_j, \\partial_j)=V_jv_j$, $1\\leq j\\leq n$. \nThen the first and second fundamental forms of $f$ are\n\\begin{equation} \\label{fundforms}\nI=\\sum_{i=1}^nv_i^2du_i^2\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,I\\!I=\\sum_{i=1}^nV_iv_idu_i^2.\n\\end{equation} \nDenote $v=(v_1,\\ldots, v_n)$ and $V=(V_{1},\\ldots, V_n)$. We call $(v,V)$ the pair associated to $f$. \nThe next result is well known.\n\n\\begin{proposition}\\label{fund}\n The triple $(v,h,V)$, where \n $h_{ij}=\\frac{1}{v_i}\\frac{\\partial v_j}{\\partial u_i},$\n satisfies the system of PDE's\n \n \\begin{eqnarray}\\label{sistema-hol}\n \\left\\{\\begin{array}{l}\n (i) \\ \\dfrac{\\partial v_i}{\\partial u_j}=h_{ji}v_j,\\,\\,\\,\\,\\,\\,\\,\\,\\,(ii) \\ \\dfrac{\\partial h_{ik}}{\\partial u_j}=h_{ij}h_{jk},\\vspace{1ex}\\\\\n (iii) \\ \\dfrac{\\partial h_{ij}}{\\partial u_i} + \\dfrac{\\partial h_{ji}}{\\partial u_j} + \\sum_{k\\neq i,j} h_{ki}h_{kj} + \n V_{i}V_{j}+cv_iv_j=0,\\vspace{1ex}\\\\\n (iv) \\ \\dfrac{\\partial V_{i}}{\\partial u_j}=h_{ji}V_{j},\\,\\,\\,\\,1\\leq i \\neq j \\neq k \\neq i\\leq n.\n \\end{array}\\right.\n \\end{eqnarray}\nConversely, if $(v,h,V)$ is a solution of $(\\ref{sistema-hol})$ on a simply connected open subset $U \\subset {\\mathbb R}^{n}$, with $v_i> 0$\neverywhere for all $1\\leq i\\leq n$,\nthen there exists a holonomic hypersurface $f\\colon U \\to \\mathbb{Q}^{n+1}(c)$ whose first and second fundamental forms are given by $(\\ref{fundforms}).$\n\\end{proposition}\n\nThe following characterization of conformally flat hypersurfaces $f\\colon\\,M^3\\to\\mathbb{Q}^{4}(c)$ with three distinct principal curvatures was given in \\cite{ct}, improving a theorem due to Hertrich-Jeromin \\cite{h-j}. 
\n\n\\begin{theorem}\\label{main3} Let $f\\colon\\,M^3\\to\\mathbb{Q}^{4}(c)$ \nbe a holonomic hypersurface whose associated pair $(v, V)$ satisfies \n\\begin{equation} \\label{holo.flat}\\sum_{i=1}^3\\delta_iv_i^2=0, \\,\\,\\,\\,\\,\\,\\sum_{i=1}^3\\delta_iv_iV_i=0\\,\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,\\sum_{i=1}^3\\delta_iV_i^2=1, \n\\end{equation} \nwhere $(\\delta_1, \\delta_2, \\delta_3)= (1,-1, 1)$. Then $M^3$ is conformally flat and $f$ has three distinct principal curvatures. \n\nConversely, any conformally flat hypersurface $f\\colon\\,M^3\\to\\mathbb{Q}^{4}(c)$ with three distinct \nprincipal curvatures is locally a \nholonomic hypersurface whose associated pair $(v, V)$ satisfies $(\\ref{holo.flat}).$\n\\end{theorem}\n\n\n\nIt will be convenient to use the following equivalent version of Theorem~\\ref{main3}.\n\n\n\\begin{corollary}\\label{le:asspair}\nLet $f\\colon M^3\\to \\mathbb{Q}^4(c)$ be a holonomic hypersurface whose associated pair $(v,V)$ satisfies \n\\begin{equation}\\label{holo.flat.med.2}\n\\begin{array}{lcl}\nv_2^2=v_1^2+v_3^2, & \\quad &\n\\displaystyle{V_2=-\\frac{1}{3}\\Big(\\frac{v_1}{v_3}-\\frac{v_3}{v_1}\\Big) + \\frac{v_2}{3}H}, \\vspace{.2cm} \\\\\n\\displaystyle{V_1=-\\frac{1}{3}\\Big(\\frac{v_2}{v_3}+\\frac{v_3}{v_2}\\Big) + \\frac{v_1}{3}H}, & \\quad &\n\\displaystyle{V_3=\\frac{1}{3}\\Big(\\frac{v_1}{v_2}+\\frac{v_2}{v_1}\\Big) + \\frac{v_3}{3}H},\n\\end{array}\n\\end{equation}\nwhere $H$ is\nthe mean curvature function of $f$. Then $M^3$ is conformally flat and $f$ has three distinct principal curvatures. 
\n\nConversely, any conformally flat hypersurface $f\\colon\\,M^3\\to\\mathbb{Q}^{4}(c)$ with three distinct \nprincipal curvatures is locally a \nholonomic hypersurface whose associated pair $(v, V)$ satisfies $(\\ref{holo.flat.med.2}).$\n\\end{corollary}\n\\begin{proof} It suffices to show that equations (\\ref{holo.flat}) together with \n\\begin{equation}\\label{lem1.med}\n\\begin{array}{l}\nH=\\sum_{i=1}^3{V_iv_i^{-1}}\n\\end{array}\n\\end{equation}\nare equivalent to (\\ref{holo.flat.med.2}). For that, consider the Minkowski space ${\\mathbb L}^3$ endowed with the Lorentz inner product \n$$\\<(x_1,x_2,x_3), (y_1,y_2,y_3)\\> = x_1y_1-x_2y_2+x_3y_3.$$\nThen the conditions in (\\ref{holo.flat}) say that $v=(v_1, v_2, v_3)$ and $V=(V_1, V_2, V_3)$ are orthogonal with respect to such inner product, $v$ is light-like and $V$ is a unit space-like vector. Since $w=(-v_3,0,v_1)\\in {\\mathbb L}^3$ is orthogonal to $v,$ we have $v^\\perp=\\mbox{span}\\{v,w\\}.$ As $V\\in v^{\\perp},$ we can write $V=av+bw$ for some $a,b \\in C^{\\infty}(M^3)$. 
Note that $V_2=av_2.$ \nUsing $(\\ref{holo.flat})$ we obtain\n $$1=\\langle V,V \\rangle = \\langle av+bw,av+bw \\rangle=b^2\\langle w,w \\rangle=b^2v_2^2.$$ \n Thus $V=\\frac{V_2}{v_2}v+\\frac{\\lambda}{v_2}w,$ with $\\lambda=\\pm 1.$ Therefore \n\\begin{equation}\\label{V1.e.V3.med.}\n\\begin{array}{lll}\n\\displaystyle{V_1=\\frac{1}{v_2}(V_2v_1-\\lambda v_3)}\\qquad \\mbox{and} \\qquad \\displaystyle{V_3=\\frac{1}{v_2}(V_2v_3+\\lambda v_1)}.\n\\end{array}\n\\end{equation}\nSubstituting (\\ref{V1.e.V3.med.}) in (\\ref{lem1.med}) \nwe obtain\n\\begin{equation}\\label{V2.med.}\n\\begin{array}{l}\n\\displaystyle{V_2=-\\frac{\\lambda}{3}\\Big(\\frac{v_1}{v_3}-\\frac{v_3}{v_1}\\Big) + \\frac{v_2}{3}H}.\n\\end{array}\n\\end{equation}\nSubstituting (\\ref{V2.med.}) in (\\ref{V1.e.V3.med.}) yields \n$$V_1=-\\frac{\\lambda}{3}\\Big(\\frac{v_2}{v_3}+\\frac{v_3}{v_2}\\Big) + \\frac{v_1}{3}H \\,\\,\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,\nV_3=\\frac{\\lambda}{3}\\Big(\\frac{v_1}{v_2}+\\frac{v_2}{v_1}\\Big) + \\frac{v_3}{3}H,$$\nand changing the orientation, if necessary, we may assume that $\\lambda=1$.\\end{proof} \n\n\\subsection{Generalized cones over Clifford tori}\n\nFirst we show that, if $g\\colon {\\mathbb R}^2 \\to {\\mathbb S}^3\\subset {\\mathbb R}^4$ is the Clifford torus parametrized by \n\\begin{equation} \\label{eq:g}\n g(x_1,x_2)=\\frac{1}{\\sqrt{2}}(\\cos (\\sqrt{2} x_1), \\sin (\\sqrt{2} x_1),\\cos (\\sqrt{2} x_2), \\sin (\\sqrt{2} x_2)),\n \\end{equation} \n then the standard cone\n$F\\colon (0, \\infty)\\times {\\mathbb R}^2\\to {\\mathbb R}^4$ over $g$ given by\n$$F(s,x)=sg(x),\\,\\,\\,\\,x=(x_1, x_2),$$\nis a minimal conformally flat hypersurface. 
\n\n\nThe first and second fundamental forms of $F$ with respect to the unit normal vector field \n$$\\eta(s, x_1, x_2)=\\frac{1}{\\sqrt{2}}(\\cos (\\sqrt{2} x_1), \\sin (\\sqrt{2} x_1),-\\cos (\\sqrt{2} x_2), -\\sin (\\sqrt{2} x_2))$$\nare\n$$\n\\begin{array}{lll}\nI=ds^2+ s^2(dx_1^2+dx_2^2) \\qquad \\text{and} \\qquad I\\!I= s(-dx^2_1+dx^2_2).\n\\end{array}\n$$\nIn terms of the new coordinates $u_1, u_2, u_3$, related to $s, x_1, x_2$ by\n$$\nu_2=\\log s,\\,\\,\\,\\,u_1=\\sqrt{2}x_1\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,u_3=\\sqrt{2}x_2,\n$$\nthe first and second fundamental forms of $F$ become\n$$I=\\frac{e^{2u_2}}{2}(du_1^2+2du_2^2+du_3^2)\\,\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,I\\!I=\\frac{e^{u_2}}{2}(-du_1^2+du_3^2),\n$$\nhence $F$ is a minimal conformally flat hypersurface with three distinct principal curvatures, one of which being zero.\\vspace{1ex}\n\nThe preceding example can be extended to the case in which the ambient space is any space form, yielding examples of minimal conformally flat hypersurfaces $f\\colon M^3\\to \\mathbb{Q}^4(c)$ with three distinct principal curvatures also for $c\\neq 0$. 
\\vspace{.5ex}\n\n Start with the Clifford torus $g\\colon {\\mathbb R}^2\\to {\\mathbb S}^3\\subset {\\mathbb R}^4$ parametrized by (\\ref{eq:g}).\n If $c>0$, define $F\\colon (0, \\pi\/\\sqrt{c})\\times {\\mathbb R}^2\\to {\\mathbb S}^4(c)\\subset {\\mathbb R}^5={\\mathbb R}^4\\times {\\mathbb R}$ by\n$$F(s,x)=\\frac{1}{\\sqrt{c}}(\\cos ({\\sqrt{c}}s) e_5+\\sin ({\\sqrt{c}}s) g(x)),$$\nwhere $x=(x_1, x_2)$ and $e_5$ is a unit vector spanning the factor ${\\mathbb R}$ in the orthogonal decomposition ${\\mathbb R}^5={\\mathbb R}^4\\times {\\mathbb R}$.\n\nNotice that, for each fixed $s=s_0$, the map $F_{s_0}\\colon {\\mathbb R}^2\\to {\\mathbb S}^4(c)$, given by $F_{s_0}(x)=F(s_0, x)$, is also a Clifford torus \nin an umbilical hypersurface ${\\mathbb S}^3(\\tilde c)\\subset {\\mathbb S}^4(c)$ with curvature $\\tilde c=c\/\\sin^2(\\sqrt{c}s_0)$, which has \n$$N_{s_0}=F_*\\frac{\\partial}{\\partial s}|_{s=s_0}=-\\sin(\\sqrt{c}s_0)e_5+\\cos(\\sqrt{c}s_0)g$$\nas a unit normal vector field along $F_{s_0}$. 
Notice also that\n$$F(s+s_0, x)=\\cos(\\sqrt{c}s)F_{s_0}(x)+\\sin(\\sqrt{c}s)N_{s_0}(x),$$\nthus $s\\mapsto F(s,x)$ parametrizes the geodesic in ${\\mathbb S}^4(c)$ through $F_{s_0}(x)$ tangent to $N_{s_0}(x)$ at $F_{s_0}(x)$.\nHence $F$ is a generalized cone over $F_{s_0}$.\n\nThe first and second fundamental forms of $F$ with respect to the unit normal vector field \n$$\\eta(s, x_1, x_2)=\\frac{1}{\\sqrt{2}}(\\cos (\\sqrt{2} x_1), \\sin (\\sqrt{2} x_1),-\\cos (\\sqrt{2} x_2), -\\sin (\\sqrt{2} x_2), 0)$$\nare\n$$\n\\begin{array}{lll}\n\\displaystyle{I=ds^2+ \\frac{1}{c}\\sin^2 ({\\sqrt{c}}s)(dx_1^2+dx_2^2)} \\;\\; \\mbox{and} \\;\\; \\displaystyle{I\\!I=\\frac{\\sin ({\\sqrt{c}}s)}{\\sqrt{c}}(-dx^2_1+dx^2_2)}.\n\\end{array}\n$$\nIn terms of the new coordinates $u_1, u_2, u_3$, related to $s, x_1, x_2$ by\n\\begin{equation} \\label{eq:uisxis}\n\\frac{du_2}{ds}=\\frac{\\sqrt{c}}{\\sin(\\sqrt{c}s)},\\,\\,\\,\\,u_1=\\sqrt{2}x_1\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,u_3=\\sqrt{2}x_2,\\end{equation} \nthe first and second fundamental forms of $F$ become\n\\begin{equation} \\label{eq:forms}\nI=\\frac{\\sin^2 \\theta}{2c}(du_1^2+2du_2^2+du_3^2)\\,\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,I\\!I=\\frac{\\sin \\theta}{2\\sqrt{c}}(-du_1^2+du_3^2),\n\\end{equation} \nwhere $\\theta=\\sqrt{c}s$, which, in view of the first equation in (\\ref{eq:uisxis}), \n satisfies \n$$\\frac{d\\theta}{du_2}=\\sin \\theta.$$\nIt follows from (\\ref{eq:forms}) that $F$ is a minimal conformally flat hypersurface.\n\nIf $c<0$, define\n$F\\colon (0, \\infty)\\times {\\mathbb R}^2\\to \\mathbb{H}^4(c)\\subset {\\mathbb L}^5$ by\n$$F(s,x)=\\frac{1}{\\sqrt{-c}}(\\cosh ({\\sqrt{-c}}s) e_5+\\sinh ({\\sqrt{-c}}s) g(x)),$$\nwhere $x=(x_1, x_2)$, $e_5$ is a unit time-like vector in ${\\mathbb L}^5$ and $e_5^\\perp$ is identified with ${\\mathbb R}^4$. 
\n\nAs in the previous case, for each fixed $s=s_0$ the map $F_{s_0}\\colon {\\mathbb R}^2\\to \\mathbb{H}^4(c)$, given by $F_{s_0}(x)=F(s_0, x)$, is also a Clifford torus \nin an umbilical hypersurface ${\\mathbb S}^3(\\tilde c)\\subset \\mathbb{H}^4(c)$ with curvature $\\tilde c=-c\/\\sinh^2(\\sqrt{-c}s_0)$, and \n$F$ is a generalized cone over $F_{s_0}$.\n\nNow the first and second fundamental forms of $F$ \nare\n$$I=ds^2+ \\frac{1}{-c}\\sinh^2 ({\\sqrt{-c}}s)(dx_1^2+dx_2^2)$$\nand\n$$I\\!I=\\frac{\\sinh ({\\sqrt{-c}}s)}{\\sqrt{-c}}(-dx^2_1+dx^2_2).$$\nIn terms of the new coordinates $u_1, u_2, u_3$, related to $s, x_1, x_2$ by\n$$\n\\frac{du_2}{ds}=\\frac{\\sqrt{-c}}{\\sinh(\\sqrt{-c}s)},\\,\\,\\,\\,u_1=\\sqrt{2}x_1\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,u_3=\\sqrt{2}x_2,\n$$\nthey become\n\\begin{equation} \\label{eq:formsb}I=\\frac{\\sinh^2 \\theta}{-2c}(du_1^2+2du_2^2+du_3^2)\\,\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,I\\!I=\\frac{\\sinh \\theta}{2\\sqrt{-c}}(-du_1^2+du_3^2),\\end{equation} \nwhere $\\theta(s)=\\sqrt{-c}s$\nsatisfies \n$$\\frac{d\\theta}{du_2}=\\sinh \\theta.$$\nIt follows from (\\ref{eq:formsb}) that $F$ is a minimal conformally flat hypersurface with three distinct principal curvatures,\none of which is zero.\n\n\n\\section{The proofs of Theorems \\ref{thm:cmc} and \\ref{thm:minimalcneq0}}\n\nFirst we derive a system of PDE's for new unknown functions associated to a conformally flat hypersurface $f\\colon M^3\\to \\mathbb{Q}^4(c)$ with three distinct principal curvatures under the assumption that \n$f$ has constant mean curvature.\n\n\n\n\\begin{proposition}\\label{flat.med.reduz.result.}\nLet $f\\colon M^3\\to \\mathbb{Q}^4(c)$ be a holonomic hypersurface with constant mean curvature $H$ whose associated pair $(v,V)$ satisfies $(\\ref{holo.flat.med.2})$. 
Set $$(\\alpha_1,\\alpha_2,\\alpha_3)=\\Big(\\frac{1}{v_2}\\frac{\\partial v_2}{\\partial u_1},\\frac{1}{v_3}\\frac{\\partial v_3}{\\partial u_2},\\frac{1}{v_1}\\frac{\\partial v_1}{\\partial u_3}\\Big).$$ \nThen $v_1, v_2, v_3, \\alpha_1, \\alpha_2, \\alpha_3$ satisfy the differential equations \n\\begin{equation} \\label{eq:e0}\\frac{\\partial v_1}{\\partial u_1}=\\frac{v_1}{v_2^4}(v_2^4+v_2^2v_3^2+v_3^4)\\alpha_1,\\,\\,\\,\\,\\,\\frac{\\partial v_2}{\\partial u_1}=v_2\\alpha_1,\\,\\,\\,\\,\\,\\frac{\\partial v_3}{\\partial u_1}=\\frac{v_3^5}{v_2^4}\\alpha_1,\\end{equation} \n\\begin{equation} \\label{eq:e0a}\\frac{\\partial v_1}{\\partial u_2}=\\frac{v_1^5}{v_3^4}\\alpha_2,\\,\\,\\,\\,\\,\\frac{\\partial v_2}{\\partial u_2}=\\frac{v_2}{v_3^4}(v_1^4-v_1^2v_3^2+v_3^4)\\alpha_2,\\,\\,\\,\\,\\,\\frac{\\partial v_3}{\\partial u_2}=v_3\\alpha_2,\\end{equation} \n\\begin{equation} \\label{eq:e0b}\\frac{\\partial v_1}{\\partial u_3}= v_1\\alpha_3,\\,\\,\\,\\,\\,\\frac{\\partial v_2}{\\partial u_3}=\\frac{v_2^5}{v_1^4}\\alpha_3,\\,\\,\\,\\,\\,\\frac{\\partial v_3}{\\partial u_3}=\\frac{v_3}{v_1^4}(v_1^4+v_1^2v_2^2+v_2^4)\\alpha_3,\\end{equation} \n\\begin{equation} \\label{eq:e1}\\begin{array}{l}\n{\\displaystyle \\frac{\\partial \\alpha_1}{\\partial u_1} = \\frac{1}{v_2^4}(3v_2^4-v_3^4)\\alpha_1^2 - \\frac{v_1^6}{v_3^8}(3v_1^2-2v_3^2)\\alpha_2^2 + \\frac{v_2^4}{v_1^2v_3^4}(3v_1^2+2v_2^2)\\alpha_3^2 } \\vspace{.17cm} \\\\ \n{\\displaystyle +\\ \\frac{1}{9v_3^4}(5v_1^4+2v_2^2v_3^2)- \\frac{v_1^2v_2^2}{v_3^2}c + \\frac{v_1v_2}{18v_3^3}(v_2^2+v_3^2-2v_1v_2v_3H)H,} \\end{array}\n\\end{equation} \n\\begin{equation} \\label{eq:e1a}\\frac{\\partial \\alpha_2}{\\partial u_1}=2 \\frac{v_1^2}{v_2^2}\\alpha_1\\alpha_2,\\,\\,\\,\\,\\,\\frac{\\partial \\alpha_3}{\\partial u_1}=\n2 \\frac{v_3^2}{v_2^4}(4v_3^2+v_1^2)\\alpha_1\\alpha_3,\\end{equation} \n\\begin{equation} \\label{eq:e2}\\begin{array}{l}\n{\\displaystyle \\frac{\\partial \\alpha_2}{\\partial u_2} = 
-\\frac{v_3^4}{v_1^4v_2^2}(3v_2^2 + 2v_3^2) \\alpha_1^2 -\\frac{1}{v_3^4}(v_1^4 - 3v_3^4)\\alpha_2^2 - \\frac{v_2^6}{v_1^8}(2v_1^2+3v_2^2)\\alpha_3^2 } \\vspace{.17cm} \\\\ \n{\\displaystyle -\\frac{1}{9v_1^4}(5v_2^4-2v_1^2v_3^2) + \\frac{v_2^2v_3^2}{v_1^2}c - \\frac{v_2v_3}{18v_1^3}(v_1^2-v_3^2-2v_1v_2v_3H)H,} \n\\end{array}\n\\end{equation} \n\\begin{equation} \\label{eq:e2a}\\frac{\\partial \\alpha_1}{\\partial u_2}=2 \\frac{v_1^2}{v_3^4}(4v_1^2- v_2^2)\\alpha_1\\alpha_2,\\,\\,\\,\\,\\,\\frac{\\partial \\alpha_3}{\\partial u_2}=2 \\frac{v_2^2}{v_3^2}\\alpha_2\\alpha_3,\\end{equation} \n\\begin{equation} \\label{eq:e2b}\\frac{\\partial \\alpha_1}{\\partial u_3}=-2 \\frac{v_3^2}{v_1^2}\\alpha_1\\alpha_3,\\,\\,\\,\\,\\,\\frac{\\partial \\alpha_2}{\\partial u_3}=2 \\frac{v_2^2}{v_1^4}(4v_2^2- v_3^2)\\alpha_2\\alpha_3\\end{equation} \nand\n\\begin{equation} \\label{eq:e3}\\begin{array}{l}\n{\\displaystyle \\frac{\\partial \\alpha_3}{\\partial u_3} = \\frac{v_3^6}{v_2^8}(2v_2^2+3v_3^2)\\alpha_1^2 + \\frac{v_1^4}{v_2^4v_3^2}(2v_1^2-3v_3^2)\\alpha_2^2 + \\frac{1}{v_1^4}(3v_1^4-v_2^4)\\alpha_3^2 } \\vspace{.17cm} \\\\ \n{\\displaystyle + \\frac{1}{9v_2^4}(5v_3^4+2v_1^2v_2^2) - \\frac{v_1^2v_3^2}{v_2^2}c - \\frac{v_1v_3}{18v_2^3}(v_1^2+v_2^2+2v_1v_2v_3H)H,} \n\\end{array}\n\\end{equation} \nas well as the algebraic relations\n\\begin{equation}\\label{equ.alg.f.e.}\n\\begin{array}{l}\n\\displaystyle{\\Big(\\frac{30v_1}{v_2v_3^9} F - \\frac{4v_1^4v_2^2}{v_3^6}(v_1^2-v_3^2)c + \\frac{v_1^3v_2}{18v_3^7} m_2 H \\Big)\\alpha_2 =0}, \\vspace{-.25cm} \\\\ \\\\\n\\displaystyle{\\Big( \\frac{30v_2}{v_1^9v_3}F + \\frac{4v_2^4v_3^2}{v_1^6}(v_1^2+v_2^2)c + \\frac{v_3v_2^3}{18v_1^7} m_3 H \\Big)\\alpha_3=0} , \\vspace{-.25cm} \\\\ \\\\\n\\displaystyle{\\Big( \\frac{30v_3}{v_1v_2^9} F - \\frac{4v_1^2v_3^4}{v_2^6}(v_2^2+v_3^2)c + \\frac{v_1v_3^3}{18v_2^7}m_1 H \\Big)\\alpha_1=0 ,}\n\\end{array}\n\\end{equation}\n where\n\\begin{equation*}\n\\begin{array}{rcl}\nm_1 &=& 
(v_2^2+4v_3^2)(4v_2^2+v_3^2)-8v_1v_2v_3(v_2^2+v_3^2)H,\\vspace{.2cm} \\\\\nm_2 &=& (v_1^2-4v_3^2)(4v_1^2-v_3^2)-8v_1v_2v_3(v_1^2-v_3^2)H, \\vspace{.2cm}\\\\\nm_3 &=& (v_1^2+4v_2^2)(4v_1^2+v_2^2)+8v_1v_2v_3(v_1^2+v_2^2)H, \\vspace{.2cm}\\\\\nF &=& \\displaystyle{\\frac{1}{27v_1v_2v_3}\\big[9v_1^2v_3^8(v_2^2+v_3^2)\\alpha_1^2 + 9v_1^8v_2^2(v_1^2-v_3^2)\\alpha_2^2 - 9v_2^8v_3^2(v_1^2+v_2^2)\\alpha_3^2} \\vspace{.2cm}\\\\\n && - \\ v_1^2v_2^2v_3^2(2v_1^2v_2^4-2v_1^2v_3^4-2v_2^2v_3^4-v_1^2v_2^2v_3^2)\\big].\n\\end{array}\n\\end{equation*}\n\\end{proposition}\n\\proof\nThe triple $(v,h,V),$ where $h_{ij}=\\frac{1}{v_i}\\frac{\\partial v_j}{\\partial u_i},$ satisfies the system of PDE's\n\\begin{equation}\\label{hol.flat.f.esp.}\n\\left\\{\\begin{array}{l}\\displaystyle{\n\\!\\!\\!(i) \\frac{\\partial v_i}{\\partial u_j}= h_{ji}v_j, \\,\\,\\,\\,\\,\\,\\, (ii) \\frac{\\partial h_{ij}}{\\partial u_i} + \\frac{\\partial h_{ji}}{\\partial u_j} +h_{ki}h_{kj}+ V_iV_j + cv_iv_j =0}, \\vspace{.18cm}\\\\\n\\!\\!\\! \\displaystyle{(iii) \\frac{\\partial h_{ik}}{\\partial u_j}=h_{ij}h_{jk}, \\,\\,\\,\\,\\,\\,\\, (iv) \\frac{\\partial V_i}{\\partial u_j}=h_{ji}V_j, \\hspace{.5cm} 1\\leq i\\neq j\\neq k\\neq i \\leq 3,\\vspace{.18cm}}\\\\\n\\!\\!\\! \\displaystyle{(v) \\delta_i\\frac{\\partial v_i}{\\partial u_i}+ \\delta_jh_{ij}v_j +\\delta_kh_{ik}v_k=0, \\, \\ \\ (vi) \\delta_i\\frac{\\partial V_i}{\\partial u_i}+ \\delta_jh_{ij}V_j +\\delta_kh_{ik}V_k=0.}\n\\end{array}\\right. 
\\!\\!\\!\\!\\!\\!\\!\n\\end{equation}\nEquations $(i)$, $(ii)$, $(iii)$ and $(iv)$ are due to the fact that $f$ is a holonomic hypersurface with $(v,V)$ as its associated pair, and $(v)$ and $(vi)$ follow by differentiating (\\ref{holo.flat}).\n\nUsing (\\ref{holo.flat.med.2}) and equations $(i)$, $(iv)$, $(v)$ and $(vi)$ in (\\ref{hol.flat.f.esp.}) one can show that\n\\begin{equation}\\label{rel.h-ki-kj}\n v_j^5h_{ki}=v_i^5h_{kj}, \\quad 1\\leq i\\neq j\\neq k\\neq i\\leq 3.\n\\end{equation}\nFrom $(i)$ and $(v)$ of (\\ref{hol.flat.f.esp.}), together with (\\ref{rel.h-ki-kj}), one obtains the formulae for the derivatives $\\frac{\\partial v_i}{\\partial u_j},$ $1\\leq i,j\\leq 3$. In a similar way, using $(i)$, $(iii)$ and $(v)$, together with (\\ref{rel.h-ki-kj}), one finds the derivatives $\\frac{\\partial \\alpha_i}{\\partial u_j},$ $1\\leq i\\neq j\\leq 3$. In order to compute $\\frac{\\partial \\alpha_i}{\\partial u_i},$ $1\\leq i\\leq 3$, we note that equation $(ii)$, together with (\\ref{rel.h-ki-kj}) and the remaining equations in (\\ref{hol.flat.f.esp.}), determines the system of linear equations \n\\begin{equation*}\nMP=-B\n\\end{equation*}\nin the variables $\\frac{\\partial \\alpha_i}{\\partial u_i},$ $1\\leq i\\leq 3$, where\n\\begin{displaymath}\nM= \\left( \\begin{array}{ccc}\n 9v_1^2v_2^4v_3^2 & \\frac{9v_1^8v_2^2}{v_3^2} & 0\\vspace{.17cm}\\\\\n \\frac{9v_1^2v_3^8}{v_2^2} & 0 & 9v_1^4v_2^2v_3^2 \\vspace{.17cm}\\\\\n 0 & 9v_1^2v_2^2v_3^4 & \\frac{9v_2^8v_3^2}{v_1^2}\n\\end{array}\\right)\\!\\!, \\quad P= \\left( \\begin{array}{c}\n\\frac{\\partial \\alpha_1}{\\partial u_1} \\vspace{.17cm}\\\\\n\\frac{\\partial \\alpha_2}{\\partial u_2} \\vspace{.17cm}\\\\\n\\frac{\\partial \\alpha_3}{\\partial u_3}\n\\end{array}\\right)\n\\end{displaymath}\nand\n\\begin{displaymath}\nB= \\left( \\begin{array}{r}\n- 9v_1^2v_3^4(v_2^2+v_3^2)\\alpha_1^2 + \\frac{9v_1^8v_2^2}{v_3^6}(4v_2^2+v_3^2)(v_1^2-v_3^2)\\alpha_2^2 + 9v_2^8\\alpha_3^2 \\\\- 
v_1^2v_2^2(2v_3^4-v_1^2v_2^2) + \\ 9v_1^4v_2^4v_3^2 c - v_1^3v_2^3v_3(v_1^2+v_2^2-v_1v_2v_3H)H. \\vspace{0.32cm}\\\\\n - \\frac{9v_1^2v_3^8}{v_2^6}(4v_1^2+v_2^2)(v_2^2+v_3^2)\\alpha_1^2 + 9v_1^8\\alpha_2^2 - 9v_2^4v_3^2(v_1^2+v_2^2)\\alpha_3^2\\\\ - v_1^2v_3^2(2v_2^4+v_1^2v_3^2) + \\ 9v_1^4v_2^2v_3^4 c + v_1^3v_2v_3^3(v_1^2-v_3^2+v_1v_2v_3H)H. \\vspace{0.32cm}\\\\\n 9v_3^8\\alpha_1^2 - 9v_1^4v_2^2(v_1^2-v_3^2)\\alpha_2^2 + \\frac{9v_2^8v_3^2}{v_1^6}(4v_3^2-v_1^2)(v_1^2+v_2^2)\\alpha_3^2 \\\\ - v_2^2v_3^2(2v_1^4-v_2^2v_3^2) + \\ 9v_1^2v_2^4v_3^4 c + v_1v_2^3v_3^3(v_2^2+v_3^2+v_1v_2v_3H)H.\n\\end{array}\\right)\\!\\!.\n\\end{displaymath}\nOne can check that \nthis system has a unique solution, given by (\\ref{eq:e1}), (\\ref{eq:e2}) and (\\ref{eq:e3}).\n\nFinally, computing the mixed derivatives $\\frac{\\partial^2 \\alpha_i}{\\partial u_j\\partial u_k} = \\frac{\\partial^2 \\alpha_i}{\\partial u_k\\partial u_j},$ $1\\leq i,k,j\\leq 3$, from (\\ref{hol.flat.f.esp.}) we obtain \n$$0=\\frac{\\partial^2 \\alpha_1}{\\partial u_2\\partial u_1} - \\frac{\\partial^2 \\alpha_1}{\\partial u_1\\partial u_2} = \\Big(\\frac{30v_1}{v_2v_3^9} F - \\frac{4v_1^4v_2^2}{v_3^6}(v_1^2-v_3^2)c + \\frac{v_1^3v_2}{18v_3^7} m_2 H \\Big)\\alpha_2, \n$$\n$$\n0=\\frac{\\partial^2 \\alpha_2}{\\partial u_3\\partial u_2} - \\frac{\\partial^2 \\alpha_2}{\\partial u_2\\partial u_3} = \\Big( \\frac{30v_2}{v_1^9v_3}F + \\frac{4v_2^4v_3^2}{v_1^6}(v_1^2+v_2^2)c + \\frac{v_3v_2^3}{18v_1^7} m_3 H \\Big)\\alpha_3 \n$$\nand\n$$\n 0=\\frac{\\partial^2 \\alpha_3}{\\partial u_1\\partial u_3} - \\frac{\\partial^2 \\alpha_3}{\\partial u_3\\partial u_1} = \\Big( \\frac{30v_3}{v_1v_2^9} F - \\frac{4v_1^2v_3^4}{v_2^6}(v_2^2+v_3^2)c + \\frac{v_1v_3^3}{18v_2^7}m_1 H \\Big)\\alpha_1.\\qed\n $$\n \n \n \n In the lemmata that follow, we assume the hypotheses of Proposition \\ref{flat.med.reduz.result.} to be satisfied and use the notation therein.\n\n\\begin{lemma} \\label{le:v1equalv3} If $v_1=v_3$ everywhere then 
$H=0$.\n\\end{lemma}\n\\proof By the assumption and the first equation in (\\ref{holo.flat.med.2}) we have $v_2=\\sqrt{2}v_1$. We obtain from any two of the equations in (\\ref{eq:e0}) that $\\alpha_1=0$, whereas any two of the equations in (\\ref{eq:e0b}) imply that $\\alpha_3=0$.\nThen (\\ref{eq:e1}) and (\\ref{eq:e3}) give\n$$\n\\begin{array}{l}\n18 \\alpha_2^2 + 4 v_1^2 H^2 - 3 \\sqrt{2}v_1 H + 36 v_1^2 c -18=0, \\vspace{0.17cm} \\\\\n18 \\alpha_2^2 + 4 v_1^2 H^2 + 3 \\sqrt{2}v_1 H + 36 v_1^2 c -18=0,\n\\end{array}\n$$\nwhich imply that $H=0$, since subtracting one equation from the other gives $6\\sqrt{2}v_1H=0$ and $v_1>0$. \\qed\n\n\n\\begin{lemma} \\label{le:alphaiszero} The functions $\\alpha_1, \\alpha_2, \\alpha_3$ cannot vanish simultaneously on any open subset of $M^3$. \n\\end{lemma}\n\\proof If $\\alpha_1, \\alpha_2, \\alpha_3$ all vanish on the open subset $U\\subset M^3$, then (\\ref{eq:e1}), (\\ref{eq:e2}) and (\\ref{eq:e3}) become \n\\begin{equation}\\label{cond.a1=a2=a3=0.1}\n\\begin{array}{l}\n2(5v_1^4+2v_2^2v_3^2) - 18v_1^2v_2^2v_3^2c + v_1v_2v_3(v_2^2+v_3^2-2v_1v_2v_3H)H=0,\n\\end{array}\n\\end{equation}\n\\begin{equation}\\label{cond.a1=a2=a3=0.2}\n\\begin{array}{l}\n2(5v_2^4-2v_1^2v_3^2) - 18v_1^2v_2^2v_3^2c + v_1v_2v_3(v_1^2-v_3^2-2v_1v_2v_3H)H=0,\n\\end{array}\n\\end{equation}\n\\begin{equation}\\label{cond.a1=a2=a3=0.3}\n\\begin{array}{l}\n2(5v_3^4+2v_1^2v_2^2) - 18v_1^2v_2^2v_3^2c - v_1v_2v_3(v_1^2+v_2^2+2v_1v_2v_3H)H=0. \n\\end{array}\n\\end{equation}\nComparing (\\ref{cond.a1=a2=a3=0.1}) with (\\ref{cond.a1=a2=a3=0.2}) and (\\ref{cond.a1=a2=a3=0.3}) yields, respectively, \n\\begin{equation}\\label{cond.a1=a2=a3=0.3.1}\n\\begin{array}{ll}\n\\displaystyle{H = \\frac{2(v_1^2+v_2^2)}{v_1v_2v_3} \\qquad {\\text{and}} \\qquad H=-\\frac{2(v_1^2-v_3^2)}{v_1v_2v_3}},\n\\end{array}\n\\end{equation}\nwhich is a contradiction: equating the two expressions gives $v_2^2=v_3^2-2v_1^2$, whereas the first equation in (\\ref{holo.flat.med.2}) gives $v_2^2=v_1^2+v_3^2$. \\qed \n\n\\begin{lemma} \\label{le:alpha1alpha3} There does not exist any open subset of $M^3$ where $v_1-v_3$ is nowhere vanishing and $\\alpha_1=0=\\alpha_3$. 
\n\\end{lemma}\n\\proof Assume that $\\alpha_1 = 0=\\alpha_3$ and that $v_1-v_3$ does not vanish on the open subset $U \\subset M^3$.\nBy Lemma \\ref{le:alphaiszero}, $\\alpha_2$ must be nonvanishing on an open dense subset $V\\subset U$. Then equations (\\ref{eq:e0}), (\\ref{eq:e0a}), (\\ref{eq:e0b}), (\\ref{eq:e1a}), (\\ref{eq:e2a}) and (\\ref{eq:e2b}) reduce to the following on $V$:\n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{\\frac{\\partial v_i}{\\partial u_1}=\\frac{\\partial v_i}{\\partial u_3}=\\frac{\\partial \\alpha_i}{\\partial u_j}=0}, \\ \\ i,j=1,2,3, \\ \\ \\ \\ i\\neq j, \\hspace{2.08cm}\n\\end{array}\n\\end{equation*}\n\\begin{equation}\\label{col.ex.f.e.1}\n\\begin{array}{l}\n\\displaystyle{\\frac{\\partial v_1}{\\partial u_2}=\\frac{v_1^5}{v_3^4}\\alpha_2, \\quad \n\\frac{\\partial v_2}{\\partial u_2}=\\frac{v_2}{v_3^4}(v_1^4-v_1^2v_3^2+v_3^4) \\alpha_2, \\quad\n\\frac{\\partial v_3}{\\partial u_2}=v_3\\alpha_2},\n\\end{array}\n\\end{equation}\nand, since $\\alpha_1=\\alpha_3=0,$ equations (\\ref{eq:e1}), (\\ref{eq:e2}) and (\\ref{eq:e3}) become, respectively, \n\\begin{equation}\\label{col.ex.f.e.2}\n\\begin{array}{l}\n\\displaystyle{\\frac{v_1^6}{v_3^8}(3v_1^2-2v_3^2)\\alpha_2^2 + \\frac{1}{9v_3^4}(5v_1^4+2v_2^2v_3^2) - \\frac{v_1^2v_2^2}{v_3^2}c} \\vspace{1ex}\\\\ \\hspace*{20ex}\\displaystyle{+\\frac{v_1v_2}{18v_3^3}(v_2^2+v_3^2-2v_1v_2v_3H)H =0},\n\\end{array}\n\\end{equation}\n\\begin{equation}\\label{col.ex.f.e.3}\n\\begin{array}{l}\n\\displaystyle{\\frac{\\partial \\alpha_2}{\\partial u_2} = -\\frac{1}{v_3^4}(v_1^4 - 3v_3^4)\\alpha_2^2 -\\frac{1}{9v_1^4}(5v_2^4-2v_1^2v_3^2) + \\frac{v_2^2v_3^2}{v_1^2}c} \\vspace{1ex}\\\\ \\hspace*{20ex}\\displaystyle{- \\frac{v_2v_3}{18v_1^3}(v_1^2-v_3^2-2v_1v_2v_3H)H},\n\\end{array}\n\\end{equation}\n\\begin{equation}\\label{col.ex.f.e.4}\n\\begin{array}{l}\n\\displaystyle{\\frac{v_1^4}{v_2^4v_3^2}(2v_1^2-3v_3^2)\\alpha_2^2 + \\frac{1}{9v_2^4}(5v_3^4+2v_1^2v_2^2) - \\frac{v_1^2v_3^2}{v_2^2}c} 
\\vspace{1ex}\\\\ \\hspace*{20ex} \\displaystyle{-\\frac{v_1v_3}{18v_2^3}(v_1^2+v_2^2+2v_1v_2v_3H)H=0}. \n\\end{array}\n\\end{equation}\nMultiplying (\\ref{col.ex.f.e.2}) and (\\ref{col.ex.f.e.4}) by $2v_3^8$ and $3v_1^2v_2^4v_3^2$, respectively, and subtracting one\nfrom the other, yield\n\\begin{equation}\n\\begin{array}{lcr}\\label{col.ex.f.e.5}\n\\displaystyle{\\alpha_2^2 = \\frac{1}{90 v_1^6}\\big[-2 v_1^2 v_2^2 v_3^2 (3 v_1^2 + 2 v_3^2)H^2 - v_1 v_2 v_3 (6 v_1^4 + v_1^2 v_3^2 - 4 v_3^4)H} \\vspace{0.15cm} \\\\ \\hspace*{5ex} \\displaystyle{- \\ 18 v_1^2 v_2^2 v_3^2 (3 v_1^2 + 2 v_3^2)c + 2 (6 v_1^6 + 16 v_1^4 v_3^2 + 19 v_1^2 v_3^4 + 4 v_3^6)\\big]}.\n \\end{array}\n\\end{equation} \nOn one hand, substituting (\\ref{col.ex.f.e.5}) in (\\ref{col.ex.f.e.3}) we obtain\n\\begin{equation}\\label{col.ex.f.e.6}\n\\begin{array}{l}\n\\displaystyle{ \\frac{\\partial \\alpha_2}{\\partial u_2} = \\frac{1}{90 v_1^6 v_3^4}\\big[2 v_1^2 v_2^2 v_3^2 (3 v_1^6 + 2 v_1^4 v_3^2 - 4 v_1^2 v_3^4 - 6 v_3^6)H^2} \\vspace{0.1cm} \\\\\\hspace*{9ex} \\displaystyle{+ \\ v_1 v_2 v_3 (6 v_1^8 + v_1^6 v_3^2 - 27 v_1^4 v_3^4 + 2 v_1^2 v_3^6 + 12 v_3^8)H} \\vspace{0.1cm} \\\\\\hspace*{9ex} \\displaystyle{+ \\ 18 v_1^2 v_2^2 v_3^2 (3 v_1^6 + 2 v_1^4 v_3^2 - 4 v_1^2 v_3^4 - 6 v_3^6)c} \\vspace{0.1cm} \\\\\\hspace*{9ex} \\displaystyle{- \\ 4 (v_1^4 - v_3^4) (3 v_1^6 + 8 v_1^4 v_3^2 + 16 v_1^2 v_3^4 + 6 v_3^6) \\big]}.\n\\end{array}\n\\end{equation}\nOn the other hand, differentiating (\\ref{col.ex.f.e.5}) with respect to $u_2$ and using (\\ref{col.ex.f.e.1}) gives\n\\begin{equation}\\label{col.ex.f.e.7}\n\\begin{array}{l}\n\\displaystyle{\\alpha_2\\frac{\\partial \\alpha_2}{\\partial u_2} = -\\frac{\\alpha_2}{180 v_1^6 v_2 v_3^3} \\big[v_2^2(-4 v_1^2 v_2 v_3^3 (5 v_1^4 - 4 v_1^2 v_3^2 - 6 v_3^4)H^2} \\vspace{0.1cm} \\\\\\hspace*{9ex} \\displaystyle{- \\ v_1 v_3^2 (8 v_1^6 - 27 v_1^4 v_3^2 - 8 v_1^2 v_3^4 + 24 v_3^6)H} \\vspace{1ex}\\\\\\hspace*{9ex}\\displaystyle{- \\ 36 v_1^2 v_2 v_3^3 (5 
v_1^4 - 4 v_1^2 v_3^2 - 6 v_3^4)c} \\vspace{1ex}\\\\\\hspace*{9ex}\\ \\displaystyle{+8 v_2 v_3 (v_1^2 - v_3^2) (v_2^2 + v_3^2) (8 v_1^2 + 3 v_3^2)\\big]}.\n\\end{array}\n\\end{equation}\nUsing that $\\alpha_2\\neq 0$ and $v_1-v_3\\neq 0$ on $V$, we obtain from (\\ref{col.ex.f.e.6}) and (\\ref{col.ex.f.e.7}) that\n\\begin{equation}\\label{col.ex.f.e.8}\n\\begin{array}{l}\n\\displaystyle{ H^2 + \\frac{4 v_1^6 - 2 v_1^4 v_3^2 - 9 v_1^2 v_3^4 + 4 v_3^6}{4 v_1^3 v_2 v_3 (v_1^2 - v_3^2)}H - \\frac{2 v_2^2 (v_1^2 - v_3^2)}{v_1^4 v_3^2} + 9 c=0.}\n\\end{array}\n\\end{equation}\nDifferentiating (\\ref{col.ex.f.e.8}) with respect to $u_2,$ and using (\\ref{col.ex.f.e.1}) we obtain\n\\begin{equation}\\label{col.ex.f.e.9}\n\\begin{array}{l}\nv_1 v_3 (22 v_1^6 - 11 v_1^4 v_3^2 - 4 v_1^2 v_3^4 + 8 v_3^6)H = -16 v_2^3 (v_1^2 - v_3^2)^2.\n\\end{array}\n\\end{equation}\nSince $v_1-v_3\\neq 0,$ we must have $(22 v_1^6 - 11 v_1^4 v_3^2 - 4 v_1^2 v_3^4 + 8 v_3^6)\\neq 0$ on $V.$ Therefore, (\\ref{col.ex.f.e.9}) implies that\n\\begin{equation}\\label{col.ex.f.e.10}\n\\begin{array}{l}\n\\displaystyle{H = -\\frac{16 v_2^3 (v_1^2 - v_3^2)^2}{v_1 v_3 (22 v_1^6 - 11 v_1^4 v_3^2 - 4 v_1^2 v_3^4 + 8 v_3^6)}}.\n\\end{array}\n\\end{equation}\nFinally, differentiating (\\ref{col.ex.f.e.10}) with respect to $u_2$ and using (\\ref{col.ex.f.e.1}) we obtain\n\\begin{equation}\\label{col.ex.f.e.11}\n\\begin{array}{l}\n\\displaystyle{0 = -\\frac{1680 v_1^5v_2^3 (v_1^2 - v_3^2)^2}{v_3 (22 v_1^6 - 11 v_1^4 v_3^2 - 4 v_1^2 v_3^4 + 8 v_3^6)^2}\\alpha_2,}\n\\end{array}\n\\end{equation}\nwhich is a contradiction, for the right-hand side of (\\ref{col.ex.f.e.11}) is nonzero.\\qed \n\n\\begin{lemma} \\label{le:alpha2alphaj} There does not exist any open subset of $M^3$ where $\\alpha_2=0=\\alpha_j$ for some $j\\in \\{1, 3\\}$. \n\\end{lemma}\n\\proof We argue for the case in which $j=1$, the other case being similar. So, assume that $\\alpha_1$ and $\\alpha_2$ vanish on an open subset $U \\subset M^3$. 
By Lemma \\ref{le:alphaiszero}, $\\alpha_3$ is nonzero on an open dense subset $V\\subset U$. Equations (\\ref{eq:e1}) and (\\ref{eq:e2}) can be rewritten as follows on $V$: \n\n\\begin{equation}\\label{con.ad.f.e.5}\n\\begin{array}{l}\n\\displaystyle{\\frac{v_2^4}{v_1^2v_3^4}(3v_1^2+2v_2^2)\\alpha_3^2 + \\frac{1}{9v_3^4}(5v_1^4+2v_2^2v_3^2) - \\frac{v_1^2v_2^2}{v_3^2}c }\\vspace{1ex}\\\\ \\hspace*{20ex}\\displaystyle{+ \\frac{v_1v_2}{18v_3^3}(v_2^2+v_3^2-2v_1v_2v_3H)H =0}, \\vspace{0.17cm} \\\\\n\n\\displaystyle{- \\frac{v_2^6}{v_1^8}(2v_1^2+3v_2^2)\\alpha_3^2 -\\frac{1}{9v_1^4}(5v_2^4-2v_1^2v_3^2) + \\frac{v_2^2v_3^2}{v_1^2}c} \\vspace{1ex}\\\\ \\hspace*{20ex}\\displaystyle{- \\frac{v_2v_3}{18v_1^3}(v_1^2-v_3^2-2v_1v_2v_3H)H =0}.\n\\end{array}\n\\end{equation}\nEliminating $\\alpha_3^2$ from the equations in (\\ref{con.ad.f.e.5}) yields\n\\begin{equation}\\label{con.ad.f.e.6}\n\\begin{array}{ll}\n\\displaystyle{H^2- \\frac{7 v_1^4 + 7 v_1^2 v_3^2 + 2 v_3^4}{2 v_1 v_2 v_3 (v_1^2 + v_2^2)} H - \\frac{ 2 v_3^2}{v_1^2 v_2^2} + 9 c =0}.\n\\end{array}\n\\end{equation}\nDifferentiating (\\ref{con.ad.f.e.6}) with respect to $u_3$ and using that $\\alpha_3\\neq 0$ we obtain\n\\begin{equation}\\label{con.ad.f.e.7}\n\\displaystyle{H=\\frac{8 v_3^3 (v_1^2 + v_2^2)}{v_1 v_2 (21 v_1^4 + 21 v_1^2 v_3^2 + 4 v_3^4)}}.\n\\end{equation}\nFinally, differentiating (\\ref{con.ad.f.e.7}) with respect to $u_3$ and using the fact that $H$ is constant we obtain \n\\begin{equation*}\n\\begin{array}{ll}\n\\displaystyle{0=\\frac{120 v_3^3 v_2 (v_1^2 + v_2^2) (7 v_1^4 + 7 v_1^2 v_3^2 + 2 v_3^4)}{v_1^3 (21 v_1^4 + 21 v_1^2 v_3^2 + 4 v_3^4)^2},}\n\\end{array}\n\\end{equation*}\nwhich is a contradiction. \\qed\n\n\\begin{lemma} \\label{le:alphaialphajneq0} If there exist $p\\in M^3$ and $1\\leq i\\neq j\\leq 3$ such that $\\alpha_i(p)\\neq 0\\neq \\alpha_j(p)$ then $H=0=c$. \n\\end{lemma}\n\\proof We give the proof for the case in which $i=1$ and $j=2$, the remaining ones being similar. 
Let $U\\subset M^3$ be an open neighborhood of $p$ where $\\alpha_1$ and $\\alpha_2$ are nowhere vanishing. Then (\\ref{equ.alg.f.e.}) gives\n$$\\frac{30v_3}{v_1v_2^9} F - \\frac{4v_1^2v_3^4}{v_2^6}(v_2^2+v_3^2)c + \\frac{v_1v_3^3}{18v_2^7}m_1 H=0 $$\nand \n$$ \\frac{30v_1}{v_2v_3^9} F - \\frac{4v_1^4v_2^2}{v_3^6}(v_1^2-v_3^2)c + \\frac{v_1^3v_2}{18v_3^7} m_2 H =0,\n$$\nor equivalently,\n\\begin{equation}\\label{con.ad.f.e.2}\n\\left\\{\\begin{array}{l}\nF=-\\frac{1}{540}v_1^2v_2^2v_3^2[Hm_1-72cv_1v_2v_3(v_2^2+v_3^2)], \\vspace{0.18cm} \\\\\nF=-\\frac{1}{540}v_1^2v_2^2v_3^2[Hm_2-72cv_1v_2v_3(v_1^2-v_3^2)].\n\\end{array}\\right.\n\\end{equation}\nSubtracting one of the equations in (\\ref{con.ad.f.e.2}) from the other we obtain \n\\begin{equation}\\label{con.ad.f.e.3}\n\\begin{array}{lll}\n\\displaystyle{H^2-\\frac{7(v_1^2 \\ + \\ v_2^2)}{8v_1v_2v_3}H+9c=0}.\n\\end{array}\n\\end{equation}\nDifferentiating (\\ref{con.ad.f.e.3}) with respect to $u_1$ we obtain \n\\begin{equation}\\label{con.ad.f.e.4}\n\\begin{array}{lll}\n\\displaystyle{\\frac{21v_3^3}{8v_1v_2^3}H\\alpha_1=0}.\n\\end{array}\n\\end{equation}\nSince $\\alpha_1\\neq 0$, equation (\\ref{con.ad.f.e.4}) implies that $H=0$, and hence $c=0$ by (\\ref{con.ad.f.e.3}). \\vspace{1ex}\\qed\n\n\n\n\\begin{lemma} \\label{le:v1neqv3} If $v_1\\neq v_3$ at some point of $M^3$ then $H=0=c$. \n\\end{lemma}\n\\proof Assume that $v_1(p_0)\\neq v_3(p_0)$ for some $p_0\\in M^3$, and hence that $v_1\\neq v_3$ on some open neighborhood $U\\subset M^3$\nof $p_0$. By Lemma \\ref{le:alphaiszero}, there exist an open subset $U'\\subset U$ and $i\\in\\{1,2,3\\}$ such that $\\alpha_i(p)\\neq 0$ for all $p\\in U'$. It follows from Lemma \\ref{le:alpha1alpha3} and Lemma \\ref{le:alpha2alphaj} that there exist $q\\in U'$ and $j\\in\\{1,2,3\\}, \\ j\\neq i,$ such that $\\alpha_j(q)\\neq 0$. 
Thus there exists $q\\in M^3$ such that $\\alpha_i(q)\\neq 0$ and $\\alpha_j(q)\\neq 0, \\ i\\neq j,$ and the conclusion follows from Lemma \\ref{le:alphaialphajneq0}. \\vspace{2ex}\\qed\n\n\\noindent \\emph{Proof of Theorem $\\ref{thm:cmc}$:} Follows immediately from Lemma \\ref{le:v1equalv3} and Lemma \\ref{le:v1neqv3}.\\vspace{2ex}\\qed\n\n\n\n\n\n\n\n\n\\noindent \\emph{Proof of Theorem $\\ref{thm:minimalcneq0}$:} \nGiven $p\\in M^3$, let $u_1, u_2, u_3$ be local principal coordinates on an open neighborhood $U$ of $p$ as in \nCorollary \\ref{le:asspair}. It follows from Lemma \\ref{le:v1neqv3} that the associated pair $(v,V)$ satisfies $v_1=v_3$ on $U$. Thus $\\lambda_2$ vanishes on $U$, and hence everywhere on $M^3$ by analyticity. The statement is now a consequence of the next proposition. \\qed\n\n\\begin{proposition}\\label{thm:minimalpczero} Let $f\\colon M^{3} \\to \\mathbb{Q}^{4}(c)$ be a conformally flat hypersurface with three distinct principal curvatures. If one of the principal curvatures is everywhere zero, then either $c=0$ and $f$ is locally a cylinder over a surface $g\\colon M^2(\\bar c)\\to {\\mathbb R}^3$ with constant Gauss curvature $\\bar c\\neq 0$ or $f$ is locally a generalized cone over a surface $g\\colon M^2(\\bar c)\\to \\mathbb{Q}^{3}(\\tilde c)$ with constant Gauss curvature $\\bar c\\neq \\tilde c$ in an umbilical hypersurface $\\mathbb{Q}^{3}(\\tilde c)\\subset \\mathbb{Q}^4(c)$, $\\tilde c\\geq c$, with $\\tilde c>0$ if $c=0$. If, in addition, $f$ is minimal, then $f(M^3)$ is an open subset of a generalized cone over a Clifford torus in an umbilical hypersurface $\\mathbb{Q}^{3}(\\tilde c)\\subset \\mathbb{Q}^4(c)$, $\\tilde c>0$, with $\\tilde c\\geq c$ if $c>0$.\n\\end{proposition}\n\\proof Let $e_1, e_2, e_3$\ndenote local unit vector fields which are principal directions corresponding to the distinct\nprincipal curvatures $\\lambda_1, \\lambda_2, \\lambda_3$, respectively. 
Then conformal flatness of $M^3$ is\nequivalent to the relations\n\\begin{equation} \\label{eq:uno}\n\\<\\nabla_{e_i}e_j,e_k\\>=0\n\\end{equation} \nand\n\\begin{equation} \\label{eq:dos}\n(\\lambda_j-\\lambda_k)e_i(\\lambda_i)+(\\lambda_i-\\lambda_k)e_i(\\lambda_j)+ (\\lambda_j-\\lambda_i)e_i(\\lambda_k)=0,\n\\end{equation} \nfor all distinct indices $i, j, k$ (see \\cite{la}, p. 84). It follows from Codazzi's equation and (\\ref{eq:uno}) that\n\\begin{equation} \\label{eq:tres}\n\\nabla_{e_i}e_i=\\sum_{j\\neq i}(\\lambda_i-\\lambda_j)^{-1}e_j(\\lambda_i)e_j.\n\\end{equation} \nIf, say, $\\lambda_2=0$, then equation\n(\\ref{eq:dos}) yields\n$$\n\\lambda_3^{-1}e_2(\\lambda_3)=\\lambda_1^{-1}e_2(\\lambda_1):=\\varphi,\n$$\nhence the distribution $\\{e_2\\}^\\perp$ spanned by $e_1$ and $e_3$ is umbilical in $M^3$ by\n(\\ref{eq:tres}).\n\nIf $\\varphi$ is identically zero on $M^3$, then $\\{e_2\\}^\\perp$ is a totally geodesic distribution, and hence \n$M^3$ is locally isometric to a Riemannian product $I\\times M^2$ by the local de Rham theorem. Since $M^3$ is conformally flat, it follows that $M^2$ must have constant Gauss curvature. Moreover, \nby Molzan's theorem (see Corollary $17$ in \\cite{nol}), $f$ is locally an extrinsic product of isometric immersions of the factors, \nwhich is not possible if $c\\neq 0$ because $f$ has three distinct principal curvatures. Therefore $c=0$ and $f$ is locally a cylinder over a surface with constant Gauss curvature in ${\\mathbb R}^3$. \n\nIf $\\varphi$ is not identically zero on $M^3$, given $x\\in M^3$ let $\\sigma$ be the leaf of $\\{e_2\\}^\\perp$ containing $x$ and let $j\\colon\\sigma\\to M^3$ be the inclusion of $\\sigma$ into $M^3$. Denote $\\tilde g=f\\circ j$. 
\nThen the normal bundle $N_{\\tilde g}\\sigma$ of $\\tilde g$ splits as \n\\begin{equation*}\nN_{\\tilde g}\\sigma=f_*N_j\\sigma\\oplus N_fM=\\mbox{span}\\{f_*e_2\\}\\oplus N_fM\n\\end{equation*}\nand\n\\bea\n\\tilde \\nabla_X f_*e_2\\!\\!\\!&=&\\!\\!\\!f_*\\nabla_Xe_2+\\alpha^f(j_*X,e_2)\\\\\n\\!\\!\\!&=&\\!\\!\\!-\\varphi \\tilde{g}_*X\n\\eea\nfor all $X\\in \\mathfrak{X}(\\sigma)$, where $\\tilde \\nabla$ is the induced \nconnection on ${\\tilde g}^*T\\mathbb{Q}^4(c)$. It follows that the normal vector field $\\eta=f_*e_2$ of $\\tilde g$ \nis parallel with respect \nto the normal connection of $\\tilde g$, and that the shape operator of $\\tilde g$ with \nrespect to $\\eta$ is \ngiven by \n$A^{\\tilde g}_\\eta=\\varphi I$. \nIt is a standard fact that this implies that $\\tilde g(\\sigma)$ is contained in an umbilical hypersurface $\\mathbb{Q}^{3}(\\tilde c)\\subset \\mathbb{Q}^4(c)$, $\\tilde c\\geq c$, that is, there exist an umbilical hypersurface $i\\colon \\mathbb{Q}^{3}(\\tilde c)\\to \\mathbb{Q}^4(c)$ and an isometric immersion \n$g\\colon M^2=\\sigma\\to\\mathbb{Q}^3({\\tilde c})$ such that $\\tilde g=i\\circ g$. \nMoreover, since at any $y\\in\\sigma$ the fiber $L(y)\\!=\\!\\mbox{span}\\{\\eta(y)\\}$ \ncoincides with the normal space of $i$ at $g(y)$, it follows that $f$ coincides with \nthe generalized cone over $g$ in a neighborhood of $x$. \n\nIn particular, $M^3$ is a warped product $I\\times_{\\rho}M^2$, and since $M^3$ is conformally flat, $M^2$ must have constant Gauss curvature. 
If, in addition, $f$ is minimal, then $g$ must be a Clifford torus in an umbilical hypersurface $\\mathbb{Q}^{3}(\\tilde c)\\subset \\mathbb{Q}^4(c)$, $\\tilde c>0$, with $\\tilde c\\geq c$ if $c>0$, and the preceding argument shows that $f(M^3)$ is an open subset of a generalized cone over~$g$.\\qed\n\n\\section{Proof of Theorem \\ref{thm:minimalceq0}}\nFirst we rewrite Proposition \\ref{flat.med.reduz.result.} when $H=0=c$ and state a converse to it.\n\n\\begin{proposition}\\label{flat.min.reduz.result.}\nLet $f\\colon M^3\\to {\\mathbb R}^4$ be a holonomic hypersurface whose associated pair $(v,V)$ satisfies \n\\begin{equation} \\label{eq:vis}v_2^2=v_1^2+v_3^2\\end{equation} and \n\\begin{equation}\\label{holo.flat.min.2}\n\\begin{array}{l}\n\\displaystyle{V_1=-\\frac{1}{3}\\Big(\\frac{v_2}{v_3}+\\frac{v_3}{v_2}\\Big)},\\quad\n\\displaystyle{V_2=-\\frac{1}{3}\\Big(\\frac{v_1}{v_3}-\\frac{v_3}{v_1}\\Big)}, \\quad\n\\displaystyle{V_3=\\frac{1}{3}\\Big(\\frac{v_1}{v_2}+\\frac{v_2}{v_1}\\Big)}.\n\\end{array}\n\\end{equation}\n Set $$\\alpha=(\\alpha_1,\\alpha_2,\\alpha_3)=\\Big(\\frac{1}{v_2}\\frac{\\partial v_2}{\\partial u_1},\\frac{1}{v_3}\\frac{\\partial v_3}{\\partial u_2},\\frac{1}{v_1}\\frac{\\partial v_1}{\\partial u_3}\\Big).$$\n Then $\\phi=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$ satisfies the system of PDE's \n\\begin{equation}\\label{flat.min.reduz.}\n\\left\\{\\begin{array}{l}\n\\displaystyle{\\frac{\\partial \\phi}{\\partial u_1}=\\Big(\\frac{\\partial v_1}{\\partial u_1},v_2\\alpha_1, \\frac{v_3^5}{v_2^4}\\alpha_1,\\frac{\\partial\\alpha_1}{\\partial u_1},2 \\frac{v_1^2}{v_2^2}\\alpha_1\\alpha_2,2 \\frac{v_3^2}{v_2^4}(4v_3^2+v_1^2)\\alpha_1\\alpha_3 \\Big)}, \\vspace{.25cm}\\\\\n\\displaystyle{\\frac{\\partial \\phi}{\\partial u_2}=\\Big( \\frac{v_1^5}{v_3^4}\\alpha_2, \\frac{\\partial v_2}{\\partial u_2}, v_3\\alpha_2, 2 \\frac{v_1^2}{v_3^4}(4v_1^2- v_2^2)\\alpha_1\\alpha_2,\\frac{\\partial\\alpha_2}{\\partial u_2},2 
\\frac{v_2^2}{v_3^2}\\alpha_2\\alpha_3 \\Big)},\\vspace{.25cm}\\\\\n\\displaystyle{\\frac{\\partial \\phi}{\\partial u_3}=\\Big( v_1\\alpha_3,\\frac{v_2^5}{v_1^4}\\alpha_3, \\frac{\\partial v_3}{\\partial u_3}, -2 \\frac{v_3^2}{v_1^2}\\alpha_1\\alpha_3, 2 \\frac{v_2^2}{v_1^4}(4v_2^2- v_3^2)\\alpha_2\\alpha_3,\\frac{\\partial\\alpha_3}{\\partial u_3} \\Big),}\n\\end{array}\\right.\n\\end{equation}\nwhere\n$$\n\\begin{array}{l}\n\\displaystyle \\frac{\\partial v_1}{\\partial u_1}=\\frac{v_1}{v_2^4}(v_2^4+v_2^2v_3^2+v_3^4)\\alpha_1, \\;\\;\\;\n\\displaystyle \\frac{\\partial v_2}{\\partial u_2}=\\frac{v_2}{v_3^4}(v_1^4-v_1^2v_3^2+v_3^4)\\alpha_2, \\vspace{.2cm}\\\\\n\\displaystyle \\frac{\\partial v_3}{\\partial u_3}=\\frac{v_3}{v_1^4}(v_1^4+v_1^2v_2^2+v_2^4)\\alpha_3, \\vspace{.2cm}\\\\\n\\displaystyle{\\frac{\\partial \\alpha_1}{\\partial u_1} = \\frac{1}{v_2^4}(3v_2^4-v_3^4)\\alpha_1^2 - \\frac{v_1^6}{v_3^8}(3v_1^2-2v_3^2)\\alpha_2^2 + \\frac{v_2^4}{v_1^2v_3^4}(3v_1^2+2v_2^2)\\alpha_3^2} \\vspace{1ex}\\\\\\hspace*{6ex} \\displaystyle{+ \\frac{1}{9v_3^4}(5v_1^4+2v_2^2v_3^2)},\\vspace{.2cm}\\\\\n\\displaystyle{\\frac{\\partial \\alpha_2}{\\partial u_2} =-\\frac{v_3^4}{v_1^4v_2^2}(3v_2^2 + 2v_3^2) \\alpha_1^2 -\\frac{1}{v_3^4}(v_1^4 - 3v_3^4)\\alpha_2^2 - \\frac{v_2^6}{v_1^8}(2v_1^2+3v_2^2)\\alpha_3^2} \\vspace{1ex}\\\\\\hspace*{6ex}\\displaystyle{-\\frac{1}{9v_1^4}(5v_2^4-2v_1^2v_3^2)},\\vspace{.2cm}\\\\\n\\displaystyle{\\frac{\\partial \\alpha_3}{\\partial u_3} = \\frac{v_3^6}{v_2^8}(2v_2^2+3v_3^2)\\alpha_1^2 + \\frac{v_1^4}{v_2^4v_3^2}(2v_1^2-3v_3^2)\\alpha_2^2 + \\frac{1}{v_1^4}(3v_1^4-v_2^4)\\alpha_3^2} \\vspace{1ex}\\\\\\hspace*{6ex}\\displaystyle{+ \\frac{1}{9v_2^4}(5v_3^4+2v_1^2v_2^2)},\n\\end{array}\n$$\nas well as the algebraic equation \n\\begin{equation}\\label{equ.chave}\n\\begin{array}{rcr}\n9v_1^2v_3^8(v_2^2+v_3^2)\\alpha_1^2 + 9v_1^8v_2^2(v_1^2-v_3^2)\\alpha_2^2 - 9v_2^8v_3^2(v_1^2+v_2^2)\\alpha_3^2 \\vspace{.2cm}\\\\\n - \\ 
v_1^2v_2^2v_3^2(2v_1^2v_2^4-2v_1^2v_3^4-2v_2^2v_3^4-v_1^2v_2^2v_3^2) = 0.\n\\end{array}\n\\end{equation}\nConversely, if $\\phi=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$ is a solution of $(\\ref{flat.min.reduz.})$ satisfying (\\ref{eq:vis}) on an open simply-connected subset $U\\subset {\\mathbb R}^3$, then $\\phi$ satisfies (\\ref{equ.chave}) and the triple $(v, h, V)$, where $v=(v_1,v_2, v_3)$, $V=(V_1, V_2, V_3)$ is given by $(\\ref{holo.flat.min.2})$ and $h=(h_{ij})$, with $h_{ij}=\\frac{1}{v_i}\\frac{\\partial v_j}{\\partial u_i}$, $1\\leq i\\neq j\\leq 3$, satisfies (\\ref{sistema-hol}), and hence gives rise to a holonomic hypersurface $f\\colon U\\to {\\mathbb R}^4$ whose associated pair $(v,V)$ satisfies (\\ref{eq:vis}) and (\\ref{holo.flat.min.2}). \n\\end{proposition}\n\nIn view of Corollary \\ref{le:asspair} and Proposition \\ref{flat.min.reduz.result.}, minimal conformally flat hypersurfaces of ${\\mathbb R}^4$ are in correspondence with solutions $\\phi=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$ of (\\ref{flat.min.reduz.}) satisfying (\\ref{eq:vis}) and (\\ref{equ.chave}). We shall prove that such solutions are, in turn, in correspondence with the leaves of a foliation of codimension one on the algebraic variety constructed in the next result. 
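Although not part of the original argument, the compatibility of (\ref{eq:vis}) with system (\ref{flat.min.reduz.}) can be checked symbolically: along each coordinate direction, the derivative of $v_2^2-v_1^2-v_3^2$ vanishes once (\ref{eq:vis}) is imposed, so the constraint is preserved by the flow. A minimal sympy sketch (illustrative only, using the $\partial v_j/\partial u_i$ read off from the system):

```python
import sympy as sp

v1, v2, v3 = sp.symbols('v_1 v_2 v_3', positive=True)
a1, a2, a3 = sp.symbols('alpha_1 alpha_2 alpha_3', real=True)
v = [v1, v2, v3]

# Right-hand sides of dv_j/du_i read off from system (flat.min.reduz.)
dv_du1 = [v1*(v2**4 + v2**2*v3**2 + v3**4)/v2**4 * a1, v2*a1, v3**5/v2**4 * a1]
dv_du2 = [v1**5/v3**4 * a2, v2*(v1**4 - v1**2*v3**2 + v3**4)/v3**4 * a2, v3*a2]
dv_du3 = [v1*a3, v2**5/v1**4 * a3, v3*(v1**4 + v1**2*v2**2 + v2**4)/v1**4 * a3]

G = v2**2 - v1**2 - v3**2  # the constraint (eq:vis)

for rhs in (dv_du1, dv_du2, dv_du3):
    dG = sum(sp.diff(G, v[j]) * rhs[j] for j in range(3))
    # the directional derivative of G vanishes modulo v2^2 = v1^2 + v3^2
    assert sp.simplify(dG.subs(v2, sp.sqrt(v1**2 + v3**2))) == 0
```

In the $u_2$ direction, for instance, the cancellation reduces to the sum-of-cubes identity $(v_1^2+v_3^2)(v_1^4-v_1^2v_3^2+v_3^4)=v_1^6+v_3^6$.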
\n\n\n\\begin{proposition}\\label{prop.chave} \nDefine $G,F:{\\mathbb R}^6={\\mathbb R}^3\\times {\\mathbb R}^3\\to{\\mathbb R}$ by \n$$G(x,y)= x_2^2-x_1^2-x_3^2$$\n and\n\\begin{equation*}\n\\begin{array}{r}\nF(x,y)= 9x_1^2x_3^8(x_2^2+x_3^2)y_1^2 + 9x_1^8x_2^2(x_1^2-x_3^2)y_2^2 - 9x_2^8x_3^2(x_1^2+x_2^2)y_3^2 \\vspace{.2cm}\\\\\n - x_1^2x_2^2x_3^2(2x_1^2x_2^4-2x_1^2x_3^4-2x_2^2x_3^4-x_1^2x_2^2x_3^2).\n\\end{array}\n\\end{equation*}\nLet\n$M^4:=F^{-1}(0)\\cap G^{-1}(0)\\cap\\{(x,y)\\in {\\mathbb R}^6;x_1>0,x_2>0,x_3>0\\,\\,\\mbox{and}\\,\\,y\\neq 0\\}$\nand let $\\ell_{\\pm}$ be the half lines in $M^4$ given by $$\\ell_{\\pm}=\\{(x,y)\\in M^4\\;:\\; x=s(1, \\sqrt{2},1)\\,\\,\\mbox{for some $s>0$} \\,\\,\\mbox{and}\\,\\, y=(0,\\pm 1,0)\\}.$$ Then $\\tilde{M}^4=M^4\\setminus (\\ell_-\\cup \\ell_+)$ is a regular submanifold of ${\\mathbb R}^6$ and $\\ell_-\\cup \\ell_+$ is the singular set of $M^4$.\n\\end{proposition}\n\\proof If $p\\in M^4$ we have $\\nabla G(p)=\\big(-2x_1,2x_2,-2x_3,0,0,0\\big)$, while the components of $\\nabla F(p)$ are given by \n$$x_1\\frac{\\partial F}{\\partial x_1}(p)= 18x_1^8x_2^2(4x_1^2-3x_3^2)y_2^2 + 18x_2^{10}x_3^2y_3^2-2x_1^4x_2^2x_3^2(2x_2^4-x_2^2x_3^2-2x_3^4),$$\n$$x_2\\frac{\\partial F}{\\partial x_2}(p)=-18x_1^2x_3^{10}y_1^2-18x_2^8x_3^2(3x_1^2+4x_2^2)y_3^2 -2x_1^2x_2^4x_3^2(4x_1^2x_2^2-x_1^2x_3^2-2x_3^4),$$\n$$x_3\\frac{\\partial F}{\\partial x_3}(p)= 18x_1^2x_3^8 (3x_2^2+4x_3^2)y_1^2 - 18x_1^{10}x_2^2y_2^2 +2x_1^2x_2^2x_3^4(x_1^2x_2^2+4x_1^2x_3^2+4x_2^2x_3^2),$$\n$$\\frac{\\partial F}{\\partial y_1}(p)= 18x_1^2x_3^8(x_2^2+x_3^2)y_1,\\,\\,\\,\\,\\,\n\\frac{\\partial F}{\\partial y_2}(p)=18x_1^8 x_2^2(x_1^2-x_3^2)y_2$$\nand\n$$\\frac{\\partial F}{\\partial y_3}(p)=-18x_2^8(x_1^2+x_2^2)x_3^2y_3.$$\nThat $M^4\\setminus (\\ell_+\\cup \\ell_-)$ is a smooth submanifold of ${\\mathbb R}^6$ and $\\ell_-\\cup \\ell_+$ is the singular set of $M^4$ is a consequence of the next two facts.\\vspace{1ex}\n\n\\noindent {\\bf Fact 1:}\n$\\nabla 
F(p)\\neq 0$ for all $p\\in M^4.$ \n\\proof If $\\nabla F(p)=0$ at $p=(x_1,x_2, x_3, y_1, y_2, y_3)$ then from $\\frac{\\partial F}{\\partial y_1}(p)=0$ it follows that $y_1=0$, whereas \n$\\frac{\\partial F}{\\partial y_3}(p)=0$ implies that $y_3=0$. Thus $y_2\\neq 0$, and hence $x_3=x_1$ from $\\frac{\\partial F}{\\partial y_2}(p)=0$. Therefore $x_2=\\sqrt{2}x_1,$ and then $0=\\frac{\\partial F}{\\partial x_2}(p)=-20\\sqrt{2}x_1^{11},$ which contradicts the fact that $x_1>0$. \\vspace{1ex} \\\\\n\\noindent {\\bf Fact 2:} The subset $\\{p\\in M^4\\,:\\,\\nabla F(p)=a\\nabla G(p)\\,\\,\\,\\mbox{for some}\\,\\,\\,a\\in {\\mathbb R}-\\{0\\}\\}$ coincides with $\\ell_{-}\\cup \\ell_{+}.$ \n\\proof Assume that \n\\begin{equation} \\label{eq:nFnG}\\nabla F(p)=a\\nabla G(p)\\end{equation} for some $a\\in {\\mathbb R}-\\{0\\}$. Equation (\\ref{eq:nFnG}) gives us six equations, the last three of which yield $y_1=y_3=0$ and $x_3=x_1$. Since $x_2^2=x_1^2+x_3^2,$ we obtain that $x_2=\\sqrt{2}x_1.$ Using this and the second of such equations we obtain that $a=-10x_1^{10}.$ Finally, the first one implies that $y_2^2=1$. 
\\vspace{1ex} \\qed\n\n\\begin{proposition}\\label{prop.chaveb} \nLet $X_1,X_2,X_3\\colon M^4\\to{\\mathbb R}^6$ be defined by \n\\begin{equation*}\n\\begin{array}{lcl}\n\\displaystyle{X_1(p)=\\frac{1}{x_2^4}\\Big((x_2^4+x_2^2x_3^2+x_3^4)x_1y_1,x_2^5y_1, x_3^5y_1,x_2^4 A_1(p)},\\vspace{1ex}\\\\\\hspace*{13ex} 2 x_1^2x_2^2y_1y_2,(8x_3^2+2x_1^2)x_3^2y_1y_3\\Big), \\vspace{.2cm} \\\\\n\\displaystyle{X_2(p)=\\frac{1}{x_3^4}\\Big( x_1^5y_2,(x_1^4-x_1^2x_3^2+x_3^4)x_2y_2,x_3^5y_2} ,\\vspace{1ex}\\\\\\hspace*{13ex}(8x_1^2- 2x_2^2)x_1^2y_1y_2, x_3^4A_2(p),2 x_2^2x_3^2y_2y_3 \\Big), \\vspace{.2cm} \\\\\n\\displaystyle{X_3(p)=\\frac{1}{x_1^4}\\Big( x_1^5y_3,x_2^5y_3, (x_1^4+x_1^2x_2^2+x_2^4)x_3y_3,-2x_1^2x_3^2y_1y_3},\\vspace{1ex}\\\\\\hspace*{13ex}(8x_2^2-2 x_3^2)x_2^2y_2y_3,x_1^4A_3(p) \\Big),\n\\end{array}\n\\end{equation*}\n where \n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{A_1(p) = \\frac{1}{x_2^4}(3x_2^4-x_3^4)y_1^2 - \\frac{x_1^6}{x_3^8}(3x_1^2-2x_3^2)y_2^2 + \\frac{x_2^4}{x_1^2x_3^4}(3x_1^2+2x_2^2)y_3^2} \\vspace{1ex}\\\\\\hspace*{10ex} \\displaystyle{+ \\frac{1}{9x_3^4}(5x_1^4+2x_2^2x_3^2)}, \n\\end{array}\n\\end{equation*}\n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{A_2(p) =-\\frac{x_3^4}{x_1^4x_2^2}(3x_2^2 + 2x_3^2) y_1^2 -\\frac{1}{x_3^4}(x_1^4 - 3x_3^4)y_2^2 - \\frac{x_2^6}{x_1^8}(2x_1^2+3x_2^2)y_3^2} \\vspace{1ex}\\\\\\hspace*{10ex}\\displaystyle{-\\frac{1}{9x_1^4}(5x_2^4-2x_1^2x_3^2)},\n\\end{array}\n\\end{equation*}\n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{A_3(p) = \\frac{x_3^6}{x_2^8}(2x_2^2+3x_3^2)y_1^2 + \\frac{x_1^4}{x_2^4x_3^2}(2x_1^2-3x_3^2)y_2^2 + \\frac{1}{x_1^4}(3x_1^4-x_2^4)y_3^2} \\vspace{1ex}\\\\\\hspace*{10ex}\\displaystyle{+ \\frac{1}{9x_2^4}(5x_3^4+2x_1^2x_2^2)}.\n\\end{array}\n\\end{equation*}\nThen the following assertions hold:\n\\begin{itemize}\n\\item[(i)] $X_1(p),X_2(p),X_3(p)$ are linearly independent for all $p\\in~\\tilde M^4$.\n\\item[(ii)] $\\{p\\in M^4;X_1(p)=0\\}=\\ell_-\\cup \\ell_+=\\{p\\in 
M^4;X_3(p)=0\\}$.\n\\item[(iii)] The vector fields $X_1,X_2,X_3$ are everywhere tangent to $\\tilde{M}^4$ and the curves \n $\\gamma_\\pm\\colon {\\mathbb R}\\to {\\mathbb R}^6$ given by $\\gamma_\\pm(t)=(e^t,\\sqrt{2}e^t,e^t,0,\\pm 1,0)$,\n are integral curves of $X_2$ with $\\gamma_\\pm({\\mathbb R})=\\ell_\\pm$.\n \\item[(iv)] $[X_i, X_j]=0$ on $\\tilde{M}^4$ for all $1\\leq i\\neq j\\leq 3$.\n \\end{itemize}\n \\end{proposition}\n \\proof First notice that $X_2(p)=0$ if and only if $y_2=0$ and $A_2(p)=0$. Since $A_2(p)<0$ whenever $y_2=0$, it follows that\n $X_2(p)\\neq 0$ for all $p~\\in~M^4$. \n\nNow observe that $X_1(p)=0$ if and only if $y_1=0$ and $A_1(p)=0$. Thus, if $p=(x_1, x_2, x_3, y_1, y_2, y_3)$ is such that $X_1(p)=0$ then \n\\begin{equation*}\n\\left\\{ \\begin{array}{l}\nay_2^2+by_3^2 =c,\\vspace{.2cm}\\\\\ndy_2^2+ey_3^2 =f,\\vspace{.2cm}\\\\\nx_2^2=x_1^2+x_3^2,\n\\end{array}\\right.\n\\end{equation*}\n where\n$$a=9x_1^8x_2^4(3x_1^2-2x_3^2),\\,\\,\\,\\,b=-9x_2^8x_3^4(3x_1^2+2x_2^2),\\,\\,\\,\\, c= x_1^2x_2^4x_3^4(5x_1^4+2x_2^2x_3^2),$$\n$$ d=9x_1^8x_2^2(x_1^2-x_3^2),\\,\\,\\,\\, e=-9x_2^8x_3^2(x_1^2+x_2^2)$$\nand \n$$ f= x_1^2x_2^2x_3^2(2x_1^2x_2^4-2x_1^2x_3^4-2x_2^2x_3^4-x_1^2x_2^2x_3^2).$$\nThis system has a unique solution, given by\n\\begin{equation*}\ny_2^2=\\frac{x_3^8(x_1^2+x_2^2)^2}{9x_1^{12}} \\quad \\quad \\mbox{and} \\quad \\quad y_3^2=-\\frac{(x_1^2-x_3^2)^2}{9x_1^4}.\n\\end{equation*}\nThus we must have $y_3=0$, and hence $x_1=x_3$ and $y_2=\\pm 1.$ It follows that the subset $\\{p\\in M^4;X_1(p)=0\\}$ coincides with $\\ell_-\\cup \\ell_+$.\n\nIn a similar way one shows that the subset $\\{p\\in M^4;X_3(p)=0\\}$ coincides with $\\ell_-\\cup \\ell_+$, and the proof of $(ii)$ is complete.\n\n\nTo prove $(i)$, first notice that $X_1(p),X_2(p),X_3(p)$ are pairwise linearly independent. 
This already implies that if $\\lambda_1,\\lambda_2,\\lambda_3\\in {\\mathbb R}$ are such that \n\\begin{equation}\\label{combina.linear}\n\\lambda_1X_1(p)+\\lambda_2X_2(p)+\\lambda_3X_3(p)=0\n\\end{equation}\nthen either $\\lambda_1=\\lambda_2=\\lambda_3=0$ or $\\lambda_1\\neq 0,\\lambda_2\\neq 0$ and $\\lambda_3\\neq 0.$ We will show that the last possibility cannot occur. \n\nEquation (\\ref{combina.linear}) gives the system of equations \n\\begin{displaymath}\n\\left\\{ \\begin{array}{ll}\nx_2^2x_3^4y_1\\lambda_1+x_1^6y_2\\lambda_2=0 \\vspace{.2cm}\\\\\nx_1^4x_3^2y_2\\lambda_2+x_2^6y_3\\lambda_3=0 \\vspace{.2cm}\\\\\n\\displaystyle{\\big[ A_1(p)-\\frac{2}{x_2^4}(3x_2^4+x_3^4+2x_2^2x_3^2)y_1^2\\big]\\lambda_1=0}\\vspace{.2cm} \\\\\n\\displaystyle{\\big[A_2(p)-\\frac{2}{x_3^4}(x_1^4+3x_3^4-2x_1^2x_3^2)y_2^2\\big]\\lambda_2=0} \\vspace{.2cm}\\\\\n\\displaystyle{\\big[ A_3(p)-\\frac{2}{x_1^4}(3x_1^4+x_2^4+2x_1^2x_2^2)y_3^2 \\big]\\lambda_3=0.}\n\\end{array}\\right.\n\\end{displaymath}\nThus, it suffices to prove that the system of equations\n\\begin{displaymath}\n\\left\\{ \\begin{array}{ll}\n\\displaystyle{ A_1(p)-\\frac{2}{x_2^4}(3x_2^4+x_3^4+2x_2^2x_3^2)y_1^2=0}\\vspace{.2cm} \\\\\n\\displaystyle{A_2(p)-\\frac{2}{x_3^4}(x_1^4+3x_3^4-2x_1^2x_3^2)y_2^2=0 }\\vspace{.2cm}\\\\\n \\displaystyle{A_3(p)-\\frac{2}{x_1^4}(3x_1^4+x_2^4+2x_1^2x_2^2)y_3^2 =0}\n\\end{array}\\right.\n\\end{displaymath}\nhas no solutions for $p=(x_1, x_2, x_3, y_1, y_2, y_3)\\in \\tilde{M}^4$. 
We write the preceding system as a linear system \n\\begin{displaymath}\n\\left\\{ \\begin{array}{rcl}\na_1y_1^2+a_2y_2^2+a_3y_3^2=a_4 \\vspace{.2cm}\\\\\nb_1y_1^2+b_2y_2^2+b_3y_3^2=b_4 \\vspace{.2cm}\\\\\nc_1y_1^2+c_2y_2^2+c_3y_3^2=c_4\n\\end{array}\\right.\n\\end{displaymath}\nin the variables $y_1^2,y_2^2$ and $y_3^2$, where\n\\begin{equation*}\n\\begin{array}{lll}\na_1= 9x_1^2x_3^8(3x_1^4+10x_2^2x_3^2), \\hspace{.6cm} & b_1= 9x_1^4x_3^8(3x_2^2+2x_3^2), \\vspace{.2cm}\\\\\n\na_2= 9x_1^8x_2^4(3x_1^2-2x_3^2), & b_2= 9x_1^8x_2^2(3x_2^4-10x_1^2x_3^2), \\vspace{.2cm}\\\\\n\na_3= -9x_2^8x_3^4(3x_1^2+2x_2^2), & b_3=9x_2^8x_3^4(2x_1^2+3x_2^2), \\vspace{.2cm}\\\\\n\na_4= x_1^2x_2^4x_3^4(5x_1^4+2x_2^2x_3^2), & b_4= -x_1^4x_2^2x_3^4(5x_2^4-2x_1^2x_3^2), \n\\end{array}\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{array}{lll}\n c_1= 9x_1^4x_3^8(2x_2^2+3x_3^2),\\,\\,\\,\\,\\,\\,\\,\nc_2=9x_1^8x_2^4(2x_1^2-3x_3^2),\\vspace{.2cm}\\\\\n\n c_3=-9x_2^8x_3^2(3x_3^4+10x_1^2x_2^2),\\,\\,\\,\\,\\,\\,\\,\nc_4= -x_1^4x_2^4x_3^2(5x_3^4+2x_1^2x_2^2).\n\\end{array}\n\\end{equation*}\nSince\n\\begin{equation*}\n\\begin{array}{l}\n\\det\n\\left(\\!\\!\\begin{array}{ccc}\na_1 & a_2 & a_3 \\\\\nb_1 & b_2 & b_3 \\\\\nc_1 & c_2 & c_3\n\\end{array}\\!\\right)=656100x_1^{12}x_2^{12}x_3^{12}(x_3^4+x_1^2x_2^2)(x_1^2-x_2^2+x_3^2)=0\\vspace{.2cm}\\\\\n\n\\det\n\\left(\\!\\!\\begin{array}{ccc}\na_1 & a_2 & a_4 \\\\\nb_1 & b_2 & b_4 \\\\\nc_1 & c_2 & c_4\n\\end{array}\\!\\right)=-29160x_1^{14}x_3^{18}x_2^6(x_3^4+x_1^2x_2^2)\\neq 0,\n\\end{array}\n\\end{equation*}\nsuch system has no solutions. Thus $(i)$ is proved. 
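The two determinant identities above can be verified independently with a computer algebra system; the following sympy sketch (illustrative only, with the constraint $x_2^2=x_1^2+x_3^2$ imposed from the start) confirms that the coefficient determinant vanishes on $M^4$ while the augmented one does not:

```python
import sympy as sp

x1, x3 = sp.symbols('x_1 x_3', positive=True)
x2 = sp.sqrt(x1**2 + x3**2)   # the constraint x_2^2 = x_1^2 + x_3^2 on M^4

# Rows (a_i), (b_i), (c_i) of the linear system in y_1^2, y_2^2, y_3^2
a = [9*x1**2*x3**8*(3*x1**4 + 10*x2**2*x3**2),
     9*x1**8*x2**4*(3*x1**2 - 2*x3**2),
     -9*x2**8*x3**4*(3*x1**2 + 2*x2**2),
     x1**2*x2**4*x3**4*(5*x1**4 + 2*x2**2*x3**2)]
b = [9*x1**4*x3**8*(3*x2**2 + 2*x3**2),
     9*x1**8*x2**2*(3*x2**4 - 10*x1**2*x3**2),
     9*x2**8*x3**4*(2*x1**2 + 3*x2**2),
     -x1**4*x2**2*x3**4*(5*x2**4 - 2*x1**2*x3**2)]
c = [9*x1**4*x3**8*(2*x2**2 + 3*x3**2),
     9*x1**8*x2**4*(2*x1**2 - 3*x3**2),
     -9*x2**8*x3**2*(3*x3**4 + 10*x1**2*x2**2),
     -x1**4*x2**4*x3**2*(5*x3**4 + 2*x1**2*x2**2)]

det_coeff = sp.Matrix([a[:3], b[:3], c[:3]]).det()
det_aug = sp.Matrix([[a[0], a[1], a[3]],
                     [b[0], b[1], b[3]],
                     [c[0], c[1], c[3]]]).det()

# The coefficient determinant carries the factor x1^2 - x2^2 + x3^2, hence
# vanishes on M^4, while the augmented determinant does not: the system in
# y_1^2, y_2^2, y_3^2 is inconsistent.
assert sp.expand(det_coeff) == 0
assert sp.expand(det_aug + 29160*x1**14*x3**18*x2**6*(x3**4 + x1**2*x2**2)) == 0
```

At $x_1=x_3=1$ (so $x_2=\sqrt{2}$) the augmented determinant evaluates to $-699840=-29160\cdot 8\cdot 3$, matching the stated factorization.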
\n\n\n\nNow, for $p\\in M^4$ we have \n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{\\langle \\nabla F(p),X_1(p)\\rangle=\\frac{10y_1}{x_2^4}(x_2^4+x_3^4)F(p)=0=\\langle \\nabla G(p),X_1(p)\\rangle}\\vspace{.2cm} \\\\\n\\displaystyle{\\langle \\nabla F(p),X_2(p)\\rangle=\\frac{10y_2}{x_3^4}(x_1^4+x_3^4)F(p)=0=\\langle \\nabla G(p),X_2(p)\\rangle}\\vspace{.2cm} \\\\\n\\displaystyle{\\langle \\nabla F(p),X_3(p)\\rangle=\\frac{10y_3}{x_1^4}(x_1^4+x_2^4)F(p)=0= \\langle \\nabla G(p),X_3(p)\\rangle},\n\\end{array}\n\\end{equation*}\n hence $X_1,X_2,X_3$ are everywhere tangent to $\\tilde{M}^4$. That $\\gamma_\\pm$ is an integral curve of $X_2$ follows by \nchecking that $X_2(\\gamma_\\pm(t))=(e^t,\\sqrt{2}e^t,e^t,0,0,0)=\\gamma'_\\pm(t)$. Finally, a straightforward computation gives\n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{[X_1,X_2](p)=\\Big(0,0,0,\\frac{10y_2}{9x_2^2x_3^{10}}F(p),\\frac{10y_1}{9x_1^6x_2^4x_3^2}F(p),0\\Big)=0} \\vspace{.2cm} \\\\\n\n\\displaystyle{[X_1,X_3](p)=\\Big(0,0,0,-\\frac{10y_3}{9x_1^4x_2^2x_3^6}F(p),0,-\\frac{10y_1}{9x_1^2x_2^{10}}F(p)\\Big)=0} \\vspace{.2cm} \\\\\n\n\\displaystyle{[X_2,X_3](p)=\\Big(0,0,0,0,\\frac{10y_3}{9x_1^{10}x_3^2}F(p),\\frac{10y_2}{9x_1^2x_2^6x_3^4}F(p)\\Big)=0}. 
\\qed\n\\end{array} \n\\end{equation*}\n\n\n\n\n\n\n \n\nThe proof of the next proposition is straightforward.\n\n\\begin{proposition}\\label{prop:involutions} $(i)$ For each $\\epsilon=(\\epsilon_1, \\epsilon_2, \\epsilon_3)$, $\\epsilon_j\\in \\{-1, 1\\}$ for $1\\leq j\\leq 3$, the map $\\Phi^{\\epsilon}\\colon \\tilde{M}^4\\to \\tilde{M}^4$ given by \n$$\\Phi^{\\epsilon}(x_1, x_2, x_3, y_1, y_2, y_3)=(x_1, x_2, x_3, \\epsilon_1y_1, \\epsilon_2y_2, \\epsilon_3y_3)$$\nsatisfies $\\Phi^{\\epsilon}_*X_j(p)=\\epsilon_jX_j(\\Phi^{\\epsilon}(p))$ for all $p\\in \\tilde{M}^4$.\\vspace{1ex}\\\\\n$(ii)$ The map $\\Psi\\colon \\tilde{M}^4\\to \\tilde{M}^4$ given by \n$$\\Psi(x_1, x_2, x_3, y_1, y_2, y_3)=\\bigg(x_3, x_2, x_1, \\frac{x_2^4}{x_1^4}y_3, \\frac{x_1^4}{x_3^4}y_2,\\frac{x_3^4}{x_2^4}y_1\\bigg)$$\nsatisfies \n$$\\Psi_*X_1(p)=X_3(\\Psi(p)),\\;\\;\\Psi_*X_2(p)=X_2(\\Psi(p))\\;\\;\\mbox{and}\\;\\;\\Psi_*X_3(p)=X_1(\\Psi(p))$$\n for all $p=(x_1, x_2, x_3, y_1, y_2, y_3)\\in \\tilde{M}^4$.\\vspace{1ex}\\\\\n$(iii)$ The maps $\\Psi$ and $\\Phi^{\\epsilon}$, $\\epsilon\\in \\{-1, 1\\}\\times \\{-1, 1\\}\\times\\{-1, 1\\}$, generate a group of involutions\nof $\\tilde{M}^4$ isomorphic to $\\mathbb{Z}_2\\times \\mathbb{Z}_2\\times \\mathbb{Z}_2\\times \\mathbb{Z}_2$ that preserves the distribution ${\\cal D}$ spanned by the vector fields $X_1, X_2$ and $X_3$.\n\\end{proposition}\n\n\n\n\n\n\n\\noindent \\emph{Proof of Theorem \\ref{thm:minimalceq0}:} \nFirst we associate to each leaf $\\sigma$ of ${\\cal D}$ a covering map $\\phi_\\sigma\\colon U_\\sigma\\to \\sigma$ from a simply-connected open subset $U_\\sigma\\subset {\\mathbb R}^3$ and a minimal immersion $f_\\sigma\\colon U_\\sigma\\to {\\mathbb R}^4$ with three distinct principal curvatures whose induced metric is conformally flat. 
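As an independent sanity check of part $(iii)$ of Proposition \ref{prop.chaveb}, one can verify symbolically that $X_2(\gamma_\pm(t))=\gamma_\pm'(t)$; a minimal sympy sketch for $\gamma_+$ (illustrative only):

```python
import sympy as sp

t = sp.symbols('t', real=True)
# The curve gamma_+(t) = (e^t, sqrt(2) e^t, e^t, 0, 1, 0) parametrizing ell_+
x1, x2, x3 = sp.exp(t), sp.sqrt(2)*sp.exp(t), sp.exp(t)
y1, y2, y3 = sp.Integer(0), sp.Integer(1), sp.Integer(0)

A2 = (-x3**4/(x1**4*x2**2)*(3*x2**2 + 2*x3**2)*y1**2
      - (x1**4 - 3*x3**4)/x3**4*y2**2
      - x2**6/x1**8*(2*x1**2 + 3*x2**2)*y3**2
      - (5*x2**4 - 2*x1**2*x3**2)/(9*x1**4))

X2 = [x1**5*y2/x3**4,
      (x1**4 - x1**2*x3**2 + x3**4)*x2*y2/x3**4,
      x3**5*y2/x3**4,
      (8*x1**2 - 2*x2**2)*x1**2*y1*y2/x3**4,
      A2,
      2*x2**2*x3**2*y2*y3/x3**4]

gamma = [x1, x2, x3, y1, y2, y3]
# X_2(gamma(t)) equals gamma'(t), so gamma_+ is an integral curve of X_2
assert all(sp.simplify(X2[i] - sp.diff(gamma[i], t)) == 0 for i in range(6))
```

The only nontrivial cancellation is in the fifth component, where $A_2(\gamma_+(t))=2-\tfrac{18}{9}=0$.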
\n\nFor any $q\\in \\tilde M^4$ and for $1\\leq i\\leq 3$ denote by $\\tau_q^i\\colon J_q^i\\to \\tilde{M}^4$ the maximal integral curve of $X_i$ through $q$, that is, $0\\in J_q^i$, $\\tau^i_q(0)=q$, $(\\tau^i_q)'(t)=X_i(\\tau^i_q(t))$ for all $t\\in J_q^i$, and $J_q^i$ is maximal with these properties. Let $${\\cal D}(X_i)=\\{(t, q)\\in {\\mathbb R}\\times \\tilde{M}^4\\,:\\, t\\in J^i_q\\}$$ and let $\\varphi^i\\colon {\\cal D}(X_i)\\to \\tilde{M}^4$ be the flow of $X_i$, given by $\\varphi^i(t, q)=\\tau^i_q(t)$. For a fixed $p\\in \\sigma$ define $U_\\sigma=U_\\sigma(p)$ by\n$$U_\\sigma=\\{(u_1, u_2, u_3)\\,:\\, u_1\\in J^1_p, \\,u_2\\in J^2_{\\varphi^1(u_1, p)}, \\, u_3\\in J^3_{\\varphi^2(u_2,\\varphi^1(u_1, p))}\\}$$\nand $\\phi_\\sigma=\\phi^p_\\sigma$ by\n$$\\phi_\\sigma(u_1, u_2, u_3)=\\varphi^3(u_3, \\varphi^2(u_2,\\varphi^1(u_1, p))).$$\nThen $0\\in U_\\sigma$, $\\phi_\\sigma(0)=p$, and for all $u\\in U_\\sigma$ we have\n\\begin{equation*}\n\\frac{\\partial \\phi_\\sigma}{\\partial u_i}(u)=X_i(\\phi_\\sigma(u)), \\,\\, \\,\\,\\, 1\\leq i\\leq 3.\n\\end{equation*} \n \nWe claim that $\\phi_\\sigma$ is a covering map onto $\\sigma$. Given $x\\in\\sigma$, let $\\tilde B_{2\\epsilon}(0)$ be an open \nball of radius $2\\epsilon$ centered at the origin such that\n$\\phi_\\sigma^x|_{\\tilde B_{2\\epsilon}(0)}$ is a diffeomorphism onto \n$B_{2\\epsilon}(x)=\\phi_\\sigma^x(\\tilde B_{2\\epsilon}(0))$. 
Since\n\\begin{equation} \\label{eq:psix0}\n\\phi_\\sigma^p(t+s)=\\varphi^3(t_3,\\varphi^2(t_2,\\varphi^1(t_1, \\phi_\\sigma^p(s))))=\\phi_\\sigma^{\\phi_\\sigma^p(s)}(t)\n\\end{equation} \nwhenever both sides are defined, where $t=(t_1,t_2,t_3)$ \nand $s=(s_1,s_2,s_3)$, if $x=\\phi_\\sigma^p(s)$, $s=(s_1,s_2, s_3)\\in U_\\sigma$, then for any \n$y=\\phi_\\sigma^x(t)\\in B_{2\\epsilon}(x)$, $t=(t_1,t_2,t_3)\\in \\tilde B_{2\\epsilon}(0)$, we have\n$$ y=\\phi_\\sigma^x(t)=\\phi_\\sigma^{\\phi_\\sigma^p(s)}(t)=\\phi_\\sigma^p(s+t).$$\nThis shows that $B_{2\\epsilon}(x)\\subset \\phi_\\sigma^p(U_\\sigma)$ if $x\\in \\phi_\\sigma^p(U_\\sigma)$, hence $\\phi_\\sigma^p(U_\\sigma)$ is open in $\\sigma$. But since $y=\\phi_\\sigma^x(t)$ if and only if $x=\\phi_\\sigma^y(-t)$, as follows from (\\ref{eq:psix0}), the same argument shows that\n$x\\in \\phi_\\sigma^p(U_\\sigma)$ if $y\\in \\phi_\\sigma^p(U_\\sigma)$ for some $y\\in B_{2\\epsilon}(x)$, and hence $\\sigma\\setminus \\phi_\\sigma^p(U_\\sigma)$ is also open. It follows that $\\phi_\\sigma^p$ is onto $\\sigma$.\n\n\n\n\nNow, for any $x\\in \\sigma$ write \n$$\n(\\phi_\\sigma^p)^{-1}(x)=\\cup_{\\alpha\\in A}\\tilde x_\\alpha,\n$$\n and for each $\\alpha\\in A$ let $\\tilde B_{2\\epsilon}(\\tilde x_\\alpha)$ denote \nthe open ball of radius $2\\epsilon$ centered at $\\tilde x_\\alpha$. Define \na map $\\psi_\\alpha\\colon B_{2\\epsilon}(x)\\to\\tilde B_{2\\epsilon}(\\tilde x_\\alpha)$\nby \n$$\n\\psi_\\alpha(y)=\\tilde x_\\alpha+(\\phi_\\sigma^x)^{-1}(y).\n$$\nBy (\\ref{eq:psix0}) we have \n\\bea\n\\phi_\\sigma^p(\\psi_\\alpha(y))\\!\\!\\!&=&\\!\\!\\!\\phi_\\sigma^p(\\tilde x_\\alpha+(\\phi_\\sigma^x)^{-1}(y))\\\\\n\\!\\!\\!&=&\\!\\!\\!\\phi_\\sigma^{\\phi_\\sigma^p(\\tilde x_\\alpha)}((\\phi_\\sigma^x)^{-1}(y))\\\\\n\\!\\!\\!&=&\\!\\!\\!\\phi_\\sigma^x((\\phi_\\sigma^x)^{-1}(y))\\\\\n\\!\\!\\!&=&\\!\\!\\!y\n\\eea\nfor all $y\\in B_{2\\epsilon}(x)$. 
Thus $\\phi_\\sigma^p$ is a diffeomorphism \nfrom $\\tilde B_{2\\epsilon}(\\tilde x_\\alpha)$ onto \n$B_{2\\epsilon}(x)$ having $\\psi_\\alpha$ as its inverse. \nIn particular, this implies that $\\tilde B_\\epsilon(\\tilde x_\\alpha)$ and \n$\\tilde B_\\epsilon(\\tilde x_\\beta)$ are disjoint if $\\alpha$ and \n$\\beta$ are distinct indices in $A$. Finally, it remains to check that \nif $\\tilde y\\in (\\phi_\\sigma^p)^{-1}(B_\\epsilon(x))$ then \n$\\tilde y\\in \\tilde B_\\epsilon(\\tilde x_\\alpha)$ for some $\\alpha\\in A$. \nThis follows from the fact that \n$$\n\\phi_\\sigma^p(\\tilde y-(\\phi_\\sigma^x)^{-1}(\\phi_\\sigma^p(\\tilde y)))=\\phi_\\sigma^{\\phi_\\sigma^p(\\tilde y)}\n(-(\\phi_\\sigma^x)^{-1}(\\phi_\\sigma^p(\\tilde y)))=x.\n$$\nFor the last equality, observe from (\\ref{eq:psix0}) that for all \n$x, y\\in \\sigma$ we have that $\\phi_\\sigma^x(t)=y$ if and only if $\\phi_\\sigma^y(-t)=x$.\n\n\n \nWriting $\\phi_\\sigma=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$, it follows that $\\phi_\\sigma$ satisfies (\\ref{flat.min.reduz.}), as well as (\\ref{eq:vis}) and (\\ref{equ.chave}). Defining $h_{ij}=\\frac{1}{v_i}\\frac{\\partial v_j}{\\partial u_i}$, $1\\leq i\\neq j\\leq 3$, and $V=(V_1, V_2, V_3)$ by (\\ref{holo.flat.min.2}), it follows from Proposition~\\ref{flat.min.reduz.result.} that the triple $(v, h, V)$, where $v=(v_1,v_2, v_3)$ and $h=(h_{ij})$, satisfies (\\ref{sistema-hol}), and hence gives rise to a \n minimal conformally flat hypersurface $f_\\sigma\\colon U_\\sigma\\to {\\mathbb R}^4$ with three distinct principal curvatures by Corollary \\ref{le:asspair}. 
\n \n Given two distinct leaves $\\sigma$ and $\\tilde \\sigma$ of ${\\cal D}$, the corresponding immersions $f_\\sigma$ and $f_{\\tilde \\sigma}$ are congruent if and only if there exists a diffeomorphism $\\psi\\colon U_\\sigma\\to U_{\\tilde \\sigma}$ such that \n $$\\psi^*I_{\\tilde\\sigma}=I_\\sigma\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\psi^*I\\!I_{\\tilde\\sigma}=I\\!I_\\sigma$$\n where $I_\\sigma$ and $I\\!I_{\\sigma}$ are the first and second fundamental forms of $f_\\sigma$, respectively, and $I_{\\tilde\\sigma}$, $I\\!I_{\\tilde\\sigma}$ are those of $f_{\\tilde \\sigma}$. A long but straightforward computation shows that, up to a translation, either $\\psi$ coincides with the map given by\n $$\\psi_\\epsilon(u_1,u_2,u_3)=(\\epsilon_1u_1, \\epsilon_2u_2, \\epsilon_3u_3)$$ \n for some $\\epsilon=(\\epsilon_1, \\epsilon_2, \\epsilon_3)$ with $\\epsilon_i\\in \\{-1, 1\\}$ for $1\\leq i\\leq 3$, or it is the composition of such a map with the map given by\n $$\\theta(u_1, u_2, u_3)=(u_3, u_2, u_1).$$\n It is easy to check that this is the case if and only if there exists $\\Theta\\in G$ such that $\\phi_{\\tilde \\sigma}\\circ \\psi=\\Theta\\circ \\phi_{\\sigma}$. \n \n \n \n \n \n \n Now let $f\\colon M^3\\to {\\mathbb R}^4$ be a minimal isometric immersion with three distinct principal curvatures of a simply connected\n conformally flat Riemannian manifold. We shall prove that either $f(M^3)$ is an open subset of the cone over a Clifford torus in ${\\mathbb S}^3$ or there exist a leaf $\\sigma$ of ${\\cal D}$ and a local diffeomorphism $\\rho\\colon M^3\\to V$ onto an open subset $V\\subset U_\\sigma$ \n such that $f$ is congruent to $f_\\sigma\\circ \\rho$.\n \n First we associate to $f$ a map $\\phi_f\\colon M^3\\to M^4\\subset {\\mathbb R}^6$ as follows.\n Fix a unit normal vector field $N$ along $f$ and denote by $\\lambda_1<\\lambda_2<\\lambda_3$ the distinct principal curvatures of $f$ with respect to $N$. 
For each $1\\leq j\\leq 3$, the eigenspaces $E_{\\lambda_j}=\\ker (A-\\lambda_j I)$ associated to $\\lambda_j$, where $A$ is the shape operator with respect to $N$ and $I$ is the identity endomorphism, form a field of directions along $M^3$, and since $M^3$ is simply connected we can find a smooth global unit vector field $Y_j$ along $M^3$ such that $\\mbox{span}\\{Y_j\\}=E_{\\lambda_j}$. \nLet the functions $v_1, v_2, v_3$ be defined on $M^3$ by\n\\begin{equation} \\label{eq:globalvj}v_j=\\sqrt{\\frac{\\delta_j}{(\\lambda_j-\\lambda_i)(\\lambda_j-\\lambda_k)}},\\,\\,\\,\\delta_j=\\frac{(\\lambda_j-\\lambda_i)(\\lambda_j-\\lambda_k)}{|(\\lambda_j-\\lambda_i)(\\lambda_j-\\lambda_k)|}, \\,\\,\\,i\\neq j\\neq k\\neq i,\\end{equation} \nand let $\\alpha_1, \\alpha_2,\\alpha_3$ be given by\n$$\\alpha_1=\\frac{v_1}{v_2}Y_1(v_2), \\,\\,\\,\\,\\alpha_2=\\frac{v_2}{v_3}Y_2(v_3)\\,\\,\\,\\mbox{and}\\,\\,\\,\\alpha_3=\\frac{v_3}{v_1}Y_3(v_1).$$ \nDefine $\\phi_f\\colon M^3\\to {\\mathbb R}^6$ by $\\phi_f=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$. \n\nNow, it follows from Theorem \\ref{main3} that each point $p\\in M^3$ has a connected open neighborhood $U\\subset M^3$ endowed with principal coordinates $u_1, u_2, u_3$ such that the pair $(v,V)$ associated to $f$ satisfies (\\ref{eq:vis}) and \n(\\ref{holo.flat.min.2}). Let $\\phi_U\\colon U\\to {\\mathbb R}^6$ be given by $\\phi_U=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$,\nwith $(\\alpha_1,\\alpha_2,\\alpha_3)=\\big(\\frac{1}{v_2}\\frac{\\partial v_2}{\\partial u_1},\\frac{1}{v_3}\\frac{\\partial v_3}{\\partial u_2},\\frac{1}{v_1}\\frac{\\partial v_1}{\\partial u_3}\\big)$. It is easy to check that $\\phi_f|_U=\\Theta\\circ \\phi_U$ for some $\\Theta\\in G$. \nOn the other hand, by Proposition \\ref{flat.min.reduz.result.} we have that $\\phi_U$ \n satisfies (\\ref{flat.min.reduz.}), as well as the algebraic equation \n(\\ref{equ.chave}). 
\nIt follows that $\\phi_U(U)\\subset M^4$ and that \n$$\\frac{\\partial \\phi_U}{\\partial u_i}(u)=X_i(\\phi_U(u)), \\,\\,\\, \\,\\,\\, 1\\leq i\\leq 3,$$\nfor all $u\\in U$. \nTherefore either $\\phi_U(U)$ is an open subset of a leaf $\\sigma_{U}$ of the distribution ${\\cal D}$ on $\\tilde M^4$ spanned by $X_1, X_2, X_3$, or $\\phi_U(U)$ is an open segment of either $\\ell_+$ or $\\ell_-$. If the latter possibility holds for some open subset $U\\subset M^3$, then $v_1=v_3$ on $U$, hence $\\lambda_2=0$ on $U$. By analyticity, $\\lambda_2=0$ on $M^3$, and hence Proposition \\ref{thm:minimalpczero} implies that $f(M^3)$ is an open subset of a cone over a Clifford torus in an umbilical hypersurface $\\mathbb{Q}^{3}(\\tilde c)\\subset {\\mathbb R}^4$, $\\tilde c>0$.\n\n\n\n\n\nOtherwise we have that each point $p\\in M^3$ has an open neighborhood $U\\subset M^3$ such that\n$\\phi_f(U)$ is an open subset of a leaf $\\sigma_{U}$ of ${\\cal D}$. It follows that $\\phi_f(M^3)$ is an open subset of a leaf $\\sigma$ of ${\\cal D}$. If $\\rho\\colon M^3\\to U_\\sigma$ is a lift of $\\phi_f$ with respect to $\\phi_\\sigma$, that is, $\\phi_f=\\phi_\\sigma\\circ \\rho$, then $\\rho$ is a local diffeomorphism such that $f$ and $f_\\sigma\\circ \\rho$ have the same first and second fundamental forms.\nTherefore $f$ is congruent to $f_\\sigma\\circ \\rho$.\\qed\n \n \n \n \n \n \n \n \n\n\n\n\n\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLast decade witnessed an increased effort in the deployment of automated vehicle (AV) technology because it can enhance passenger safety, improve travel mobility, reduce fuel consumption, and maximize traffic throughput \\cite{VanderWerfShladover02, Askari_2016, Li_AAP_2017}. Early deployment dates back to research projects, such as DARPA challenges \\cite{Campbell_2010}, PATH program \\cite{Rajam_Shlad_2001}, Grand Cooperative Driving Challenges \\cite{GCDCIntro2011}. 
Meanwhile, the automotive industry has made progress in equipping production vehicles with advanced driver assistance systems (ADAS) technology. Attention is currently focused on AVs with higher levels of autonomy \\cite{SAE_J3016_2016}.\n\nThe ultimate objective is to navigate and guide AVs to destinations following driving conventions and rules.\nThe key components include perception, estimation, planning and control. Perception algorithms collect and process raw sensor (camera, radar, lidar, etc.) data, and convert them to human-readable physical data. Estimation algorithms \\cite{Farrell_2017, Wischnewski_2019, Bersani_AEIT_2019} typically apply sensor fusion techniques to obtain clean and sound vehicle state estimations based on sensor characteristics. Planning \\cite{Karaman_IJRR_2011, Paden_TIV_2016} can be further divided into mission planning (or route planning), behavior planning (or decision making), and motion planning: i) mission planning algorithms select routes to destinations through given road networks based on requirements; ii) behavior planning generates appropriate driving behaviors in real-time according to driving conventions and rules, to guide the interaction with other road-users and infrastructures; iii) motion planning translates the generated behavior primitives into a trajectory based on vehicle states. Control algorithms utilize techniques from control theory \\cite{Karl_Richard_Feedback, Khalil_NC, Luenberger_1997, Ioannou_Sun_RAC, Yuri_SMC} that enable vehicles to follow the aforementioned trajectories in longitudinal and lateral directions.\n\n\\IEEEpubidadjcol\n\nVehicle control performance highly depends on perception and estimation outcomes, among which lane perception and lane estimation \\cite{AHuang_phd, Fakhfakh_JAIHC_2020} are crucial. In longitudinal control, estimated lanes are needed to identify preceding in-path vehicles, and to obtain the maximum allowable speed based on road curvature at a preview distance. 
In lateral control, estimated lanes serve as desired paths for controllers to follow. In practice, lanes are typically perceived by cameras in real-time when high-resolution GPS or high-definition maps are not available. Perception algorithms detect lanes using techniques from computer vision or machine learning and then characterize them as polynomial functions \\cite{Xu_ECCV_2020, Liu_ICCV_2021_CondLaneNetAT, Wang_ECCV_2020, Tabelini_2021} in vehicle body-fixed frame. However, representing lanes as polynomial functions is inconvenient for practical estimation and control algorithms. On the one hand, estimation algorithms usually predict polynomial coefficients based on a nominal model, among which the Kalman filter \\cite{Klmn_1960, Humpherys_2012, Zarchan_KF_2015, Pei_KF_2019} is the most widely used. However, it is infeasible to mathematically derive such a model that characterizes the evolution of coefficients for polynomial functions. This is because: i) the vehicle body-fixed frame translates and rotates as the vehicle moves, and ii) polynomial function representation does not preserve the form in coordinate transformation.\nOn the other hand, iterative methods need to be applied when control algorithms extract attributes (curvature, heading, etc.) from polynomial function representation of lanes at a preview distance.\n\nThis paper attempts to resolve this problem by characterizing lanes with arc-length-based parametric representation. To ensure compatibility with current platforms, we still assume that perception algorithms provide polynomial function representation of lanes in vehicle body-fixed frame.\nThe major contributions are as follows. Firstly, we mathematically derive a transformation and its inverse that reveal the relationship between polynomial function representation and arc-length-based parametric representation. Secondly, it is shown that such parametric representation preserves the form in coordinate transformation. 
Therefore, we are able to derive a mathematical model to characterize the evolution of coefficients that can be used for prediction during locomotion. Moreover, to simulate the whole process of lane perception, lane estimation and control, we set up a novel simulation framework that includes: i) usage of curvature as a function of arc length to represent lanes based on differential geometry, ii) derivation of coefficients for polynomial function representation to simulate perception in vehicle body-fixed frame, and iii) transformation from absolute dynamics in earth-fixed frame to relative dynamics observable in camera-based control.\n\n\nThis paper is organized as follows. In Section~\\ref{sec:prelim}, we start with preliminaries on coordinate transformation, vectorization operator and fundamental theory on curves. In Section~\\ref{sec:curv_repr}, we provide theoretical results on curve representations, and derive transformations between polynomial function representation and arc-length-based parametric representation.\nSection~\\ref{sec:veh_ctrl_setup} investigates the camera-based vehicle control problem, and applies theoretical results in Section~\\ref{sec:curv_repr} to obtain an intrinsic linear model for lane estimation. Also, the dynamics describing absolute position and orientation in earth-fixed frame are transformed into those describing relative position and orientation observed in the camera field-of-view (FOV). A controller is introduced from \\cite{Wubing_LC_TIV_2022} to demonstrate that prediction based on the proposed lane estimation approach is adequate for control algorithms. In Section~\\ref{sec:res}, we first show how we set up the simulation framework, and then conduct experiments to demonstrate the efficacy of the simulation framework and lane estimation approach. 
In Section~\\ref{sec:conclusion}, conclusions are drawn and future research directions are pointed out.\n\n\\section{Preliminaries\\label{sec:prelim}}\nIn this section, preliminaries are provided. In Section~\\ref{sec:coord_transf_gen}, we provide the coordinate transformation applied extensively in later sections. In Section~\\ref{sec:vec_op}, the definition of the vector operator of matrices and a related theorem are introduced. In Section~\\ref{sec:curv_repr_def}, polynomial function representation and arc-length-based parametric representation are discussed for approximating curves based on Taylor series.\n\n\\subsection{Coordinate Transformation \\label{sec:coord_transf_gen}}\n\nGiven two coordinate systems ($O$-$xyz$ and $Q$-$\\tau\\eta z$) as shown in Fig.~\\ref{fig:gen_coord_transf}, where the $Q$-$\\tau\\eta z$ frame is obtained by translating the $O$-$xyz$ frame with vector $\\pvec{OQ}$, and then rotating with angle $\\psi$, we recall the following theorem.\n\n\\begin{theorem}[Change of Coordinates]\\label{thm:gen_coord_transf}\n Suppose the coordinates of $Q$ expressed in $O$-$xyz$ frame are\n \\begin{align}\n \\boldsymbol{d} &=\n \\begin{bmatrix}\n x_{\\rm Q} & y_{\\rm Q}\n \\end{bmatrix}^{\\top}\\,,\n \\end{align}\n and the coordinates of an arbitrary point $P$ are\n \\begin{align}\n \\boldsymbol{r} &=\n \\begin{bmatrix}\n x_{\\rm P} & y_{\\rm P}\n \\end{bmatrix}^{\\top}\\,,&\n \\hat{\\boldsymbol{r}} &=\n \\begin{bmatrix}\n \\tau_{\\rm P} & \\eta_{\\rm P}\n \\end{bmatrix}^{\\top}\\,,\n \\end{align}\n expressed in the $O$-$xyz$ and $Q$-$\\tau\\eta z$ frame, respectively, then one can change coordinates from $Q$-$\\tau\\eta z$ frame to $O$-$xyz$ frame by\n \\begin{align}\n \\boldsymbol{r} &= \\bfR\\, \\hat{\\boldsymbol{r}}+\\boldsymbol{d}\\,,\n \\end{align}\n where\n \\begin{align}\n \\bfR &=\n \\begin{bmatrix}\n \\cos\\psi & -\\sin\\psi \\\\ \\sin\\psi &\\cos\\psi\n \\end{bmatrix}\n \\end{align}\n is the rotation matrix.\n\\end{theorem}\n\n\\begin{figure}\n 
\centering\n \n \includegraphics[scale=0.5]{Fig01.pdf}\\\n \caption{Coordinate transformation.\label{fig:gen_coord_transf}}\n\end{figure}\n\n\subsection{Vectorization of Matrices \label{sec:vec_op}}\nVectorization of a matrix is the operation that separates the matrix into its columns and stacks them sequentially into a single column vector.\nOne can refer to \cite{Laub2005} for more details. Here we recall the following definition and theorem.\n\n\begin{definition}\label{def:vec_op}\nLet $h_{i}\in \mathbb{R}^{n}$ denote the $i$-th column of matrix $\bfH\in \mathbb{R}^{n\times m}$, i.e., $\bfH=[h_{1}\quad h_{2}\quad\ldots\quad\nh_{m}]$. The vectorization operator is defined as\n\begin{equation}\label{eqn:vec_def}\n \vectr(\bfH)=\n \left[\n \begin{array}{cccc}\n h_{1}^{\rm T} & h_{2}^{\rm T} &\ldots & h_{m}^{\rm T}\n \end{array}\n \right]^{\rm T}\in\n\mathbb{R}^{mn}\,.\n\end{equation}\n\end{definition}\n\n\begin{theorem}\label{thm:vec_kron_relation}\nFor any three matrices $\mathbf{A}$, $\mathbf{B}$ and $\bfC$ where the matrix product\n$\mathbf{ABC}$ is defined, we have\n\begin{equation}\n \vectr(\mathbf{A}\mathbf{B}\bfC)=(\bfC^{\rm T}\otimes \mathbf{A})\vectr(\mathbf{B})\,,\n\end{equation}\nwhere $\otimes$ denotes the Kronecker product.\n\end{theorem}\n\n\subsection{Curves in 2-D Space \label{sec:curv_repr_def}}\nBy Taylor's theorem, curves can be approximated by polynomials.\nIn practice, lanes are typically represented as 2-D curves in the vehicle body-fixed frame.\nThus, in this part we discuss Taylor approximation for 2-D curves in the $xy$-plane. 
The shorthand notation\n\begin{align}\n \dfrac{\diff^{0} y}{\diff x^{0}}=y(x)\,,\n\end{align}\nis introduced and will be used throughout this paper.\n\n\begin{definition}\n Given a curve $\mathcal{C}$ in the $xy$-plane and a point $P$ on the curve whose coordinates are $(x_{0}, y(x_{0}))$, its $N$-th order polynomial function representation about point $P$ is its Taylor approximation at point $P$ up to the $N$-th order, i.e.,\n \begin{equation}\n \begin{split}\label{eqn:def_func_repr_algb_form}\n \mathcal{C}'&=\{(x,\, y)\,|\,\n y(x;x_{0})=\varphi_{0}+\varphi_{1}\, (x-x_{0}) +\cdots \\\n &+\varphi_{N}\, (x-x_{0})^{N}\,,\,x\in\mathbb{R}\}\,,\n \end{split}\n \end{equation}\n where the coefficients are\n \begin{align}\label{eqn:func_repr_coeff_def}\n \varphi_{n} &= \dfrac{1}{n!}\dfrac{\diff^{n} y}{\diff x^{n}}\Big|_{x=x_{0}}\,,\n \end{align}\n for $n=0, 1, \ldots, N$.\n\end{definition}\n\n\begin{definition}\n Given a curve $\mathcal{C}$ in the $xy$-plane and a point $P$ on the curve, the $N$-th order parametric representation using arc length\n \begin{align}\label{eqn:arc_length_def}\n s &= \int_{x_{0}}^{x}\sqrt{1+\Big(\dfrac{\diff y}{\diff x}\Big)^{2}}\, \diff x\,,\n \end{align}\n about point $P$ is its Taylor approximation with respect to the arc length parameter $s$ at point $P$ up to the $N$-th order, i.e.,\n \begin{equation}\label{eqn:def_param_repr_algb_form}\n \begin{split}\n \mathcal{C}'&=\{(x,\, y)\,|\,\n x(s; s_{0})=\bar{\phi}_{0}+\bar{\phi}_{1} \,(s-s_{0})+\cdots\\\n & +\bar{\phi}_{N} \,(s-s_{0})^{N},\;\n y(s; s_{0})=\hat{\phi}_{0}+\hat{\phi}_{1}\, (s-s_{0})\\\n & +\cdots +\hat{\phi}_{N}\, (s-s_{0})^{N},\,s\in\mathbb{R}\}\,,\n \end{split}\n \end{equation}\n where $s_{0}$ is the corresponding arc length of point $P$, and the coefficients are\n \begin{align}\label{eqn:param_repr_coeff_def}\n \bar{\phi}_{n}&= \dfrac{1}{n!}\dfrac{\diff^{n} x}{\diff 
s^{n}}\\Big|_{s=s_{0}}\\,,&\n \\hat{\\phi}_{n}&= \\dfrac{1}{n!}\\dfrac{\\diff^{n} y}{\\diff s^{n}}\\Big|_{s=s_{0}}\\,,\n \\end{align}\n for $n=0, 1, \\ldots, N$.\n\\end{definition}\n\nWe remark that in the remainder of the paper polynomial function representation \\eqref{eqn:def_func_repr_algb_form} and arc-length-based parametric representation \\eqref{eqn:def_param_repr_algb_form} will be referred to as function representation and parametric representation, respectively. To simplify the notation, we rewrite \\eqref{eqn:def_func_repr_algb_form} into the matrix form, that is,\n\\begin{align}\\label{eqn:def_func_repr_matx_form}\n \\mathcal{C}'&=\\{(x,\\, y)\\,|\\,y(x; x_{0})=\\boldsymbol{\\varphi}(x_{0})\\,\\boldsymbol{p}_{N}(x-x_{0})\\,,\\,x\\in\\mathbb{R}\\}\\,,\n\\end{align}\nwhere\n\\begin{equation}\n\\begin{split}\\label{eqn:func_repr_coeff_vec_def}\n \\boldsymbol{\\varphi} & =\n \\begin{bmatrix}\n \\varphi_{0} &\\varphi_{1} & \\cdots & \\varphi_{N}\n \\end{bmatrix}\\,,\\\\\n \\boldsymbol{p}_{N}(x) & =\n \\begin{bmatrix}\n 1 & x & \\cdots & x^{N}\n \\end{bmatrix}^\\top \\,.\n\\end{split}\n\\end{equation}\nSimilarly, the matrix form of \\eqref{eqn:def_param_repr_algb_form} is\n\\begin{align}\\label{eqn:def_param_repr_matx_form}\n \\mathcal{C}'&=\\{\\boldsymbol{r}\\,|\\,\n \\boldsymbol{r}(s; s_{0})=\\boldsymbol{\\phi}(s_{0})\\,\\boldsymbol{p}_{N}(s-s_{0})\\,,\\,s\\in\\mathbb{R}\\}\\,,\n\\end{align}\nwhere\n\\begin{equation}\\label{eqn:param_repr_coeff_vec_def}\n\\begin{split}\n \\boldsymbol{r}(s; s_{0}) & =\n \\begin{bmatrix}\n x(s; s_{0}) \\\\ y(s; s_{0})\n \\end{bmatrix}\\,,\\enskip\n \\boldsymbol{\\phi} =\n \\begin{bmatrix}\n \\bar{\\phi}_{0} & \\bar{\\phi}_{1} & \\cdots & \\bar{\\phi}_{N} \\\\\n \\hat{\\phi}_{0} & \\hat{\\phi}_{1} & \\cdots & \\hat{\\phi}_{N}\n \\end{bmatrix}\\,,\n\\end{split}\n\\end{equation}\nand $\\boldsymbol{p}_{N}(\\cdot)$ uses the same definition in \\eqref{eqn:func_repr_coeff_vec_def}. 
Note that the dependency of coefficients ($\\boldsymbol{\\varphi}$ and $\\boldsymbol{\\phi}$) on the coordinates of point P ($x_{0}$ and $s_{0}$) is highlighted in the matrix form.\n\n\\begin{remark}\\label{remark:curv_repr_gen}\n In 2-D space, a curve is uniquely determined if the curvature $\\kappa(s)$ is given for arbitrary arc length position $s$.\n\\end{remark}\n\\begin{IEEEproof}\n For a general point $P$ on the curve, we assume that its coordinates, slope angle, curvature and arc length are $(x, y)$, $\\alpha$, $\\kappa$ and $s$, respectively.\n Based on differential geometry, their relationship is described by\n \\begin{equation}\\label{eqn:EOM_ref_path_ds}\n \\begin{split}\n x' &=\\cos\\alpha\\, ,\n \\\\\n y' &=\\sin\\alpha\\, ,\n \\\\\n \\alpha' &=\\kappa\\, ,\n \\end{split}\n \\end{equation}\n where prime denotes differentiation with respect to $s$. Solving \\eqref{eqn:EOM_ref_path_ds} with initial conditions, one can obtain the tuple $(x, y,\\alpha)$ as functions of $s$ that characterizes the curve.\n\\end{IEEEproof}\n\n\\section{Representations of Curves \\label{sec:curv_repr}}\nThis part presents theoretical results on the representations of curves. Section~\\ref{sec:property_param_repr} shows that parametric representations have property of preserving the form in parameter shifting and coordinate transformation. Section~\\ref{sec:transform_repr} derives the changes of coefficients that reveal the relationship between function representation and parametric representation. 
In Section~\ref{sec:coeff_repr_curves}, we derive the coefficients of both representations for the curve \eqref{eqn:EOM_ref_path_ds} with known curvature at arbitrary arc length position.\n\n\subsection{Properties of Parametric Representation \label{sec:property_param_repr}}\n\begin{theorem}[Conformal Representation in Shifting]\label{thm:curve_param_repr_shift}\n The parametric curve \eqref{eqn:def_param_repr_matx_form} preserves its form when the Taylor expansion is shifted to parameter location $s_{1}=s_{0}+\tilde{s}$. That is, $\boldsymbol{r}(s; s_{0})$ can be rewritten into\n \begin{align}\n \boldsymbol{r}(s; s_{1}) &= \boldsymbol{\phi}(s_{1})\,\boldsymbol{p}_{N}(s-s_{1})\,,\n \end{align}\n where the change of coefficients is\n \begin{align}\n \boldsymbol{\phi}^{\top}(s_{1}) &= \boldsymbol{T}(\tilde{s})\,\boldsymbol{\phi}^{\top}(s_{0})\,,\n \end{align}\n and\n \begin{align}\label{eqn:T_expansion}\n \boldsymbol{T}(\tilde{s})&=\n \begin{bmatrix}\n C_{0}^{0} & C_{1}^{0}\tilde{s} & C_{2}^{0}\tilde{s}^{2} & \cdots & C_{N}^{0}\tilde{s}^{N}\\\n & C_{1}^{1} & C_{2}^{1}\tilde{s} &\cdots & C_{N}^{1}\tilde{s}^{N-1}\\\n & & C_{2}^{2} & \ddots & \vdots \\\n & & & \ddots & C_{N}^{N-1}\tilde{s} \\\n & & & & C_{N}^{N}\n \end{bmatrix}.\n \end{align}\n Here, $C_{n}^{m}$ denotes the binomial coefficient, and $C_{0}^{0}=1$.\n\end{theorem}\n\begin{IEEEproof}\n \n Substituting ${s_{0}=s_{1}-\tilde{s}}$ into \eqref{eqn:def_param_repr_matx_form}, and then utilizing the binomial theorem, one can obtain\n\begin{align}\n \boldsymbol{r}&(s; s_{0})= \boldsymbol{\phi}(s_{0})\,\boldsymbol{p}_{N}(s-s_{0}) \nonumber\\\n &=\n \begin{bmatrix}\n \sum\limits_{n=0}^{N} \bar{\phi}_{n} (s-s_{0})^{n}\\\n \sum\limits_{n=0}^{N} \hat{\phi}_{n} (s-s_{0})^{n}\n \end{bmatrix}\n =\n \begin{bmatrix}\n \sum\limits_{n=0}^{N} \bar{\phi}_{n} (s-s_{1}+\tilde{s})^{n}\\\n \sum\limits_{n=0}^{N} \hat{\phi}_{n} 
(s-s_{1}+\\tilde{s})^{n}\n \\end{bmatrix} \\nonumber\\\\\n &=\n \\begin{bmatrix}\n \\sum\\limits_{n=0}^{N} \\bar{\\phi}_{n} \\sum\\limits_{m=0}^{n}C_{n}^{m}\\tilde{s}^{n-m}(s-s_{1})^{m}\\\\\n \\sum\\limits_{n=0}^{N} \\hat{\\phi}_{n} \\sum\\limits_{m=0}^{n} C_{n}^{m}\\tilde{s}^{n-m}(s-s_{1})^{m}\n \\end{bmatrix}\\\\\n &=\n \\begin{bmatrix}\n \\sum\\limits_{m=0}^{N} \\Big(\\sum\\limits_{n=m}^{N} C_{n}^{m}\\tilde{s}^{n-m}\\bar{\\phi}_{n}\\Big)(s-s_{1})^{m}\\\\\n \\sum\\limits_{m=0}^{N} \\Big(\\sum\\limits_{n=m}^{N} C_{n}^{m}\\tilde{s}^{n-m}\\hat{\\phi}_{n}\\Big)(s-s_{1})^{m}\n \\end{bmatrix}\\,,\\nonumber\n\\end{align}\nwhich is the $N$-th order parametric curve about $s_{1}$. Comparing the coefficients, we obtain the change of coefficients.\n\\end{IEEEproof}\n\n\\begin{theorem}[Conformal Representation in Transformation] \\label{thm:curve_param_repr_conformal}\n The parametric curve \\eqref{eqn:def_param_repr_matx_form} preserves its form in coordinate transformation. Suppose we are given two coordinate systems (frame $\\cFE_{1}$ and $\\cFE_{2}$), and the change of coordinates from $\\cFE_{2}$ to $\\cFE_{1}$ is\n \\begin{align}\n \\boldsymbol{r} &= \\bfR\\,\\hat{\\boldsymbol{r}} + \\bfd\\,,\n \\end{align}\n where $\\boldsymbol{r}$ and $\\hat{\\boldsymbol{r}}$ are the coordinates of an arbitrary point expressed in frame $\\cFE_{1}$ and $\\cFE_{2}$, respectively, $\\bfR$ is the rotation matrix, and $\\bfd$ is the origin of frame $\\cFE_{2}$ expressed in frame $\\cFE_{1}$.\n Then the parametric curve \\eqref{eqn:def_param_repr_matx_form} expressed in frame $\\cFE_{1}$ possesses the same form when expressed in frame $\\cFE_{2}$, that is,\n \\begin{align}\n \\hat{\\boldsymbol{r}}(s; s_{0}) &=\\widehat{\\boldsymbol{\\phi}}(s_{0})\\,\\boldsymbol{p}_{N}(s-s_{0})\\,,\n \\end{align}\n where the change of coefficients is\n \\begin{align}\n \\widehat{\\boldsymbol{\\phi}}(s_{0})& = \\bfR^{\\top}\\big(\\boldsymbol{\\phi}(s_{0})-\\bfD \\big)\\,,\n \\end{align}\n and\n 
\begin{align}\n \bfD& = \begin{bmatrix}\n \bfd & 0 & \cdots & 0\n \end{bmatrix}\in \mathbb{R}^{2\times (N+1)}\,.\n \end{align}\n\end{theorem}\n\begin{IEEEproof}\n \n Based on the change of coordinates, the curve \eqref{eqn:def_param_repr_matx_form} can be expressed in $\cFE_{2}$ as\n\begin{equation}\n \begin{split}\n \hat{\boldsymbol{r}}(s; s_{0}) &= \bfR^{\top}\boldsymbol{r}(s; s_{0})- \bfR^{\top}\bfd\\\n &=\bfR^{\top}\boldsymbol{\phi}(s_{0})\,\boldsymbol{p}_{N}(s-s_{0}) - \bfR^{\top}\bfd\,.\n \end{split}\n\end{equation}\nBy noticing that\n\begin{align}\n \bfd & = \bfD\,\boldsymbol{p}_{N}(s-s_{0})\,,\n\end{align}\none can obtain the change of coefficients given in the theorem.\n\n\end{IEEEproof}\n\n\begin{remark}\label{remark:func_repr_preserve_shift}\n Function representation preserves its form in parameter shifting, but not in coordinate transformation.\n\end{remark}\n\n\subsection{Transformations between Function Representation and Parametric Representation \label{sec:transform_repr}}\n\nIn this part, we derive the transformations between function representation and parametric representation.\nResults are provided only up to the fifth order, which is considered adequate in practice.\nThe following assumptions are made: i) Taylor series are expanded about the intersection between the curve and the $y$-axis (also referred to as $y$-intercept); ii) this $y$-intercept marks the starting point of arc length, i.e., $s=0$; and iii) the positive direction of the arc length parameter $s$ corresponds to the positive direction of the $x$-axis. We obtain the following transformations based on those assumptions.\n\n\n\begin{theorem}\label{thm:curve_func_2_param_repr}\n Given a curve with function representation \eqref{eqn:def_func_repr_matx_form}, its parametric representation \eqref{eqn:def_param_repr_matx_form} is uniquely determined in the same coordinate system. 
In other words, there exists a unique map $\boldsymbol{\phi}=\boldsymbol{f}(\boldsymbol{\varphi})$. The coefficients up to the fifth order are\n \begin{equation}\n \begin{split}\n \bar{\phi}_{0} &= 0\,, \qquad\qquad\qquad\qquad\quad\enskip\;\n \hat{\phi}_{0} = \varphi_{0}\,,\\\n \bar{\phi}_{1} &= \dfrac{1}{\lambda}\,, \qquad\qquad\qquad\qquad\quad\enskip\n \hat{\phi}_{1} = \dfrac{\varphi_{1}}{\lambda}\,,\\\n \bar{\phi}_{2} &= -\dfrac{\varphi_{1} \varphi_{2}}{\lambda^{4}}\,, \qquad\qquad\qquad\quad\n \hat{\phi}_{2} =\dfrac{\varphi_{2}}{\lambda^{4}}\,,\\\n \bar{\phi}_{3} &= \dfrac{2 \varphi_{2}^{2}-\varphi_{1} \varphi_{3}}{\lambda^{5}}-\dfrac{8 \varphi_{2}^{2}}{3\lambda^{7}}\,,\qquad\n \hat{\phi}_{3} =\dfrac{\varphi_{3}}{\lambda^{5}}- \dfrac{8\varphi_{1} \varphi_{2}^{2}}{3\lambda^{7}}\,, \\\n \bar{\phi}_{4} &= \dfrac{5 \varphi_{2} \varphi_{3}-\varphi_{1} \varphi_{4}}{\lambda^{6}}\n -\dfrac{10 \varphi_{1} \varphi_{2}^{3}+13 \varphi_{2} \varphi_{3}}{2\lambda^{8}}\\\n &+\dfrac{28\varphi_{1} \varphi_{2}^{3}}{3\lambda^{10}}\,,\\\n %\n \hat{\phi}_{4} &=\dfrac{\varphi_{4}}{\lambda^{6}}\n +\dfrac{16 \varphi_{2}^{3}-13 \varphi_{1} \varphi_{2}\varphi_{3}}{2\lambda^{8}}\n - \dfrac{28\varphi_{2}^{3}}{3\lambda^{10}} \,,\\\n \bar{\phi}_{5} &= \dfrac{3 \varphi_{3}^{2}+6 \varphi_{2} \varphi_{4}-\varphi_{1} \varphi_{5}}{\lambda^{7}}\\\n &-\dfrac{39 \varphi_{3}^{2}+210 \varphi_{1}\varphi_{2}^{2} \varphi_{3}+76 \varphi_{2} \varphi_{4}-140 \varphi_{2}^{4}}{10\lambda^{9}}\\\n &+\dfrac{188 \varphi_{1} \varphi_{2}^{2}\varphi_{3}-248 \varphi_{2}^{4}}{5\lambda^{11}}\n + \dfrac{112\varphi_{2}^{4}}{3\lambda^{13}}\,,\\\n %\n \hat{\phi}_{5} &=\dfrac{\varphi_{5}}{\lambda^{7}}\n +\dfrac{326 \varphi_{2}^{2} \varphi_{3}-76 \varphi_{1}\varphi_{2} \varphi_{4}-39 \varphi_{1}\varphi_{3}^{2}}{10\lambda^{9}}\\\n &-\dfrac{128 
\varphi_{1}\varphi_{2}^{4}+188 \varphi_{2}^{2} \varphi_{3}}{5\lambda^{11}}+ \dfrac{112\varphi_{1}\varphi_{2}^{4} }{3\lambda^{13}}\,,\n \end{split}\n \end{equation}\n where\n \begin{align}\n \lambda &= \sqrt{1+\varphi_{1}^{2}}\,.\n \end{align}\n\end{theorem}\n\n\begin{IEEEproof}\n \n Based on the assumptions, we have\n\begin{align}\label{eqn:fun_repr_eval_0}\n x (0)&=0\,, &\n y (0)&=\varphi_{0}\,,&\n x '(s) &>0\,,\n\end{align}\nwhere $'$ denotes differentiation with respect to $s$, implying $\bar{\phi}_{0}$ and $\hat{\phi}_{0}$ given in the theorem.\nNotice that \eqref{eqn:arc_length_def} yields\n\begin{align}\label{eqn:ds_comp_equality}\n \diff s^{2} &=\diff x ^{2}+\diff y ^{2} &\Longrightarrow \quad\n ( x ')^{2}+( y ')^{2} &=1\,,\n \n\end{align}\nand the derivative of \eqref{eqn:def_func_repr_algb_form} with respect to $s$ yields\n\begin{align}\label{eqn:fun_repr_deriv_1}\n y ' &= \varphi_{1} x '+\cdots +n\varphi_{n} x ^{n-1} x '\,.\n\end{align}\nEvaluating (\ref{eqn:ds_comp_equality}, \ref{eqn:fun_repr_deriv_1}) at $s=0$ and utilizing \eqref{eqn:fun_repr_eval_0}, we obtain\n\begin{align}\label{eqn:fun_repr_deriv1_eval_0}\n x '(0) &=\dfrac{1}{\sqrt{1+\varphi_{1}^{2}}}\,,&\n y '(0) &=\dfrac{\varphi_{1}}{\sqrt{1+\varphi_{1}^{2}}}\,,\n\end{align}\nwhich implies $\bar{\phi}_{1}$ and $\hat{\phi}_{1}$ given in the theorem. 
Then taking the derivatives of (\ref{eqn:ds_comp_equality}, \ref{eqn:fun_repr_deriv_1}) with respect to $s$ yields\n\begin{equation}\label{eqn:fun_repr_deriv2}\n \begin{split}\n & x ' x ''+ y ' y '' =0\,,\\\n & y '' = (\varphi_{1} +\cdots +n\varphi_{n} x ^{n-1}) x ''\\\n &\quad +\big(2\varphi_{2} +\cdots +n(n-1)\varphi_{n} x ^{n-2}\big)( x ')^{2}\,.\n \end{split}\n\end{equation}\nEvaluating (\ref{eqn:fun_repr_deriv2}) at $s=0$ and utilizing (\ref{eqn:fun_repr_eval_0}, \ref{eqn:fun_repr_deriv1_eval_0}), we obtain\n\begin{align}\label{eqn:fun_repr_deriv2_eval_0}\n x ''(0) &=-\dfrac{2\varphi_{1}\varphi_{2}}{(1+\varphi_{1}^{2})^{\frac{3}{2}}}\,,&\n y ''(0) &=\dfrac{2\varphi_{2}}{(1+\varphi_{1}^{2})^{\frac{3}{2}}}\,,\n\end{align}\nwhich implies $\bar{\phi}_{2}$ and $\hat{\phi}_{2}$ given in the theorem.\nSimilarly, evaluating the derivative of (\ref{eqn:fun_repr_deriv2}) at $s=0$ and then utilizing (\ref{eqn:fun_repr_eval_0}, \ref{eqn:fun_repr_deriv1_eval_0}, \ref{eqn:fun_repr_deriv2_eval_0}), one can obtain a linear algebraic equation in $ x '''(0)$ and $ y '''(0)$ and thus obtain $\bar{\phi}_{3}$ and $\hat{\phi}_{3}$. Following this procedure, we can derive all the derivatives and obtain the coefficients given in the theorem.\n\n\end{IEEEproof}\n\n\begin{theorem}\label{thm:curve_param_2_func_repr}\n Given a curve with parametric representation \eqref{eqn:def_param_repr_matx_form}, its function representation \eqref{eqn:def_func_repr_matx_form} is uniquely determined in the same coordinate system. In other words, there exists a unique map $\boldsymbol{\varphi}=\boldsymbol{f}^{-1}(\boldsymbol{\phi})$. 
The coefficients up to the fifth order are\n \begin{equation}\n \begin{split}\n \varphi_{0} &= \hat{\phi}_{0}\,,\qquad\,\n \varphi_{2} = \dfrac{\hat{\phi}_{2} \bar{\phi}_{1}-\hat{\phi}_{1} \bar{\phi}_{2}}{\bar{\phi}_{1}^3}\,,\\\n \varphi_{1} &= \dfrac{\hat{\phi}_{1}}{\bar{\phi}_{1}}\,,\qquad\n \varphi_{3} = \dfrac{\hat{\phi}_{3}}{\bar{\phi}_{1}^3}\n -\dfrac{\hat{\phi}_{1} \bar{\phi}_{3}+2 \hat{\phi}_{2} \bar{\phi}_{2}}{\bar{\phi}_{1}^4}\n +\dfrac{2 \hat{\phi }_{1} \bar{\phi}_{2}^2}{\bar{\phi}_{1}^5}\,,\\\n \varphi_{4} &= \dfrac{\hat{\phi}_{4}}{\bar{\phi}_{1}^4}\n -\dfrac{\hat{\phi}_{1} \bar{\phi}_{4}+2 \hat{\phi}_{2} \bar{\phi}_{3}+3 \hat{\phi}_{3} \bar{\phi}_{2}}{\bar{\phi}_{1}^5}\\\n &+\dfrac{5 \bar{\phi}_{2}(\hat{\phi}_{1} \bar{\phi}_{3} + \hat{\phi}_{2} \bar{\phi}_{2})}{\bar{\phi}_{1}^6}\n -\dfrac{5 \hat{\phi}_{1} \bar{\phi}_{2}^3}{\bar{\phi}_{1}^7}\,,\\\n \varphi_{5} &=\dfrac{\hat{\phi}_{5}}{\bar{\phi}_{1}^5}\n -\dfrac{\hat{\phi }_{1} \bar{\phi}_{5}+2 \hat{\phi}_{2} \bar{\phi}_{4}+3 \hat{\phi }_{3} \bar{\phi}_{3}+4 \hat{\phi}_{4} \bar{\phi}_{2}}{\bar{\phi }_{1}^6}\\\n &+\dfrac{3 \hat{\phi}_{1} (\bar{\phi}_{3}^2+2 \bar{\phi}_{2} \bar{\phi}_{4})+3\bar{\phi}_{2}(3 \hat{\phi}_{3} \bar{\phi}_{2}+4 \hat{\phi}_{2} \bar{\phi}_{3})}{\bar{\phi}_{1}^7}\\\n &-\dfrac{7 \bar{\phi}_{2}^{2}(3 \hat{\phi}_{1} \bar{\phi}_{3} +2 \hat{\phi}_{2} \bar{\phi}_{2})}{\bar{\phi}_{1}^8}\n +\dfrac{14 \hat{\phi}_{1} \bar{\phi}_{2}^4}{\bar{\phi}_{1}^9}\,.\n \end{split}\n \end{equation}\n\end{theorem}\n\n\begin{IEEEproof}\n \n Given the parametric representation \eqref{eqn:def_param_repr_algb_form} of the curve, one can obtain the derivatives ($x'$, $y'$, $x''$, $y''$, ...) with respect to $s$ up to the required order. 
Also, notice that\n\\begin{equation}\\label{eqn:proof_dyx}\n \\dfrac{\\diff y}{\\diff x} =\\dfrac{y'}{x'}\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{eqn:proof_dyx_recurv}\n \\dfrac{\\diff^{n+1} y}{\\diff x^{n+1}} =\\dfrac{\\diff}{\\diff x}\\left(\\dfrac{\\diff^{n} y}{\\diff x^{n}}\\right)\n =\\dfrac{\\left(\\frac{\\diff^{n} y}{\\diff x^{n}}\\right)'}{ x'}\\,, \\quad n=1,\\, 2,\\, 3,\\, \\ldots\n\\end{equation}\nCalculating the derivatives in (\\ref{eqn:proof_dyx}, \\ref{eqn:proof_dyx_recurv}) recursively and utilizing \\eqref{eqn:func_repr_coeff_def}, one can obtain the coefficients given in the theorem by noticing that $x_0=0$ implies $s=0$.\n\n\\end{IEEEproof}\n\n\\begin{remark}\nThe transformed representation is not necessarily equal to the original representation due to truncation errors in Taylor series, but can provide a very good approximation. This is because their derivatives are equal up until the specified order at the point where Taylor series are expanded.\n\\end{remark}\n\n\\subsection{Representations of a Given Curve \\label{sec:coeff_repr_curves}}\nIt is useful to derive the coefficients of parametric representation and function representation for curves described by \\eqref{eqn:EOM_ref_path_ds} with given $\\kappa(s)$. On the one hand, these coefficients can be used to validate the transformations derived in Section~\\ref{sec:transform_repr}. On the other hand, they can be used to simulate perception outcomes for camera-based control. In the following, we assume: i) the tuple $(x(s), y(s),\\alpha(s))$ are obtained with the given $\\kappa(s)$ according to \\eqref{eqn:EOM_ref_path_ds}; and ii) the representations are expanded about point $P$ whose arc length position is $s_{0}$. 
Other attributes at point P are\n\\begin{equation}\\label{eqn:gen_curv_P_loc}\n\\begin{split}\n &x(s_{0}) =x_{0}\\,, \\enskip\n y(s_{0}) =y_{0}\\,, \\enskip\n \\alpha(s_{0}) =\\alpha_{0}\\,, \\enskip\n \\kappa(s_{0}) =\\kappa_{0}\\,,\\\\\n %\n &\\dfrac{\\diff \\kappa}{\\diff s}(s_{0})=\\kappa'_{0}\\,,\\quad\n \\dfrac{\\diff^{2} \\kappa}{\\diff s^{2}}(s_{0})=\\kappa''_{0}\\,,\\quad\n \\dfrac{\\diff^{3} \\kappa}{\\diff s^{3}}(s_{0})=\\kappa'''_{0}\\,.\n\\end{split}\n\\end{equation}\n\n\n\\begin{theorem}\\label{thm:gen_curve_param_repr}\n Given a curve in $xy$-plane with known ($x(s)$, $y(s)$, $\\alpha(s)$, $\\kappa(s)$), its parametric representation about point $P$ is unique. The coefficients until fifth-order are\n \\begin{equation}\n \\begin{split}\n \\bar{\\phi}_{0}&=x_{0}\\,,\\qquad \\qquad \\qquad\\qquad\n \\hat{\\phi}_{0}=y_{0}\\,,\\\\\n \\bar{\\phi}_{1}&=\\cos\\alpha_{0}\\,,\\qquad\\hspace{45pt}\n \\hat{\\phi}_{1}=\\sin\\alpha_{0}\\,,\\\\\n \\bar{\\phi}_{2}&=-\\tfrac{1}{2}\\kappa_{0}\\sin\\alpha_{0}\\,,\\quad\\hspace{31pt}\n \\hat{\\phi}_{2}=\\tfrac{1}{2}\\kappa_{0}\\cos\\alpha_{0}\\,,\\\\\n \\bar{\\phi}_{3}&=-\\tfrac{1}{6}\\kappa_{0}^{2}\\cos\\alpha_{0}-\\tfrac{1}{6}\\kappa_{0}'\\sin\\alpha_{0}\\,,\\\\\n \\hat{\\phi}_{3}&=-\\tfrac{1}{6}\\kappa_{0}^{2}\\sin\\alpha_{0}+\\tfrac{1}{6}\\kappa_{0}'\\cos\\alpha_{0}\\,,\\\\\n \\bar{\\phi}_{4}&=-\\tfrac{1}{24}(\\kappa_{0}''-\\kappa_{0}^{3})\\sin\\alpha_{0}-\\tfrac{1}{8}\\kappa_{0}\\kappa_{0}'\\cos\\alpha_{0}\\,,\\\\\n \\hat{\\phi}_{4}&=\\tfrac{1}{24}(\\kappa_{0}''-\\kappa_{0}^{3})\\cos\\alpha_{0}-\\tfrac{1}{8}\\kappa_{0}\\kappa_{0}'\\sin\\alpha_{0}\\,,\\\\\n \\bar{\\phi}_{5}&=\\tfrac{1}{120}\\big(\\kappa_{0}^{4}-3(\\kappa_{0}')^{2}-4\\kappa_{0}\\kappa_{0}''\\big)\\cos\\alpha_{0}\\\\\n &\\quad+\\tfrac{1}{120}(6\\kappa_{0}^{2}\\kappa_{0}'-\\kappa_{0}''')\\sin\\alpha_{0}\\,,\\\\\n \\hat{\\phi}_{5}&=\\tfrac{1}{120}\\big(\\kappa_{0}^{4}-3(\\kappa_{0}')^{2}-4\\kappa_{0}\\kappa_{0}''\\big)\\sin\\alpha_{0}\\\\\n 
&\quad-\tfrac{1}{120}(6\kappa_{0}^{2}\kappa_{0}'-\kappa_{0}''')\cos\alpha_{0}\,.\n \end{split}\n \end{equation}\n\end{theorem}\n\begin{IEEEproof}\n \n Given an arbitrary point $(x, y)$ on the curve (\ref{eqn:EOM_ref_path_ds}), the derivatives of the curve at that point are\n\begin{equation}\label{eqn:xy_derivs}\n \begin{split}\n x' &= \cos\alpha\,,\qquad\qquad\qquad\quad\enskip\n y' =\sin\alpha\,,\\\n x''&=-\kappa\sin\alpha\,,\qquad\qquad\qquad\n y''=\kappa\cos\alpha\,,\\\n x'''&=-\kappa^{2}\cos\alpha-\kappa'\sin\alpha\,,\\\n x^{(4)}&=-(\kappa''-\kappa^{3})\sin\alpha-3\kappa\kappa'\cos\alpha\,,\\\n x^{(5)}&=\big(\kappa^{4}-3(\kappa')^{2}-4\kappa\kappa''\big)\cos\alpha\\\n &+(6\kappa^{2}\kappa'-\kappa''')\sin\alpha\,,\\\n y''' &=-\kappa^{2}\sin\alpha+\kappa'\cos\alpha\,,\\\n y^{(4)}&=(\kappa''-\kappa^{3})\cos\alpha-3\kappa\kappa'\sin\alpha\,,\\\n y^{(5)}&=\big(\kappa^{4}-3(\kappa')^{2}-4\kappa\kappa''\big)\sin\alpha\\\n &-(6\kappa^{2}\kappa'-\kappa''')\cos\alpha\,.\n \end{split}\n\end{equation}\nEvaluating \eqref{eqn:xy_derivs} at point $P$ and utilizing \eqref{eqn:param_repr_coeff_def}, we obtain the coefficients given in the theorem.\n\n\end{IEEEproof}\n\n\n\begin{theorem}\label{thm:gen_curve_func_repr}\n Given a curve in $xy$-plane with known ($x(s)$, $y(s)$, $\alpha(s)$, $\kappa(s)$), its function representation about point $P$ is unique. 
The coefficients up to the fifth order are\n \begin{equation}\n \begin{split}\n \varphi_{0} &= y_{0}\,, \qquad\hspace{50pt}\n \varphi_{1} = \tan\alpha_{0}\,,\\\n \varphi_{2} &= \dfrac{\kappa_{0}}{2\cos^{3}\alpha_{0}}\,, \qquad\qquad\n \varphi_{3} = \dfrac{\kappa_{0}'+3\kappa_{0}^{2}\tan\alpha_{0}}{6\cos^{4}\alpha_{0}}\n \,,\\\n %\n \varphi_{4} &= \dfrac{5\kappa_{0}^3 }{8\cos^{7}\alpha_{0}}\n +\dfrac{\kappa_{0}''-12 \kappa_{0}^3+10 \kappa_{0}\kappa_{0}'\tan\alpha_{0} }{24\cos^{5}\alpha_{0}} \,,\\\n \varphi_{5} &= \dfrac{7\kappa_{0}^{2} (\kappa_{0}'+\kappa_{0}^{2}\tan\alpha_{0})}{8\cos^{8}\alpha_{0}}\n +\dfrac{\kappa_{0}'''-86 \kappa_{0}^{2} \kappa_{0}'}{120\cos^{6}\alpha_{0}}\\\n &+\dfrac{ \big(3\kappa_{0}\kappa_{0}''+2(\kappa_{0}')^{2}-12\kappa_{0}^{4}\big) \tan\alpha_{0} }{24\cos^{6}\alpha_{0}}\n \,.\n \end{split}\n \end{equation}\n\end{theorem}\n\begin{IEEEproof}\n \n Given an arbitrary point $(x, y)$ on the curve (\ref{eqn:EOM_ref_path_ds}), the derivatives of the coordinates $x$ and $y$ with respect to $s$ at that point are given in \eqref{eqn:xy_derivs}. Also, notice that the derivatives of $y$ with respect to $x$ are the same as those given in (\ref{eqn:proof_dyx}, \ref{eqn:proof_dyx_recurv}). Substituting \eqref{eqn:xy_derivs} into (\ref{eqn:proof_dyx}, \ref{eqn:proof_dyx_recurv}), one can obtain the derivatives recursively up to the required order. 
Then the coefficients can be obtained by evaluating these derivatives at point $P$ and utilizing \eqref{eqn:func_repr_coeff_def}.\n\end{IEEEproof}\n\n\begin{remark}\nOne can verify that the coefficients given in Theorems~\ref{thm:gen_curve_param_repr} and \ref{thm:gen_curve_func_repr} satisfy the transformations given in Theorems~\ref{thm:curve_func_2_param_repr} and \ref{thm:curve_param_2_func_repr}.\n\end{remark}\n\n\section{Application in Lateral Control \label{sec:veh_ctrl_setup}}\nThis section applies the arc-length-based parametric representation to the camera-based lateral control problem. We first present a typical architecture using function representation, and discuss the related issues in lane estimation and control. Then a new architecture is proposed that uses the parametric representation to facilitate and improve lane estimation as well as control-related information extraction.\n\nFig.~\ref{fig:block_diag}(a) illustrates a typical architecture of camera-based vehicle control. First, lanes are captured by cameras. Then perception applies lane detection algorithms using computer vision or machine learning techniques, and outputs coefficients $\boldsymbol{\varphi}$ that represent lanes with polynomial function representations. In the scenarios where perception is not perfect (noisy or corrupted) or temporarily unavailable, lane estimation, which is achieved through predictors, observers or estimators, becomes extremely important. Those techniques require a model of the evolution of the coefficients for prediction. For example, the orange box in Fig.~\ref{fig:block_diag}(a) represents a typical Kalman filter implementation, which is decoupled into two steps: time update and measurement update. 
The time update step utilizes a model to predict an a priori estimate of the coefficients, while the measurement update step generates an a posteriori estimate based on the newest measurement and signal characteristics.\nHowever, when function representation is used, it is rather difficult to find such a model because: i) the vehicle body-fixed frame is translating and rotating due to vehicle movement; and ii) function representation does not preserve its form in coordinate transformation.\nTherefore, in practice it is common to use simple approximate models for prediction, such as $\boldsymbol{\varphi}_{k+1}=\boldsymbol{\varphi}_{k}$. These predictions only work for a very short period, and underlying inaccuracies in the model lead to unexpected behaviors due to the aforementioned issues.\nAnother issue arises when control algorithms need to extract state information with respect to lanes at a preview distance, such as lateral deviation, relative heading angle, road curvature, etc. Extracting such information requires numerical iterations for function representation because the preview distance is implicit in the representation.\n\n\begin{figure}[!t]\n \centering\n \n \includegraphics[scale=1.2]{Fig02.pdf}\\\n \caption{Block diagrams of camera-based vehicle control. (a) A typical architecture using function representation. (b) Proposed architecture using arc-length-based parametric representation. }\label{fig:block_diag}\n\end{figure}\n\nThese issues can be resolved if arc-length-based parametric representation is utilized to characterize lanes.\nWhen the vehicle moves, the properties established in Theorems~\ref{thm:curve_param_repr_shift} and \ref{thm:curve_param_repr_conformal} yield an intrinsic linear model for the evolution of the coefficients. 
Also, it is straightforward to extract information with respect to lanes at a preview distance since this distance is actually the arc length parameter explicitly used in the representation.\nFig.~\ref{fig:block_diag}(b) proposes a new architecture for the camera-based control problem using arc-length-based parametric representation.\nWe still assume that perception outputs the coefficients $\boldsymbol{\varphi}$ of polynomial function representation to ensure compatibility with current platforms. An additional step is introduced to transform the function representation to the parametric representation by applying Theorem~\ref{thm:curve_func_2_param_repr}. Thus, we can derive an intrinsic linear model for the evolution of the coefficients $\boldsymbol{\phi}$, and easily extract information needed by control algorithms. The details are provided in the remainder of this part. In Section~\ref{sec:perception}, we introduce notations, and discuss lane representations and transformations. Section~\ref{sec:estmation_lane} derives the model for the evolution of the coefficients $\boldsymbol{\phi}$ for lane estimation. In Section~\ref{sec:ctrl}, we introduce a vehicle dynamic model and a lateral controller as an example to demonstrate how to utilize the derived lane estimation model.\n\n\n\n\subsection{Perception and Transformation \label{sec:perception}}\n\n\nFig.~\ref{fig:camera_control} depicts the camera-based perception of the path represented as the green dashed curve. Note that in practice perception algorithms output representations of lane markers captured by cameras, which can be transformed into representations of lane centers. For simplicity, in this paper we use paths or lanes to indicate lane centers.\nWe also assume that the camera with FOV angle $2\delta$ is mounted at point $Q$ along the longitudinal symmetry axis. The wheelbase length is $l$, and the distance from point $Q$ to the rear axle center is $d$. 
The light purple sector region denotes the FOV of the camera, which is also symmetric about the longitudinal axis. The closest on-path point observed by the camera is $\\Omega$. Note that the camera can only capture the segment $\\mathcal{C}$ of the path within the FOV that is highlighted as the solid green curve beyond point $\\Omega$. In this part, we use the following notation:\n\\begin{enumerate}\n \\item ${(x,y,z)}$ denotes the earth-fixed frame $\\cFE$.\n\n \\item ${(\\tau, \\eta, z)}$ denotes the vehicle body-fixed frame $\\cFB$ with the origin located at $Q$. Axes $\\tau$ and $\\eta$ are along the longitudinal and lateral directions, with the corresponding unit vectors denoted as $\\unitvec[\\tau]$ and $\\unitvec[\\eta]$, respectively.\n\n \\item The position of $Q$ is $(x_{\\rm Q}, y_{\\rm Q})$ expressed in $\\cFE$, and the vehicle heading angle is $\\psi$ with respect to the $x$-axis.\n\n \\item The position of $\\Omega$ is ${(x_{\\rm \\Omega}, y_{\\rm \\Omega})}$ expressed in $\\cFE$, while the slope angle and curvature at point $\\Omega$ are $\\alpha_{\\rm \\Omega}$ and $\\kappa_{\\rm \\Omega}$, respectively.\n\n \\item When it is needed to distinguish different time instants, subscripts $k$ or $k+1$ indicating time steps will be added to the points, axes, frames and time-varying variables.\n\\end{enumerate}\n\n\\begin{figure}[!t]\n \\centering\n \n \\includegraphics[scale=0.9]{Fig03.pdf}\\\\\n \\caption{Schematics of camera-based vehicle control.\\label{fig:camera_control}}\n\\end{figure}\n\n\nPerception algorithms process the observed path segment $\\mathcal{C}$ and approximate it with a polynomial function\n\\begin{align}\\label{eqn:poly_func_repr_D}\n \\mathcal{C}'=\\{(\\tau, \\eta)\\,|\\,\\eta = \\boldsymbol{\\varphi}(0)\\,\\boldsymbol{p}_{N}(\\tau)\\,,\\,\\tau\\in\\mathbb{R}\\}\\,,\n\\end{align}\nthat is expressed in frame $\\cFB$ and indicated by the red curve in Fig.~\\ref{fig:camera_control}.\nNote that point $D$ corresponding to
$\\tau=0$ is not visible in the FOV since the FOV angle is typically less than $180$ degrees. Strictly speaking, \\eqref{eqn:poly_func_repr_D} is not the Taylor approximation of $\\mathcal{C}$ about point $D$, but the expansion of the Taylor approximation of $\\mathcal{C}$ about the closest on-path point (point $\\Omega$) in the FOV, that is,\n\\begin{align}\\label{eqn:poly_func_repr_omega}\n \\mathcal{C}'=\\{(\\tau, \\eta)\\,|\\,\\eta = \\boldsymbol{\\varphi}(\\tau_{\\Omega})\\,\\boldsymbol{p}_{N}(\\tau-\\tau_{\\Omega})\\,,\\,\\tau\\in\\mathbb{R}\\}\\,.\n\\end{align}\nAccording to Remark~\\ref{remark:func_repr_preserve_shift}, the change of coefficients is\n\\begin{align}\n \\boldsymbol{\\varphi}^\\top(0)&=\\boldsymbol{T}(-\\tau_{\\Omega})\\boldsymbol{\\varphi}^{\\top}(\\tau_{\\Omega})\\,,\n\\end{align}\nwhere $\\boldsymbol{T}(\\cdot)$ is the same as that given in \\eqref{eqn:T_expansion}.\n\nThe next step is to transform the function representation \\eqref{eqn:poly_func_repr_D} to the arc-length-based parametric representation.\nIndeed, one should transform \\eqref{eqn:poly_func_repr_omega} to the parametric representation about point $\\Omega$ since \\eqref{eqn:poly_func_repr_D} is the expansion of approximation \\eqref{eqn:poly_func_repr_omega} about point $\\Omega$.\nHowever, it is rather complicated to transform to the parametric representation about point $\\Omega$; see the proof of Theorem~\\ref{thm:curve_func_2_param_repr}. Also, transformation about point $\\Omega$ leads to a floating origin of the arc length coordinate $s$ while the vehicle is moving. Therefore, we transform \\eqref{eqn:poly_func_repr_D} to the parametric representation about point $D$, which is the $\\eta$-intercept of $\\mathcal{C}'$ in frame $\\cFB$.\nAs shown in Fig.~\\ref{fig:camera_control}, point $\\Omega$ is close to point $D$ when the following holds: i) the lateral deviation to the path is small; or ii) the camera has a relatively wide view.
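Assuming that $\\boldsymbol{T}(\\cdot)$ in \\eqref{eqn:T_expansion} takes the standard binomial (Taylor-shift) form, the coefficient change $\\boldsymbol{\\varphi}^\\top(0)=\\boldsymbol{T}(-\\tau_{\\Omega})\\,\\boldsymbol{\\varphi}^{\\top}(\\tau_{\\Omega})$ can be sketched numerically as below; the helper names `shift_matrix` and `apply_shift` are ours, not from the paper.

```python
from math import comb

def shift_matrix(h, N):
    # T(h)[j][i] = C(i, j) * h^(i - j): upper-triangular Pascal-type
    # matrix that re-expands polynomial coefficients about a point
    # shifted by h (the assumed form of eqn:T_expansion)
    return [[comb(i, j) * h**(i - j) if i >= j else 0.0
             for i in range(N + 1)] for j in range(N + 1)]

def apply_shift(T, phi):
    # phi_new^T = T * phi_old^T for one row of coefficients
    n = len(phi)
    return [sum(T[j][i] * phi[i] for i in range(n)) for j in range(n)]

# eta = 1 + 2(tau - 1) + 3(tau - 1)^2 about tau_Omega = 1,
# re-expanded about tau = 0 via T(-tau_Omega)
phi_about_0 = apply_shift(shift_matrix(-1.0, 2), [1.0, 2.0, 3.0])
```

Expanding the same quadratic about the origin by hand gives $2 - 4\\tau + 3\\tau^{2}$, matching the matrix computation.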
Applying Theorem~\\ref{thm:curve_func_2_param_repr}, we obtain the parametric representation of $\\mathcal{C}'$ given in \\eqref{eqn:poly_func_repr_D} about point $D$ as\n\\begin{align}\\label{eqn:param_repr_body_frame}\n \\mathcal{C}'&=\\{\\bfr\\,|\\,\\bfr(s; 0)= \\boldsymbol{\\phi}(0)\\,\\boldsymbol{p}_{N}(s)\\,,\\,s\\in\\mathbb{R}\\}\\,,\n\\end{align}\nwhere $\\bfr = [\\tau, \\; \\eta]^\\top$ is the coordinate vector in frame $\\cFB$, the location of arc length coordinate $s=0$ corresponds to point $D$, and the change of coefficients is\n\\begin{align}\\label{eqn:param_repr_body_frame_coeff}\n \\boldsymbol{\\phi}(0)&= \\boldsymbol{f}(\\boldsymbol{\\varphi}(0))\\,.\n\\end{align}\n\n\n\\subsection{Lane Estimation \\label{sec:estmation_lane}}\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[scale=0.6]{Fig04.pdf}\\\\\n \\caption{Vehicle movement from step $k$ to $k+1$.\\label{fig:lanepoly_time_evolve}}\n\\end{figure}\n\nFig.~\\ref{fig:lanepoly_time_evolve} depicts the vehicle movement from time step $k$ to $k+1$. As mentioned earlier, Fig.~\\ref{fig:lanepoly_time_evolve} uses subscripts $k$ or $k+1$ on points, frames and time-varying variables to distinguish different instants; cf.~Fig.~\\ref{fig:camera_control}. Specifically, point $Q_{k}$, frame $\\cFB[k]$ with $\\tau_{k}\\eta_{k}z$ axes and point $D_{k}$ denote the location of the camera, the body-fixed frame and the $\\eta$-intercept of the lane at step $k$, respectively. The red curve $\\mathcal{C}'$ represents the curve \\eqref{eqn:param_repr_body_frame} at step $k$.\n$(x_{k}, y_{k})$ are the coordinates of $Q_{k}$ in the earth-fixed frame $\\cFE$, while $\\psi_{k}$ is the vehicle heading. $(\\tilde{x}_{k}, \\tilde{y}_{k})$ and $(\\tilde{\\tau}_{k}, \\tilde{\\eta}_{k})$ are the coordinates of the displacement vector $Q_{k}Q_{k+1}$ expressed in the earth-fixed frame $\\cFE$ and body-fixed frame $\\cFB[k]$, respectively. $\\tilde{\\psi}_{k}$ is the change of vehicle heading from step $k$ to step $k+1$.
$s_{k}$ is the arc length coordinate at step $k$ such that $s_{k}=0$ corresponds to point $D_{k}$, while $\\tilde{s}_{k}$ represents the arc length distance from $D_{k}$ to $D_{k+1}$ along the curve $\\mathcal{C}'$. In summary,\n\\begin{equation}\\label{eqn:term_diff_k_to_k1}\n\\begin{split}\n \\tilde{x}_{k} & = x_{k+1}-x_{k}\\,,\\qquad\n \\tilde{y}_{k} = y_{k+1}-y_{k}\\,,\\\\\n \\tilde{\\psi}_{k} & = \\psi_{k+1}-\\psi_{k}\\,,\\qquad\n \\tilde{s}_{k} = s_{k}-s_{k+1}\\,.\n\\end{split}\n\\end{equation}\nIn the following we assume vehicle state changes $\\tilde{x}_{k}$, $\\tilde{y}_{k}$, $\\tilde{\\psi}_{k}$, $\\tilde{\\tau}_{k}$, $\\tilde{\\eta}_{k}$ and $\\tilde{s}_{k}$ are known, and will investigate the details on how to obtain them in Section~\\ref{sec:ctrl}.\n\nTo derive the evolution of coefficients in \\eqref{eqn:param_repr_body_frame}, we highlight the time-dependency and rewrite it for step $k$ as\n\\begin{align}\\label{eqn:param_repr_body_frame_k_in_k}\n \\mathcal{C}'&=\\{\\bfr_{k}\\,|\\,\\bfr_{k}(s_{k}; 0) = \\boldsymbol{\\phi}_{k}(0)\\,\\boldsymbol{p}_{N}(s_{k})\\,,\\,s_{k}\\in \\mathbb{R}\\}\\,,\n\\end{align}\nwhich is the parametric representation using arc length coordinates $s_{k}$ about point $D_{k}$ in frame $\\cFB[k]$. The objective is to derive the new representation\n\\begin{equation}\\label{eqn:param_repr_body_frame_k+1}\n\\begin{split}\n \\mathcal{C}'&=\\{\\bfr_{k+1}\\,|\\,\n \\bfr_{k+1}(s_{k+1}; 0)= \\boldsymbol{\\phi}_{k+1}(0)\\,\\boldsymbol{p}_{N}(s_{k+1})\\,,\\, s_{k+1}\\in\\mathbb{R}\\}\\,,\n\\end{split}\n\\end{equation}\ngiven \\eqref{eqn:param_repr_body_frame_k_in_k} based on vehicle state changes from step $k$ to step $k+1$ such that the relationship between the coefficients $\\boldsymbol{\\phi}_{k}(0)$ and $\\boldsymbol{\\phi}_{k+1}(0)$ can be obtained. Note that \\eqref{eqn:param_repr_body_frame_k+1} is based on arc length coordinates $s_{k+1}$ about point $D_{k+1}$ in frame $\\cFB[k+1]$.
Hence, three steps are performed sequentially in the following: i) coordinate transformation from frame $\\cFB[k]$ to frame $\\cFB[k+1]$; ii) shifting expansion point from $D_{k}$ to $D_{k+1}$; iii) changing coordinate from $s_{k}$ to $s_{k+1}$.\n\nAccording to Theorem~\\ref{thm:gen_coord_transf}, the change of coordinates from body-fixed frame $\\cFB[k+1]$ to $\\cFB[k]$ is\n\\begin{align}\\label{eqn:coord_transf_k_k_1}\n\\boldsymbol{r}_{k} &= \\bfR_{k}\\, \\boldsymbol{r}_{k+1}+\\boldsymbol{d}_{k}\\,,\n\\end{align}\nwhere\n\\begin{align}\n \\bfR_{k} &=\n \\begin{bmatrix}\n \\cos\\tilde{\\psi}_{k} & -\\sin\\tilde{\\psi}_{k} \\\\ \\sin\\tilde{\\psi}_{k} &\\cos\\tilde{\\psi}_{k}\n \\end{bmatrix}\\,,&\n \\bfd_{k} &=\n \\begin{bmatrix}\n \\tilde{\\tau}_{k} \\\\ \\tilde{\\eta}_{k}\n \\end{bmatrix}\\,.\n\\end{align}\nApplying Theorem~\\ref{thm:curve_param_repr_conformal} to curve \\eqref{eqn:param_repr_body_frame_k_in_k} with coordinate transformation \\eqref{eqn:coord_transf_k_k_1}, we obtain the parametric representation using arc length coordinates $s_{k}$ about point $D_{k}$ in frame $\\cFB[k+1]$ as\n\\begin{align}\\label{eqn:param_repr_body_frame_k_in_k+1}\n \\mathcal{C}'&=\\{\\bfr_{k+1}\\,|\\,\n \\bfr_{k+1}(s_{k}; 0)= \\widehat{\\boldsymbol{\\phi}}_{k}(0)\\,\\boldsymbol{p}_{N}(s_{k})\\,,\\,s_{k}\\in\\mathbb{R}\\}\\,,\n\\end{align}\nwhere the change of coefficients is\n\\begin{align}\\label{eqn:coeff_change_step_1}\n \\widehat{\\boldsymbol{\\phi}}_{k}(0) & = \\bfR_{k}^{\\top}\\big(\\boldsymbol{\\phi}_{k}(0)-\\bfD_{k} \\big)\\,,\n\\end{align}\nand\n\\begin{align}\n\\bfD_{k} & = \\begin{bmatrix}\n \\bfd_{k} & 0 & \\cdots & 0\n\\end{bmatrix}\\in \\mathbb{R}^{2\\times (N+1)}\\,.\n\\end{align}\n\nNotice that the expansion point has shifted from $D_{k}$ to $D_{k+1}$ with distance $\\tilde{s}_{k}$.\nApplying Theorem~\\ref{thm:curve_param_repr_shift} to \\eqref{eqn:param_repr_body_frame_k_in_k+1}, we obtain the parametric representation of $\\mathcal{C}'$ using arc length 
coordinates $s_{k}$ about point $D_{k+1}$ in frame $\\cFB[k+1]$ as\n\\begin{align}\\label{eqn:param_repr_body_frame_k_in_k+1_sk}\n \\mathcal{C}'&=\\{\\bfr_{k+1}\\,|\\,\n \\bfr_{k+1}(s_{k}; \\tilde{s}_{k}) = \\widehat{\\boldsymbol{\\phi}}_{k}(\\tilde{s}_{k})\\,\\boldsymbol{p}_{N}(s_{k}-\\tilde{s}_{k})\\,,\\,s_{k}\\in\\mathbb{R}\\}\\,,\n\\end{align}\nwhere the change of coefficients is\n\\begin{align}\\label{eqn:coeff_change_step_2}\n \\widehat{\\boldsymbol{\\phi}}_{k}^\\top(\\tilde{s}_{k})&=\\boldsymbol{T}(\\tilde{s}_{k})\\,\\widehat{\\boldsymbol{\\phi}}_{k}^\\top(0)\\,,\n\\end{align}\nand $\\boldsymbol{T}(\\cdot)$ is the same as that given in \\eqref{eqn:T_expansion}.\n\nNext, utilizing \\eqref{eqn:term_diff_k_to_k1}, we can change coordinates from $s_{k}$ to $s_{k+1}$ to obtain the parametric representation \\eqref{eqn:param_repr_body_frame_k+1} where the change of coefficients is\n\\begin{align}\\label{eqn:coeff_change_step_3}\n \\boldsymbol{\\phi}_{k+1}(0)&=\\widehat{\\boldsymbol{\\phi}}_{k}(\\tilde{s}_{k})\\,.\n\\end{align}\nIn summary, the evolution of coefficients in the parametric representation from step $k$ to step $k+1$ is\n\\begin{align}\\label{eqn:coeff_dyn_gen_equiv}\n \\boldsymbol{\\phi}_{k+1}(0)&=\\bfR_{k}^{\\top}\\big(\\boldsymbol{\\phi}_{k}(0)-\\bfD_{k} \\big)\\, \\boldsymbol{T}^\\top(\\tilde{s}_{k})\\,,\n\\end{align}\ncf.
(\\ref{eqn:coeff_change_step_1}, \\ref{eqn:coeff_change_step_2}, \\ref{eqn:coeff_change_step_3}), which shows that the evolution is linear by nature.\n\n\\begin{remark}\nUtilizing the vectorization operator and Theorem~\\ref{thm:vec_kron_relation}, one can rewrite \\eqref{eqn:coeff_dyn_gen_equiv} into the standard form of a linear system\n\\begin{align}\n \\boldsymbol{\\Phi}_{k+1}&=\\mathbf{A}_{k}\\boldsymbol{\\Phi}_{k}+\\mathbf{B}_{k}\\,,\n\\end{align}\nwhere\n\\begin{equation}\n \\begin{split}\n \\boldsymbol{\\Phi}_{k}&=\\vectr(\\boldsymbol{\\phi}_{k}(0))\\,, \\qquad\\qquad\n \\mathbf{A}_{k} =\\boldsymbol{T}(\\tilde{s}_{k})\\otimes \\bfR_{k}^{\\top}\\,,\\\\\n \\mathbf{B}_{k} &= -\\big(\\boldsymbol{T}(\\tilde{s}_{k})\\otimes \\bfR_{k}^{\\top}\\big)\\vectr(\\mathbf{D}_{k})\\,.\n \\end{split}\n\\end{equation}\n\\end{remark}\n\n\n\\subsection{Vehicle Dynamics and Control \\label{sec:ctrl}}\n\nThe proposed architecture and the model for the evolution of coefficients using arc-length-based parametric representation can be applied to any vehicle model with a reasonable control algorithm that requires lane estimation and information extraction. To demonstrate the workflow, in this part we provide an example using the model derived in \\cite{Wubing_ND_2022} and the path-following controller proposed in \\cite{Wubing_LC_TIV_2022}.\n\n\\subsubsection{Dynamics and Transformation on States\\label{sec:dyn_estimation_states}}\n\nIn general, vehicle dynamics describe the evolution of the absolute position (e.g., $x_{\\rm Q},\\, y_{\\rm Q}$) and orientation (heading angle $\\psi$) in the earth-fixed frame $\\cFE$.
Such models come at different levels of complexity depending on their fidelity.\nHere, we consider the model derived in \\cite{Wubing_ND_2022} for the camera location point $Q$, that is,\n\\begin{equation}\\label{eqn:EOM_point_Q}\n\\begin{split}\n\\dot{x}_{\\rm Q} &= V\\,\\cos\\psi-d\\,\\dfrac{V}{l}\\tan\\gamma \\sin\\psi\\, , \\\\\n\\dot{y}_{\\rm Q} &= V\\,\\sin\\psi+d\\,\\dfrac{V}{l}\\tan\\gamma \\cos\\psi\\, , \\\\\n\\dot{\\psi} &= \\dfrac{V}{l}\\,\\tan\\gamma\\, ,\n\\end{split}\n\\end{equation}\nwhere $x_{\\rm Q}$, $y_{\\rm Q}$, ${\\psi}$, $l$ and $d$ follow the notation introduced in Section~\\ref{sec:perception}, $V$ is the constant longitudinal speed, and $\\gamma$ is the steering angle.\n\nTo facilitate control design, the vehicle dynamics \\eqref{eqn:EOM_point_Q} on absolute position ($x_{\\rm Q},\\, y_{\\rm Q}$) and orientation (heading angle $\\psi$) expressed in the earth-fixed frame $\\cFE$ can be transformed to dynamics on relative position and orientation with respect to the path. Such a transformation allows us to obtain the evolution of the arc length position, lateral deviation and relative heading with respect to lanes perceived in the camera.
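As a minimal numerical illustration, a single explicit-Euler step of the kinematic model \\eqref{eqn:EOM_point_Q} can be sketched as below; the function name `step_point_Q` is ours, not from the cited work.

```python
from math import sin, cos, tan

def step_point_Q(x, y, psi, V, gamma, l, d, T):
    # One explicit-Euler step of the kinematic model for camera point Q:
    # constant speed V, steering angle gamma, wheelbase l, camera offset d.
    omega = V / l * tan(gamma)  # yaw rate (V/l) * tan(gamma)
    x_next = x + (V * cos(psi) - d * omega * sin(psi)) * T
    y_next = y + (V * sin(psi) + d * omega * cos(psi)) * T
    return x_next, y_next, psi + omega * T
```

For example, with zero steering the point advances by $V\\,T$ along the heading direction, while a positive steering angle produces positive yaw rate and lateral motion of $Q$.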
In this paper, we choose the following relative position and orientation as new states: i) the arc length $s_{\\Omega}$ that the vehicle has travelled along the lane; ii) the observed distance $\\varepsilon_{\\Omega}$ that characterizes the length of vector $\\pvec{\\Omega Q}$ (cf.~Fig.~\\ref{fig:camera_control}), whose sign is positive when $Q$ is on the left side of the path;\nand iii) the relative heading angle\n\\begin{align}\\label{eqn:rel_heading_def}\n \\theta_{\\Omega} &= \\psi-\\alpha_{\\Omega}\\,,\n\\end{align}\nwith respect to point $\\Omega$ on the path.\nThe details on the state transformation are provided in Appendix~\\ref{append:coord_transf}, and the resulting transformed relative dynamics are\n\\begin{equation}\\label{eqn:transf_xy_se_diff_Q}\n \\begin{split}\n \\dot{s}_{\\Omega} &= \\dfrac{ V }{\\sin(\\delta-\\theta_{\\Omega})}\\big(\\sin\\delta+\\dfrac{d}{l}\\cos\\delta \\tan\\gamma\\big)\\\\\n &+\\dfrac{|\\varepsilon_{\\Omega}|\\cos\\big(\\delta-\\sign(\\varepsilon_{\\Omega})\\,\\delta\\big)}{\\sin(\\delta-\\theta_{\\Omega})}\\dfrac{V}{ l}\\tan\\gamma\\,,\\\\\n %\n \\dot{\\varepsilon}_{\\Omega} &=\\dfrac{ V }{\\sin(\\delta-\\theta_{\\Omega})}\\big(\\sin\\theta_{\\Omega}+\\dfrac{d}{l}\\cos\\theta_{\\Omega} \\tan\\gamma\\big)\\\\\n &+\\dfrac{|\\varepsilon_{\\Omega}|\\cos(\\delta-\\sign(\\varepsilon_{\\Omega})\\,\\theta_{\\Omega})}{\\sin(\\delta-\\theta_{\\Omega})}\\dfrac{V}{l}\\tan\\gamma\\,,\\\\\n %\n \\dot{\\theta}_{\\Omega} &=-\\dfrac{ V\\, \\kappa_{\\Omega}}{\\sin(\\delta-\\theta_{\\Omega})}\\big(\\sin\\delta+\\dfrac{d}{l}\\cos\\delta \\tan\\gamma\\big)\\\\\n &+\\Big(1-\\dfrac{\\kappa_{\\Omega} |\\varepsilon_{\\Omega}|\\cos\\big(\\delta-\\sign(\\varepsilon_{\\Omega})\\,\\delta\\big)}{\\sin(\\delta-\\theta_{\\Omega})}\\Big)\\dfrac{V}{l}\\tan\\gamma\\,.\n \\end{split}\n\\end{equation}\n\n\n\\subsubsection{Control}\n\nThe objective of a path-following controller is to generate desired steering angle $\\gamma_{\\rm des}$ such that point Q can 
follow the given path. In the literature, many controllers have been demonstrated to be effective in different scenarios. In this part we use the nonlinear controller proposed in \\cite{Wubing_LC_TIV_2022} as an example since cameras are typically mounted at the front of the vehicle.\nWe highlight the key ideas of this controller, and refer readers to \\cite{Wubing_LC_TIV_2022} for more details on the design.\n\nWith the assumption that the steering angle $\\gamma$ can track any desired value $\\gamma_{\\rm des}$, that is, $\\gamma = \\gamma_{\\rm des}$,\nthe path-following controller is\n\\begin{align}\\label{eqn:lateral_controller}\n \\gamma_{\\rm des} & = \\gamma_{\\rm ff} + \\gamma_{\\rm fb}\\,,\n\\end{align}\nwhich consists of a feedforward control law\n\\begin{align}\n \\gamma_{\\rm ff} &= \\arctan \\dfrac{l\\,\\kappa_{\\rm D}}{\\sqrt{1-(d\\,\\kappa_{\\rm D})^{2}}}\\,, \\label{eqn:steer_controller_ff_sum}\n\\end{align}\nand a feedback control law\n\\begin{align}\\label{eqn:steer_controller_fb_sum}\n \\gamma_{\\rm fb} &= \\gamma_{\\rm sat}\\cdot g \\Big( \\tfrac{k_{1}}{\\gamma_{\\rm sat}}\\big(\\theta_{\\rm D}-\\theta_{0}+\\arctan (k_{2}\\,\\varepsilon_{\\rm D})\\big)\\Big)\\,.\n\\end{align}\nHere, $\\kappa_{\\rm D}$ is the road curvature at point $D$, $\\theta_{\\rm D}$ is the relative heading angle with respect to point $D$ (cf.~\\eqref{eqn:rel_heading_def}), and ${\\varepsilon_{\\rm D}=-\\eta_{\\rm D}}$ is the $\\eta$-deviation of point $D$ where $\\eta_{\\rm D}$ indicates the $\\eta$-coordinate of point $D$ in frame $\\cFB$.
In other words, $\\varepsilon_{\\rm D}$ characterizes the length of vector $\\pvec{D Q}$ (cf.~Fig.~\\ref{fig:camera_control}), and is positive when $\\pvec{D Q}$ points towards the positive $\\eta$-axis.\nAlso,\n\\begin{align}\\label{eqn:theta0_des}\n \\theta_{0} & = -\\arcsin(d\\,\\kappa_{\\rm D})\\,\n\\end{align}\nis the desired yaw angle error, ($k_{1}$, $k_{2}$) are tunable control gains, $\\gamma_{\\rm sat}$ is the maximum allowable steering angle, and $g (x)$ denotes the saturation function\n\\begin{equation}\n g(x) =\\dfrac{2}{\\pi}\\arctan \\Big(\\dfrac{\\pi}{2} x\\Big)\\,. \\label{eqn:satfunction}\n\\end{equation}\nThe feedforward control essentially provides the estimated steering angle to handle a given road curvature $\\kappa_{\\rm D}$, whereas the feedback control makes corrections based on the lateral deviation $\\varepsilon_{\\rm D}$ and the yaw angle error $\\theta_{\\rm D}-\\theta_{0}$.\n\nThe controller (\\ref{eqn:lateral_controller}-\\ref{eqn:satfunction}) relies on information about or with respect to point $D$. We remark that point $D$ is neither the closest on-path point to $Q$, as used in \\cite{Wubing_LC_TIV_2022}, nor the closest observable on-path point (point $\\Omega$) to $Q$ in the FOV. One can verify that when the road curvature is constant (i.e., $\\kappa_{\\rm D}=\\kappa^{\\ast}$), the closed loop system (\\ref{eqn:transf_xy_se_diff_Q}-\\ref{eqn:satfunction}) possesses the desired equilibrium\n\\begin{align}\\label{eqn:equilb}\n s_{\\Omega}^{\\ast}=\\dfrac{V\\,t}{\\sqrt{1-(d\\,\\kappa^{\\ast})^{2}}}\\,,\\;\n \\varepsilon_{\\Omega}^{\\ast}=0\\,,\\;\n \\theta_{\\Omega}^{\\ast}=-\\arcsin(d\\,\\kappa^{\\ast})\\,,\n\\end{align}\nwhich can be stabilized with properly chosen gains $k_{1}$ and $k_{2}$.
The equilibrium \\eqref{eqn:equilb} represents the scenario where the vehicle follows the given path perfectly with no lateral deviations.\nTo extract the information ($\\kappa_{\\rm D}$, $\\theta_{\\rm D}$, $\\varepsilon_{\\rm D}$) from representation \\eqref{eqn:param_repr_body_frame}, we provide the following lemma.\n\n\n\\begin{lemma}\\label{thm:terms_from_param_repr}\n Given the parametric representation \\eqref{eqn:param_repr_body_frame} of the lane in frame $\\cFB$, the $\\eta$-deviation $\\varepsilon_{\\rm D}$, the relative heading angle $\\theta_{\\rm D}$, and the curvature $\\kappa_{\\rm D}$ at point $D$ are\n \\begin{equation}\\label{eqn:extract_ctrl_info_from_repr}\n \\begin{split}\n \\varepsilon_{\\rm D} &=-\\hat{\\phi}_{0}\\,,\\,\\qquad\\qquad\\qquad\n \\theta_{\\rm D} = -\\arctan \\dfrac{\\hat{\\phi}_{1}}{\\bar{\\phi}_{1}}\\,,\\,\\\\\n \\kappa_{\\rm D} &= \\dfrac{2\\hat{\\phi}_{2}\\bar{\\phi}_{1}-2\\hat{\\phi}_{1}\\bar{\\phi}_{2}}{\\hat{\\phi}_{1}^{2}+\\bar{\\phi}_{1}^{2}}\\,.\n \\end{split}\n \\end{equation}\n\\end{lemma}\n\\begin{IEEEproof}\n Given \\eqref{eqn:param_repr_body_frame}, we obtain the $\\eta$-deviation $\\varepsilon$, the relative heading angle $\\theta$, and the road curvature $\\kappa$ at an arbitrary point $P$ with a preview distance $s_{\\rm P}$ as\n \\begin{equation}\\label{eqn:lemma_get_terms_from_repr}\n \\begin{split}\n \\varepsilon(s_{\\rm P}) &= -\\eta(s_{\\rm P}) \\,,\\qquad\n \\theta(s_{\\rm P}) = -\\arctan \\dfrac{\\eta'(s_{\\rm P})}{\\tau'(s_{\\rm P})}\\,,\\\\\n \\kappa(s_{\\rm P}) &= \\dfrac{\\eta''(s_{\\rm P})\\,\\tau'(s_{\\rm P})-\\eta'(s_{\\rm P})\\,\\tau''(s_{\\rm P})}{(\\eta'(s_{\\rm P}))^{2}+(\\tau'(s_{\\rm P}))^{2}}\\,,\n \\end{split}\n \\end{equation}\n where the derivatives $\\eta'$, $\\tau'$, $\\eta''$ and $\\tau''$ can be obtained by differentiating \\eqref{eqn:param_repr_body_frame} with respect to $s$.\n Point $D$ marks the origin of the arc length coordinate $s$.
Thus, evaluating \\eqref{eqn:lemma_get_terms_from_repr} at $s_{\\rm P}=0$ yields \\eqref{eqn:extract_ctrl_info_from_repr} at point $D$.\n\\end{IEEEproof}\n\n\n\\subsubsection{Vehicle State Changes}\nIn the derivation of \\eqref{eqn:coeff_dyn_gen_equiv}, the vehicle state changes $\\tilde{x}_{k}$, $\\tilde{y}_{k}$, $\\tilde{\\psi}_{k}$, $\\tilde{\\tau}_{k}$, $\\tilde{\\eta}_{k}$ and $\\tilde{s}_{k}$ are assumed to be known at each step in Section~\\ref{sec:estmation_lane}. In practice, they can be obtained based on a nominal vehicle dynamic model and sensor data. In this part we take model \\eqref{eqn:EOM_point_Q} as an example, and assume that the longitudinal speed $V$ and yaw rate $\\omega=\\dot{\\psi}$ can be measured by onboard sensors. We rewrite model \\eqref{eqn:EOM_point_Q} as\n\\begin{equation}\\label{eqn:model_w_sensor_data}\n \\begin{split}\n \\dot{x}&= V\\cos\\psi -d\\, \\omega \\sin\\psi\\, ,\\\\\n \\dot{y}&= V\\sin\\psi+d\\, \\omega\\cos\\psi\\,, \\\\\n \\dot{\\psi}&=\\omega\\,,\n \\end{split}\n\\end{equation}\nbased on the measured signals $V$ and $\\omega$.\nApplying the Euler method to integrate \\eqref{eqn:model_w_sensor_data} at step $k$, we obtain\n\\begin{equation}\\label{eqn:changes_x_y_psi}\n \\begin{split}\n \\tilde{x}_{k}&= (V_{k}\\cos\\psi_{k} -d\\, \\omega_{k} \\sin\\psi_{k})T\\,,\\\\\n \\tilde{y}_{k}&= (V_{k}\\sin\\psi_{k}+d\\, \\omega_{k}\\cos\\psi_{k})T\\,,\\\\\n \\tilde{\\psi}_{k}&=\\omega_{k} T\\,,\n \\end{split}\n\\end{equation}\nwhere $T$ is the step size. Notice that $(\\tilde{x}_{k}, \\tilde{y}_{k})$ and $(\\tilde{\\tau}_{k}, \\tilde{\\eta}_{k})$ are the coordinates of the displacement vector $Q_{k}Q_{k+1}$ expressed in the earth-fixed frame $\\cFE$ and body-fixed frame $\\cFB[k]$, respectively.
Thus, one can apply Theorem~\\ref{thm:gen_coord_transf} on coordinate transformation and obtain\n\\begin{align}\\label{eqn:changes_tau_eta_psi}\n\\tilde{\\tau}_{k}&= V_{k}T\\,,&\n\\tilde{\\eta}_{k}&= d\\, \\omega_{k} T\\,.\n\\end{align}\n\nNote that $\\tilde{s}_{k}$ is the arc length distance that point $D$ has travelled along the curve $\\mathcal{C}'$ from step $k$ to step $k+1$. Realizing that point $D$ coincides with point $\\Omega$ if $\\delta=\\frac{\\pi}{2}$, we can easily derive the evolution of $s_{\\rm D}$ from the relative dynamics \\eqref{eqn:transf_xy_se_diff_Q} that is transformed from \\eqref{eqn:EOM_point_Q}.\nBy setting $\\delta=\\frac{\\pi}{2}$ in \\eqref{eqn:transf_xy_se_diff_Q} and replacing point $\\Omega$ with point $D$, the evolution of $s_{\\rm D}$ becomes\n\\begin{equation}\n \\begin{split}\\label{eqn:transf_xy_se_diff_D}\n \\dot{s}_{\\rm D} &= \\dfrac{ V+\\omega\\,\\varepsilon_{\\rm D} }{\\cos\\theta_{\\rm D}}\\,,\n \\end{split}\n\\end{equation}\nwhere the yaw rate $\\omega=\\tfrac{V}{l}\\,\\tan\\gamma$ is utilized; cf.~\\eqref{eqn:model_w_sensor_data}.\nApplying Lemma~\\ref{thm:terms_from_param_repr} to the parametric representation \\eqref{eqn:param_repr_body_frame},\nthe $\\eta$-deviation $\\varepsilon_{\\rm D}$ and relative heading angle $\\theta_{\\rm D}$ can be obtained at step $k$. Thus, integrating \\eqref{eqn:transf_xy_se_diff_D} with the Euler method yields\n\\begin{equation}\\label{eqn:changes_s}\n \\tilde{s}_{k}=\\dfrac{V_{k}+\\omega_{k}\\,\\varepsilon_{\\textrm{D}_{k}}}{\\cos(\\theta_{\\textrm{D}_{k}})}\\,T\\,.\n\\end{equation}\n\n\n\n\\section{Results \\label{sec:res}}\nIn this section, simulations of the whole process explained in Section~\\ref{sec:veh_ctrl_setup} are performed to demonstrate the usage of arc-length-based parametric representation in lane estimation and vehicle control. In Section~\\ref{sec:sim_setup}, we start with the details on how we set up the experiment scenarios and simulate lane perception in camera-based control.
In Section~\\ref{sec:sim_res}, we conduct simulations with and without using model \\eqref{eqn:coeff_dyn_gen_equiv} for prediction in the path-following control problem, and the results indicate the efficacy and great potential of \\eqref{eqn:coeff_dyn_gen_equiv} in facilitating lane estimation in vehicle control.\n\n\n\\subsection{Simulation Setup\\label{sec:sim_setup}}\n\nThe model \\eqref{eqn:coeff_dyn_gen_equiv} integrated with the controller (\\ref{eqn:lateral_controller}-\\ref{eqn:satfunction}) is applicable to lane estimation in path-following problems where the path can have any reasonable shape. Remark~\\ref{remark:curv_repr_gen} indicates that a path can be fully described by $\\kappa(s)$. Also, if $\\kappa(s)$ has an analytical form, we can derive the coefficients of the polynomial function representation \\eqref{eqn:poly_func_repr_D} given the vehicle state with respect to the path. These coefficients can be used as: i) the output of perception algorithms at perception updating instants; and ii) the true values to compare against the predicted values when the perception outcome is not available.
Therefore, we consider the closed path used in \\cite{Wubing_ND_2022} and derive the coefficients for polynomial function representation in this part.\nThe curvature of the path is given as\n\\begin{align}\\label{eqn:curv_s_func}\n \\kappa(s) &= \\dfrac{\\kappa_{\\max}}{2}\\left(1-\\cos\\bigg(\\dfrac{2\\pi}{s_{\\rm T}}s\\bigg)\\right)\\ ,\n\\end{align}\nwhere $\\kappa_{\\max}$ is the maximum curvature along the path, and $s_{\\rm T}$ is the arc length period.\nThis path has $N$ corners and perimeter $Ns_{\\rm T}$ if\n\\begin{equation}\\label{eqn:curv_close_cond}\n \\kappa_{\\max}\\,s_{\\rm T}=\\dfrac{4\\pi}{N}\\ , \\quad N=2, 3, \\ldots \\ .\n\\end{equation}\n\nAs discussed in Section~\\ref{sec:perception}, perception algorithms provide representation \\eqref{eqn:poly_func_repr_D} of the lane in real-time, which can be viewed as the expansion of Taylor approximation \\eqref{eqn:poly_func_repr_omega} about point $\\Omega$ in the FOV.\nTo simulate perception, we need to derive the coefficients of function representation \\eqref{eqn:poly_func_repr_omega} in frame $\\cFB$ given the vehicle states and the path information \\eqref{eqn:curv_s_func}.\n\n\\begin{lemma}\\label{lemma:gen_curve_func_repr}\n Given the current vehicle relative states ($s_{\\Omega}$, $\\varepsilon_{\\Omega}$ and $\\theta_{\\Omega}$) with respect to the path, and the half angle $\\delta$ of camera FOV, the lane segment $\\mathcal{C}$ can be approximated as function representation \\eqref{eqn:poly_func_repr_omega} in frame $\\cFB$ by applying Taylor series to point $\\Omega$, where $\\tau_{\\Omega}:=\\tau(s_{\\Omega})=|\\varepsilon_{\\Omega}| \\cos\\delta$ and\n \\begin{equation}\\label{eqn:gen_curv_func_repr_coeff}\n \\begin{split}\n \\varphi_{0}(\\tau_{\\Omega}) &= -\\varepsilon_{\\Omega}\\sin\\delta\\,, \\quad\n \\varphi_{1}(\\tau_{\\Omega}) = -\\tan\\theta_{\\Omega}\\,, \\\\\n %\n \\varphi_{2}(\\tau_{\\Omega}) &= \\dfrac{\\kappa_{\\Omega}}{2\\cos^{3}\\theta_{\\Omega}}\\,,\\quad\n 
\\varphi_{3}(\\tau_{\\Omega}) = \\dfrac{\\kappa'_{\\Omega}-3\\kappa_{\\Omega}^{2}\\tan\\theta_{\\Omega}}{6\\cos^{4}\\theta_{\\Omega}}\\,,\\\\\n %\n \\varphi_{4}(\\tau_{\\Omega}) &= \\dfrac{5\\kappa_{\\Omega}^3}{8\\cos^{7}\\theta_{\\Omega}}\n +\\dfrac{\\kappa''_{\\Omega}-12 \\kappa_{\\Omega}^{3}-10\\kappa_{\\Omega}\\kappa'_{\\Omega}\\tan\\theta_{\\Omega} }{24\\cos^{5}\\theta_{\\Omega}} \\,,\\\\\n %\n \\varphi_{5}(\\tau_{\\Omega}) &=\n \\dfrac{ 7\\kappa_{\\Omega}^{2} (\\kappa'_{\\Omega}-\\kappa_{\\Omega}^{2}\\tan \\theta_{\\Omega}) }{8\\cos^{8}\\theta_{\\Omega}}\n +\\dfrac{\\kappa'''_{\\Omega}-86\\kappa_{\\Omega}^{2} \\kappa'_{\\Omega}}{120\\cos^{6}\\theta_{\\Omega}} \\\\\n &+\\dfrac{\\big(12\\kappa_{\\Omega}^{4}-3\\kappa_{\\Omega}\\kappa''_{\\Omega}-2(\\kappa'_{\\Omega})^{2}\\big)\\tan\\theta_{\\Omega}}{24\\cos^{6}\\theta_{\\Omega}}\\,.\n \\end{split}\n \\end{equation}\n\\end{lemma}\n\\begin{IEEEproof}\n See Appendix~\\ref{append:path_func_repr_body_frame}.\n\\end{IEEEproof}\n\n\n\\subsection{Simulation Results \\label{sec:sim_res}}\n\n\nIn this part, we simulate the whole process of perception, estimation and control explained in Section~\\ref{sec:veh_ctrl_setup} when the vehicle tries to follow the given path (\\ref{eqn:EOM_ref_path_ds}, \\ref{eqn:curv_s_func}) with camera-based perception. The controller (\\ref{eqn:lateral_controller}-\\ref{eqn:satfunction}) updates commands every $T$ seconds while perception provides the function representation \\eqref{eqn:poly_func_repr_D} every $T_{\\rm p}$ seconds. 
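The two update rates described above (control every $T$ seconds, perception every $T_{\\rm p}$ seconds) can be sketched as a multi-rate loop; the callables `perceive`, `predict` and `control` are hypothetical stand-ins for the perception output, the coefficient model \\eqref{eqn:coeff_dyn_gen_equiv} and the controller, respectively.

```python
def run_loop(steps, T, Tp, perceive, predict, control):
    # Multi-rate sketch of camera-based control: perception refreshes
    # the lane representation every Tp seconds, control runs every T
    # seconds, and the coefficient model predicts in between.
    ratio = round(Tp / T)            # control steps per perception update
    phi = perceive(0)
    commands = []
    for k in range(steps):
        if k > 0:
            if k % ratio == 0:
                phi = perceive(k)    # fresh measurement available
            else:
                phi = predict(phi)   # model-based prediction step
        commands.append(control(phi))
    return commands
```

With $T=0.05$ [s] and $T_{\\rm p}=0.15$ [s], every measurement is followed by two prediction (or hold) steps before the next perception update arrives.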
To demonstrate the efficacy of the proposed estimation algorithm, we also assume: i) $T_{\\rm p}$ is larger than $T$; ii) when the representation \\eqref{eqn:poly_func_repr_D} is not updated by perception, the controller may use the model \\eqref{eqn:coeff_dyn_gen_equiv} to update the parametric representation with the estimated state changes (\\ref{eqn:changes_x_y_psi}, \\ref{eqn:changes_tau_eta_psi}, \\ref{eqn:changes_s}) at each step; iii) the measurement \\eqref{eqn:poly_func_repr_D} is free of noise such that prediction using model \\eqref{eqn:coeff_dyn_gen_equiv} can be easily compared against true values without noise effects; and iv) the true values about coefficients of representation are obtained by calculating $\\boldsymbol{\\varphi}(\\tau_{\\Omega})$ using Lemma~\\ref{lemma:gen_curve_func_repr} at every control period with vehicle states ($s_{\\rm \\Omega}$, $\\varepsilon_{\\Omega}$, $\\theta_{\\Omega}$). The initial conditions are set to $s_{\\rm \\Omega}=0$ [m], $\\varepsilon_{\\Omega}=0.1$ [m], $\\theta_{\\Omega}=0$ [deg], and the other parameters used in the simulations are provided in TABLE~\\ref{tab:params}.\n\n\\begin{table}[!t]\n\\begin{center}\n\\renewcommand{\\arraystretch}{1.3}\n\\rowcolors{1}{LightCyan}{LightMagenta}\n\\begin{tabular}{l|c|l}\n\\hline\\hline\n \\rowcolor{Gray} Parameter & Value & Description\\\\\n \\hline\n $l$ [m]& $2.57$ & wheelbase length\\\\\n $d$ [m]& $2$ & distance from Q to rear axle center\\\\\n $\\delta$ [deg]& $60$ & half angle of camera FOV\\\\\n $k_{1}$ [m\/s]& $-l\/d$ & control gain\\\\\n $k_{2}$ [m$^{-1}$]& $0.02$ & control gain\\\\\n $\\gamma_{\\max}$ [deg]& $30$ & physical steering angle limit\\\\\n $s_{\\rm T}$ [m] & $250$ & arc length period of the path\\\\\n $N$ [1] & $4$ & number of corners for closed path\\\\\n $\\kappa_{\\max}$ [m$^{-1}$] & $0.004\\pi$ & maximum curvature of the path\\\\\n $V$ [m\/s] & 20 & longitudinal speed \\\\\n\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Parameters 
used in the simulations. \\label{tab:params}}\n\\end{table}\n\n\nFig.~\\ref{fig:sim_50ms} shows the simulation results without using \\eqref{eqn:coeff_dyn_gen_equiv} to predict coefficients when $T=0.05$ [s] and $T_{\\rm p}=0.15$ [s]. That is, the controller outputs a new command at one step based on the updated lane representation from perception, and then holds this command for the following two steps. This is because when a new representation is not available, the controller still uses the outdated one and thus outputs the same command. This is a typical and easy solution when characterizing lanes using function representation \\eqref{eqn:poly_func_repr_D} directly, since the evolution of the coefficients in such a representation is not obtainable as explained in Section~\\ref{sec:veh_ctrl_setup}.\n\nIn panel (a), the dotted black curve denotes the closed path (\\ref{eqn:EOM_ref_path_ds}, \\ref{eqn:curv_s_func}), while the solid red curve represents the position of the vehicle at point $Q$. Panel (b) plots the lateral deviation $\\varepsilon_{\\Omega}$ using the blue curve, and the steering command $\\gamma_{\\rm des}$ using the red curve with the axis marked on the right. Panels (c, d, e) show the time profiles of the coefficients $\\hat{\\phi}_{0}$, $\\hat{\\phi}_{1}$ and $\\hat{\\phi}_{2}$, respectively, of the parametric representation \\eqref{eqn:param_repr_body_frame} when the vehicle moves along the path. The blue curves denote the measurement outputs of the perception algorithm, while the red curves indicate the true values of those coefficients.
We remark that: i) only the coefficients up to second order in the lateral direction are shown here since they carry more important information for lateral control than the other coefficients; ii) although this represents the scenario using the function representation, the coefficients are still transformed to facilitate comparison against the parametric representation; and iii) the time profiles of the other coefficients in the representation \\eqref{eqn:param_repr_body_frame} are similar and reveal similar results.\nFig.~\\ref{fig:sim_50ms}(a,b) indicate that the controller (\\ref{eqn:lateral_controller}-\\ref{eqn:theta0_des}) allows the vehicle to follow the given path with reasonable performance in this scenario. However, during simulation we also observe that when $T_{\\rm p}\\ge 0.2$ [s], the vehicle using this controller is not able to follow the given path with the outdated lane information and no prediction. This result implies a stringent requirement on lane perception latency, especially since measurements may also be corrupted by noise in practice.
(a) Vehicle position and the closed path (\\ref{eqn:EOM_ref_path_ds}, \\ref{eqn:curv_s_func}) in ($x$, $y$)-plane. (b) lateral deviation $\\varepsilon_{\\Omega}$ and steering command $\\gamma_{\\rm des}$. (c, d, e) time profiles of coefficients in the parametric representation \\eqref{eqn:param_repr_body_frame} while the vehicle moves along the path. \\label{fig:sim_2s}}\n\\end{figure}\n\nFig.~\\ref{fig:sim_2s} shows the simulation results when the function representation \\eqref{eqn:poly_func_repr_D} is transformed to the parametric representation and \\eqref{eqn:coeff_dyn_gen_equiv} is used to predict the coefficients at every control step in the absence of measurement. Here, the control period $T$ is still $0.05$ [s], but the perception period $T_{\\rm p}$ is set to $2$ [s] to demonstrate the effectiveness of the prediction. Besides using the same layout and color scheme as Fig.~\\ref{fig:sim_50ms}, panels (c,d,e) use blue dots to represent new coefficient measurements from the perception algorithm, and zoomed-in plots to highlight the errors between the predicted and the true coefficient values. Fig.~\\ref{fig:sim_2s}(a,b) indicate that even with such a low perception update rate, the controller still achieves reasonable performance in following the given path by predicting the new representation using \\eqref{eqn:coeff_dyn_gen_equiv}. Fig.~\\ref{fig:sim_2s}(c,d,e) show that the prediction errors in the higher-order coefficients are much smaller than those in the lower-order ones. This is natural because, in the coefficient evolution, the higher-order terms feed into the lower-order ones, so errors accumulate faster in the lower-order coefficients. The zoomed-in plots illustrate that the estimation errors in the coefficients $\\hat{\\phi}_{0}$, $\\hat{\\phi}_{1}$ and $\\hat{\\phi}_{2}$ become noticeable after predicting with \\eqref{eqn:coeff_dyn_gen_equiv} for about 0.3, 1.2, and 1.6 [s], respectively.
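The growth of the prediction error over the horizon, and its dependence on the integration step, can be illustrated with a toy sketch. The dynamics below are invented for illustration only and are not the paper's coefficient model \eqref{eqn:coeff_dyn_gen_equiv}; the point is that the accumulated (global) error of Euler integration shrinks roughly linearly with the step size:

```python
import math

def euler_predict(f, x0, t_end, T):
    """Euler-integrate dx/dt = f(t, x) from t = 0 to t = t_end with step T."""
    x = x0
    steps = round(t_end / T)
    for k in range(steps):
        x += T * f(k * T, x)
    return x

# Toy dynamics dx/dt = cos(t); the exact value at t_end is sin(t_end).
f = lambda t, x: math.cos(t)
t_end = 2.0
exact = math.sin(t_end)

# Accumulated Euler error for several step sizes (smaller T -> smaller error).
errors = {T: abs(euler_predict(f, 0.0, t_end, T) - exact) for T in (0.05, 0.02, 0.01)}
```

Halving the step roughly halves the accumulated error, which is consistent with the smaller $\hat{\phi}_{0}$ offset observed when the control period $T$ is reduced.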
The error in $\\hat{\\phi}_{0}$ leads to a noticeable offset in the lateral deviation $\\varepsilon_{\\Omega}$ of around $0.05$ [m], as depicted in Fig.~\\ref{fig:sim_2s}(b).\nOne may notice that the vehicle travels 40 [m] during the $2$ [s] prediction period since $V=20$ [m\/s]. The function representation \\eqref{eqn:poly_func_repr_D} is a Taylor approximation about point $\\Omega$ at the perception update instant. The approximation error gradually becomes noticeable as the vehicle moves forward. Also, prediction with \\eqref{eqn:coeff_dyn_gen_equiv} requires the estimated state changes (\\ref{eqn:changes_tau_eta_psi}, \\ref{eqn:changes_s}) obtained by Euler integration, which accumulates errors. Although the Taylor approximation errors cannot be mitigated, the accumulated Euler integration errors can be reduced by increasing the estimation frequency.\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[scale=1.05]{Fig07.pdf}\\\\\n \\caption{Comparison results for camera-based lateral control using \\eqref{eqn:coeff_dyn_gen_equiv} for estimation on lane representation when $T_{\\rm p}=2$ [s]. (a, c) $T=0.02$ [s]. (b, d) $T=0.01$ [s].
\\label{fig:sim_T_diff}}\n\\end{figure}\n\nFig.~\\ref{fig:sim_T_diff} shows the comparison results for the same scenario as Fig.~\\ref{fig:sim_2s}: the perception update period $T_{\\rm p}$ is still $2$ [s], but the control update period $T$ is reduced to $0.02$ [s] and $0.01$ [s] for panels (a,c) and (b,d), respectively.\nFig.~\\ref{fig:sim_2s}(c) and Fig.~\\ref{fig:sim_T_diff}(c,d) show that the error in $\\hat{\\phi}_{0}$ decreases as the control update period $T$ decreases, which reduces the offset in the lateral deviation $\\varepsilon_{\\Omega}$ depicted in Fig.~\\ref{fig:sim_2s}(b) and Fig.~\\ref{fig:sim_T_diff}(a,b).\n\nWe remark that in practice the perception update period is much shorter than $2$ [s], but the simulations shown here demonstrate the large potential of estimating lanes using the parametric representation with \\eqref{eqn:coeff_dyn_gen_equiv}. This prediction can be used in the time update step of a Kalman filter to obtain more accurate estimates, or as a pure prediction when measurements are temporarily unavailable. When noise and model mismatch appear in practice, performance degradation is inevitable. However, prediction within $0.5\\sim 1$ [s] is still expected to provide reasonable performance since the integration horizon is short and the distance travelled by the vehicle is small.\n\n\\section{Conclusion \\label{sec:conclusion}}\n\nThis paper revisited the fundamental mathematics of approximating curves as polynomial functions or parametric curves. It is shown that arc-length-based parametric representations possess the desirable property of preserving their form under coordinate transformations and parameter shifts. This property can facilitate lane estimation for vehicle control since lanes are characterized by perception algorithms as curves expressed in the vehicle body-fixed frame. As the vehicle moves, the body-fixed frame is translating and rotating.
Thus, we proposed a new architecture using the parametric representation in lane estimation and control. To ensure compatibility with most current platforms, perception algorithms are still assumed to output the coefficients of a polynomial function representation. We derived the coefficient transformation from the polynomial function representation to the arc-length-based parametric representation, as well as the evolution of the coefficients of the parametric representation. This evolution reveals an intrinsic linear relationship as the vehicle moves, which can be easily used for prediction or integrated with Kalman filters. We also set up a framework to simulate the whole process, including perception, estimation and control, for camera-based vehicle control problems. Simulation results indicate that controllers relying on lanes predicted with the parametric representation can still achieve reasonably good performance at extremely low perception update rates. These results are practically important for improving control performance under reduced perception update rates, and for obtaining better estimates when the coefficients of the representations are corrupted by noise. Future research directions include lane estimation in the presence of noise and model mismatch, and field implementation of the proposed architecture and estimation model.\n\n\n\n\\bibliographystyle{IEEEtran}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe study of the interaction of Alfv\\'en waves (AWs) \nwith plasma inhomogeneities is \nimportant for\nboth astrophysical and laboratory plasmas.\nThis is because both AWs and inhomogeneities \noften coexist in these physical systems.\nAWs are believed to be\ngood candidates for plasma heating, energy and momentum transport.\nOn the one hand, in many physical situations AWs are easily excitable \n(e.g.
through convective motion of the solar interior)\nand thus they are present in a number of astrophysical systems.\nOn the other hand, these waves dissipate due to \nthe shear viscosity as opposed to\ncompressive fast and slow magnetosonic waves which dissipate due to the\nbulk viscosity.\nIn astrophysical plasmas shear viscosity is extremely small\nas compared to bulk viscosity. Hence,\nAWs are notoriously difficult to dissipate.\nOne of the possibilities to improve AW dissipation is to introduce progressively\ndecreasing spatial scales, $\\delta l \\to 0$, into the system (recall that the \nclassical dissipation is $\\propto \\delta l^{-2}$). \nHeyvaerts and Priest have proposed (in the astrophysical context) one such \nmechanism, called \nAW phase mixing \\citep{hp83}. It occurs when a linearly polarised\nAW propagates in the plasma with a \none dimensional density inhomogeneity transverse \nto the uniform magnetic field.\nIn such a situation the initially plane AW front is progressively \ndistorted because of different Alfv\\'en speeds across the field. \nThis creates progressively stronger gradients\nacross the field (in the inhomogeneous regions \nthe transverse scale collapses to zero),\nand thus in the case of finite resistivity, dissipation is greatly enhanced.\nHence, it is believed that phase mixing can provide significant plasma\nheating. 
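The scale collapse can be made concrete with a minimal numerical sketch (the parameter values below are illustrative only, not tied to a specific plasma): for a harmonic AW of frequency $\omega$ propagating along the field, the transverse wavenumber generated by phase mixing grows linearly in time, $k_\perp(t)\simeq \omega t\,|V_A'|/V_A$, so the classical dissipation rate, proportional to $k_\perp^2$, grows quadratically.

```python
import math

def alfven_speed(n, B0=1.0):
    """V_A = B0 / sqrt(n) in dimensionless units."""
    return B0 / math.sqrt(n)

def k_perp(t, omega, vA, dvA_dy):
    """Transverse wavenumber generated by phase mixing: omega*t*|V_A'|/V_A."""
    return omega * t * abs(dvA_dy) / vA

# Two neighbouring field lines across a factor-4 density enhancement.
n_low, n_high, dy = 1.0, 4.0, 1.0
vA_low, vA_high = alfven_speed(n_low), alfven_speed(n_high)
dvA_dy = (vA_high - vA_low) / dy      # transverse Alfven-speed gradient

omega = 0.3                           # driver frequency (arbitrary units)
kp = {t: k_perp(t, omega, vA_low, dvA_dy) for t in (1.0, 10.0, 100.0)}
# the classical dissipation rate scales as k_perp**2, i.e. grows ~ t**2
```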
Phase mixing could also be important for laboratory plasmas.\n\\citet{hc74} proposed the heating of collisionless plasma by \nutilising spatial phase mixing by shear Alfv\\'en wave resonance and discussed potential \napplications to toroidal plasma.\nA significant amount of work has been done in the context of heating open\nmagnetic structures in the solar \ncorona \\citep{hp83,nph86,p91,nrm97,dmha00,bank00,tan01,hbw02,tn02,tna02,tnr03}.\nAll phase mixing studies so far have been performed in the MHD approximation;\nhowever, since the transverse scales in the AW collapse progressively to zero,\nthe MHD approximation is inevitably violated. \nThis happens when the transverse scale approaches\nthe ion gyro-radius $r_i$ and then the electron gyro-radius $r_e$.\nThus, we proposed to study the phase mixing effect in the kinetic regime, i.e.\nwe go beyond the MHD approximation. Preliminary results were \nreported in \\citet{tss05}, where\nwe discovered a new mechanism for the acceleration of electrons\ndue to wave-particle interactions. This has important implications\nfor various space and laboratory plasmas, e.g. the \ncoronal heating problem and acceleration of the solar wind.\nIn this paper we present a full analysis of the discovered effect,\nincluding an analysis of the broadening of the ion\ndistribution function due to the presence of Alfv\\'en waves and the generation of \ncompressive perturbations due to both\nweak non-linearity and plasma density inhomogeneity.\n\n\n\\section{The model}\nWe used a 2D3V, fully relativistic, electromagnetic, particle-in-cell (PIC)\ncode with MPI parallelisation, modified from the 3D3V TRISTAN code \\citep{b93}.\nThe system size is $L_x=5000 \\Delta$ and $L_y=200 \\Delta$, where\n$\\Delta(=1.0)$ is the grid size. Periodic boundary conditions in the\n$x$- and $y$-directions are imposed on particles and fields. There are about\n478 million electrons and ions in the simulation.
The average number of\nparticles per cell is 100 in low density regions (see below). \nThe thermal velocity of electrons is $v_{th,e}=0.1c$\nand for ions is $v_{th,i}=0.025c$.\nThe ion to electron\nmass ratio is $m_i\/m_e=16$. The time step is $\\omega_{pe} \\Delta t=0.05$. Here\n$\\omega_{pe}$ is the electron plasma frequency.\nThe Debye length is $v_{th,e}\/\\omega_{pe}=1.0$. The electron skin depth \nis $c\/\\omega_{pe}=10 \\Delta$, while the ion skin depth is $c\/\\omega_{pi}=40 \\Delta$.\nHere $\\omega_{pi}$ is the ion plasma frequency.\nThe electron Larmor radius is $v_{th,e}\/\\omega_{ce}=1.0 \\Delta$, while\nthe same for ions is $v_{th,i}\/\\omega_{ci}=4.0 \\Delta$.\nThe external uniform magnetic field, $B_0(=1.25)$,\nis in the $x$-direction and the initial\nelectric field is zero. \nThe ratio of electron cyclotron frequency to the electron plasma\nfrequency is $\\omega_{ce}\/\\omega_{pe}=1.0$, while the same for ions is\n$\\omega_{ci}\/\\omega_{pi}=0.25$. The latter ratio is essentially $V_A\/c$ -- the Alfv\\'en\nspeed normalised to the speed of light. Plasma $\\beta=2(\\omega_{pe}\/\\omega_{ce})^2(v_{th,e}\/c)^2=0.02$.\nHere all plasma parameters are quoted far away from the density \ninhomogeneity region. 
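The quoted normalisations are mutually consistent, as a short check confirms. All input numbers are taken from the text; the relations used, $c/\omega_{pi}=(c/\omega_{pe})\sqrt{m_i/m_e}$ and $V_A/c=\omega_{ci}/\omega_{pi}=(\omega_{ce}/\omega_{pe})\sqrt{m_e/m_i}$, are standard:

```python
import math

v_the = 0.1            # electron thermal velocity in units of c
wce_over_wpe = 1.0     # omega_ce / omega_pe
mass_ratio = 16.0      # m_i / m_e
c_over_wpe = 10.0      # electron skin depth in grid units

# plasma beta as defined in the text: 2*(omega_pe/omega_ce)^2*(v_the/c)^2 = 0.02
beta = 2.0 * (1.0 / wce_over_wpe) ** 2 * v_the ** 2

# ion skin depth: c/omega_pi = (c/omega_pe)*sqrt(m_i/m_e) = 40 grid units
c_over_wpi = c_over_wpe * math.sqrt(mass_ratio)

# V_A/c = omega_ci/omega_pi = (omega_ce/omega_pe)/sqrt(m_i/m_e) = 0.25
vA_over_c = wce_over_wpe / math.sqrt(mass_ratio)
```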
The dimensionless (normalised to some reference constant value of $n_0=100$ particles per cell) \nion and electron density inhomogeneity is described by\n\\begin{equation}\n {n_i(y)}=\n{n_e(y)}=1+3 \\exp\\left[-\\left(\\frac{y-100\\Delta}{50 \\Delta}\\right)^6\\right]\n\\equiv F(y).\n\\end{equation}\nThis means that in the central region (across the \n$y$-direction), the density is\nsmoothly enhanced by a factor of 4, and there are the \nstrongest density gradients having \na width of about ${50 \\Delta}$ around the \npoints $y=51.5 \\Delta$ and $y=148.5 \\Delta$.\nThe background temperature of ions and electrons, \nand their thermal velocities\nare varied accordingly\n\\begin{equation}\n{T_i(y)}\/{T_0}=\n{T_e(y)}\/{T_0}=F(y)^{-1},\n\\end{equation}\n\\begin{equation}\n {v_{th,i}}\/{v_{i0}}=\n{v_{th,e}}\/{v_{e0}}=F(y)^{-1\/2},\n\\end{equation}\nsuch that the thermal pressure remains constant. Since the background magnetic field\nalong the $x$-coordinate is also constant, the total pressure remains constant too.\nThen we impose a current of the following form\n\\begin{equation}\n{\\partial_t E_y}=-J_0\\sin(\\omega_d t)\\left(1-\\exp\\left[-(t\/t_0)^2\\right]\\right),\n\\end{equation}\n\\begin{equation}\n{\\partial_t E_z}=-J_0\\cos(\\omega_d t)\\left(1-\\exp\\left[-(t\/t_0)^2\\right]\\right).\n\\end{equation}\nHere $\\omega_d$ is the driving frequency which was fixed at $\\omega_d=0.3\\omega_{ci}$.\nThis ensures that no significant ion-cyclotron damping is present. Also,\n$\\partial_t$ denotes the time derivative.\n$t_0$ is the onset time of the driver, which was fixed at $50 \/\\omega_{pe}$\ni.e. $3.125 \/ \\omega_{ci}$. This means that the driver onset time is about 3 ion-cyclotron\nperiods. 
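The driver defined by Eqs.~(4)-(5) can be sketched as follows (times in units of $1/\omega_{ci}$, with the paper's $\omega_d=0.3\,\omega_{ci}$ and $t_0=3.125/\omega_{ci}$; $J_0$ is a placeholder amplitude). The rotating pair $(\partial_t E_y,\partial_t E_z)$ has constant magnitude once the envelope has ramped up, which is what makes the driven wave circularly polarised:

```python
import math

def envelope(t, t0=3.125):
    """Smooth onset factor 1 - exp(-(t/t0)^2) from Eqs. (4)-(5)."""
    return 1.0 - math.exp(-(t / t0) ** 2)

def dE_dt(t, J0=1.0, omega_d=0.3):
    """Right-hand sides of Eqs. (4)-(5), t in units of 1/omega_ci."""
    dEy = -J0 * math.sin(omega_d * t) * envelope(t)
    dEz = -J0 * math.cos(omega_d * t) * envelope(t)
    return dEy, dEz

# the magnitude of the rotating driver equals J0 * envelope(t) at any time
magnitudes = {t: math.hypot(*dE_dt(t)) for t in (0.5, 3.0, 30.0)}
```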
Imposing such a current on the system results in the generation of a\nleft circularly polarised AW, which is driven at the left \nboundary of the simulation box and has a spatial width of $1 \\Delta$.\nThe initial amplitude of the current is such that \nthe relative AW amplitude is about 5 \\% of the background\n(in the low density homogeneous regions),\nthus the simulation is weakly non-linear.\n\n\\section{Main results}\n\nBecause no initial (perpendicular to the external magnetic field) velocity excitation\nwas imposed in addition to the above specified currents \n(cf. \\citet{tn02,dvl01,tt03,tt04}), \nthe circularly polarised AW excited (driven) at the left boundary\nis split into two circularly polarised AWs which travel in opposite directions. The dynamics of these\nwaves, as well as other physical quantities, is shown in Fig.~1\n(cf. Fig.~1 from \\citet{tss05}, where \n$B_z$ and $E_y$, the circularly polarised Alfv\\'en wave\ncomponents, were shown for three different times).\nA typical simulation, until $t=875 \/ \\omega_{ce}=54.69 \/ \\omega_{ci}$, takes about 8 days\non 32 parallel dual 2.4 GHz Xeon processors.\nIt can be seen from the figure \nthat, because of the periodic boundary conditions, the circularly polarised\nAW that was travelling to the left reappears \non the right side of the simulation box.\nThe dynamics of the AW ($B_z,E_y$) progresses in a similar manner as in \nMHD, i.e. it phase mixes \\citep{hp83}.\nIn other words, the middle region (in the $y$-coordinate), i.e. $50 \\Delta \\leq y \\leq 150 \\Delta$, travels \nmore slowly because of the density enhancement (note that \n$V_A(y) \\propto 1\/\\sqrt{n_i(y)}$).\nThis obviously causes a \ndistortion of the initially plane wave front and the creation of strong gradients\nin the regions around $y \\approx 50$ and $150$.\nIn the MHD approximation, when the resistivity, $\\eta$, is finite, \nthe AW is strongly dissipated in these regions.
This effectively means that the outer and inner parts of the\ntravelling AW are detached from each other and propagate independently.\nThis is why the effect is called phase mixing -- after a long time (in the case\nof developed phase mixing), \nphases in the wave front become effectively uncorrelated.\nBefore \\citet{tss05}, it was not clear what to expect from our PIC simulation. \nThe code is collisionless and there\nare no sources of dissipation in it (apart from the \npossibility of wave-particle interactions).\nIt is evident from Fig.~1 that in the developed stage of phase mixing \n($t=54.69 \/ \\omega_{ci}$), the AW front is substantially damped in the strongest density\ngradient regions.\nContrary to the AW ($B_z$ and $E_y$) dynamics we do not see any phase mixing for \n$B_y$ and $E_z$. The latter two behave similarly.\nIt should be noted that $E_z$ contains both driven (see Eq.(4) above) and \nnon-linearly generated fast magnetosonic wave components, with the former being dominant over the\nlatter. 
\nSince $B_x$ is not driven, and initially its perturbations are absent,\nwe see only non-linearly generated slow magnetosonic perturbations confined to the\nregions of strongest density gradients (around $y\\approx50$ and $150$).\nNote that these also have rapidly decaying amplitude.\nAlso, we gather from Fig.~1 that the density perturbation ($\\approx 10$\\%),\nwhich is also generated through the weak non-linearity, is present.\nThese are propagating density oscillations with variation both in overall magnitude (perpendicular to the\nfigure plane) and across the $y$-coordinate, and they are mainly confined to the strongest density gradient regions\n(around $y\\approx 50$ and $150$).\nNote that the dynamics of $B_x,B_y$ and $B_z$, with \nappropriate geometrical switching (because in our\ngeometry the uniform magnetic field lies along the $x$-coordinate),\nis in qualitative agreement with \\citet{bank00} (cf. their Fig.~9).\nThe dynamics of the remaining $E_x$ component is treated separately in the next figure.\nIt is the inhomogeneity of the medium, $n_i(y)$, $V_A(y)$, i.e.\n$\\partial \/ \\partial y \\not = 0$, that causes the weakly non-linear coupling of the AWs to the \ncompressive modes (see \\citet{nrm97,tan01} for further details).\n\n\\begin{figure*}\n\\centering\n \\epsfig{file=fig1a.eps,width=6cm}\n \\epsfig{file=fig1b.eps,width=6cm}\n \\epsfig{file=fig1c.eps,width=6cm}\n \\epsfig{file=fig1d.eps,width=6cm}\n \\epsfig{file=fig1e.eps,width=6cm}\n \\epsfig{file=fig1f.eps,width=6cm}\n\\caption{Contour (intensity) plots of electromagnetic field components and electron density \nat time $t=54.69 \/ \\omega_{ci}$ (developed stage of phase mixing). \nThe phase mixed Alfv\\'en wave components are $B_z$ and $E_y$. \nThe excitation source is at the left\nboundary. Because of periodic boundary conditions, the \nleft-propagating AW re-appears from the \nright side of the simulation box.
Note how the (initially plane) AW is stretched because of \ndifferences in local Alfv\\'en speed across the \n$y$-coordinate. Significant ($\\approx 10$\\%) \ndensity fluctuations can be seen.}\n\\end{figure*}\n\n\nIn Fig.~2 we address the question of \nwhere the AW energy went, as we saw strong\ndecay of AWs in the regions of strong density gradients.\nThus in Fig.~2 we plot $E_x$, the longitudinal\nelectrostatic field, and electron phase space ($V_x\/c$ vs. $x$ and $V_x\/c$ vs. $y$) for different \ntimes.\nIn the \nregions around $y \\approx 50$ and $150$, at later times, a significant electrostatic field\nis generated. This is the consequence of the stretching of the \nAW front in those regions\nbecause of the difference in local Alfv\\'en speed.\nIn the middle column of this figure we see that exactly in those regions\nwhere $E_x$ is generated, \nmany electrons are accelerated along the $x$-axis.\nWe also gather from the right column that at \nlater times ($t=54.69 \/ \\omega_{ci}$), the\nnumber of high velocity electrons is increased around the \nstrongest density gradient regions\n(around $y \\approx 50$ and $150$). Thus, the generated $E_x$ field is somewhat oblique\n(not exactly parallel to the external magnetic field).\nHence, we conclude that the \nenergy of the phase-mixed AW goes into the acceleration of electrons.\nLine plots of $E_x$ show that this electrostatic field is strongly damped,\ni.e. the energy is channelled to electrons via Landau damping.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=12cm]{fig2.eps}\n\\caption{Left column: contour plots of the generated electrostatic field $E_x$\nnearly parallel to the\nexternal magnetic field at instances: \n$t=(0, 31.25, 54.69) \/ \\omega_{ci}$. Central column: $V_x\/c$ versus $x$ of electron phase\nspace at the same times. To reduce figure size, only electrons with $V_x > 0.15c$\nwere plotted. Right column: $V_x\/c$ versus $y$ of electron phase\nspace at the same times.
Only electrons with $V_x > 0.12c$\nwere plotted (note the dip in the middle due to the \ndensity inhomogeneity across the $y$-coordinate).}\n\\end{figure*}\n\n\n\nIn Fig.~3 we investigate ion phase space\n($V_z\/c$ vs. $x$ and $V_z\/c$ vs. $y$) for different \ntimes. The reason for the choice of $V_z$ will become clear below.\nWe gather from this plot that in the $V_z\/c$ vs. $x$ phase space, clear\npropagating oscillations are present (left column). These oscillations \nare of the incompressible, Alfv\\'enic \"kink\" type, \ni.e. at those values of $x$ where the number of ions with \nlarger positive velocities increases, there is a \ncorresponding decrease in the number of ions with lower negative velocities.\nIn the $V_z\/c$ vs. $y$ plot we likewise see no clear acceleration of\nions.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=12cm]{fig3.eps}\n\\caption{Left column: $V_z\/c$ versus $x$ of ion phase\nspace at instances: \n$t=(0, 31.25, 54.69) \/ \\omega_{ci}$. \nRight column: $V_z\/c$ versus $y$ of ion phase\nspace at the same times.
Only ions with $V_z > 0.03c$\nare plotted (note the dip in the middle due to the \ndensity inhomogeneity across the $y$-coordinate).}\n\\end{figure*}\n\n\n\nWe next look at the distribution functions of electrons and ions\nbefore and after the phase mixing took place.\nIn Fig.~4 we plot the distribution functions of electrons and ions at $t=0$ and $t=54.69 \/ \\omega_{ci}$.\nNote that at $t=0$ the distribution functions do not look \npurely Maxwellian because \nthe temperature varies across the $y$-coordinate (to keep the total pressure\nconstant) and the graphs are produced for the entire simulation domain.\nAlso, note that for electrons $f(V_x)$ at $t=54.69 \/ \\omega_{ci}$ \ndiffers substantially from its original form because of the aforementioned \nelectron acceleration.\nWe see that the number of electrons having velocities $V_x=\\pm (0.1-0.3)c$ is increased.\nNote that the acceleration of electrons takes place mostly along\nthe external magnetic field (along the $x$-coordinate). Thus, very little electron acceleration \noccurs in $V_y$ or $V_z$ (solid and dotted curves practically overlap each other).\nFor the ions the situation is different: we see broadening of the ion velocity distribution functions\nin $V_z$ and $V_y$ (that is why we have chosen to present the \n$V_z$ component of ion phase space in Fig.~3).\n\\begin{figure*}\n\\centering\n\\includegraphics[width=12cm]{fig4.eps}\n\\caption{All three components of the distribution functions of electrons \n(top row) and ions (bottom row) \nat $t=0$ (dotted curves) and $t=54.69 \/ \\omega_{ci}$\n(solid curves), i.e. for the developed stage of phase mixing.}\n\\end{figure*}\n\n\n\nThe reason for this broadening of the ion distribution function becomes \nclear in Fig.~5 where we plot\nthe kinetic energy $x,y,z$ components ($\\propto V_{x,y,z}^2$) and total kinetic energies for\nelectrons (top row) and ions (bottom row).
For ions we gather that the $y$ and $z$ components of\nthe kinetic energy (bottom left figure) oscillate in anti-phase and their oscillatory part perfectly cancels \nout in the total energy (bottom right figure). Thus, the broadening of the $y$ and $z$ components of the\nion velocity distribution functions is due to the presence of AWs (the usual wave broadening, which is actually observed\ne.g. in the solar corona and solar wind, \\cite{bd71,sbnm95,bpb00}), and hence there is no ion acceleration present.\nNote that the $y$ and $z$ components, and hence the total kinetic energy of the ions, increase monotonically due to the continuous\nAW driving. Note that no significant motion of ions along the field is present.\nFor electrons, on the other hand, we see a \nsignificant increase of the \n$x$ component (along the magnetic field) of the kinetic energy,\nwhich is due to the new electron acceleration mechanism discovered by us (cf. Fig.~10).\nNote that for ions the $y$ component reaches lower values (than the \n$z$ component) because of the lower AW velocity in the middle \npart of the simulation domain.\n\n\\begin{figure}[]\n\\resizebox{\\hsize}{!}{\\includegraphics{fig5.eps}} \n\\caption{Top row: kinetic energies (calculated by $x$ (solid), $y$ (dotted), $z$ (dashed) velocity (squared) components) of \nelectrons (left), and total kinetic energy for electrons (right)\nas a function of time. Bottom row: As above but for ions. The units on the $y$-axis are arbitrary.}\n\\end{figure}\n\n\n\nThe next step is to check whether the increase in electron velocities really comes from \nresonant wave-particle interactions.
For this purpose, in Fig.~6, left \npanel, we plot two snapshots of the\nAlfv\\'en wave $B_z(x,y=148)$ component at the instances $t=54.69 \/ \\omega_{ci}$ (solid line)\nand $t=46.87 \/ \\omega_{ci}$ (dotted line).\nThe distance between the two upper leftmost peaks (which is the distance \ntravelled by the\nwave in the time span between the snapshots) \nis about $\\delta L=150\\Delta=15(c\/\\omega_{pe})$.\nThe time difference between the snapshots is $\\delta t=7.82 \/ \\omega_{ci}$.\nThus, the \nmeasured AW speed at the point of the strongest density gradient ($y=148$)\nis $V_A^M=\\delta L \/\\delta t=0.12c$. We can also work out \nthe Alfv\\'en speed from theory.\nIn the homogeneous low density region the Alfv\\'en speed was set to be\n$V_A(\\infty)=0.25 c$. From Eq.(1) it follows that at $y=148$ the \ndensity is increased by a factor of\n$2.37$, which means that the Alfv\\'en wave speed at this position is\n$V_A(148)=0.25\/\\sqrt{2.37}c=0.16c$.\nThe measured ($0.12c$) and calculated ($0.16c$) Alfv\\'en speeds in the inhomogeneous regions \ndo not coincide. This is probably because the \nAW front is decelerated (due to momentum conservation) \nas it passes on energy and momentum to the\nelectrons in the inhomogeneous regions (where electron acceleration takes place). \nHowever, this may not be the case if wave-particle interactions\nplay the same role as dissipation in MHD \\citep{sg69}:\nthen wave-particle interactions would result only in a decrease of the AW\namplitude (dissipation) and not in its deceleration.\nIf we compare these values to Fig.~4 (top left panel for $f(V_x)$), we deduce\nthat these are the\nvelocities ($>0.12c$) above which the number of electrons\nis greatly increased. This deviation peaks at about $0.25c$ which,\nin fact, corresponds to the Alfv\\'en speed in the lower density regions.\nThis can be explained by the fact that the electron acceleration takes\nplace in wide regions (cf.
Fig.~2) along and around $y \\approx 148$ (and $y \\approx 51$) -- hence\nthe spread in the accelerated velocities.\nIn Fig.~6 we also plot a visual fit curve (dashed line) to \nquantify the amplitude decay law for the AW (at $t=54.69 \/ \\omega_{ci}$)\nin the strongest density inhomogeneity region.\nThe fitted (dashed) curve is represented by $0.056 \\exp \\left[ -\n\\left({x}\/{1250}\\right)^3\\right]$.\nThis fit is surprisingly similar to the \nMHD approximation results.\n\\citet{hp83} found that for large times (developed phase mixing),\nin the case of a harmonic driver, the amplitude decay law\nis given by $\\propto \\exp \\left[ -\n\\left(\\frac{\\eta \\omega^2 V_A^{\\prime 2}}{6 V_A^{5}}\\right)x^3\\right]$, which \nis much faster \nthan the usual resistive dissipation\n$\\propto \\exp(-\\eta x)$. Here $V_A^{\\prime}$ is the derivative\nof the Alfv\\'en speed with respect to the $y$-coordinate.\nThe most interesting fact is that even in the kinetic approximation\nthe same $\\propto \\exp (-A x^3)$ law holds as in MHD.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=12cm]{fig6.eps}\n\\caption{Left: two snapshots of the\nAlfv\\'en wave $B_z(x,y=148)$ component at instances $t=54.69 \/ \\omega_{ci}$ (solid line)\nand $t=46.87 \/ \\omega_{ci}$ (dotted line). The dashed line represents the fit\n$0.056 \\exp \\left[ -\\left({x}\/{1250}\\right)^3\\right]$. Center: $B_z(x,y=10)$ \n(low density homogeneous region), $B_z(x,y=100)$ (high density homogeneous region).\nNote the\ndifferences in amplitudes and propagation speeds, which are consistent with the equilibrium\ndensity and thus the Alfv\\'en speed dependence on the $y$-coordinate.}\n\\end{figure*}\n\n\n\nIn MHD, finite resistivity and\nAlfv\\'en speed non-uniformity are responsible for the\nenhanced dissipation via phase mixing.\nIn our PIC simulations (kinetic phase mixing), however, there is no resistivity\nand there are no collisions, i.e. no classical dissipation.
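As an aside, the fitted decay law quoted above is easy to check numerically; a minimal sketch using only the numbers quoted in the text (amplitude 0.056, e-folding length $1250\Delta$):

```python
import math

def fitted_amplitude(x):
    """Visual fit from Fig. 6: 0.056 * exp(-(x/1250)^3), x in grid units."""
    return 0.056 * math.exp(-(x / 1250.0) ** 3)

# the amplitude falls to 1/e of its initial value at x = 1250
efold = fitted_amplitude(1250.0)

# the cubic exponent makes the decay far steeper than exp(-x/1250) at large x
ratios = {x: fitted_amplitude(x) / (0.056 * math.exp(-x / 1250.0))
          for x in (1250.0, 2500.0, 3750.0)}
```

Matching the fitted exponent $1/1250^3$ to the Heyvaerts and Priest form $\eta\omega^2 V_A^{\prime 2}/(6V_A^5)$ would define an effective "resistivity" for the kinetic regime; which physical quantity plays this role remains open.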
Thus, in our case,\nwave-particle interactions play the same role as \nthe resistivity $\\eta$ in MHD phase mixing \\citep{sg69}.\nNo significant AW dissipation\nwas found away from the density inhomogeneity regions (Fig.~6, middle and right panels; note also the\ndifferences in amplitudes and propagation speeds, which are consistent with the imposed density and hence Alfv\\'en speed variation\nacross the $y$-coordinate).\nThis has the same explanation as in the case of MHD --\nit is in the regions of density inhomogeneity ($V_A^{\\prime}\\not=0$) \nthat the dissipation is greatly enhanced, while in the regions\nwhere $V_A^{\\prime}=0$ there is no substantial dissipation (apart from the \nclassical $\\propto \\exp(-\\eta x)$\none).\nIn the MHD approximation, the aforementioned amplitude decay law is derived\nfrom the diffusion equation, to which the MHD equations reduce for large times (developed\nphase mixing \\citep{tnr03}). It seems that the kinetic description \nleads to the same type of diffusion equation.\nIt is unclear at this stage, however, what physical quantity \nwould play the role of the resistivity $\\eta$ (from the MHD approximation) in the\nkinetic regime. \n\n\n\n\\subsection{Homogeneous plasma case}\n\nIn order to clarify the broadening of the ion velocity distribution function,\nand also as a consistency check,\nwe performed an additional simulation for the case of a homogeneous plasma.\nNow the density was fixed at 100 ions\/electrons per cell in the entire simulation\ndomain and hence the plasma temperature and thermal velocities were fixed too.\nIn such a set-up no phase mixing should take place, as the AW speed is uniform.\n\n\\begin{figure*}\n\\centering\n \\epsfig{file=fig7a.eps,width=6cm}\n \\epsfig{file=fig7b.eps,width=6cm}\n \\epsfig{file=fig7c.eps,width=6cm}\n \\epsfig{file=fig7d.eps,width=6cm}\n \\caption{As in Fig.~1 but for the case of homogeneous plasma density (no phase mixing).\n Note that only the non-zero (above noise level) components are plotted.
There are no $B_x$, $E_x$ or density
 fluctuations present in this case.}
\end{figure*}



In Fig.~7 we plot the only non-zero (above noise level) components at $t=54.69/\omega_{ci}$, which
are the left circularly polarised AW fields: $B_z$, $B_y$, $E_z$, $E_y$.
Note that there are no $B_x$, $E_x$ or density fluctuations present in this case (cf. Fig.~1), as
it is the plasma inhomogeneity that facilitates the coupling between the AW and the compressive
modes.
\begin{figure*}
\centering
\includegraphics[width=12cm]{fig8.eps}
\caption{As in Fig.~3 but for the case of homogeneous plasma density (no phase mixing).}
\end{figure*}



In Fig.~8 we plot the ion phase space ($V_z/c$ vs. $x$ and $V_z/c$ vs. $y$) in the
homogeneous plasma case for different times (cf. Fig.~3). We gather from the graph that
propagating, incompressible, Alfv\'enic ``kink''-type oscillations are still present (left column),
while no significant ion
acceleration takes place (right column). This is better understood from Fig.~9, where we plot the electron and ion
distribution functions at $t=0$ and $t=54.69/\omega_{ci}$ (as in Fig.~4) for the
homogeneous plasma case.
There are three noteworthy points: (i) no electron acceleration takes place, because of
the absence of phase mixing; (ii) there is (as in the inhomogeneous case) broadening of the ion
velocity distribution functions (in $V_y$ and $V_z$) due to the present AW (wave broadening);
(iii) the distribution now looks Maxwellian (cf. Fig.~4), because the distribution function
is now collected over the entire, homogeneous, region.

\begin{figure*}
\centering
\includegraphics[width=12cm]{fig9.eps}
\caption{As in Fig.~4 but for the case of homogeneous plasma density (no phase mixing).}
\end{figure*}


In Fig.~10 we plot the
kinetic energy $x,y,z$ components ($\propto V_{x,y,z}^2$) and
the total kinetic energy for
electrons (top row) and ions (bottom row).
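The anti-phase behaviour of the transverse kinetic-energy components, and the exact cancellation of their oscillatory parts in the total, can be illustrated with a minimal sketch for an idealised circularly polarised wave. The amplitude and frequency below are hypothetical illustrative values, not quantities taken from the simulation, and the constant-amplitude velocities ignore the secular growth produced by the continuous driver:

```python
import numpy as np

# Idealised transverse ion velocities in a circularly polarised Alfven wave.
# A (amplitude) and w (frequency) are hypothetical values for illustration only.
A, w = 0.01, 0.3
t = np.linspace(0.0, 60.0, 2000)
v_y = A * np.cos(w * t)
v_z = A * np.sin(w * t)

# Kinetic-energy components per unit mass (constant factor 1/2 dropped):
# E_y ~ cos^2 and E_z ~ sin^2 oscillate in anti-phase about A^2/2,
# while their sum is exactly constant -- the oscillatory parts cancel.
E_y = v_y**2
E_z = v_z**2
E_perp = E_y + E_z

assert np.allclose(E_perp, A**2)  # no oscillation survives in the total
```

In the simulation the driver additionally pumps energy into both components, so the total grows monotonically; the sketch only isolates why the wave-phase oscillations of the $y$ and $z$ components are invisible in the total energy.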
For the ions we see (as in the inhomogeneous case)
that the $y$ and $z$ components of
the kinetic energy (bottom left panel) oscillate in anti-phase, and their oscillatory part cancels
out exactly in the total energy (bottom right panel). Thus, the broadening of the $y$ and $z$ components of the
ion velocity distribution functions is due to the presence of AWs and, in turn, there is no net ion acceleration.
Again, the $y$ and $z$ components, and hence the
total kinetic energy of the ions, are monotonically increasing due to the continuous
AW driving. No significant motion of the ions along the field is present.
For the electrons we do not observe any acceleration, owing to the absence of phase mixing (cf. Fig.~5).
Note that the $y$ component now attains the same values as the
$z$ component (bottom left panel)
because the AW speed is the same in the entire simulation domain.

\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig10.eps}}
\caption{As in Fig.~5 but for the case of homogeneous plasma density (no phase mixing).}
\end{figure}


\section{Discussion}

In our preliminary work \citep{tss05} we outlined the main results of our
newly discovered mechanism of electron acceleration. Here we
presented a more detailed analysis
of the phenomenon. We have established the following:

\begin{itemize}
\item Progressive distortion of the Alfv\'en wave front, due to the differences in
local Alfv\'en speed, generates oblique (nearly parallel to the magnetic
field) electrostatic fields, which accelerate electrons.

\item The amplitude decay law in the inhomogeneous regions,
in the kinetic regime, is shown to be the same as in the MHD approximation
described by \citet{hp83}.

\item Density perturbations ($\approx 10$\% of the background)
are generated by both the weak non-linearity and the plasma inhomogeneity.
These are propagating density oscillations with variations both
in overall magnitude
and across the $y$-coordinate.
They are mainly confined to the regions of strongest density gradient
(around $y \approx 50$ and $150$), i.e. the edges of the density structure (e.g. the boundary of a solar coronal loop).
Perturbations of $B_x$, longitudinal to the external magnetic field, are also generated in the
same manner, but with smaller ($\approx 3$\%) amplitudes.

\item Both in the homogeneous and inhomogeneous cases the presence
of AWs causes broadening of the perpendicular
(to the external magnetic field) ion velocity
distribution functions, while no ion acceleration is observed.

\end{itemize}

In the MHD approximation,
\citet{hbw02} and \citet{tnr03} showed that
in the case of localised Alfv\'en pulses
Heyvaerts and Priest's amplitude decay
formula $\propto \exp (-A x^3)$ (which holds for
harmonic AWs) is replaced by the power law $B_z \propto x^{-3/2}$.
A natural next step
would be to check whether
the same power law holds for localised Alfv\'en pulses
in the kinetic regime.

After this study was completed,
we became aware of a study by \citet{vh04}, who used a hybrid
code (electrons treated as a neutralising fluid, with ion
kinetics retained), as opposed to our (fully kinetic) PIC code,
to simulate resonant absorption. They found that
a planar (body) Alfv\'en
wave propagating at less than $90^{\circ}$ to a background gradient
has field lines which lose wave energy to another set of field lines via
cross-field transport. Further, \citet{v04} found that
when perpendicular scales of the
order of 10 proton inertial lengths ($10 c/\omega_{pi}$)
develop from wave refraction
in the vicinity of the resonant field lines, a non-propagating
density fluctuation begins
to grow to large amplitudes. This saturates by exciting highly
oblique, compressive,
low-frequency waves which dissipate and heat protons.
These processes lead to a faster development of small
scales across the magnetic field, i.e.
the phase mixing mechanism
studied here.


\begin{acknowledgements}
The authors gratefully acknowledge support from
CAMPUS (Campaign to Promote the University of Salford), which funded
J.-I.S.'s one-month fellowship at the University of Salford
that made this project possible.
DT acknowledges use of the E. Copson Math cluster,
funded by PPARC and the University of St. Andrews.
DT kindly acknowledges support from the Nuffield Foundation
through an award to newly appointed lecturers in Science,
Engineering and Mathematics (NUF-NAL 04).
The authors would like to
thank the referee, Dr. Bernie J. Vasquez, for pointing out some minor
inconsistencies, which have now been corrected.
\end{acknowledgements}