\\section{Introduction}\nA field-theoretic approach to the study of the random stirred Navier-Stokes\nequation (rsNSE) can be traced back to the seminal paper by Martin, Siggia and\nRose \\cite{MSR}. This was the starting point for the application of many\nfield-theoretic strategies, e.g. diagrammatic expansions, renormalization\ngroup methods \\cite{ren} \n(for recent developments and applications the reader is referred to\n\\cite{Dev}), instanton-based approaches (for \napplications of instantonic methods in turbulence see, e.g., \n\\cite{ist1,ist2,ist3,ist4} and references therein) and combinations of \nthem \\cite{GIL}. \nThe many technical difficulties encountered in\ndeveloping these approaches have so far prevented conclusive achievements.\\\\\nIn this paper we show that one further step\nalong this field-theoretic route allows one to cast the\naction associated with the rsNSE into the form\nof a large deviation functional. 
Recently, large-deviation theory\nhas achieved considerable success in describing fluctuations in stationary\nnon-equilibrium regimes of various microscopic models \\cite{BDGJL}.\nThis approach is mainly based on the extension of\nthe time-reversal conjugacy property introduced by Onsager and\nMachlup \\cite{OM} to stationary non-equilibrium states.\nIn practice, thermal fluctuations in irreversible stationary processes\ncan be traced back to a proper hydrodynamic description derived from\nthe microscopic evolution rules.\nThe general form of the action functional is\n\\be\nI_{[t_1,t_2]}(\\rho) = \\frac{1}{2} \\int_{t_1}^{t_2} dt\\,\\,\n\\langle W, K(\\rho) \\,\\, W\\rangle\n\\label{onsag}\n\\end{equation}\nwhere $\\rho(t,\\vec x)$ represents in general a vector of thermodynamic\nvariables depending on time $t$ and space variables $\\vec x$.\nThe symbol $\\langle \\cdot \\, ,\\, \\cdot \\rangle$ denotes the integration\nover space variables.\n$W$ is a hydrodynamic evolution operator acting on\n$\\rho$: it vanishes when $\\rho$ is equal to the stationary solution\n$\\bar\\rho$, which is assumed to be unique. 
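The structure of a functional of this form can be made concrete on a toy example that is not part of the rsNSE: for a scalar Ornstein--Uhlenbeck process with relaxation rate $\gamma$ and noise strength $D$ (hypothetical parameters chosen purely for illustration), the operator is $W\rho = \dot\rho + \gamma\rho$, the kernel is $K = 1/(2D)$, and the minimizing path from $\bar\rho = 0$ to a fluctuation $\hat\rho$ is the time-reversed relaxation, whose action evaluates to $\gamma\hat\rho^2/(2D)$. A minimal numerical sketch:

```python
import math

# Toy scalar example (not the rsNSE): Ornstein-Uhlenbeck relaxation
# rho_dot = -gamma*rho + noise, with noise covariance 2*D*delta(t-t').
# Here W rho = rho_dot + gamma*rho and K = 1/(2*D), so the action reads
# I = (1/(4*D)) * integral dt (rho_dot + gamma*rho)**2 .
gamma, D, rho_hat = 1.0, 0.5, 1.0

def action(path, dt):
    """Midpoint-rule estimate of the Onsager-Machlup action of a discrete path."""
    total = 0.0
    for i in range(len(path) - 1):
        rho_mid = 0.5 * (path[i] + path[i + 1])
        rho_dot = (path[i + 1] - path[i]) / dt
        total += dt * (rho_dot + gamma * rho_mid) ** 2 / (4.0 * D)
    return total

# Fluctuation path: the time-reversed relaxation rho(t) = rho_hat*exp(gamma*t),
# t in (-T, 0], connecting the stationary state rho=0 to rho_hat.
T, N = 20.0, 20000
dt = T / N
path = [rho_hat * math.exp(gamma * (-T + i * dt)) for i in range(N + 1)]

I_num = action(path, dt)
I_exact = gamma * rho_hat ** 2 / (2.0 * D)   # entropy cost of the fluctuation
print(I_num, I_exact)
```

The numerical action of the reversed relaxation path reproduces the closed-form cost, which for this reversible toy model coincides with the Boltzmann weight of the fluctuation.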
The positive kernel\n$K(\\rho)$ represents the stochasticity of the system\nat macroscopic level.\nAccording to large-deviation theory, the entropy $S$ of a stationary\nnon-equilibrium state is related to the action functional $I$ as\nfollows:\n\\be\nS(\\hat\\rho) = \\inf_{\\rho} I_{[-\\infty,0]} (\\rho)\n\\end{equation}\nwhere the minimum is taken over all trajectories $\\rho$ connecting $\\bar\\rho$\nto $\\hat{\\rho}$.\n\nFor our purposes it is enough to consider that the action functional\n$I$ provides a natural measure for statistical fluctuations in\nnon-equilibrium stationary states, so that, formally, any statistical\ninference can be obtained from $I$.\nIndeed, from the very beginning we have to deal with a hydrodynamic\nformulation, namely the rsNSE: in the next Section\nwe will argue that an action functional of the form (\\ref{onsag}) can be\nobtained by field-theoretic analytic calculations.\n\nIn particular,\nexplicit integration over all longitudinal components of the velocity\nfield and over the associated auxiliary fields can be performed.\nThis allows one to obtain a hydrodynamic evolution operator $W$ which depends\nonly on the transverse components of the velocity field $v^{\\alpha}_T(t,\\vec\nx)$ $(\\alpha = 1, 2, 3)$. Moreover,\nthe positive kernel $K$ amounts to the inverse correlation function\nof the stochastic source. This formulation allows one to overcome some of \nthe technical difficulties characterizing standard perturbative methods\nand diagrammatic expansions.\n\n\nOn the other hand, we have to face new difficulties. \nThe hydrodynamic operator appearing in the\nlarge deviation functional is nonlinear, so that\nfunctional integration is unfeasible. One has to\nidentify a solution $ {\\bar v}^{\\alpha}_T(t,\\vec x)$ \nof the associated hydrodynamic\nequation and linearize the hydrodynamic operator around\nsuch a solution. 
Then, functional integration can be performed\nexplicitly on the ``fluctuation'' field.\nIn order to be well defined, this\napproximate procedure would demand the uniqueness of the \nsolution of the nonlinear hydrodynamic equation. \nFor this reason we have restricted our choice to a class of space--time\nfunctions which are also solutions of the linear problem.\nAmong them, there is only one function which satisfies physically relevant\nboundary conditions (see Section III).\nStatistical fluctuations have been estimated with respect\nto this solution, which has also the advantage of reducing the\ndependence of the generating functional on the pressure field to a \ntrivial constraint.\nIn practice, we construct a perturbative saddle-point approach based on a\nlinearization procedure of the velocity field\n$v^{\\alpha}_T(t,\\vec{x})$ around $\\bar{v}^{\\alpha}_T(t,\\vec{x})$.\nAs a consequence of the nonlinear character of the original problem,\nthe fluctuation field \n$u^{\\alpha}_T(t,\\vec{x})=v^{\\alpha}_T(t,\\vec{x}) -\n\\bar{v}^{\\alpha}_T(t,\\vec{x})$ is found to obey a linearized hydrodynamic\nproblem with coefficients depending on space and time through \n$\\bar{v}^{\\alpha}_T(t,\\vec{x})$.\nIt is worth stressing that even the solution of the linearized problem is \nnontrivial and it\nis found to depend naturally on a perturbative parameter ${\\cal R}^{-1}$,\nthe inverse of the Reynolds number. We exploit this property by\nconstructing a further perturbation procedure to obtain an explicit\nexpression for \n$u^{\\alpha}_T(t,\\vec{x})$ at different orders in ${\\cal R}^{-1}$. These\npoints are discussed in Section IV.\n\n\nSince our main purpose here is the estimation of the structure function\n(see Section V)\nas an average over the non-equilibrium measure induced by the action\n$I$, we have to assume that the perturbative expansion applies in\na wide range of values of ${\\cal R}$. 
In particular, we expect that it holds\nalso for moderately large ${\\cal R}$, since a statistical average of any\nobservable cannot be valid for too large values of ${\\cal R}$, i.e. in a\nregime of fully developed turbulence. We will argue that\nstatistical estimates can be consistently obtained for values of ${\\cal R}$\nwhich extend up to the region of stability of the solution\n$ {\\bar v}^{\\alpha}_T(t,\\vec x) $. Beyond this region we have no practical\nway of controlling the convergence of the linearization procedure.\nIt is worth stressing that we obtain an analytic expression of\nthe structure function: the so--called K41 scaling law \\cite{K41} \nis recovered on a spatial\nscale, whose nontrivial dependence on ${\\cal R}$ is explicitly\nindicated.\n\nAt the present stage, we are not able to say to what extent our \nresults on the dimensional\nscaling depend on the particular choice we made for the solution \naround which we\nstudied the fluctuations. Further investigations are needed to clarify this \nimportant point,\nwhich probably requires the combination of analytical and numerical techniques.\n\n\n \n\n\\section{The model}\nWe consider the Navier-Stokes equation for \nthe velocity vector-field components $v^{\\alpha}(t,\\vec{x})$\ndescribing a divergence-free homogeneous isotropic flow:\n\\ba\n\\label{NS1}\n&&\\left({\\partial\\over\n\\partial t} - \\nu\\nabla^2\\right)v^{\\alpha}(t,\\vec{x}) +\nv^{\\beta}(t,\\vec{x}){\\partial\\over \\partial\nx^{\\beta}}v^{\\alpha}(t,\\vec{x}) + {1\\over\n\\rho}{\\partial\\over \\partial x^{\\alpha}}P(t,\\vec{x}) -\nf^{\\alpha}(t,\\vec{x}) = 0,\\\\\n&&\\label{cons1}\n{\\partial\\over \\partial x^{\\alpha}}v^{\\alpha}(t,\\vec{x}) = 0.\n\\end{eqnarray}\nHere, $P$ is the pressure and the field $f^{\\alpha}$ represents a \nsource\/sink of momentum necessary\nto maintain velocity fluctuations. 
Customarily \\cite{Niko98}, we assume\n$f^{\\alpha}$ to be a white-in-time zero-mean Gaussian random force with\ncovariance\n\\be\n\\label{ff}\n\\langle f^{\\alpha}(t,\\vec{x})f^{\\beta}(t^{\\prime},\n\\vec{x}^{\\prime})\\rangle = F^{\\alpha\\beta}\\left(\\vec{x} -\n\\vec{x}^{\\prime}\\right)\\delta\\left(t - t^{\\prime}\\right)\\ .\n\\end{equation}\nDue to constraint \n(\\ref{cons1}), the\nfield\n$v^{\\alpha}(t,\\vec{x})$ depends only on the transverse degrees of\nfreedom of\n$f^{\\alpha}(t,\\vec{x})$. Without loss\nof generality we can also assume divergence-free forcing, yielding the\nadditional relation\n\\be\n{\\partial\\over \\partial x^{\\alpha}}F^{\\alpha\\beta}\\left(\\vec{x} -\n\\vec{x}^{\\prime}\\right)\n= {\\partial\\over \\partial x^{\\beta}}F^{\\alpha\\beta}\\left(\\vec{x} -\n\\vec{x}^{\\prime}\\right) = 0\\ .\n\\end{equation}\nA standard choice for $F^{\\alpha\\beta}$ is\n\\be\nF^{\\alpha\\beta}(\\vec{x}) ={D_0L^3\\over (2\\pi)^3}\\int d^3p\\\ne^{i\\vec{p}\\cdot\\vec{x}}(Lp)^s e^{-(Lp)^2}\n{\\cal P}^{\\alpha\\beta}(p) ,\n\\label{misura}\n\\end{equation}\nwhere $D_0$ is the power dissipated per unit mass, $p =\n\\left|\\vec{p}\\right|$,\n$L$ is the integral scale, $s$ is an integer exponent (typically, $s=2$)\nand \n$$\n{\\cal P}^{\\alpha\\beta}(p) = \\delta^{\\alpha\\beta}-{p^{\\alpha}p^{\\beta}\\over \np^2}\n$$ \nis the projector on the transverse degrees of freedom.\n\n\nFollowing the Martin-Siggia-Rose formalism \\cite{MSR} we introduce\nthe Navier-Stokes Lagrangian density\n\\ba\n{\\cal L}(v, w, P, Q, f) &&= w^{\\alpha}(t,\\vec{x})\\left[\n\\left({\\partial\\over\n\\partial t} - \\nu\\nabla^2\\right)v^{\\alpha}(t,\\vec{x}) +\nv^{\\beta}(t,\\vec{x}){\\partial\\over \\partial\nx^{\\beta}}v^{\\alpha}(t,\\vec{x})\\right.\\nonumber\\\\ \n&&\\left.+ {1\\over\n\\rho}{\\partial\\over \\partial x^{\\alpha}}P(t,\\vec{x}) -\nf^{\\alpha}(t,\\vec{x})\\right] + {1\\over\n\\rho}Q(t,\\vec{x}) {\\partial\\over 
\\partial\nx^{\\alpha}}v^{\\alpha}(t,\\vec{x})\\ ,\n\\end{eqnarray}\nwhere the field $w^{\\alpha}$ is the variable conjugate to the velocity field\n$v^{\\alpha}$ and the field $Q$ is the Lagrange multiplier associated with\nconstraint (\\ref{cons1}). The generating functional is given by\nthe integral\n\\ba\n{\\cal W}\\left(J, P\\right) \n&&= \\int {\\cal D}v{\\cal D}w{\\cal D}Q{\\cal D}f\n\\exp\\left\\{i\\int dt\\ d^3x\\left[{\\cal L}(v, w, P, Q, f) +\nJ_{\\alpha}v^{\\alpha}\\right]\\right. \\nonumber\\\\\n&&\\left.-{1\\over 2}\\int dt\nd^3xd^3yf^{\\alpha}F^{-1}_{\\alpha\\beta}f^{\\beta}\\right\\}\n\\label{func01}\n\\end{eqnarray}\nwhere $J_{\\alpha}$ are the components of the ``external source'' vector $J$.\nBy integration over the statistical measure, ${\\cal D}f e^{-{1\\over 2}\\int \nfF^{-1}f}$, and over the Lagrange multiplier $Q$, we obtain an expression \nwhich depends only on the transverse component $v_T$ of\nthe velocity field $v$. By decomposing the auxiliary field $w$ in terms of \nits transverse ($w_T$) and longitudinal ($w_L$) components, $w =\nw_L + w_T$, the measure ${\\cal D}w$ factorizes into \n${\\cal D}w_L{\\cal D}w_T$\nand the generating functional (\\ref{func01}) reduces to\n\\ba\n{\\cal{W}}(J, P) = \\int{\\cal{D}}w_T{\\cal{D}}w_L{\\cal{D}}v_T\\ \\exp\\left\\{\ni\\int dtd^3x\\left[w^{\\alpha}_T\\left\\{\\left({\\partial\\over \\partial t}\n-\\nu\\nabla^2\\right) v_{\\alpha T} + \\left(v^{\\beta}_T{\\partial\\over \\partial\nx^{\\beta}}v_{\\alpha T}\\right)_T\\right\\}\\right.\\right.\\nonumber\\\\\n\\left.\\left.+ w^{\\alpha}_L\\left\\{\\left(v^{\\beta}_T{\\partial\\over \\partial\nx^{\\beta}}v_{\\alpha T}\\right)_L + {1\\over \\rho}{\\partial P\\over \\partial\nx^{\\alpha}}\\right\\} + J_{\\alpha}v^{\\alpha}_T\\right]-{1\\over 2}\\int dt\\int \nd^3xd^3y w^{\\alpha}_TF_{\\alpha\\beta}w^{\\beta}_T\\right\\}\\ .\n\\label{funct00}\n\\end{eqnarray}\nDiagrammatic strategies are usually applied at this level. 
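The Gaussian integration over the auxiliary field that is exploited below can be checked on a one-mode analogue (a sketch of the mechanism, not the functional computation itself): for a single degree of freedom, $\int dw\, e^{iwa - Fw^2/2} = \sqrt{2\pi/F}\, e^{-a^2/(2F)}$, so eliminating $w$ trades the noise kernel $F$ for its inverse in the resulting quadratic action.

```python
import cmath, math

# One-mode analogue (illustration only) of integrating out the MSR auxiliary
# field w:  integral dw exp(i*w*a - F*w**2/2) = sqrt(2*pi/F)*exp(-a**2/(2*F)),
# i.e. the surviving quadratic action carries the INVERSE noise kernel.
F, a = 2.0, 1.5

def integrate(n=100001, half_width=10.0):
    """Plain Riemann sum of the rapidly decaying oscillatory integrand."""
    dw = 2.0 * half_width / (n - 1)
    total = 0.0 + 0.0j
    for i in range(n):
        w = -half_width + i * dw
        total += cmath.exp(1j * w * a - 0.5 * F * w * w) * dw
    return total

numeric = integrate()
exact = math.sqrt(2.0 * math.pi / F) * math.exp(-a * a / (2.0 * F))
print(numeric.real, exact)
```

The imaginary part of the numerical result vanishes by symmetry, and the real part matches the closed form; in the field-theoretic setting the same identity is applied mode by mode.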
We want to point\nout that one can go further by observing\nthat also the transverse and longitudinal components of the auxiliary field\n$w$ can be integrated out, yielding the equation\n\\be\n{\\cal W}(J, P) = \\int {\\cal D}v_T e^{-{1\\over 2}I(v_T) + i\\int dt d^3x\nJ_{\\alpha}v^{\\alpha}_T}\n\\delta\\left(\\left(v^{\\beta}_T{\\partial\\over \\partial\nx^{\\beta}}v_{\\alpha T}\\right)_L + {1\\over \\rho}{\\partial P\\over \\partial\nx^{\\alpha}}\\right)\n\\label{funct1}\n\\end{equation} \nwhere the action functional $I$ has the form\n\\ba\nI(v_T) &&= \\int dt d^3x d^3y\\left[\\left({\\partial\\over \\partial t} -\n\\nu\\nabla^2\\right)v_T^{\\alpha}(t, \\vec{x}) + v_T^{\\rho}(t, \\vec{x})\n\\partial_{\\rho}v_T^{\\alpha}(t, \\vec{x})\\right]\\nonumber\\\\\n&&F^{-1}_{\\alpha\\beta}(|\\vec{x}-\\vec{y}|)\n\\left[\\left({\\partial\\over \\partial t} -\n\\nu\\nabla^2\\right)v_T^{\\beta}(t, \\vec{y}) + v_T^{\\lambda}(t, \\vec{y})\n\\partial_{\\lambda}v_T^{ \\beta}(t, \\vec{y})\\right]\\ .\n\\label{act}\n\\end{eqnarray}\nThe computation of (\\ref{funct1}) would require solving the constraint\n\\be\n\\left(v^{\\beta}_T{\\partial\\over \\partial\nx^{\\beta}}v_{\\alpha T}\\right)_L + {1\\over \\rho}{\\partial P\\over \\partial\nx^{\\alpha}} =0\\ .\n\\label{press1}\n\\end{equation}\nIn principle, this is a very difficult task due to the nonlinear character\nof the constraint.\\\\\nIn the following section we show that we can identify a particular \nextremal solution, $\\bar{v}_T$, of the functional (\\ref{act}). 
This solution \nis found\nto be independent of the stochastic source and, moreover, it satisfies\nconstraint (\\ref{press1}) for any constant value of the pressure.\nAccordingly, $I(v_T)$ can be interpreted as a large deviation functional\n(see eq.~(\\ref{onsag}))\nand the statistical nonequilibrium measure of the rsNSE can be \neffectively evaluated by integrating over the fluctuations around this\nextremal solution.\nIt is worth observing that\nthe entropy is related to the functional\n$I(v_T)$ by the relation \\cite{BDGJL}\n\\be\nS(v_T) = \\frac{1}{2}\\inf_{v_T}I(v_T )\\ ,\n\\end{equation}\nwhere the minimum is taken over all trajectories connecting \n$\\bar{v}_T$ to $v_T$.\\\\ In what follows we are\ngoing to show that a suitable perturbative strategy can be applied for\nobtaining explicit analytic calculations of the statistical properties \nof the rsNSE.\n\n\n\\section{A quasi-steady solution and its stability}\nAny analytic approach aiming at the estimation of the generating functional\n(\\ref{funct1}) demands the identification of an explicit stationary point of\nthe action functional (\\ref{act}). 
In practice, this amounts to solving\nthe stationarity condition\n\\ba\n{\\delta I(v_T)\\over \\delta v^{\\sigma}_T(t, \\vec{x})} &&= 2\\int d^3y\\left[\n-\\delta_{\\sigma}^{\\ \\alpha}\\left({\\partial\\over \\partial t} +\n\\nu\\nabla^2\\right) + \\partial_{\\sigma}v_T^{\\alpha}(t,\n\\vec{x})\\right.\\nonumber\\\\\n&&-\\delta_{\\sigma}^{\\ \\alpha}v_T^{\\rho}(t,\n\\vec{x})\\partial_{\\rho}\\bigg]\\left(F^{-1}_{\\alpha\\beta}(|\\vec{x}-\\vec{y}|)\n\\left[\\left({\\partial\\over \\partial t} -\n\\nu\\nabla^2\\right)v_T^{\\beta}(t, \\vec{y})\\right.\\right.\\nonumber\\\\\n&&+ v_T^{\\lambda}(t, \\vec{y})\n\\partial_{\\lambda}v_T^{ \\beta}(t, \\vec{y})\\bigg]\\bigg) = 0\\ .\n\\label{estremo}\n\\end{eqnarray}\nWe observe that, for an arbitrary scalar field\n$\\Phi(t, \\vec{x})$, any solution of the equation\n\\be\n\\left({\\partial\\over \\partial t} -\n\\nu\\nabla^2\\right)v_T^{\\beta}(t, \\vec{x}) + v_T^{\\lambda}(t, \\vec{x})\n\\partial_{\\lambda}v_T^{ \\beta}(t, \\vec{x}) =\n\\partial^{\\beta}\\Phi(t, \\vec{x})\\ ,\n\\label{ridotta}\n\\end{equation}\nis also a solution of (\\ref{estremo}). \nSince $F^{\\alpha\\beta}(|\\vec{x}-\\vec{y}|)$ contains a projector on the\ntransverse degrees of freedom we can fix, \nwithout loss of generality,\nthe condition $\\partial^{\\beta}\\Phi=0$.\nIt is worth pointing out that, as far as eq.~(\\ref{ridotta}) is concerned, \nthis condition also implies that the longitudinal component of the \nnonlinear term vanishes, i.e. 
the solution \n$v_T^{\\lambda}(t, \\vec{x})$ has to satisfy the additional condition\n\\be\n\\partial_{\\beta}\\left(v_T^{\\lambda}(t, \\vec{x})\n\\partial_{\\lambda}v_T^{ \\beta}(t, \\vec{x})\\right) = 0\\ .\n\\label{cond2} \n\\end{equation}\nSeveral different solutions can be found: among them, the only one\nunaffected by divergences in space and time is the following:\n\\be\n\\bar{v}^{\\alpha}_T(t, \\vec{x}) = {U^{\\alpha}\\over 2}\\left\\{1 + e^{-{t\\over\n\\tau_D}}\\sin\\left({1\\over\n2\\sqrt{b^2-(\\vec{a}\\cdot\\vec{b})^2}}\n\\left(\\vec{b}\\wedge\\vec{a}\\right)\\cdot{\\vec{x}\\over L}\\right)\\right\\}\\ ,\n\\quad \\mbox{with} \\quad t>0\\ .\n\\label{soluz}\n\\end{equation}\nThe $U^{\\alpha}$ are the components of the velocity amplitude vector \n$\\vec{U}$ ($U=|\\vec{U}|$)~,\n$\\vec{a} = {\\vec{U}\\over U}$ is the corresponding \nunit vector and $\\vec{b}$ identifies a rotation axis. \nBoth vectors $\\vec{U}$ and $\\vec{b}$ can be fixed arbitrarily.\nWe assume also that the length--scale $L$ is the same as the forcing \nintegral scale defined in (\\ref{misura}). This implies that \nsolution (\\ref{soluz}) decays exponentially in time to the constant\n${U^{\\alpha}\\over 2}$ on the time scale $\\tau_D=4L^2\/\\nu$, which is the \ndiffusion time scale. The dependence of solution (\\ref{soluz}) \non the Reynolds number ${\\cal R}$ can be made explicit by the\nrelation ${\\cal R} = {LU\\over \\nu}$, so that \n$\\tau_D= 4\\nu{\\cal R}^2\/U^2$. 
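The two properties of this solution invoked below, namely that it solves the heat equation and that its self-advection vanishes, can also be checked numerically. In the following sketch the parameter values are hypothetical, with $\vec{U}$ along $x$ and $\vec{b}$ along $z$, so that the wave vector of the sinusoidal factor reduces to $(0, 1/(2L), 0)$:

```python
import math

# Numerical check (with hypothetical parameter values) that the extremal
# solution solves (d/dt - nu*Laplacian) vbar = 0 and has zero self-advection.
nu, L = 0.1, 1.0
U = (1.0, 0.0, 0.0)                   # velocity amplitude; unit vector a = U
b = (0.0, 0.0, 1.0)                   # arbitrary rotation axis
# wave vector (b ^ a) / (2*L*sqrt(b^2 - (a.b)^2)); for this choice (0, 1/(2L), 0)
k = (0.0, 1.0 / (2.0 * L), 0.0)
tau_D = 4.0 * L * L / nu              # diffusion time scale

def vbar(t, x):
    """x-component of the solution (the only nonvanishing one here)."""
    phase = k[0] * x[0] + k[1] * x[1] + k[2] * x[2]
    return 0.5 * U[0] * (1.0 + math.exp(-t / tau_D) * math.sin(phase))

# finite-difference residual of the heat equation at a sample point
t0, x0, h = 1.0, (0.3, 0.7, -0.2), 1e-4
dt_term = (vbar(t0 + h, x0) - vbar(t0 - h, x0)) / (2.0 * h)
lap = 0.0
for i in range(3):
    xp = list(x0); xm = list(x0)
    xp[i] += h; xm[i] -= h
    lap += (vbar(t0, xp) - 2.0 * vbar(t0, x0) + vbar(t0, xm)) / (h * h)
residual = dt_term - nu * lap
print(residual)
# self-advection (vbar . grad) vbar vanishes because vbar points along U
# while its gradient points along k, and U . k = 0 by construction.
```

The residual is at the level of the finite-difference error, and the orthogonality $\vec{U}\cdot\vec{k}=0$ holds exactly for this geometry.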
\nNotice that condition\n(\\ref{cond2}) is trivially satisfied by solution (\\ref{soluz}), because\n\\be\n\\bar{v}^{\\beta}_T(t, \\vec{x})\\partial_{\\beta}\\bar{v}^{\\alpha}_\nT(t, \\vec{x}) = 0\\ .\n\\label{triv}\n\\end{equation}\nAccordingly, $\\bar{v}^{\\alpha}_T(t, \\vec{x})$ is also a solution of the \ndiffusion equation\n$\\left(\\partial_t-\\nu\\nabla^2\\right)\\bar{v}_T^{\\alpha}(t, \\vec{x}) = 0$.\nThere are two main consequences to be pointed out: i) as a \nsolution of the linear diffusion equation $\\bar{v}^{\\alpha}_T(t, \\vec{x})$\nis unique, which is a crucial requirement for the large\ndeviation approach; ii) the solution has to be defined only for positive \ntimes.\\\\\nMoreover, due to condition (\\ref{triv}), the constraint (\\ref{press1}) \nis trivially solved by $P=constant$.\\\\\n\nFor $|\\vec{x}| \\ll L$ solution (\\ref{soluz}) approximates a linear shear \nflow: this is well known to produce instabilities for sufficiently \nlarge Reynolds numbers ${\\cal R}$. In this perspective, it is worth analyzing\nthe dynamical stability of (\\ref{soluz}). To this aim we consider the\nperturbed velocity vector, whose components are:\n\\be\nv^{\\alpha}(t, \\vec x) =\n\\bar{v}^{\\alpha}_T(t, \\vec x) + \\delta v^{\\alpha}_T(t, \\vec x)\\ .\n\\label{perturb}\n\\end{equation}\nThe perturbation vector $\\delta {v}^{\\alpha}_T$ is\nassumed to be much smaller than $\\bar{v}^{\\alpha}_T$ with respect to\nany proper functional measure $\\mu$, i.e. $ \n|\\delta v^{\\alpha}_T(t, \\vec x)|_{\\mu} \\ll \n|\\bar{v}^{\\alpha}_T(t, \\vec x)|_{\\mu}$, $\\forall t$ and \n$\\forall \\vec x$~. One can substitute (\\ref{perturb}) into\n(\\ref{ridotta}) with $\\partial^{\\beta}\\Phi=0$, while\nassuming that it satisfies constraint (\\ref{cond2}).\nIn the linear approximation one obtains an equation for \n$\\delta v^{\\alpha}_T(t, \\vec x)$, which can be solved explicitly\nby performing an expansion in the inverse Reynolds number ${\\cal R}^{-1}$. 
As shown in Appendix A, \nthe perturbation field vanishes and, accordingly, (\\ref{soluz}) is stable\nfor sufficiently large times and \nReynolds numbers, provided the following inequality\nholds:\n\\be\n{8\\nu^2{\\cal R}\\over U^2}k^2> 1\\ .\n\\label{cond1}\n\\end{equation}\n\nThis inequality implies that for increasing values of $\\cal R$ the band \nof unstable modes becomes thinner and thinner. As a consequence, solving the \nstability problem by expanding the solution of the linearized dynamics \n(\\ref{2-5}) in powers of ${\\cal R}^{-1}$ is consistent with this\nfinding. Since condition (\\ref{cond1}) has been derived by assuming \n${\\cal R}$ large, it is not in contradiction with the Landau \nscenario for the origin of turbulence.\n\nIn summary, $\\bar{v}^{\\alpha}_T(t, \\vec{x})$ exhibits all\nthe expected features of a physically relevant solution, which\ncorresponds to stationarity conditions for the large--deviation\nfunctional. Accordingly, it can be effectively used \nfor computing statistical non--equilibrium fluctuations of the\nrsNSE. In the next section we will exploit\na saddle point strategy for performing explicit calculations\nfrom the generating functional.\n\n\n\\section{Perturbative analysis of the generating functional}\nAll statistical properties concerning the rsNSE are contained in\nthe structure functions, which can be obtained by taking\nderivatives of the generating functional (\\ref{funct1}) with respect to\nthe current $J^{\\alpha}$. \nAn explicit calculation is\nunfeasible due to the nonlinear character of the action functional \n$I(v_T)$. 
Since in the previous section we have identified the \nsolution $\\bar{v}^{\\alpha}_T$, we can tackle the problem \nby introducing the velocity field $u^{\\alpha}_T = v^{\\alpha}_T - \n\\bar{v}^{\\alpha}_T $,\nwhich represents fluctuations with respect to $\\bar{v}^{\\alpha}_T$, and\nby applying a saddle--point strategy.\\\\\n\nDue to the translational invariance of the functional measure,\nthe generating functional (\\ref{funct1}) can be rewritten as\n\\be\n{\\cal W}(J) = \\int{\\cal D}u_T\\ e^{-{1\\over 2}I(u_T) +i\\int\ndtd^3x J_{\\alpha} u^{\\alpha}_T} .\n\\label{AF1}\n\\end{equation}\nA linearized expression for the action functional can be obtained by\nassuming that higher order terms in $u_T^{\\alpha}$\ngenerated by the saddle-point expansion around the solution\n$\\bar{v}^{\\alpha}_T$ are negligible with respect\nto the functional measure ${\\cal D}u_T$:\n\\ba\nI(u_T) &&= \\int_0^{\\infty} dt \\int \nd^3xd^3y\\left[(\\partial_t-\\nu\\nabla^2_x)u^{\\alpha}_T(\\hat{x})\n+\\bar{v}^{\\rho}_T\n(\\hat{x}) \\partial_{\\rho}u^{\\alpha}_T(\\hat{x})\n+ u^{\\rho}_T(\\hat{x})\\partial_{\\rho}\n\\bar{v}^{\\alpha}_T\n(\\hat{x})\\right]\nF^{-1}_{\\alpha\\beta}(|\\vec{x}-\\vec{y}|)\\times\\nonumber\\\\\n&&\\left[(\\partial_t-\\nu\\nabla^2_y)u^{\\beta}_T(\\hat{y})\n+\\bar{v}^{\\lambda}_T\n(\\hat{y})\\partial_{\\lambda}u^{\\beta}_T(\\hat{y})\n+ u^{\\lambda}_T(\\hat{y})\\partial_{\\lambda}\n\\bar{v}^{\\beta}_T\n(\\hat{y})\\right] \n\\ .\n\\label{FF}\n\\end{eqnarray}\nWe have also introduced the shorthand notation\n$\\hat{x}\\equiv (t, \\vec{x})$.\\\\\nConsistently with this\nperturbative approach, we can also assume that, at leading order,\nconstraint (\\ref{press1}) is still trivially solved by (\\ref{soluz}), i.e.\nthe pressure $P$ is a constant.\n\nIn this way the action functional (\\ref{FF}) has a \nbilinear form in the field $u_T^{\\alpha}$, with coefficients depending\non $\\bar{v}^{\\alpha}_T$. 
In order to perform explicit\nGaussian integration of the generating functional one has first to\nunderstand how the technical difficulties inherent in such a dependence\ncan be circumvented. The first problem we have to face\nis that, since (\\ref{soluz}) is defined only for $t>0$, also \n(\\ref{FF}) is defined for positive times. As we discuss in Appendix B,\na standard procedure allows one to get rid of any singularity of the action\nintegral that might emerge for $t \\to 0^+$. This is a consequence of the\nstructure of the linearized hydrodynamic operator appearing in\n(\\ref{FF}). The second problem concerns the possibility of obtaining an\nanalytic expression for the generating functional. To this aim one can\nexploit a perturbative expansion of (\\ref{soluz}) in powers of the inverse\nReynolds number\n${\\cal R}^{-1}$. Actually, it is worth rewriting the solution \n(\\ref{soluz}) making explicit its dependence on the\nReynolds number: \n\\be\n\\bar{v}^{\\alpha}_T(t, \\vec{x}) = {U^{\\alpha}\\over 2}\\left\\{1 + e^{-{U^{2}\\over\n4\\nu{\\cal R}^2}t}\\sin\\left({2\\over\n\\sqrt{b^2-(\\vec{a}\\cdot\\vec{b})^2}}\n{\\left(\\vec{b}\\wedge\\vec{U}\\right)\\cdot\\vec{x}\\over 4\\nu{\\cal\nR}}\\right)\\right\\}\\ ,\n\\quad \\mbox{with} \\quad t>0\\ .\n\\label{soluz1}\n\\end{equation}\nUsing ${\\cal R}^{-1}$ as a perturbative parameter, one can expand\n$\\bar{v}^{\\alpha}_T$ at all orders in \n${\\cal R}^{-1}$. 
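At first order one simply replaces the exponential by $1$ and the sine by its argument; the truncation error is then $O({\cal R}^{-3})$, since the cubic term of the sine and the exponential damping (of order ${\cal R}^{-2}$, multiplying a sine of order ${\cal R}^{-1}$) both contribute at that order. A numerical sketch, with hypothetical values for $\nu$, $U$, $t$ and for the scalar combination replacing the geometric factor:

```python
import math

# First terms of the 1/R expansion of the rewritten solution, with
# hypothetical parameters; C_over stands in for C*(b ^ U).x (an assumption).
nu, U, t = 0.1, 1.0, 0.5
C_over = 1.0

def vbar_exact(R):
    z = C_over / (4.0 * nu * R)              # argument of the sine, O(1/R)
    return 0.5 * U * (1.0 + math.exp(-U * U * t / (4.0 * nu * R * R)) * math.sin(z))

def vbar_first_order(R):
    # sin(z) -> z, exp(...) -> 1: truncation error is O(1/R^3)
    return 0.5 * U * (1.0 + C_over / (4.0 * nu * R))

err = lambda R: abs(vbar_exact(R) - vbar_first_order(R))
ratio = err(50.0) / err(100.0)
print(ratio)
```

Doubling ${\cal R}$ reduces the truncation error by a factor of about $8$, confirming the $O({\cal R}^{-3})$ estimate.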
When this expansion is substituted \ninto (\\ref{FF}), at leading order \nthe action functional, in Fourier transformed variables, takes the form\n\\be\nI(u_T) = \\int {d^4p\\over (2\\pi)^4}u^{\\rho}_T(-\\hat{p})M_{\\rho}^{\\ \n\\alpha}(-\\hat{p})\nF^{-1}_{\\alpha\\beta}(p)\nM^{\\beta}_{\\ \\zeta}(\\hat{p})\nu^{\\zeta}_T(\\hat{p}) + O\\left({1\\over {\\cal R}^2}\\right)\\ .\n\\label{act1}\n\\end{equation}\nWe denote by $u^{\\zeta}_T(\\hat{p})$ the Fourier\ntransform of the field $u^{\\zeta}_T(\\hat{x})$ with\n$\\hat{p}\\equiv (p_0,\\vec{p})$, $p_0$ and $\\vec{p}$ being\nthe Fourier--conjugated variables of $t$ and $\\vec{x}$, respectively.\nWe introduce the representation of the action functional in terms of\nthe Fourier--transformed variables because this makes more transparent \nthe diagonalization procedure required to arrive at the final result.\n\\\\\nThe hydrodynamic evolution term\n$M^{\\beta}_{\\ \\zeta}(\\hat{p})u^{\\zeta}_T(\\hat{p})$ is given\nby the expression\n\\be\nM^{\\beta}_{\\ \\zeta}(\\hat{p})u^{\\zeta}_T(\\hat{p})\n= \\left\\{\\delta^{\\beta}_{\\ \\zeta}\\left[i\\left(p_0 + {1\\over \n2}\\vec{p}\\cdot \\vec{U}\\right) +\\nu\np^2 -{C\\over 4}\\vec{p}\\cdot\\vec{U} \n{\\left(\\vec{b}\\wedge\\vec{U}\\right)^{\\gamma}\\over 4\\nu{\\cal\nR}}\\partial_{p_{\\gamma}}\\right] -{C\\over\n4}U^{\\beta}{\\left(\\vec{b}\\wedge\\vec{U}\\right)_{\\zeta}\\over\n4\\nu{\\cal R}}\\right\\}u^{\\zeta}_T(\\hat{p}) ,\n\\label{matr1}\n\\end{equation}\nwhere $C = {2\\over \\sqrt{b^2-(\\vec{a}\\cdot\\vec{b})^2}}$.\n\\\\\nThe next step in this calculation requires the diagonalization of the\nmatrix \n$M_{\\rho}^{\\ \\alpha}(-\\hat{p})F^{-1}_{\\alpha\\beta}(p)M^{\\beta}_{\\ \\zeta}(\\hat{p})$. 
\nSince by definition the factor $[F^{\\alpha\\beta}(p)]^{-1}$ is \nproportional to the identity operator in the space of the transverse \nsolutions\\footnote{More explicitly we have $[F^{\\alpha\\beta}(p)]^{-1} =\nF^{-1}(p)\n{\\cal P}^{\\alpha\\beta}(p)$ where $F(p) = D_0L^3(Lp)^s e^{-(Lp)^2}$} we have \njust\nto diagonalize the matrix of the hydrodynamic operator\n$M^{\\beta}_{\\ \\zeta}(\\hat{p})$.\\\\\nThe computation of the eigenvalues $\\lambda_1$, $\\lambda_2$ and\n$\\lambda_3$ of\n$M^{\\beta}_{\\ \\zeta}(\\hat{p})$ requires lengthy calculations, sketched in\nAppendix \\ref{C}. Hereafter, we report the final form of the generating\nfunctional:\n\\be\n{\\cal W}(\\eta) = \\int {\\cal J}(H){\\cal D}\\phi_T\\ e^{-{1\\over \n2}\\int_{\\hat{p}}\\phi_T^{\\rho}(-\\hat{p})F^{-1}(p)\nI_{\\rho\\gamma}(\\hat{p})\\phi_T^{\\gamma}(\\hat{p})\n+i\\int_{\\hat{p}}\n\\eta_{T\\alpha}(-\\hat{p})\\phi_T^{\\alpha}(\\hat{p})}\\ ,\n\\end{equation}\nwhere we have used the shorthand notation $\\int_{\\hat{p}}\\equiv \\int\n{d^4p\\over (2\\pi)^4}$ and\n\\be\nI_{\\rho\\gamma}(\\hat{p}) = \\left(\\begin{array}{ccc}\n\\lambda_1^*(\\hat{p})\\lambda_1(\\hat{p}) & 0 & 0\n\\\\\n0 & \\lambda_2^*(\\hat{p})\\lambda_2(\\hat{p}) & 0\n\\\\\n0 & 0 &\\lambda_3^*(\\hat{p})\\lambda_3(\\hat{p})\n\\end{array}\n\\right)\\ ,\n\\end{equation}\n${\\cal J}(H)$ is the Jacobian of the basis transformation\n$u\\longrightarrow \\phi$, $J\\longrightarrow \\eta$ engendered by\nthe matrix $H$, which diagonalizes\n$M_{\\rho}^{\\ \\alpha}(\\hat{p})$.\nIt is worth pointing out that\nthe transformed vector $\\phi_T^{\\alpha}(\\hat{p})$ still represents\ntransverse components.\nGaussian integration yields the following expression of the\n{\\it normalized} functional in terms of the $\\eta^{\\alpha}$ source fields\n\\be\n{\\cal W}(\\eta) = e^{-{1\\over \n2}\\int_{\\hat{p}}\\eta^{\\rho}_T(-\\hat{p})F(p)I^{-1}_{\\rho\\gamma}\n\\eta^{\\gamma}_T(\\hat{p})}\\ .\n\\label{FJ1}\n\\end{equation}\nIn practice, the explicit computation of 
the structure functions can be\naccomplished by returning to the original representation, where the\ngenerating functional has the form\n\\be\n{\\cal W}(J) = e^{-{1\\over 2}\\int_{\\hat{p}}J^{\\rho}_T(-\\hat{p})F(p)\n\\left(HI^{-1}H^T\\right)_{\\rho\\sigma}(\\hat{p})J^{\\sigma}_T(\\hat{p})}.\n\\label{FJ2} \n\\end{equation}\nIn the next section we are going to derive an explicit expression for the \nsecond--order structure function.\n\n\\section{Short-distance behavior of the\nsecond order structure function}\nThe analytic expression obtained for \nthe generating functional (\\ref{FJ2}) allows one to obtain\nall the statistical information about the fluctuations\naround the basic solution $\\bar{v}^{\\alpha}_T$. In this section\nwe perform the explicit calculation of the \nsecond-order structure function of the\nvelocity field $u^{\\alpha}$, defined as\n\\ba\nS_2&&=\\langle\\left|u_T(t, \\vec{r}+\\vec{x})-u_T(t,\\vec{x})\\right|^2\n\\rangle\\nonumber\\\\\n&&= \\langle(u^{\\alpha}_T(t,\n\\vec{r}+\\vec{x})-u^{\\alpha}_T(t, \\vec{x}))(u_{T\\alpha}(t,\n\\vec{r}+\\vec{x})-u_{T\\alpha}(t, \\vec{x}))\\rangle\\ .\n\\label{struttura}\n\\end{eqnarray}\nThe brackets denote averages over the stochastic forcing.\\\\\nBy assuming isotropy and homogeneity of the velocity field $u^{\\alpha}$,\nexpression (\\ref{struttura}) is expected to assume the typical\nform of a scale invariant function\n\\begin{equation}\nS_2(r) = r^{\\zeta_2}F_2\\left(t, {r\\over L}\\right)\\ .\n\\end{equation}\nHere $r = |\\vec{r}|$ and $L$ is the integral scale associated with the\nnoise source. It is worth stressing that, at variance with\nfully developed turbulent regimes, here the assumptions of\nisotropy and homogeneity have to be taken as a plausible\nhypothesis allowing for analytic computations.\\\\\n\nWe want to point out that any exponent $\\zeta_n$ must be\nindependent of the basis chosen for representing\nthe functional ${\\cal W}$. 
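Averages of the type (\ref{struttura}) can also be estimated directly by sampling. The following toy sketch (our illustration, unrelated to the analytic computation carried out in this section) builds a synthetic one-dimensional random field from independent Fourier modes with a spectrum of the assumed form $(Lp)^s e^{-(Lp)^2}$ and evaluates the second-order increment average:

```python
import math, random

# Toy estimator (illustration only) of S2(r) = <|u(x+r) - u(x)|^2> for a
# synthetic 1D random velocity field with spectrum ~ p**s * exp(-p**2), L = 1.
random.seed(0)
s, n_modes = 2, 64
modes = []
for _ in range(n_modes):
    p = random.uniform(0.1, 6.0)
    amp = math.sqrt(p ** s * math.exp(-p * p))
    modes.append((p, amp, random.uniform(0.0, 2.0 * math.pi)))

def u(x):
    """Superposition of random-phase Fourier modes."""
    return sum(a * math.cos(p * x + phi) for p, a, phi in modes)

def S2(r, n_samples=2000, span=200.0):
    """Spatial average of the squared velocity increment over the span."""
    acc = 0.0
    for i in range(n_samples):
        x = span * i / n_samples
        acc += (u(x + r) - u(x)) ** 2
    return acc / n_samples

print(S2(0.0), S2(0.5))
```

As expected, the increment average vanishes at $r=0$ and grows with the separation on scales below the integral scale.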
For the sake of simplicity, it is worth using\n(\\ref{FJ1}) rather than (\\ref{FJ2}) to obtain:\n\\be \nS_2(r) =\\left.\\left({\\delta\\over\ni\\delta\\eta^{\\alpha}_T(t,\\vec{x}+\\vec{r})} - {\\delta\\over\ni\\delta\\eta^{\\alpha}_T(t,\\vec{x})}\\right)\n\\left({\\delta\\over i\\delta\\eta_{T\\alpha}(t,\\vec{x}+\\vec{r})}\n- {\\delta\\over i\\delta\\eta_{T\\alpha}(t,\\vec{x})}\\right){\\cal\nW}(\\eta)\\right|_{\\eta=0}\\ .\n\\label{S21}\n\\end{equation}\nAs shown in Appendix \\ref{D}, it turns out that $S_2(r)$ can be\nrewritten as follows:\n\\begin{equation}\n{S}_2(r) =-{1\\over \\nu}\\left(I_1(r) +I_2(r)\\right) .\n\\end{equation}\nwhere \n\\be\nI_1(r) = {D_0\\over \n(2\\pi)^2}r^2\\sum_{n=0}^{\\infty}(-1)^{n+1}{\\Gamma\\left({s+3+2n\\over\n2}\\right)\\over\n\\Gamma\\left(2n+4\\right)}\\left({r\\over L}\\right)^{2n}\\ \n\\label{i11}\n\\end{equation}\nand\n\\ba\nI_2(r) &&= D_0L^3{32\\nu^2{\\cal R}\\over \nU^2}\\left\\{\\int_0^{\\infty}{p^2dp\\over \n(2\\pi)^2}(Lp)^se^{-(Lp)^2}\\int_{-1}^1dx\\left(e^{iprx} -\n1\\right)\\right.\\nonumber\\\\\n&&\\left.\\times\\left(\\sum_{l=1,2}{\\left(1-x^2\\right)^{1\\over 3}\\over \nx^{2\\over 3}}\n{\\left[\\sum_{m=0}^2s_{lm}\nF_m\\left(x, {8\\nu^2{\\cal R}\\over U^2}p^2; \\Sigma, \\Xi\\right)\n + {1\\over 2}\\Sigma{x^{2\\over 3}\\over \\left(1-x^2\\right)^{1\\over \n3}}\\right]\\over\n\\prod_{i\\not= l}\\left(\\sum_{k=0}^2\\left(s_{lk}\n-s_{ik}\\right)F_k\\left(x, {8\\nu^2{\\cal R}\\over U^2}p^2; \\Sigma, \n\\Xi\\right)\\right)}\n+ O\\left({1\\over {\\cal R}^2}\\right)\\right)\\right\\} . 
\\nonumber\\\\\n\\label{tremendo}\n\\end{eqnarray}\nThe coefficients $s_{ij}$ and the functions $F_i$, together with\ntheir arguments, are specified in Appendix \\ref{D}.\\\\\nThe main contribution of the stochastic measure $p^{2+s}e^{-(L p)^2}dp$ \nto the first integral in (\\ref{tremendo}) comes from a narrow region of\nwavenumbers close to $\\bar{p}$, where the function $p^{2+s}e^{-(Lp)^2}$ has\nits maximum, i.e.\n\\be\n\\bar{p}={1\\over L}\\sqrt{s+2\\over 2}\\ .\n\\end{equation}\nAccordingly, the function ${8\\nu^2{\\cal R}\\over U^2}p^2$ contributes to the\nintegral by taking values close to ${4(s+2)\\over {\\cal R}}$.\\\\\nMoreover, for $p=\\bar{p}$ the sufficient condition (\\ref{cond1}) for the \nstability of small perturbations determines an upper bound for the\nReynolds number:\n\\be\n{\\cal R}\n\\lesssim 4(s+2)\\ .\n\\label{stabi}\n\\end{equation}\nThis implies that for sufficiently small $ {\\cal R} $ the \nwavenumber $\\bar{p}$ is stable. Under this condition, the leading \ncontribution in (\\ref{tremendo}), consistently with the expansion in \n${\\cal R}^{-1}$, can be obtained\nby performing an expansion in powers of ${U^2\\over 8\\nu^2{\\cal R}p^2}$.\\\\\nOne finally obtains the complete expression of the structure function\n(see Appendix \\ref{D} for details)\n\\ba\n{S}_2(r) &&= -{1\\over \\nu}\\left(I_1(r)+I_2(r)\\right)\\nonumber\\\\\n&&\\sim -{D_0\\over\n(2\\pi)^2\\nu}r^2\\sum_{n=0}^{\\infty}\\left\\{(-1)^{n+1}\\Gamma\\left({s+2n+3\\over \n2}\\right)\n\\left[{1+\\Xi\\over \\Gamma\\left(2n + 4\\right)}\n-{2^{13\\over 3}\\Xi\\over \\Sigma^{2\\over 3}}{2n+4\\over\n\\Gamma (2n+6)}\\right]\\left({r\\over L}\\right)^{2n}\\right\\}\\ ,\\nonumber\\\\\n&&\\quad\n\\mbox{for} \\quad 1<{\\cal R}\\ll 4(2+s)\\ .\n\\label{SS2}\n\\end{eqnarray}\nAt leading order in the distance $r$ this expression is dominated by a \ndissipative contribution.\n\nWe conjecture that this analysis can be extended\nto the parameter region defined by the condition ${\\cal R}\\gtrsim
4(2+s)$,\nwhere the statistically relevant wavenumbers can be unstable.\nAs shown in Appendix \\ref{D}, in this case $I_2(r)$ has two contributions: one\nis again dissipative, while there is another one yielding the nontrivial\nscaling behavior $r^{2\/3}$.\nSpecifically, the expression of $S_2(r)$ for ${\\cal R}\\gtrsim 4(2+s)$\nis found to be \n\\ba\n{S}_2(r)&&\\sim -{D_0\\over \\pi\\nu}\\left\\{{1+{\\Xi\\over 2}\\over\n4\\pi}r^2\\sum_{n=0}^{\\infty}(-1)^{n+1}{\\Gamma\n\\left({s+2n+3\\over 2}\\right)\\over \\Gamma(2n+4)}\n\\left({r\\over L}\\right)^{2n}\\right.\\nonumber\\\\\n&&\\left.+ {{\\cal R}^{1\\over 3}\\over \\Gamma\\left({2\\over 3}\\right)}\n\\left({\\nu\\over U}\\right)^{4\\over 3}r^{2\\over 3}\n\\sum_{n=0}^{\\infty}C_n(\\Sigma)\\Gamma\\left({3s+3n+5\\over \n6}\\right)\\left({r\\over\nL}\\right)^n \\right\\}\n\\label{last}\n\\end{eqnarray}\nThis expression is dominated by the term $r^{2\/3}$ for sufficiently small \ndistances. \nIndeed, \nthe crossover scale between the $r^2$ and the $r^{2\\over 3}$ terms\noccurs at\n\\be\n{r\\over L}\\sim F{\\cal R}^{-{3\\over 4}}\\ .\n\\end{equation}\nIn Appendix \\ref{D} we evaluate the constant $F\\sim 0.6$ and we report\nthe expression of the numerical coefficient $C_0(\\Sigma)$. The general\nexpression of the coefficients $C_n(\\Sigma)$ appearing in (\\ref{last}) \nhas been omitted, because it has no practical interest for explicit\ncalculations.\n\nIt is a remarkable fact that $S_2$ can exhibit the scaling \nbehavior predicted by the K41 theory, which is assumed to hold (apart\nfrom intermittency corrections)\nwhen the velocity fluctuations are turbulent\nin the so-called inertial range of scales. 
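The ${\cal R}^{-3/4}$ dependence of the crossover scale follows from balancing the $n=0$ terms of the two series in (\ref{last}). A minimal numerical sketch, with hypothetical $O(1)$ prefactors $A$, $B$ standing in for the $\Xi$-, $\Sigma$- and $s$-dependent constants, and with $\nu/U=L/{\cal R}$ (consistent with the relations used above):

```python
import numpy as np

# Hypothetical O(1) prefactors lumping the Xi-, Sigma- and s-dependent constants
A, B, L = 1.0, 1.0, 1.0
Rs = np.logspace(2, 6, 5)                      # a range of Reynolds numbers
# balance A r^2 = B R^(1/3) (nu/U)^(4/3) r^(2/3), with nu/U = L/R
r_star = ((B / A) * Rs**(1.0/3.0) * (L / Rs)**(4.0/3.0))**(3.0/4.0)
slope = np.polyfit(np.log(Rs), np.log(r_star / L), 1)[0]
print(round(slope, 6))   # -0.75: the crossover scale behaves as L R^(-3/4)
```

With the actual prefactors of (\ref{last}), $(B/A)^{3\over 4}$ would reproduce the constant $F\sim 0.6$ evaluated in Appendix \ref{D}; the $r^{2/3}$ branch dominating below this scale is the K41-like behavior discussed above.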
This suggests\nthat hydrodynamic fluctuations in a system at the very\ninitial stage of instability development already contain\nsome properties attributed to the developed turbulence regime.\n\n\n\n\section{Conclusions}\n\nIn this paper we have exploited the field-theoretic approach\nto reformulate the randomly forced Navier--Stokes\nproblem in terms of the evaluation of a quadratic action. \nThis has the formal structure of a large--deviation functional,\ndescribing thermal fluctuations of irreversible stationary processes.\nThe crucial step for obtaining such a statistical representation\nis the integration over all longitudinal components\nof both the velocity and the associated auxiliary fields. With respect\nto the standard formulation, which underlies the usual\ndiagrammatic strategies, we perform one additional field\nintegration.\nThe positive-definite kernel, which contracts the hydrodynamic\nevolution operator in the action functional, \nis the inverse of the forcing correlation function.\\\nIn terms of the action functional, the knowledge of the\nwhole velocity statistics\nreduces to the computation of functional integrals. However,\nowing to the intrinsically nonlinear character of the hydrodynamic\noperator, several technical difficulties must be overcome \nbefore analytic calculations can be performed.\nIn particular, one has to introduce suitable approximations.\\\nIn order to obtain an analytic expression of the generating functional \nwe have identified a solution\naround which we have linearized the hydrodynamic evolution operator. \nWe have also introduced a velocity field which represents \nfluctuations with respect to this solution. A perturbative\nexpansion in the inverse Reynolds number finally yields the desired\nresult.\n\nIn principle, from this analytic treatment one can obtain all relevant \nstatistical\ninformation about the rsNSE by computing any velocity multipoint\nstructure function. 
In this paper we report only the explicit\ncalculation of the two--point second-order moment of the velocity\nfield. \nAs shown in the Appendices, the algebraic manipulations needed for\nobtaining the final result are far from trivial even in this simple case.\n\nIn fact, in this paper we aim at understanding whether fluctuations at the\nearly stage of their development (accordingly, we dub them pre-turbulent\nfluctuations) already contain some important features of developed\nturbulence. We are interested, in particular, in characterizing the\nscale-invariant properties of such fluctuations.\nIn this respect, we find that they are organized at\ndifferent scales in a self--similar way. Remarkably, the scaling exponent\ncoincides with the dimensional prediction of the Kolmogorov 1941\ntheory \cite{K41}, valid for developed turbulence regimes. Whether or not such\nan exponent is a genuine reminiscence of developed-turbulence\nphenomenology requires further investigation. \\\nUnfortunately, the complexity of the derivation leading to the\nK41 scaling law does not\nallow us to identify precisely the very origin of such a dimensional\nprediction. We can, however, argue for a relationship between the\nobserved dimensional scaling and the conservation laws (for momentum\nand energy) associated with the two eigenvalues of the matrix appearing\nin the action functional (\ref{act1}).\\\nFinally, it is worth observing that the dimensional scaling law\nemerges for a particular choice we made for the pressure field:\nfluctuations have been restricted around a solution for which \nthe pressure is constant. Unfortunately, since the\nanalytical treatment is not feasible in the general case,\nwe cannot establish whether \nthe dimensional prediction we found is a consequence of our\nparticular choice for the pressure field.\\\nAt least three scenarios are possible. 
Firstly, the pressure field\naffects neither the leading (dimensional) scaling law nor its\nprefactor: \nit only affects the subleading scaling contributions.\nIn this case our simplification would capture the relevant physics of\nthe \nproblem. The second possibility is that the leading scaling law does not\nchange,\nwhile the prefactor does. The last possibility is that pressure \nchanges the (dimensional) scaling law, giving rise to intermittency \ncorrections. \nUnfortunately, at the present stage of our knowledge, we are not in the\nposition to select\none scenario among the three we have pointed out. Further investigations\nare needed for this aim, which probably call for thorough numerical\nstudies of the system under consideration.\n\\\nWe want to conclude by outlining some open problems and perspectives.\nA first question concerns the physical relevance of \nthe solution (\ref{soluz}) around which we linearize the evolution\noperator. It represents a shear-like solution, which is a well-known\ngenerator of instability. Moreover,\nits uniqueness and stability properties seem to indicate that this solution\ncan play a major role in the determination of the stationary nonequilibrium \nfluctuation statistics to be attributed to the rsNSE. As a mathematical\nobject, it exhibits all the features that one would like to\nattribute to such a solution. On the other hand, the authors do not\nyet have a physical intuition for its relevance and aim at making some future\nprogress in this direction.\\\nAnother interesting point to be tackled concerns the computation of the\nthird-order moment of the velocity correlators. 
In this case the\npredictions of our approach could be compared with the $4\/5$-law, which\nis one of the very few exact results of turbulence theory.\\\nFinally, the extension of our results to other classes of transport\nproblems, including passive scalar advection, could provide\na better understanding of the basic mechanism at the origin of\nthe observed scaling behaviors. \n\n\n\begin{acknowledgments}\nThis work has been supported by Cofin 2003 \n``Sistemi Complessi e Problemi a Molti Corpi'' (AM).\nWe acknowledge useful discussions with G. Jona-Lasinio, M. Vergassola,\nP. Constantin and P. Muratore--Ginanneschi.\n\end{acknowledgments}\n\n\begin{appendix}\n\section{}\n\label{A}\n\nIn this Appendix we perform the stability analysis of the solution\n$\bar{v}^{\alpha}_T$ by means of the linearized equation\n\be\n\left({\partial\over \partial t} - \nu\nabla^2\right) \delta \nv^{\gamma}_T(t,\vec{x}) +\n\bar{v}_T^{\beta}(t,\vec{x}){\partial\over \partial x^{\beta}}\n\delta v^{\gamma}_T(t,\vec{x}) + \delta v^{\beta}_T(t,\vec{x}){\partial\over \n\partial\nx^{\beta}} \bar{v}_T^{\gamma}(t,\vec{x}) = 0\ ,\n\label{2-5}\n\end{equation}\nwith the constraint\n$$\n{\partial\over \partial \nx^{\gamma}}\left(\bar{v}_T^{\beta}(t,\vec{x}){\partial\over \partial \nx^{\beta}}\n\delta v^{\gamma}_T(t,\vec{x}) + \delta v^{\beta}_T(t,\vec{x}){\partial\over \n\partial\nx^{\beta}} \bar{v}_T^{\gamma}(t,\vec{x})\right) = 0\ .\n$$ \nIn Section III we have already observed that\n$\bar{v}^{\alpha}_T$ is a quasi-steady solution for times $t\ll\tau_D={4\nu{\cal R}^2\over\nU^2}$.\nThe Fourier transform of eq.(\ref{2-5})\nwith respect to the space vector $\vec{x}$ yields:\n\ba\n&&{\partial\over \partial t}\delta \tilde{v}^{\alpha}_T\left(t,\n\vec{k}\right) + \nu k^2\delta \tilde{v}^{\alpha}_T\left(t, \vec{k}\right)\n+{i\over 2}\vec{k}\cdot\vec{U}\delta 
\\tilde{v}^{\\alpha}_T\\left(t,\n\\vec{k}\\right) + {1\\over 4}e^{-{t\\over \\tau_D}}\\left\\{U^{\\beta}k_{\\beta}\n\\left[\\delta \\tilde{v}^{\\alpha}_T\\left(t, \\vec{k}\n- C{\\vec{b}\\wedge\\vec{U}\\over 4\\nu{\\cal R}}\\right)\\right.\\right.\\nonumber\\\\\n&&\\left.\\left.- \\delta \\tilde{v}^{\\alpha}_T\\left(t, \\vec{k}\n+ C{\\vec{b}\\wedge\\vec{U}\\over 4\\nu{\\cal R}}\\right)\\right]\n+ U^{\\alpha}C{\\left(\\vec{b}\\wedge\\vec{U}\\right)_{\\beta}\\over 4\\nu{\\cal R}}\n\\left[\\delta \\tilde{v}^{\\beta}_T\\left(t, \\vec{k} -\nC{\\vec{b}\\wedge\\vec{U}\\over 4\\nu{\\cal R}}\\right)\n+ \\delta \\tilde{v}^{\\beta}_T\\left(t, \\vec{k}\n+ C{\\vec{b}\\wedge\\vec{U}\\over 4\\nu{\\cal R}}\\right)\\right]\\right\\}\\nonumber\\\\\n&&= 0\\ ,\n\\end{eqnarray}\nwhere $C = {2\\over \\sqrt{b^2-(\\vec{a}\\cdot\\vec{b})^2}}$.\nBy performing a perturbative expansion up to second order in the parameter \n${\\cal R}^{-1}$, one obtains the system of equations\n\\ba\n\\label{AA1}\n&&{\\partial\\over \\partial t}\\delta \\tilde{v}^{\\alpha}_{T(0)}\\left(t, \n\\vec{k}\\right) + \\nu k^2\\delta \\tilde{v}^{\\alpha}_{T(0)}\\left(t,\n\\vec{k}\\right) +{i\\over \n2}\\vec{k}\\cdot\\vec{U}\\delta \\tilde{v}^{\\alpha}_{T(0)}\\left(t, \\vec{k}\\right)\n= 0\\ ,\\\\\n\\label{AA2}\n&&{\\partial\\over \\partial t}\\delta \\tilde{v}^{\\alpha}_{T(1)}\\left(t, \n\\vec{k}\\right) + \\nu k^2\\delta \\tilde{v}^{\\alpha}_{T(1)}\\left(t,\n\\vec{k}\\right) +{i\\over \n2}\\vec{k}\\cdot\\vec{U}\\delta \\tilde{v}^{\\alpha}_{T(1)}\\left(t, \n\\vec{k}\\right)\\nonumber\\\\\n&&={1\\over \n2}\\vec{k}\\cdot\\vec{U}C{\\left(\\vec{b}\\wedge\\vec{U}\\right)_{\\beta}\\over \n4\\nu{\\cal R}}\n{\\partial\\over \\partial k_{\\beta}}\\delta \\tilde{v}^{\\alpha}_{T(0)}\\left(t, \n\\vec{k}\\right)\n-{1\\over 2}U^{\\alpha}C{\\left(\\vec{b}\\wedge\\vec{U}\\right)_{\\beta}\\over \n4\\nu{\\cal R}}\n\\delta \\tilde{v}^{\\beta}_{T(0)}\\left(t, \\vec{k}\\right)\\ ,\\\\\n\\label{AA3}\n&&{\\partial\\over \\partial t}\\delta 
\\tilde{v}^{\\alpha}_{T(2)}\\left(t, \n\\vec{k}\\right) + \\nu k^2\\delta \\tilde{v}^{\\alpha}_{T(2)}\\left(t,\n\\vec{k}\\right) +{i\\over \n2}\\vec{k}\\cdot\\vec{U}\\delta \\tilde{v}^{\\alpha}_{T(2)}\\left(t, \n\\vec{k}\\right)\\nonumber\\\\\n&&={1\\over \n2}\\vec{k}\\cdot\\vec{U}C{\\left(\\vec{b}\\wedge\\vec{U}\\right)_{\\beta}\\over \n4\\nu{\\cal R}}\n{\\partial\\over \\partial k_{\\beta}}\\delta \\tilde{v}^{\\alpha}_{T(1)}\\left(t, \n\\vec{k}\\right)\n-{1\\over 2}U^{\\alpha}C{\\left(\\vec{b}\\wedge\\vec{U}\\right)_{\\beta}\\over \n4\\nu{\\cal R}}\n\\delta \\tilde{v}^{\\beta}_{T(1)}\\left(t, \\vec{k}\\right)\\ ,\\\\\n&&...................................\\nonumber\n\\end{eqnarray}\nThis system of equations yields the perturbative solution\n\\ba\n\\delta \\tilde{v}^{\\alpha}_T\\left(t, \\vec{k}\\right) &&= e^{-\\left(\\nu\nk^2+{i\\over\n2}\\vec{U}\\cdot\\vec{k}\\right)t}\\Bigg\\{F^{\\alpha}_{(0)}\\left(\\vec{k}\\right)+ \nF^{\\alpha}_{(1)}\\left(\\vec{k}\\right)\\nonumber\\\\ &&+\nC{\\vec{k}\\cdot\\vec{U}\\over 8\\nu{\\cal \nR}}\\left[\\left(\\vec{b}\\wedge\\vec{U}\\right)\\cdot\\vec{\\nabla}_k\nF^{\\alpha}_{(0)}\\left(\\vec{k}\\right) t -{U^{\\alpha}\\over \n\\vec{k}\\cdot\\vec{U}}\n\\left(\\vec{b}\\wedge\\vec{U}\\right)\\cdot\\vec{F}_{(0)}\\left(\\vec{k}\\right)t\\right\n.\\nonumber\\\\\n&&\\left.- \n\\left(\\vec{b}\\wedge\\vec{U}\\right)\\cdot\\vec{k}F^{\\alpha}_{(0)}\\left(\\vec{k}\n\\right)\\nu \nt^2\\right]\n+ O\\left({1\\over {\\cal R}^2}\\right)\\Bigg\\}\\ \n\\label{sol-0}\n\\end{eqnarray}\nwhere the functions $F$'s are determined by the initial conditions: they\nare found to be of $O(1)$ for any $k$.\\\\\n\nThe exponential term in front of (\\ref{sol-0}) makes the \nperturbative solution vanish in the limit of large time $t$, provided\nthe perturbative series contained in the curly brackets does not diverge\nfaster in such a limit. 
\nThis requirement can be translated into the following spectral condition\n\be\n{8\nu^2{\cal R}\over U^2}k^2>1\ .\n\end{equation}\nThis inequality indicates that the instability of solution (\ref{soluz})\nmay originate only from sufficiently small values of the wave--number\n$k$.\n\n\section{}\n\label{B}\nAs shown in Section III the solution $\bar v^{\alpha}_T$ \nof the hydrodynamic operator in the action functional (\ref{act}) \nis defined for $t>0$. Accordingly, it breaks Galilean invariance,\nthus giving rise to the well-known Doppler effect, i.e.\n$k_0\rightarrow k_0 + {1\over 2}\vec{k}\cdot\vec{U}$.\\\n\nMoreover, since in Section IV we evaluate the action functional\nby applying a saddle--point expansion around $\bar v^{\alpha}_T$,\nthe approximated expression (\ref{FF}) contains a time integral\nthat has to be restricted to $t>0$ only. This amounts to assuming that the\naction is identically zero for $t<0$. Accordingly, one cannot\nexclude the possibility that a singularity in the time integral\nmay originate at $t=0$.\n\nIn this appendix we want to show that one can easily exclude the \npresence of any singularity by passing to a Fourier--transformed \nrepresentation of the action functional (\ref{FF}): according to\na standard field-theoretic technique, the addition of a small imaginary\npart to the frequency appearing in the Fourier--transformed integral\nallows one to control its regular behavior for $t \to 0^+$.\n\nFor the sake of clarity, we present this procedure only for two of the\nterms appearing in (\ref{FF}). 
Actually, one can easily realize that\nthe procedure can be extended to all the terms: we just report the\nfinal result, thus avoiding lengthy formulae.\n\nLet us consider the term\n\be\nI_1 \n= \int_0^{\infty} dt\int d^3x\int d^3y {\partial\over \partial t}\nu^{\alpha}_T(t,\vec{x})F^{-1\alpha\beta}(|\vec{x}-\vec{y}|)\n{\partial\over \partial t}u^{\beta}_T(t,\vec{y})\n\label{GG1}\n\end{equation}\nIn principle, the integral in the time domain is ill--defined. We can\npass to Fourier--transformed variables and rewrite it as follows:\n\n\be\nI_1 \n=-\int_{-\infty}^{+\infty}{dk_0\over\n2\pi}\int_{-\infty}^{+\infty}{dq_0\over 2\pi}\n\int {d^3k\over (2\pi)^3}\n\int_0^{+\infty}dt\ e^{i(k_0+q_0)t}\tilde{u}^{\alpha}_T(k_0, \vec{k})\n{k_0q_0\over F^{\alpha\beta}(k)}\tilde{u}^{\beta}_T(q_0, -\vec{k}).\n\label{GG}\n\end{equation}\nThe time integral can be regularized by adding a small imaginary\npart $i\epsilon$ to the frequency component and the integral\n$I_1$ is transformed into \n\ba\nI_1^{\prime} &&=-\n\int {d^3k\over (2\pi)^3}\n\int_{-\infty}^{+\infty}{dk_0\over\n2\pi}\int_{-\infty}^{+\infty}{dq_0\over 2\pi}\n\int_0^{+\infty}dt\ e^{i(k_0+q_0 + i\epsilon)t}\tilde{u}^{\alpha}_T(k_0,\n\vec{k}) k_0q_0\tilde{u}^{\beta}_T(q_0, -\vec{k})\nonumber\\\n&&= \n\int {d^3k\over (2\pi)^3}\n\int_{-\infty}^{+\infty}{dk_0\over\n2\pi}\int_{-\infty}^{+\infty}{dq_0\over 2\pi}{k_0q_0\over i(k_0+q_0 +\ni\epsilon)}\tilde{u}^{\alpha}_T(k_0,\vec{k})\tilde{u}^{\beta}_T(q_0,\n-\vec{k})\nonumber\\\n&&= \n\int {d^3k\over (2\pi)^3}\n\int_{-\infty}^{+\infty}{dk_0\over 2\pi}\n\int_{-\infty}^{+\infty}{dq_0\over 2\pi}{k_0(q_0-k_0)\over i(q_0 +\ni\epsilon)}\tilde{u}^{\alpha}_T(k_0,\vec{k})\n\tilde{u}^{\beta}_T(q_0-k_0,-\vec{k}) \ .\n\end{eqnarray}\nBy performing the limit $\epsilon \to 0^+$ one obtains\n\ba\nI_1^{\prime} &&= -\n\int {d^3k\over 
(2\\pi)^3}\ni\\int_{-\\infty}^{+\\infty}{dk_0\\over 2\\pi}\\left[P\n\\int_{-\\infty}^{+\\infty}{dq_0\\over 2\\pi}{1\\over\nq_0}k_0(q_0-k_0)\\tilde{u}^{\\alpha}_T(k_0,\\vec{k})\n\\tilde{u}^{\\beta}_T(q_0-k_0,-\\vec{k})\\right.\\nonumber\\\\\n&&\\left.-i\\pi\\int_{-\\infty}^{+\\infty}{dq_0\\over\n2\\pi}\\delta(q_0)k_0(q_0-k_0)\\tilde{u}^{\\alpha}_T(k_0,\\vec{k})\n\\tilde{u}^{\\beta}_T(q_0-k_0,-\\vec{k})\\right]\\nonumber\\\\ \n&&= -i\\int {d^3k\\over (2\\pi)^3}\n\\int_{-\\infty}^{+\\infty}{dk_0\\over 2\\pi}\\left[{1\\over 2\\pi}\nP\\int_{-\\infty}^{+\\infty}dq_0{k_0(q_0-k_0)\\over q_0}\n\\tilde{u}^{\\alpha}_T(k_0,\\vec{k})\n\\tilde{u}^{\\beta}_T(q_0-k_0,-\\vec{k})\\right.\\nonumber\\\\\n&&\\quad \\left.+ {i\\over 2} k_0^2\\tilde{u}^{\\alpha}_T(k_0,\\vec{k})\n\\tilde{u}^{\\beta}_T(-k_0,-\\vec{k})\\right]\n\\end{eqnarray}\nIn this equation $P$ denotes the principal value. The nontrivial part to be \ncomputed is contained in the square brackets. One has to consider that\nthe fluctuations \n$u^{\\alpha}_T(t, \\vec{x})$ become negligible for scales smaller than \nthe Kolmogorov scale. Since they are defined for\n$t>0$ and the time integral is singular in $t=0$, we have that \nits Fourier--transformed representation should exhibit a unique singularity \nat infinity, where it vanishes for $Im\\ q_0<0$. 
One can write:\n\\ba\n&&{1\\over 2\\pi}\nP\\int_{-\\infty}^{+\\infty}dq_0{k_0(q_0-k_0)\\over q_0}\n\\tilde{u}^{\\alpha}_T(k_0,\\vec{k})\n\\tilde{u}^{\\beta}_T(q_0-k_0,-\\vec{k})\\nonumber\\\\\n&&= - {k_0^2\\tilde{u}^{\\alpha}_T(k_0,\\vec{k})\\over 2\\pi}\nP\\int_{-\\infty}^{+\\infty}dq_0\n{\\tilde{u}^{\\beta}_T(q_0-k_0,-\\vec{k})\\over q_0}\\nonumber\\\\\n&&= {i\\over 2}k_0^2\\tilde{u}^{\\alpha}_T(k_0,\\vec{k})\n\\tilde{u}^{\\beta}_T(-k_0,-\\vec{k})\\ .\n\\end{eqnarray}\nMaking use of this result, one can easily conclude that (\\ref{GG}) can\nbe written as follows:\n\\ba\nI_1 = \\int {dk_0d^3k\\over (2\\pi)^4}k_0^2\\tilde{u}^{\\alpha}_T(k_0,\\vec{k})\nF^{-1\\alpha\\beta}(k)\\tilde{u}^{\\beta}_T(-k_0,-\\vec{k})\\ .\n\\label{final}\n\\end{eqnarray}\nNow, let us consider one of the terms of (\\ref{FF}) which exhibits the\nDoppler effect in its Fourier--transformed representation: \n\\ba\nI_2 &&= \\int_0^{\\infty} dt\\int d^3x\\int d^3y {\\partial\\over \\partial t}\nu^{\\alpha}_T(t,\\vec{x})F^{-1\\alpha\\beta}(|\\vec{x}-\\vec{y}|)\n\\bar{v}^{\\lambda}_T(t,\\vec{y})\n\\partial_{\\lambda}u^{\\beta}_T(t,\\vec{y})\\nonumber\\\\\n&&= {U^{\\lambda}\\over 2}\\int {d^3k\\over (2\\pi)^3}\n\\int_{-\\infty}^{+\\infty}{dk_0\\over 2\\pi}\n\\int_{-\\infty}^{+\\infty}{dq_0\\over 2\\pi}\\left\\{\\int_0^{+\\infty}dt\\\ne^{i(k_0+q_0 +\ni\\epsilon)t}\\tilde{u}^{\\alpha}_T(k_0,\\vec{k}){k_0k_{\\lambda}\\over\nF^{\\alpha\\beta}(k)}\\tilde{u}^{\\beta}_T(q_0,-\\vec{k})\\right.\\nonumber\\\\\n&&+ \\int_0^{+\\infty}dt\\ e^{i(k_0+q_0 + i{U^2\\over 4\\nu{\\cal R}^2})t}\n{i\\over 2}\\tilde{u}^{\\alpha}_T(k_0,\\vec{k}){k_0\\over\nF^{\\alpha\\beta}(k)}\\left[-\\left(k_{\\lambda}+\nC{(\\vec{b}\\wedge\\vec{U})_{\\lambda}\\over 4\\nu {\\cal R}}\\right)\n\\tilde{u}^{\\beta}_T\\left(q_0, -\\vec{k}-C{\\vec{b}\\wedge\\vec{U}\\over\n4\\nu {\\cal R}}\\right)\\right.\\nonumber\\\\\n&&\\left.\\left.+ \\left(k_{\\lambda}-\nC{(\\vec{b}\\wedge\\vec{U})_{\\lambda}\\over 4\\nu {\\cal 
R}}\\right)\n\\tilde{u}^{\\beta}_T\\left(q_0, -\\vec{k}+C{\\vec{b}\\wedge\\vec{U}\\over\n4\\nu {\\cal R}}\\right)\\right]\\right\\}\\nonumber\\\\\n&&= {1\\over 2}\\int {d^3k\\over (2\\pi)^3}\n\\int_{-\\infty}^{+\\infty}{dk_0\\over 2\\pi}\n\\int_{-\\infty}^{+\\infty}{dq_0\\over 2\\pi}\\left\\{{i\\over k_0+q_0 +\ni\\epsilon}\\tilde{u}^{\\alpha}_T(k_0,\\vec{k}){k_0(\\vec{k}\\cdot\\vec{U})\\over\nF^{\\alpha\\beta}(k)}\\tilde{u}^{\\beta}_T(q_0,-\\vec{k})\\right.\\nonumber\\\\\n&&\\left.-{1\\over (k_0+q_0 + i{U^2\\over 4\\nu{\\cal R}^2})}\n\\tilde{u}^{\\alpha}_T(k_0,\\vec{k}){k_0(\\vec{k}\\cdot\\vec{U})\\over\nF^{\\alpha\\beta}(k)}C{(\\vec{b}\\wedge\\vec{U})_{\\lambda}\\over 4\\nu {\\cal R}}\n{\\partial\\over \\partial k_{\\lambda}}\\tilde{u}^{\\beta}_T(q_0,-\\vec{k})\n+O\\left({1\\over {\\cal R}^2}\\right)\\right\\}\\ .\n\\end{eqnarray}\nWe expand the \nsolution $ \\bar{v}^{\\lambda}_T$ up to first order in\npowers of ${\\cal R}^{-1}$ and we obtain the final expression:\n\\ba\nI_2 = {1\\over 2}\\int {dk_0d^3k\\over (2\\pi)^4}\n\\tilde{u}^{\\alpha}_T(k_0,\\vec{k})\n{k_0(\\vec{k}\\cdot\\vec{U})\\over F^{\\alpha\\beta}(k)}\\left\\{\n\\tilde{u}^{\\beta}_T(-k_0,-\\vec{k})\n+iC{(\\vec{b}\\wedge\\vec{U})_{\\lambda}\\over 8\\nu {\\cal R}}{\\partial\\over \n\\partial k_{\\lambda}}\\tilde{u}^{\\beta}_T(-k_0,-\\vec{k})\n+ O\\left({1\\over {\\cal R}^2}\\right)\\right\\}\\ .\\nonumber \n\\end{eqnarray}\nAs in the previous case, one can regularize the integral in $t=0$\nby performing the limit $\\epsilon \\to 0^+$.\nBy applying this procedure to all of the remaining terms in (\\ref{FF})\none arrives at the final expression (\\ref{act1}).\n\n\\section{}\n\\label{C}\n\nIn this Appendix we sketch the calculation of the eigenvalues of the\nmatrix $M^{\\beta}_{\\zeta}(\\hat{p})$ defined in (\\ref{matr1}).\nIn fact, the perturbative expansion of\nthe solution (\\ref{soluz}) in powers of$1\\over \\cal R$\ninduces an analogous expansion for this matrix. 
Formally, one can\nwrite\n\\be\nM = M_{(0)} + M_{(1)} + ...\n\\label{Mexp}\n\\end{equation}\nwhere\n\\ba\nM^{\\alpha}_{(0)\\ \\beta} &&= \\delta^{\\alpha}_{\\ \\beta}\\left[i\\left(p_0\n+ {1\\over 2}\\vec{p}\\cdot \\vec{U}\\right)\n+\\nu p^2 \\right]\\ ,\\nonumber\\\\\nM^{\\alpha}_{(1)\\ \\beta} &&= -\\delta^{\\alpha}_{\\ \\beta}{C\\over \n4}\\vec{p}\\cdot\\vec{U}\n{\\left(\\vec{b}\\wedge\\vec{U}\\right)^{\\gamma}\\over 4\\nu{\\cal \nR}}\\partial_{p_{\\gamma}}\n-{C\\over 4}U^{\\alpha}{\\left(\\vec{b}\\wedge\\vec{U}\\right)_{\\beta}\\over \n4\\nu{\\cal R}}\\ .\n\\end{eqnarray}\nThe matrix $M^{\\beta}_{\\zeta}(\\hat{p})$\nacts on the two-dimensional space of the transverse functions and on\nthe one--dimensional space of the longitudinal functions.\nOnly the transverse degrees of freedom are physically\nrelevant.\\\\\n\nA complete orthonormal basis in $R^3$ is given by the vectors\n\\ba\n\\Pi_1^{\\alpha} &&= {\\left(\\vec{b}\\wedge\\vec{p}\\right)^{\\alpha}\\over \n\\sqrt{f(p)}}\\ ,\n\\nonumber\\\\\n\\Pi_2^{\\alpha} &&= {g(p)\\left(\\vec{b}\\wedge\\vec{p}\\right)^{\\alpha}\n- f(p)\\left(\\vec{U}\\wedge\\vec{p}\\right)^{\\alpha}\\over\n\\sqrt{f(p)}\\sqrt{f(p)h(p)-g^2(p)}},\\nonumber\\\\\n\\Pi_3^{\\alpha} &&= {p^{\\alpha}\\over p}\\ ,\n\\end{eqnarray}\nwhere \n\\ba\nf(p) = b^2p^2-(\\vec{b}\\cdot\\vec{p})^2, \\quad g(p) = (\\vec{b}\\cdot\\vec{U})p^2\n- (\\vec{b}\\cdot\\vec{p})(\\vec{U}\\cdot\\vec{p}), \\quad h(p) =\nU^2p^2-(\\vec{U}\\cdot\\vec{p})^2\\ .\n\\end{eqnarray}\n$\\Pi_1^{\\alpha}$ and $\\Pi_2^{\\alpha}$ span the transverse\nsubspace, while\n$\\Pi_3^{\\alpha}$ spans the longitudinal one. 
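As a quick consistency check, the orthonormality of the basis $\{\Pi_1,\Pi_2,\Pi_3\}$ can be verified numerically for generic vectors $\vec{b}$, $\vec{U}$, $\vec{p}$; a minimal sketch (random vectors, no special geometry assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
b, U, p = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# the scalar combinations f, g, h defined in the text
f = np.dot(b, b) * np.dot(p, p) - np.dot(b, p)**2
g = np.dot(b, U) * np.dot(p, p) - np.dot(b, p) * np.dot(U, p)
h = np.dot(U, U) * np.dot(p, p) - np.dot(U, p)**2

Pi1 = np.cross(b, p) / np.sqrt(f)
Pi2 = (g * np.cross(b, p) - f * np.cross(U, p)) / (np.sqrt(f) * np.sqrt(f*h - g**2))
Pi3 = p / np.linalg.norm(p)

# Gram matrix of the three vectors should be the identity
G = np.array([[np.dot(x, y) for y in (Pi1, Pi2, Pi3)] for x in (Pi1, Pi2, Pi3)])
print(np.allclose(G, np.eye(3)))   # True: the basis is orthonormal
```

The key identity used implicitly in the text is $(\vec{b}\wedge\vec{p})\cdot(\vec{U}\wedge\vec{p})=g(p)$, which makes $\Pi_1\cdot\Pi_2$ vanish and fixes the normalization $f(fh-g^2)$ of $\Pi_2$.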
\nIn analogy with (\\ref{Mexp}), also the eigenvalues of \n$M^{\\beta}_{\\zeta}(\\hat{p})$ can be represented by a perturbative expansion\nin powers of$1\\over \\cal R$, namely\nas\n\\be\n\\lambda^a = \\lambda^a_{(0)} + \\lambda^a_{(1)} + ...\\quad where\\quad a=1, \n2, 3\\ .\n\\end{equation}\nThe zero-order eigenvalues $\\lambda^a_{(0)}$ are degenerate and \nhave the form\n\\be\n\\lambda^a_{(0)} =\\left(i\\left(p_0 + {1\\over 2}\\vec{p}\\cdot \n\\vec{U}\\right) +\\nu\np^2\\right)\\ .\n\\end{equation}\nThe evaluation of the first order corrections $\\lambda^a_{(1)}$ requires the\ndiagonalization of the matrix with elements $M_{(1)ij}=\\left(\\Pi_i,\nM_{(1)}\\Pi_j\\right)$, ($i,j=1, 2, 3$). After some simple but lengthy\ncalculations one finds\n\\ba\n\\lambda^1_{(1)} &&= {1\\over 2}\\left(M_{(1)11}+ M_{(1)22}\n- \\sqrt{\\left(M_{(1)11}+ M_{(1)22}\\right)^2 + 4M_{(1)21}M_{(1)12}}\\right)\\\n,\\nonumber\\\\\n\\lambda^2_{(1)} &&= {1\\over 2}\\left(M_{(1)11}+ M_{(1)22}\n+ \\sqrt{\\left(M_{(1)11}+ M_{(1)22}\\right)^2 + 4M_{(1)21}M_{(1)12}}\\right)\\\n,\\nonumber\\\\\n\\lambda^3_{(1)} &&= M_{(1)33}\\ ,\n\\end{eqnarray}\nwith\n\\ba\nM_{(1)11} &&={C\\over 16\\nu{\\cal R}}{\\left(\\vec{b}\\wedge\\vec{U}\\right)\n\\cdot\\vec{p}\\over f(p)}w(p)\\ ,\\nonumber\\\\\nM_{(1)22} &&= - {C\\over 16\\nu{\\cal \nR}}{\\left(\\left(\\vec{b}\\wedge\\vec{U}\\right)\n\\cdot\\vec{p}\\right)\\left(\\vec{b}\\cdot\\vec{p}\\right)g(p)\\over\nf(p)\\left(f(p)h(p)-g^2(p)\\right)}\\left[\\left(\\vec{p}\\cdot\\vec{U}\\right)w(p)\n+ (\\vec{b}\\cdot\\vec{U})g(p) -U^2f(p)\\right],\\nonumber\\\\\nM_{(1)12} &&=- {C\\over 16\\nu{\\cal\nR}}{\\left(\\left(\\vec{b}\\wedge\\vec{U}\\right)\\cdot\\vec{p}\\right)\n\\left(\\vec{b}\\cdot\\vec{p}\\right)\\over\nf(p)\\sqrt{f(p)h(p)-g^2(p)}}\\left[\\left(\\vec{b}\\cdot\\vec{U}\\right)g(p)\n+2\\left(\\vec{p}\\cdot\\vec{U}\\right)w(p)\n-U^2f(p)\\right],\\nonumber\\\\\nM_{(1)21} &&= -{C\\over 16\\nu{\\cal 
\nR}}{\\left(\\left(\\vec{b}\\wedge\\vec{U}\\right)\n\\cdot\\vec{p}\\right)\\over f(p)\\sqrt{f(p)h(p)-g^2(p)}}\n\\left(\\vec{b}\\cdot\\vec{U}\\right)\\left[\\left(\\vec{b}\\cdot\\vec{p}\\right)g(p)\n- \\left(\\vec{p}\\cdot\\vec{U}\\right)f(p)\\right],\\nonumber\\\\\nM_{(1)33} &&= -{C\\over 16\\nu{\\cal R}}{\\left(\\vec{p}\\cdot\\vec{U}\\right)\\over\np^2}\\left(\\left(\\vec{b}\\wedge\\vec{U}\\right)\\cdot\\vec{p}\\right)\\ ,\n\\end{eqnarray}\nwhere we have introduced the further definition:\n\\be\nw(p) = b^2(\\vec{p}\\cdot\\vec{U}) - \n(\\vec{b}\\cdot\\vec{p})(\\vec{b}\\cdot\\vec{U})\\ .\n\\end{equation}\nWithout prejudice of generality, we can \nspecify the geometrical structure of the flow. For the\nsake of simplicity, we assume that the vector $\\vec{r}$ (i.e. the \nFourier--conjugated variable of $\\vec{p}$) corresponds to the polar \naxis and that the vector\n$\\vec{b}$ is orthogonal to both\n$\\vec{r}$ and $\\vec{U}$. With this assumption the two physically \nrelevant first-order corrections to the eigenvalues are\n\\ba\n\\lambda^1_{(1)} &&= 0\\ ,\\nonumber\\\\\n\\lambda^2_{(1)} &&= {U^2\\over 16\\nu {\\cal R}}\\left\\{\n\\sin\\theta_U\\cos\\theta_U\\left[\\cos^2\\phi_U + \n\\cos\\left(2(\\phi_U-\\phi)\\right)\\right]\n\\sin^2\\theta\n\\right.\\nonumber\\\\\n&&\\left.+\\cos^2\\theta_U\\sin 2\\theta\\cos(\\phi_U -\\phi)\\right\\}\\ .\n\\label{B1}\n\\end{eqnarray}\nSince $\\lambda_{(i)}^{3}$ is associated to the longitudinal\npart, it does not play any role in our calculations.\n\n\\section{}\n\\label{D}\nIn this appendix we aim at reporting the main calculations\nneeded for obtaining an explicit expression for (\\ref{S21}).\nAccording to the perturbative approach discussed in detail\nin Appendix \\ref{C}, $S_2(r)$ can be written as follows:\n\\be\nS_2(r)\\sim -2\\int {dp_0d^3p\\over (2\\pi)^4}\\left(e^{i\\vec{p}\\cdot\\vec{r}}\n- 1\\right)\\sum_{\\alpha=1}^2{F(p)\\over \\left(p_0+{1\\over \n2}\\vec{p}\\cdot\\vec{U}\\right)^2\n+ \\left(\\nu p^2 
+\n\\lambda^{\\alpha}_{(1)}\n(\\vec{p},\\vec{U}, \\vec{b}) \n\\right)^2}\\ .\n\\end{equation}\nThe eigenvalues $ \\lambda^{\\alpha}_{(1)} $ which appear in this\nequation have been computed up to first order of the perturbative\nexpansion in ${\\cal R}^{-1}$. Notice that the sum is restricted\nto the first two eigenvalues ($\\alpha = 1,2$), which correspond to the\ntransverse components of the velocity field. Actually, the third\neigenvalue, corresponding to the longitudinal components of the velocity\nfield, is ineffective for our calculations.\n\nExplicit integration over $p_0$ yields\n\\be\n{S}_2(r) \\sim -\\int {d^3p\\over (2\\pi)^3}{e^{i\\vec{p}\\cdot\\vec{r}} - 1\\over\n\\nu}\\sum_{\\alpha=1}^2 {F(p)\\over p^2 +{1\\over \n\\nu}\\lambda^{\\alpha}_{(1)}(\\vec{p},\\vec{U}, \\vec{b})+\n...}\\\n\\label{S22}\n\\end{equation}\nWith the particular choice performed in Appendix \\ref{C} for the\ngeometrical structure of the flow, $S_2(r)$ can be expressed \nas the sum of two terms: the first one is associated with the null eigenvalue \n$\\lambda_{(1)}^1$, while the second one depends on the nonzero eigenvalue\n$\\lambda_{(1)}^2$. Namely,\n\\begin{equation}\nS_2(r) =-{1\\over \\nu}\\left(I_1(r) +I_2(r)\\right) \n\\end{equation}\nBy considering the explicit expressions of the statistical function $F(p)$ and \nof the eigenvalues $\\lambda^{\\alpha}_{(1)}$ (see eq.(\\ref{B1})~), one has\n\\ba\n\\label{C2}\nI_1(r) &&= D_0L^3\\int{d^3p\\over (2\\pi)^3}\\left(e^{i\\vec{p}\\cdot\\vec{r}}\n- 1\\right) {(Lp)^se^{-(Lp)^2}\\over p^2}\\ , \\\\\nI_2(r) &&= D_0L^3\\int{d^3p\\over (2\\pi)^3}\n{\\left(e^{i\\vec{p}\\cdot\\vec{r}}\n- 1\\right)(Lp)^se^{-(Lp)^2}\\over p^2+ {U^2\\over 16\\nu^2 {\\cal R}}\n\\left[2\\sin\\theta_U\\cos\\theta_U\\sin^2\\theta\\cos^2\\phi\n+\\cos^2\\theta_U\\sin 2\\theta\\cos \\phi\\right]}\\ .\\nonumber\\\\\n\\label{C3}\n\\end{eqnarray}\nIn the r.h.s. 
of this equation we have also exploited \ntranslational invariance to apply the transformation\n$(\phi_U-\phi)\rightarrow -\phi$. \nThe analytic calculation of (\ref{C2}) is obtained by a standard\nprocedure:\n\ba\nI_1(r) &&= D_0L^3\int{d^3p\over (2\pi)^3}\left(e^{i\vec{p}\cdot\vec{r}}\n- 1\right) {(Lp)^se^{-(Lp)^2}\over p^2}\nonumber\\\n&&= {D_0L^2\over 2\pi^2}\sum_{n=1}^{\infty}{(-1)^n\over\n(2n)!(2n+1)}\left({r\over L}\right)^{2n}\int_0^{\infty} d\zeta\\n\zeta^{s+2n}e^{-\zeta^2}\nonumber\\\n&&= {D_0\over (2\pi)^2}r^2\sum_{n=0}^{\infty}(-1)^{n+1}\n{\Gamma\left({s+3+2n\over 2}\right)\over\n\Gamma\left(2n+4\right)}\left({r\over L}\right)^{2n}\ .\n\end{eqnarray}\nConcerning $I_2(r)$, we first perform the integration over\nthe variable $\phi$, namely:\n\be\nI_2(r) = D_0L^3\int_0^{\infty}{p^2dp\over (2\pi)^3}(Lp)^se^{-(Lp)^2}\n\int_{-1}^{+1} d(\cos\theta)\left(e^{ipr\cos\theta} - 1\right)I_0\n\label{C5}\n\end{equation}\nwhere\n\ba\nI_0 &&=\int_0^{2\pi}{d\phi\over p^2+ {U^2\over 16\nu^2 {\cal R}}\n\left[2\sin\theta_U\cos\theta_U\sin^2\theta\cos^2\phi\n+\cos^2\theta_U\sin 2\theta\cos \phi\right]}\nonumber\\\n&&=-i{32\nu^2{\cal R}\over U^2}\int_{\gamma}{zdz\over \naz^4+bz^3+cz^2+bz+a}\ ,\n\label{C6}\n\end{eqnarray}\nwith $z=e^{i\phi}$ and the integration is on the unit circle\n$\gamma$. The coefficients $a, b, c$ are given by\n\ba\n&&a = \sin\theta_U\cos\theta_U\sin^2\theta\ ,\quad\nb=2\cos^2\theta_U\sin\theta\cos\theta\nonumber\\\n&&c = {32\nu^2{\cal R}\over U^2}p^2 +\n2\sin\theta_U\cos\theta_U\sin^2\theta\ .\n\label{C7}\n\end{eqnarray}\nThe evaluation of the integral (\ref{C6}) requires the knowledge of\nthe roots of a fourth--order\nalgebraic equation. 
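Before solving for these roots explicitly, the equality of the two lines of (\ref{C6}) can be checked numerically: for hypothetical values of $\theta$, $\theta_U$, $p$ and of the combination $M=32\nu^2{\cal R}/U^2$ (so that $U^2/16\nu^2{\cal R}=2/M$), direct quadrature over $\phi$ must agree with the residue evaluation at the roots of the quartic lying inside the unit circle. A minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical values: theta, theta_U are the polar angles of Appendix C,
# M stands for the combination 32 nu^2 R / U^2.
theta, theta_U, p, M = 0.7, 0.4, 1.3, 5.0

a = np.sin(theta_U) * np.cos(theta_U) * np.sin(theta)**2
b = 2.0 * np.cos(theta_U)**2 * np.sin(theta) * np.cos(theta)
c = M * p**2 + 2.0 * a

# first line of (C6): direct quadrature over phi
I_direct, _ = quad(
    lambda phi: 1.0 / (p**2 + (2.0/M) * (2.0*a*np.cos(phi)**2 + b*np.cos(phi))),
    0.0, 2.0 * np.pi)

# second line of (C6): -i M times the contour integral, i.e. 2 pi M times the
# sum of residues of z/Q(z) at the roots of the palindromic quartic inside |z|=1
Q = np.array([a, b, c, b, a])
roots = np.roots(Q)
inside = roots[np.abs(roots) < 1.0]
I_res = (2.0 * np.pi * M * np.sum(inside / np.polyval(np.polyder(Q), inside))).real

print(abs(I_direct - I_res) < 1e-6)   # True
```

Because the quartic is palindromic, its roots come in pairs $(z,1/z)$, so exactly two of them fall inside $\gamma$, in agreement with the statement below.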
By exploiting the Euler method \\cite{EUL} we end up\nwith the expression\n\\ba\nz_i&&=z_i\\left(x, {8\\nu^2{\\cal R}\\over U^2}p^2; \\Sigma, \n\\Xi\\right)\\nonumber\\\\\n&&={x^{1\\over 3}\\over \\left(1-x^2\\right)^{1\\over 6}}\\left[\\sum_{l=0}^2s_{il}\nF_l\\left(x, {8\\nu^2{\\cal R}\\over U^2}p^2; \\Sigma, \\Xi\\right)\n+ {1\\over 2}\\Sigma{x^{2\\over 3}\\over \\left(1-x^2\\right)^{1\\over\n3}}\\right]\n\\quad \\quad i=1, 2, 3, 4\\ .\\nonumber\n\\end{eqnarray}\nThe following definition has been adopted:\n\\ba\nF_l&&=F_l\\left(x, {8\\nu^2{\\cal R}\\over U^2}p^2; \\Sigma, \n\\Xi\\right)\\nonumber\\\\\n&&=\n\\left\\{{\\Sigma^{2\\over 3}\\over 12}\\left[ {81\\over 4}\\Sigma^4 {x^4\\over\n\\left(1-x^2\\right)^2} + {81\\over 2}\\Sigma^2{x^2\\over \\left(1-x^2\\right)}\n - 90\\right.\\right.\\nonumber\\\\\n&&-{64\\over \\Sigma^2}{1-x^2\\over x^2} + {8\\nu^2{\\cal R}\\over U^2}p^2\n\\left(189{\\Sigma^2\\over \\Xi}{x^2\\over \\left(1-x^2\\right)^2} + {382\\over\n\\Xi\\left(1-x^2\\right)} -120{\\Sigma^2\\over \\Xi\\ x^2}\\right)\\nonumber\\\\\n&&\\left.+\\left({8\\nu^2{\\cal R}\\over U^2}p^2\\right)^2\n\\left({504\\over \\Xi^2\\left(1-x^2\\right)^2} + {47\\ \\Sigma^2\\over \\Xi^2\\\nx^2\\left(1-x^2\\right)}\\right) +\n\\left({8\\nu^2{\\cal R}\\over U^2}p^2\\right)^3{32\\ \\Sigma^2\\over \\Xi^3\\ x^2\n\\left(1-x^2\\right)^2}\\right]^{1\\over 3}\\nonumber\\\\ \n&&\\times\\left(\\epsilon^l\\left[1 + \\left(1 -4\\times 27\\ h\\right)^{1\\over\n2}\\right]^{1\\over 3}+\\epsilon^{l-3}\\left[1 - \\left(1 -4\\times 27\\ \nh\\right)^{1\\over\n2}\\right]^{1\\over 3}\\right) + {1\\over 2}{\\Sigma^{4\\over 3}x^{4\\over 3}\\over\n\\left(1-x^2\\right)^{4\\over 6}}\\nonumber\\\\\n&&\\left.+{1\\over 3}\n{\\left(1-x^2\\right)^{1\\over 3}\\over \\Sigma^{2\\over 3}x^{2\\over 3}} + \n{8\\nu^2{\\cal\nR}\\over U^2}p^2{2\\over 3\\Sigma^{2\\over 3}\\Xi\\ x^{2\\over \n3}\\left(1-x^2\\right)^{4\\over\n6}}\\right\\}^{1\\over 2}\\ 
,\n\\label{C9}\n\\end{eqnarray}\nwith\n\\ba\n\\label{C10}\n&&x=\\cos\\theta\\ ,\\quad\n\\Xi = \\sin\\theta_U\\cos\\theta_U\\ ,\\quad \\Sigma =\\cot\\theta_U\\ ,\\\\\n\\label{C11}\n&&s_{il}\\Leftrightarrow\n\\left(\\begin{array}{ccc}\n1 & 1 & 1\\\\\n1 & -1 & -1\\\\\n-1 & 1 & -1\\\\\n-1 & -1 & -1\n\\end{array}\n\\right)\\ .\n\\end{eqnarray}\nHere $\\epsilon$ is the cubic root of unit: $\\epsilon = {-1+ i \n\\sqrt{3}\\over 2}$.\nThe explicit expression of the function $h$ follows:\n\\ba\nh &&=\\left[16 +30\\\n\\Sigma^2{x^2\\over 1-x^2} + {111\\over 4}\\Sigma^4{x^4\\over(1-x^2)^2}\n+16{8\\nu^2{\\cal R}\\over U^2}{p^2\\over\n\\Xi}{17\\Sigma^2x^2+3(1-x^2)\\over (1-x^2)^2}\\right.\\nonumber\\\\\n&&\\left.+48\\left({8\\nu^2{\\cal R}\\over U^2}{p^2\\over \\Xi}\\right)^2{1\\over\n(1-x^2)^2}\\right]^3\\times\\left[\n-128 +81\\ \\Sigma^4{x^4\\over (1-x^2)^2} + {81\\over 2}\\Sigma^6{x^6\\over \n(1-x^2)^3}\n\\right.\\nonumber\\\\\n&&-180\\Sigma^2{x^2\\over 1-x^2}+{8\\nu^2{\\cal R}\\over U^2}{p^2\\over \\Xi}\n\\left(378{\\Sigma^4x^4\\over (1-x^2)^3}+764{\\Sigma^2x^2\\over\n(1-x^2)^2}-240{1\\over 1-x^2}\\right)\\nonumber\\\\\n&&\\left.+\\left({8\\nu^2{\\cal R}\\over U^2}{p^2\\over \\Xi}\\right)^2\n\\left(1008{\\Sigma^2x^2\\over (1-x^2)^3}+94{1\\over (1-x^2)^2}\\right)\n+64\\left({8\\nu^2{\\cal R}\\over U^2}{p^2\\over \\Xi}\\right)^3{1\\over\n(1-x^2)^3}\\right]^{-2}\\ .\n\\label{C12}\n\\end{eqnarray}\nOnly the roots $z_1$ and $z_2$ are included into the unit circle, \ntherefore\n(\\ref{C5}) becomes\n\\ba\nI_2(r) &&= D_0L^3{32\\nu^2{\\cal R}\\over U^2}\\int_0^{\\infty}\n{p^2dp\\over (2\\pi)^2}(Lp)^se^{-(Lp)^2}\\int_{-1}^1dx\\left(e^{iprx} -\n1\\right)\\nonumber\\\\\n&&\\times\\sum_{l=1,2}{\\left(1-x^2\\right)^{1\\over 3}\\over x^{2\\over 3}}\n{\\left[\\sum_{m=0}^2s_{lm}\nF_m\\left(x, {8\\nu^2{\\cal R}\\over U^2}p^2; \\Sigma, \\Xi\\right)\n+ {1\\over 2}\\Sigma{x^{2\\over 3}\\over \\left(1-x^2\\right)^{1\\over\n3}}\\right]\\over\n\\prod_{i\\not= 
l}\\left(\\sum_{k=0}^2\\left(s_{lk}-s_{ik}\\right)\nF_k\\left(x, {8\\nu^2{\\cal R}\\over U^2}p^2; \\Sigma, \\Xi\\right)\\right)}\\ .\n\\label{C13}\n\\end{eqnarray}\nAs we have already observed in Section V, only the values of the\nvariable $p$ around\n$\\bar{p}= {1\\over L}\\sqrt{s+2\\over 2}$ give a significant contribution\nto the integral in (\\ref{C13}). We observe that ${8\\nu^2{\\cal R}\\over\nU^2}\\bar{p}^2\\rightarrow{4(s+2)\\over {\\cal R}}$ and the\nstability condition (\\ref{cond1}) imposes:\n\\be\n1<{\\cal R}<4(s+2)\\ .\n\\label{C14}\n\\end{equation}\nThe evaluation of the leading terms is then possible by performing an\nexpansion in the parameter\n${U^2\\over 8\\nu^2{\\cal R}}p^{-2}\\rightarrow {{\\cal R}\\over\n8}\\zeta^{-2}$ that,\nby virtue of (\\ref{C14}), is smaller than unit if\n$\\zeta<\\sqrt{s+2\\over 2}$.\\\\\n For\n$\\zeta>\\sqrt{s+2\\over 2}$ the contribution\n to the integral rapidly vanishes. For\n$1<{\\cal R}\\ll 4(s+2)$ we obtain\n\\ba\n\\bar{S}_2(r) &&= -{1\\over \\nu}\\left(I_1(r)+I_2(r)\\right)\\nonumber\\\\\n&&\\sim -{D_0\\over\n(2\\pi)^2\\nu}r^2\\sum_{n=0}^{\\infty}\\left\\{(-1)^{n+1}\\Gamma\\left({s+2n+3\\over \n2}\\right)\n\\left[{1+\\Xi\\over \\Gamma\\left(2n + 4\\right)}\n-{2^{13\\over 3}\\Xi\\over \\Sigma^{2\\over 3}}{2n+4\\over\n\\Gamma (2n+6)}\\right]\\left({r\\over L}\\right)^{2n}\\right\\}\\ .\\nonumber\\\\\n\\end{eqnarray}\nBy extending the validity of our calculations to ${\\cal R}>4(s+2)$, we have\n${8\\nu^2{\\cal R}\\over U^2}p^2\\rightarrow{8\\over {\\cal R}}\\zeta^2<1$ for\n$\\zeta<\\sqrt{s+2\\over 2}$. 
As in the previous case,\nwe expand (\\ref{C13})\nin powers of the parameter ${8\\over {\\cal R}}\\zeta^2<1$ and we obtain:\n\\ba\nI_2(r) &&\\sim \\Xi D_0L^2\\int_0^{\\infty}{d\\zeta\\over \n(2\\pi)^2}\\zeta^se^{-\\zeta^2}\n\\int_{-1}^1dx\\left(e^{i\\zeta {r\\over L}x} - 1\\right)\n\\left\\{{1 +{8\\over {\\cal R}}\\zeta^2 +...\\over 2}\\right.\\nonumber\\\\\n&&+ {8\\over {\\cal R}\\Xi}\\sum_{l=1,2}{\\left(1-x^2\\right)^{1\\over 3}\\over \nx^{2\\over 3}}\n\\left({\\left[\\sum_{m=0}^2s_{lm}\nF_m\\left(x, 0; \\Sigma, \\Xi\\right) \\right]\\over\n\\prod_{i\\not= l}\\left(\\sum_{k=0}^2\\left(s_{lk}-s_{ik}\\right)F_k\n\\left(x, 0; \\Sigma, \\Xi\\right)\\right)}\\right.\\nonumber\\\\\n&&\\left.\\left.+ {8\\over {\\cal R}}\\zeta^2\\left.{\\partial\\over \\partial y}\n{\\left[\\sum_{m=0}^2s_{lm}\nF_m\\left(x, y; \\Sigma, \\Xi\\right) \\right]\\over\n\\prod_{i\\not= l}\\left(\\sum_{k=0}^2\\left(s_{lk}-s_{ik}\\right)F_k\n\\left(x, y; \\Sigma, \\Xi\\right)\\right)}\\right|_{y=0} + ...\\right)\\right\\}\\ .\n\\label{C16}\n\\end{eqnarray}\nTwo different terms, $I_2^A(r)+I_2^B(r)= I_2(r)$,\ncan be identified in (\\ref{C16}). 
The\nevaluation of the first term is straightforward:\n\\ba\nI_2^A(r) &&\\sim {\\Xi D_0L^2\\over 2}\\int_0^{\\infty}\n{d\\zeta\\over (2\\pi)^2}\\zeta^se^{-\\zeta^2}\\int_{-1}^1dx\\left(e^{i\\zeta \n{r\\over L}x} -\n1\\right)\n\\left(1 +{8\\over {\\cal R}}\\zeta^2 + ...\\right)\\nonumber\\\\\n&&= {\\Xi D_0\\over 2(2\\pi)^2}r^2\\sum_{n=0}^{\\infty}{(-1)^{n+1}\\over \n\\Gamma(2n+4)}\n\\left(\\Gamma\\left({s+2+2n\\over 2}\\right)+ {8\\over {\\cal R}}\n\\Gamma\\left({s+5+2n\\over 2}\\right)\\right)\\left({r\\over \nL}\\right)^{2n}\\nonumber\\\\\n&&\\times\\left(1 +{4(s+2)\\over {\\cal R}} + ...\\right)\\ .\n\\label{C17}\n\\end{eqnarray}\nThe evaluation of the second term is more cumbersome.\nThe leading term can be recast in the form:\n\\ba\nI_2^B(r) &&\\sim {8D_0L^2\\over {\\cal R}}\\int_0^{\\infty}{d\\zeta\\over \n(2\\pi)^2}\\zeta^s\ne^{-\\zeta^2}\\int_{-1}^1dx\\left(e^{i\\zeta {r\\over L}x} -\n1\\right){\\left(1-x^2\\right)^{1\\over 3}\\over x^{2\\over\n3}}\\sum_{n=0}^{\\infty}A_n(\\Sigma)x^{2n}\\ .\n\\label{B2}\n\\end{eqnarray}\nThe coefficients $A_i$ are $\\Sigma$-dependent numerical constants. The \nfirst two of them are given by the expressions\n\\ba\nA_0(\\Sigma) &&= {1\\over 16\\sqrt{3}\\left(1-\\sin{\\pi\\over 6}\\right)\n\\cos\\left({1\\over 3}\\tan^{-1}\\sqrt{26}\\right)}\\ ,\\nonumber\\\\\nA_1(\\Sigma) &&= -{65\\sin\\left({2\\over 3}\\tan^{-1}\\sqrt{26}\\right)\\over\n512\\sqrt{26}\\cos^2\\left({1\\over 3}\\tan^{-1}\\sqrt{26}\\right)}\\Sigma^2\\ ,...\n\\end{eqnarray} \nThe exact form of these coefficients is, however, irrelevant for our \nanalysis.\nSome tedious standard calculations yield:\n\\ba\nI^B_2(r) &&= {D_0{\\cal R}^{1\\over 3}\\over \\pi\\Gamma\\left({2\\over 3}\\right)}\n\\left({\\nu\\over U}\\right)^{4\\over 3}r^{2\\over 3}\n\\sum_{n=0}^{\\infty}C_n(\\Sigma)\\Gamma\\left({3s+3n+5\\over 6}\\right)\n\\left({r\\over L}\\right)^n\\ ,\n\\end{eqnarray}\nwhere the coefficients $C_n(\\Sigma)$ depend on the constants \n$A_i$. 
For $n=0$ one has\n\\be\nC_0(\\Sigma) = {54\\sqrt{3}-74\\over 27\\sqrt{3}}A_0 + {128\\over\n9\\sqrt{3}}A_1(\\Sigma)\\ .\n\\end{equation}\nThe comparison between $I_2^B(r)$ and $I_2^A(r)$ \nindicates that a crossover between the corresponding scaling behaviors\noccurs at\n\\be\nr\\sim \\left|2\\times 8.328 \\sqrt{\\pi}{0.0336 - 0.1127\\cot^2\\theta_U\\over 2+\n\\sin\\theta_U\\cos\\theta_U}\\right|^{3\\over 4} {\\cal R}^{-{3\\over 4}}L\\ .\n\\end{equation}\nFor the perturbative expansion in $1\\over {\\cal R}$ to be meaningful, \nthe parameter $\\theta_U$ must have a value close to\n$\\pi\\over 2$. This implies:\n$$\nr\\sim F{\\cal R}^{-{3\\over 4}}L, \\quad \\mbox{with}\\quad F\\sim 0.6\\ .\n$$\n\\end{appendix}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nThe distribution of host star obliquities $\\psi$ --- the angle between a planet's orbital angular momentum and its host star's spin angular momentum --- constrains hot Jupiters' origin. Hot Jupiters are thought to form at several AU \\citep{Raf}, reaching orbital periods $P$ of several days via: a) high eccentricity migration, in which a Jupiter's initially highly eccentric orbit shrinks and circularizes because of tidal dissipation in the planet, or b) disk migration (e.g., \\citealt{Gold}). Mechanisms for the former produce a broad $\\psi$ distribution (e.g., \\citealt{FT,Naoz11,Chatterjee}), whereas the latter preserves $\\psi=0$ (e.g., \\citealt{Bitsch}), unless the disk or star becomes misaligned \\citep{Tremaine,Batygin,Rogers12, Lai14}. Rossiter-McLaughlin measurements of $\\lambda$ \\citep{Ross,McL} (the sky projection of $\\psi$) could distinguish the mechanism of hot Jupiter migration \\citep{MJ,Naoz12}, but $\\psi$ is also sculpted by tidal dissipation in the star, through which the planet transfers angular momentum from its orbit to the star's spin. 
\n\nFrom early obliquity measurements, \\citet{FW} (FW09 hereafter) inferred two hot Jupiter populations: well-aligned and isotropic. Subsequent discoveries linked them to host stars' properties: hosts of misaligned Jupiters have effective temperature ${~T_{\\rm eff}}>6250$~K (\\citealt{W10}, W10 hereafter) and $M>1.2M_\\sun$ \\citep{KS}. Among hosts with $M>1.2M_\\sun$, those older than 2-2.5 Gyr are aligned, the age at which such stars develop a significant convective envelope \\citep{2013T}. W10 proposed that tidal dissipation is more efficient in cool stars with thick convective envelopes, allowing realignment. \\citet{A12} (A12 hereafter) confirmed the temperature break with a larger sample, constructed a tidal dissipation parameter, and demonstrated that misalignment is correlated with that parameter. To allow the realignment to occur on a timescale shorter than the planet's orbital decay, W10 suggested that the convective envelopes of cool stars may be sufficiently decoupled from the radiative interior to be realigned separately. Even without stronger dissipation, this decoupling would result in a much shorter timescale for the realignment of cool stars than hot stars.\n\nIn the W10 framework, only cool stars experience realignment of their convective envelopes. However, hot Jupiters may also influence the outer layers of hot stars. Hot stars have convective envelopes, which are thinner and therefore arguably easier to realign. For example, a hot Jupiter has synchronized the hot star $\\tau$ Bootis (${~T_{\\rm eff}}=6387$~K), which is achievable if the star has a thin convective envelope weakly coupled to the interior \\citep{Catala}. A second obliquity trend, the mass cut-off for retrograde planets (e.g., \\citealt{Hebrard}), may be further evidence that hot stars are not immune to their hot Jupiters' tidal influence. 
Furthermore, attempts to reproduce the observed trends have resulted in major inconsistencies, such as a missing population of $\\psi=180^\\circ$ planets or too many oblique cool stars (e.g., \\citealt{Lai12,Rogers13, Xue}), although individual systems are modeled successfully \\citep{VR,2012H} and $\\lambda$ correlates with the theoretical tidal realignment timescale (A12,\\citealt{2012H}).\n\\begin{figure*}[h]\n\\includegraphics[width=\\textwidth]{fig1.eps}\n\\caption{Left: Observed sky-projected spin-orbit alignment $(\\lambda)$ and rotation frequencies $\\Omega_\\star=v\\sin~i\/R_\\star$ (\\citealt{Wright}, A12). Right: Simulated population (\\S4). \\label{fig:obs}}\n\\end{figure*}\nHere I reconsider the cause of the observed trends. In \\S2, I summarize the observations, quantifying the temperature cut-off and linking it to the onset of magnetic braking, which I identify from the \\citet{Mc} sample of \\emph{Kepler\\ } rotation periods. In \\S3, I show that\\clearpage \\citet{Eggs} equilibrium tides combined with magnetic braking lead to different timescales for orbital decay, spin-orbit alignment, and retrograde flips, producing the observed trends. In \\S4, I demonstrate via Monte Carlo that the tidal evolution defined in \\S3 matches the observed trends and distributions. In \\S5, I summarize the main features of the framework presented and discuss implications for discerning hot Jupiter migration mechanisms.\n\n\\section{Observed spin-orbit alignment trends}\n\nI wish to explain: a) the host star effective temperature cut-off ${~T_{\\rm cut}}$ for misaligned planets (W10), and b) the mass cut-off $M_{p,\\rm retro}$ for retrograde planets \\citep{Hebrard}. Fig. \\ref{fig:obs} (left panels) displays $\\lambda$ for hot Jupiters\\footnote{\\label{note:hj} Defined as $M_p>0.5M_{\\rm Jup}$ and $P<7$ days. 
These criteria exclude three exceptions to the ${~T_{\\rm eff}}$ trend: low mass HAT-P-11-b \\citep{2010W} and long period WASP-8-b \\citep{2010Q} and HD-80606-b \\citep{2009M}. WASP-80-b, a $0.5~M_{\\rm Jup}$ planet orbiting a 4150~K star, would also be an exception, but its spin-orbit alignment remains under investigation \\citep{2013T}. I exclude measurements that A12 characterizes as poorly-constrained: CoRoT-1, CoRoT-11, CoRoT-19, HAT-P-16, WASP-2, WASP-23, XO-2. I only include stars with ${~T_{\\rm eff}}<6800$~K, which excludes KOI-13b.} and the host stars' projected stellar rotation frequency $\\Omega_\\star\\sin~i_\\star$ (computed from $v\\sin~i_\\star$, measured from spectral line broadening, and $R_\\star$) versus ${~T_{\\rm eff}}$. (I plot $\\Omega_\\star\\sin~i_\\star$ instead of $v \\sin i_\\star$ for comparison with the simulations in \\S3.) I note that, particularly below ${~T_{\\rm cut}}$, stars hosting more massive planets have larger rotation frequencies (Fig. 1, bottom left). Due to the scarcity of massive hot Jupiters, the two trends relating to planetary mass are less robust than that with stellar temperature. Above ${~T_{\\rm cut}}$, the chance occurrence that of the twelve significant obliquities, the three lowest correspond to the most massive planets is $\\frac{12!3!}{15!}=0.2\\%$; below ${~T_{\\rm cut}}$, the chance occurrence that of the nineteen $\\Omega_\\star~\\sin i_\\star$, the two highest correspond to the most massive planets is $\\frac{17!2!}{19!}=0.6\\%$. These formal estimates of significance do not account for the variety of other patterns we might have observed\\footnote{In the case of the mass stratification, I noticed the pattern in my simulations (\\S3) before I examined the collection of observed $\\Omega_\\star$.}, which lowers their true significance.\n\n\nAs A12 showed (Fig. 
20), because ${~T_{\\rm cut}}$ for $\\lambda$ is also a cut-off for $v\\sin~i$ (or, plotted here, $\\Omega_\\star\\sin~i$), it may correspond to the magnetic braking cut-off. This cut-off is equivalent to gyrochronological Kraft Break for stellar rotation period versus color \\citep{Kraft}. Braking is thought to be strongest for stars with thick convective envelopes \\citep{1962S}. Therefore A12 interpreted the correspondence of ${~T_{\\rm cut}}$ for $\\Omega_\\star$ and $\\psi$ as evidence that strong dissipation in the convective envelope plays a role in spin-orbit alignment. I will later argue that magnetic braking, rather than the tidal dissipation efficiency or the participation of the convective envelope versus entire star in tidal realignment, is the cause of ${~T_{\\rm cut}}$ for $\\psi$. \n\nI estimate the posterior distribution ${\\rm~prob}({~T_{\\rm cut}})$ using a model with $\\psi=0$ below ${~T_{\\rm cut}}$ and isotropic above (following FW09 except assigning membership to the aligned versus isotropic population based on ${~T_{\\rm cut}}$):\n\\begin{eqnarray}\n\\label{eqn:tcut}\n\\nonumber\\\\\n\\nonumber\\\\\n{\\rm~prob} ({~T_{\\rm cut}}) = \\prod_i^{N \\rm planets}({\\rm~prob}(\\lambda_i=0){\\rm~prob}({~T_{\\rm eff}}_i<{~T_{\\rm cut}})\\nonumber \\\\\n+\\int_0^\\pi~d\\lambda_i~d\\psi~{\\rm~prob}(\\lambda_i|\\psi) \\nonumber \\\\ {\\rm~prob}(\\psi){\\rm~prob}\\lambda_i{\\rm~prob}({~T_{\\rm eff}}_i>{~T_{\\rm cut}})) \\nonumber\\\\\n\\end{eqnarray}\n\\noindent for which ${\\rm~prob}(\\lambda_i|\\psi)$ is FW09 Eqn. 19, ${\\rm~prob}(\\psi)=\\frac{1}{2}\\sin\\psi$, and ${\\rm~prob}({~T_{\\rm eff}})$ and ${\\rm~prob}(\\lambda_i)$ are normal distributions defined by their reported uncertainties. I plot ${\\rm~prob}({~T_{\\rm cut}})$, peaking at 6090$^{+150}_{-110}~$K, and rotation rates measured by \\citet{Mc} for $\\sim10,000$ \\emph{Kepler\\ } targets (Fig. 2). 
The turnover in rotation rate is the Kraft Break; stars above this temperature remain rapidly rotating due to weaker magnetic braking. The Kraft Break matches the peak of ${\\rm~prob}({~T_{\\rm cut}})$.\n\n\n\\section{Origin of the trends}\n\nI hypothesize on the origin of the observed trends from the equations governing the planet's specific orbital angular momentum vector $\\vec{h}$ [length$^2$\/time] and the host star's spin angular frequency vector $\\vec{\\Omega_\\star}$[time$^{-1}$] (\\citealt{Eggs}, Eqn. 2--3). I assume the planet's orbit is circular, neglect terms that only cause orbital precession, and add a braking term (\\citealt{1981V}, Eqn. 4; employed by \\citealt{Barker} and W10). Equations \\ref{eqn:h}--\\ref{eqn:omegas} here correspond to \\citet{Barker}, Eqn. A7 and A12 with the eccentricity vector $\\vec{e}=0$.\n\n\\begin{equation}\n\\label{eqn:h}\n\\frac{\\dot{\\vec{h}}}{h}=-\\frac{1}{\\tau}\\frac{\\vec{h}}{h}+\\frac{1}{\\tau}\\frac{\\Omega_\\star}{2n}(\\frac{\\vec{\\Omega_\\star}\\cdot\\vec{h}}{\\Omega_\\star~h}\\cdot\\frac{\\vec{h}}{h}+\\frac{\\vec{\\Omega_\\star}}{\\Omega_\\star})\\nonumber\\\\\n\\end{equation}\n\\begin{equation}\n\\label{eqn:omegas}\n\\dot{\\vec{\\Omega_\\star}}=-\\frac{M_p}{I_{\\star,{\\rm~eff}}}\\dot{\\vec{h}}-{\\alpha_{\\rm~brake}}\\Omega_\\star^2\\vec{\\Omega_\\star}\n\\end{equation}\n\\noindent for which \n\\begin{eqnarray}\n\\label{eqn:tau}\n\\tau=\\frac{Q}{6k_L}\\frac{M_\\star}{R_\\star^5(M_\\star+M_p)^8G^7}\\frac{M_\\star}{M_p}h^{13} \\nonumber \\\\=\\tau_0\\left(\\frac{h}{h_0}\\right)^{13}\\frac{0.5M_{\\rm~Jup}}{M_p}\n\\end{eqnarray} is an orbital decay timescale, $k_L$ is the Love number, $Q$ is the tidal quality factor, $M_p$ is the planet mass, $I_{\\star,{\\rm~eff}}$ is the effective stellar moment of inertia participating in the tidal realignment, ${\\alpha_{\\rm~brake}}$ is a braking constant, and $h_0 = \\sqrt{a_0 G (M_\\star+M_p)}$ is the initial specific angular momentum. 
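A toy integration of Equations \ref{eqn:h}--\ref{eqn:omegas} makes the qualitative behavior concrete. This is an editorial sketch in arbitrary units with $G(M_\star+M_p)=1$ (so $n=h^{-3}$ for a circular orbit); the values of $\tau_0$, $\mu=M_p/I_{\star,{\rm eff}}$, $\alpha$, and the initial vectors are illustrative choices, not the constants tuned in the paper.

```python
import numpy as np

# Toy forward-Euler integration of the coupled h, Omega equations (Eqns. 2-4)
# in units with G(M_star + M_p) = 1. All parameter values are illustrative.
tau0, h0, mu, alpha = 1.0e5, 1.0, 1.0e3, 1.0e-2

def derivs(h_vec, O_vec):
    h, O = np.linalg.norm(h_vec), np.linalg.norm(O_vec)
    n = h ** -3                      # mean motion: n = (G M)^2 / h^3 with G M = 1
    tau = tau0 * (h / h0) ** 13      # orbital decay timescale, Eqn. 4
    h_hat, O_hat = h_vec / h, O_vec / O
    cospsi = np.dot(O_hat, h_hat)
    h_dot = (h / tau) * (-h_hat + (O / (2.0 * n)) * (cospsi * h_hat + O_hat))
    O_dot = -mu * h_dot - alpha * O ** 2 * O_vec
    return h_dot, O_dot

psi0 = np.radians(35.0)
h_vec = np.array([0.0, 0.0, h0])                         # orbit normal along z
O_vec = 0.3 * np.array([np.sin(psi0), 0.0, np.cos(psi0)])  # tilted stellar spin
dt, steps = 0.05, 40000
for _ in range(steps):
    h_dot, O_dot = derivs(h_vec, O_vec)
    h_vec, O_vec = h_vec + dt * h_dot, O_vec + dt * O_dot

cospsi = np.dot(h_vec, O_vec) / (np.linalg.norm(h_vec) * np.linalg.norm(O_vec))
psi_end = np.degrees(np.arccos(np.clip(cospsi, -1.0, 1.0)))
print(psi_end)  # the obliquity has damped to nearly zero while h barely decays
```

With these (hypothetical) numbers the star spins up toward synchronization and the obliquity damps on a timescale far shorter than the orbital decay, i.e., the prograde branch of the "realigned" behavior discussed below.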
In the simplified model here, $\\tau_0$ is a constant. The timescale $\\tau$ is related to \\citet{Eggs} Eqn. 7 by replacing the viscous timescale with $Q$ (\\citealt{Barker}, Eqn.~A10), altering the semi-major axis scaling from $a^8$ to $a^{13\/2}$.\n\\begin{figure}\n\\includegraphics{fig2.eps}\n\\caption{Black dots: Squared stellar rotation rates $\\Omega_\\star^2$ ($0.8> {\\tau_{\\rm~align}}$), $\\psi$ accelerates (very approximately) like:\n\\begin{equation}\n\\label{eqn:accel}\n\\frac{\\ddot{\\psi}}{\\dot{\\psi}}\\sim\\frac{1}{{\\tau_{\\rm~brake}}}-\\frac{\\cos\\psi}{{\\tau_{\\rm~align}}}\n\\end{equation}\n\\noindent for which \n\\begin{equation}\n{\\tau_{\\rm~brake}}=\\frac{1}{\\alpha\\Omega_\\star^2},\n\\end{equation}\n\\noindent the first term dominates for cool stars, and --- for hot stars --- the second term leads to quick flips for retrograde planets ($\\cos\\psi<0$) and gradual deceleration for prograde planets ($\\cos\\psi>0$). Thus ${\\tau_{\\rm~brake}}$ and ${\\tau_{\\rm~align}}$, when compared to the orbital decay timescale ${\\tau_{\\rm~decay}} \\sim \\tau$ and the star's age (${\\tau_{\\star,\\rm~age}}$; more precisely, the time since the hot Jupiter arrived at its close-in location), define tidal evolution regimes leading to the observed trends:\n\\begin{enumerate}\n\\item Misaligned regime (${\\tau_{\\rm~decay}},{\\tau_{\\rm~brake}},{\\tau_{\\rm~align}}>{\\tau_{\\star,\\rm~age}}$): $\\vec{h}$ and $\\vec{\\Omega_\\star}$ change very little, the {\\bf regime for hot Jupiters with $M_p{~T_{\\rm cut}}$)\n\\item Flipped regime (${\\tau_{\\rm~decay}},{\\tau_{\\rm~brake}}>{\\tau_{\\star,\\rm~age}}>{\\tau_{\\rm~align}}$): For planets massive enough to cause a short ${\\tau_{\\rm~align}}$, $\\psi$ drops rapidly when the planet's orbit is retrograde ($\\cos\\psi<0,\\frac{\\ddot{\\psi}}{\\dot{\\psi}}>0$), but falls off slowly when prograde ($\\cos\\psi>0,\\frac{\\ddot{\\psi}}{\\dot{\\psi}}<0)$, allowing sufficiently massive planets to flip from a retrograde to 
prograde orbit, the {\\bf regime for hot Jupiters with $M_p>M_{p,\\rm retro}$ orbiting hot stars} (Fig. 1, red dots, ${~T_{\\rm eff}}>{~T_{\\rm cut}}$)\n\\item Realigned regime (${\\tau_{\\rm~decay}}>{\\tau_{\\star,\\rm~age}}>{{\\tau_{\\rm~align}}}_0> {\\tau_{\\rm~brake}}$): braking dominates the $\\psi$ acceleration (Equation \\ref{eqn:accel}), triggering a fast ${\\tau_{\\rm~align}}$ (due to a small $\\Omega_\\star$), the {\\bf regime for hot Jupiters orbiting cool stars} (Fig. 1, ${~T_{\\rm eff}}<{~T_{\\rm cut}}$)\n\\item Spin-down regime (${\\tau_{\\rm~decay}},{\\tau_{\\rm~align}}>{\\tau_{\\star,\\rm~age}}>{\\tau_{\\rm~brake}}$): the star slows down but its spin orientation relative to the planet's orbit remains roughly constant $(\\dot{\\psi}\\sim0)$, the {\\bf expected regime for low mass planets orbiting cool stars}. HAT-P-11-b (footnote \\ref{note:hj}, \\citealt{2010W}) falls in this category.\n\\item Fast decay regime $({\\tau_{\\star,\\rm~age}}>{\\tau_{\\rm~decay}})$: the planet is consumed or tidally disrupted\n\\end{enumerate}\n\\noindent The ${\\tau_{\\rm~align}}\/{\\tau_{\\rm~decay}}$ ratio is equivalent to the ratio of stellar spin to planetary orbital angular momentum, e.g., employed by \\citet{Rogers13}.\n\nTherefore I argue that the distinction between the hot versus cool stars is not the convective envelope versus entire star participating in the realignment but the host star's rotation rate due to magnetic braking. Although a smaller $I_{\\star,{\\rm~eff}}$ for cool stars than for hot stars could in principle cause ${~T_{\\rm cut}}$, hot stars also need a small $I_{\\star,{\\rm~eff}}$ for regime 2. Therefore the distinction must be in $\\Omega_\\star$ (\\S2), which differs systematically (hot stars spin quickly and cool stars slowly) due to a different magnetic braking strength. 
Furthermore, hot stars need weak magnetic braking for regime 2, whereas cool stars need strong magnetic braking for the planet to tidally realign the star's outer layer without synchronization (Fig. 1). Therefore I consider strong versus weak magnetic braking to be the cause of the observed trends, in concert with a small $I_{\\star,{\\rm~eff}}$ for all stars, corresponding to an outer layer that participates in the tidal realignment while weakly coupled to the interior. Although in principle a small $I_{\\star,{\\rm~eff}}$ is unnecessary if $\\Omega_\\star$ is initially extremely small, $\\Omega_\\star$ needs to match the observed $\\Omega_\\star\\sin~i_s$ measurements (Fig. 1, bottom left panel). \n\nI plot several illustrative cases, using the parameters below (Fig. 3). The black dashed lines show a low-mass planet realigning a cool star. The red dashed lines represent a high-mass planet realigning a cool star. The more massive planet spins up the star, resulting in a shorter stellar rotation period. The solid black lines depict a massive planet that orbits a hot star flipping from retrograde to prograde (regime 2), whereas the solid red lines represent a less massive planet inducing little realignment over the host star's lifetime (regime 1). In all cases, the planets experience little orbital decay (top panel).\n\n\\begin{figure}[h]\n\\includegraphics{fig3.eps}\n\\caption{Planetary $P$ (top), stellar spin period (middle), and $\\psi$ (bottom) for $M_p=1M_{\\rm~Jup}$ (black) and $M_p=3M_{\\rm~Jup}$ (red) and hot (solid) or cool (dotted) host stars for an initial misalignment $\\psi=145^\\circ$ (corresponding to higher lines with bumps in middle panel) and $35^\\circ$. \\label{fig:example}}\n\\end{figure}\n\nUsing the timescales above, I estimate physical constants that produce the observed regimes. 
For cool stars to realign and slow to the observed $(\\Omega_\\star)_{\\rm~final}\\sim225$ rad\/yr, strong braking and low $I_{\\star,{\\rm~eff}}$ are required:\n\n\\begin{eqnarray}\n\\label{eqn:cool}\n\\alpha_{\\rm~cool}\\sim3\\times10^{-13}{\\rm~yr}\\left(\\frac{225\\rm~rad\/yr}{(\\Omega_\\star)_{\\rm~final}}\\right)^2\\frac{0.07\\rm~Gyr}{t_{\\rm~align}}\\nonumber\\\\\\nonumber\\\\\\\nI_{\\star,{\\rm~eff}}\\sim0.0004M_\\odot~R_\\odot^2\\nonumber\\\\ \n\\frac{t_{\\rm~align}\/t_{\\rm~decay}}{0.003}\\frac{M_p}{0.5M_{\\rm~Jup}}\\frac{h_0}{1.33\\rm~AU^2\/yr}\\frac{225\\rm~rad\/yr}{(\\Omega_\\star)_{\\rm~final}}, \\nonumber\\\\\n\\end{eqnarray}\n\\noindent much smaller than the sun's $I\\sim(0.06M_\\odot~R_\\odot^2)$.\n\nNext I estimate $I_{\\star,{\\rm~eff}}$ for hot stars. For planets with $M_p>M_{p,\\rm retro}$ to flip to prograde, I require:\n\\begin{eqnarray}\n\\label{eqn:hot}\nI_{\\star,{\\rm~eff}}\\sim0.0019M_\\odot~R_\\odot^2\\nonumber\\\\ \n\\frac{{\\tau_{\\rm~align}}\/t_{\\rm~decay}}{0.007}\\frac{M_{p,\\rm retro}}{2.5M_{\\rm~Jup}}\\frac{h_0}{1.33\\rm~AU^2\/yr}\\frac{600\\rm~rad\/yr}{(\\Omega_\\star)_{\\rm~final}} \\nonumber\\\\\n\\end{eqnarray}\n\\noindent Although this corresponds to a slightly larger $I_{\\star,{\\rm~eff}}$ than for cool stars, it is still much smaller than the moment of inertia of the entire star (e.g., a $1.2~M_\\odot,~1.2~R_\\odot$ $n=3$ polytrope has $I=0.14M_\\odot~R_\\odot^2$). In \\S\\ref{sec:conclude}, I discuss whether such a small $I_{\\star,{\\rm~eff}}$ for either hot or cool stars is plausible. \n\n\n\\section{Reproducing the observed trends}\n\nI show via Monte Carlo that the framework from \\S3 can reproduce the observed trends. I use the constants derived above (Eqn. \\ref{eqn:cool}--\\ref{eqn:hot}) and adopt $\\tau_0=1000$ Gyr ($\\tau$ scales with planet mass according to Eqn. 
\\ref{eqn:tau}; $\\tau_0$ corresponds to $Q\\sim5\\times10^6$ for a sun-like star), $(\\Omega_\\star)_{0,\\rm~hot}=1000$ rad\/yr, $(\\Omega_\\star)_{0,\\rm~cool}=400$ rad\/yr, $\\alpha_{\\rm~hot}=3\\times10^{-16}$ yr, and $h_0=1.33$ AU$^2$\/yr (corresponding to $P\\sim3$ days, where the hot Jupiter pile-up is observed, e.g., \\citealt{Gaudi}). The values are tuned to match the observations (Fig. 1).\n\nTo produce a predicted population, for 200 planets I:\n\\begin{enumerate}\n\\item Select a uniform random $4800<{~T_{\\rm eff}}<6800~$K, a log-uniform $0.5M_{\\rm~Jup}2.5M_{\\rm Jup}$ would be particularly valuable. I assumed that the initial $\\psi$ distribution is independent of $M_p$; if the retrograde mass-cut were caused by the mechanism leading to misalignment, a small $I_{\\star,{\\rm~eff}}$ for hot stars would not be necessary. Furthermore, whether such a small $I_{\\star,{\\rm~eff}}$ --- an outer layer participating in the tidal realignment while weakly coupled to the interior --- is plausible remains an open question. As proposed by W10, this weakly coupled outer layer could be the convective zone. Following W10, I compute the convective zone $I$ of a 5 Gyr sun-like star and a 2 Gyr 1.2 solar-mass star using the EZ Web stellar evolution models \\citep{2004P}: $0.01~M_\\odot~R_\\odot^2$ and $0.002M_\\odot~R_\\odot^2$ respectively. The latter is consistent with the value estimated here ($0.0019~M_\\odot~R_\\odot^2$). The former is inconsistent with the estimated value $0.0004M_\\odot~R_\\odot^2$ using nominal parameters and marginally consistent with $0.004~M_\\odot~R_\\odot^2$ if I allow cool stars to have a much shorter orbital decay timescale. The weakly coupled outer layer does not necessarily correspond to the convective zone, just an effective distance for the transferred angular momentum to penetrate. A weakly coupled outer layer would account $\\tau$ Bootis~b synchronizing its hot star. 
For cool stars, a weakly coupled outer layer seems at odds with the sun's radially uniform rotation profile. One possibility is that the weakly coupled outer layer corresponds to our sun's near-surface shear layer at about $0.95~R_\\odot$ (e.g., \\citealt{1996T}), which is decoupled from the rest of the convective zone at low latitudes. The depth from the surface probed by $\\lambda$ and $v\\sin~i_\\star$ measurements is $<100$ km (e.g., \\citealt{Gray}). Another possibility is that the timescale for coupling between layers, often treated as a free parameter in tidal evolution models due to uncertainty over which proposed physical process is responsible for coupling (e.g., \\citealt{Allain,Penev}), is much longer than the tidal forcing timescale.\n\nI used a simplistic tidal evolution model to illuminate an origin for the observed stellar obliquity trends. Ultimately a detailed, realistic treatment is necessary, incorporating stellar evolution (accounting for age trends, i.e. \\citealt{2011T}), coupling between the star's layers, changes in ${\\alpha_{\\rm~brake}}$, dynamic tides, stellar properties affecting tides (e.g., $R_\\star$), and tides raised on the planet. The values of physical constants I tuned to match observations --- such as the braking coefficient and initial host star spin rate --- are not precise constraints due to the model's simplistic treatment, but better models may allow meaningful constraints. For example, a constraint on the initial host star spin rate could pinpoint the star's age when the realignment begins, potentially distinguishing between disk migration (which occurs before the gas disk evaporates) versus high eccentricity migration (which delivers the hot Jupiter over a larger range of timescales).\n\nA key take-away for using the $\\psi$ distribution to constrain hot Jupiter migration mechanisms is that cool star obliquities are significantly sculpted by tides, as are hot star obliquities for the more massive hot Jupiters. 
For example, if one were to take the $\\psi$ distribution of hot stars as pristine, one would overestimate the fraction of prograde planets due to the flipping of massive retrograde planets. Better tidal evolution models will allow robustly forward-modeling the migration and tidal sculpting process to match the observed distribution of sky-projected $\\lambda$.\n\n\\acknowledgments\nMy gratitude to the referee for an especially helpful report. I thank Joshua Winn, Gwena{\\\"e}l Bou{\\'e}, Eliot Quataert, Daniel Fabrycky, Ryan O'Leary, Eugene Chiang, Simon Albrecht, Amaury Triaud, Marta Bryan, Howard Isaacson, Ruth Murray-Clay, and Ian Czekala for helpful feedback, comments, and discussions, Ruth Angus for an inspiring presentation on gyrochronology, the Miller Institute for Basic Research in Science, University of California Berkeley for funding, and the Exoplanet Orbit Database ({\\tt exoplanets.org}).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction} \\label{sec:1}\nRecent advances in artificial intelligence allow researchers to recover laws of physics and predict dynamics of physical systems from observed data by utilizing machine learning techniques, e.g., evolutionary algorithms \\cite{schmidt09, lonie11}, sparse optimizations \\cite{schaeffer17, brunton16}, Gaussian process regressions \\cite{seikel12, cui2015}, and neural networks \\cite{iten20, battaglia16, greydanus19, zhong19, sanchez19}. Among various models, neural networks are considered one of the most powerful tools to model complicated physical phenomena, owing to their remarkable ability to approximate arbitrary functions \\cite{hornik89}. One notable aspect of the observations in physical systems is that they manifest some fundamental properties including conservation or invariance \\cite{goldstein02, bluman13}. However, it is not straightforward for neural networks to learn and model the embedded physical properties from observed data only. 
Consequently, they often overfit to short-term training trajectories and fail to predict the long-term behaviors of complex dynamical systems \\cite{greydanus19, zhong19}. \n\nTo overcome these issues, it is important to introduce appropriate inductive biases based on knowledge of physics, dynamics and their properties \\cite{zhong19, sanchez19}. Common approaches to incorporate physics-based inductive bias include modifying neural network architectures or introducing regularization terms based on specialized knowledge of physics and natural sciences \\cite{nabian18, long18, schutt17, schutt19}. These methods demonstrate impressive performance on their target problems, but such problem-specific models do not generalize across domains. Namely, they can be used only when the governing physics for the target domain is exactly known, e.g., the Navier-Stokes equation for fluid mechanics \\cite{nabian18, long18}. As for more general approaches, the authors in \\cite{chen18, chang19} propose the \\emph{ordinary differential equation (ODE) networks}, which view neural networks as parameterized ODE functions. They are shown to be able to represent the vast majority of dynamical systems with higher precision than vanilla recurrent neural networks and their variants \\cite{chen18, chang19}, but are still unable to learn underlying physics such as the law of conservation \\cite{greydanus19}. Recent works~\\cite{greydanus19, zhong19, sanchez19, chen19, toth19} apply Hamiltonian mechanics to ODE networks, and succeed in enforcing energy conservation as well as the accurate time evolution of classical conservative systems. 
However, these Hamiltonian ODE networks have an inherent limitation: they cannot be applied to non-conservative systems, since the Hamiltonian structure requires the total energy to be strictly conserved \cite{greydanus19}.\n\nTo address such limitations of existing works on modeling classical dynamics, we introduce a physics-inspired, general, and flexible inductive bias: \emph{symmetries}. Symmetry is at the heart of physics: the laws of physics are invariant under certain transformations of space and time coordinates, and thus exhibit universality \cite{elliott79, livio12}. For example, classical dynamics possesses the \emph{time-reversal symmetry}, which means the classical equations of motion should not change under the transformation of time reversal: $t \mapsto -t$ \cite{lamb98, roberts92, strogatz01} (see Figure \ref{fig:1}). Therefore, if the target underlying physics being approximated has some symmetries, it is natural to require that the physics approximated by neural networks also comply with these properties. Motivated by this, we feed the symmetry as additional information to help neural networks learn physical systems more efficiently. \n\n\begin{figure}[t]\n\centering\n\includegraphics[width=1.0\linewidth]{figure1.png} \n\caption{\small (a) \textbf{Time-reversal symmetry of dynamical systems.} The gray ellipse is a phase space trajectory, which does not change under $t \mapsto -t$. The reversing of the forward time evolution (blue arrows) of an arbitrary state should yield a state equal to that estimated by the backward time evolution of the reversed state (orange arrows). For more mathematical details, see Section \ref{sec:3.2}. Examples of (b) non-linear and (c) non-ideal dynamical systems modeled by various ODE networks including TRS-ODENs.
TRS-ODENs can learn appropriate long-term dynamics from noisy and short-term training samples.}\n\label{fig:1}\n\end{figure}\n\nSpecifically, we focus on the \emph{time-reversal symmetry} of classical dynamics described above, due to its simplicity and popularity. We propose a new ODE learning framework, which we refer to as \emph{Time-Reversal Symmetry ODE Network (TRS-ODEN)}\footnote{Code is available at \url{https:\/\/github.com\/inhuh\/trs-oden}.}, that utilizes the time-reversal symmetry as a regularizer in training ODE networks, by unifying recent studies of ODE networks \cite{chen19} and classical symmetry theory for ODE systems \cite{lamb98}. Our scheme can be easily implemented with a small modification of the code for conventional ODE networks, and is also compatible with extensions of ODE networks, such as Hamiltonian ODE networks \cite{zhong19, sanchez19, chen19}. It can be used to predict many branches of physical systems, because isolated classical and quantum dynamics exhibit perfect time-reversal symmetry \cite{lamb98, sachs87}. Moreover, even for the case when the full time-reversal symmetry is broken \cite{lamb98}, e.g., in the presence of entropy production \cite{maes2003time} through heat or mass transfer, we show that TRS-ODENs are beneficial for learning such systems by annealing the strength of the proposed regularizer appropriately. This flexibility with regard to the target problem is the main advantage of the proposed framework, in contrast to prior methods that are suitable only for explicitly conservative systems \cite{greydanus19}. We validate our proposed model in several domains including synthetic Duffing oscillators \cite{kovacic11} (see Section \ref{sec:4.1}), real-world coupled oscillators \cite{schmidt09} (see Section \ref{sec:4.2}), and reversible strange attractors \cite{sprott2015symmetric} (see Section \ref{sec:4.3}).
In summary, our contribution is threefold:\n\n\begin{itemize}\n\item We propose a novel loss function that measures the discrepancy between the forward and backward time evolutions of ODE networks, and thus estimates whether the ODE networks are time-reversal symmetric or not.\n\item We show that ODE networks with the proposed loss, coined TRS-ODENs, achieve better predictive performance than baselines, e.g., reducing the trajectory error from 50.81 to 10.85 for non-linear oscillators. \n\item We validate that even for time-irreversible systems, the proposed framework still works well compared to baselines, e.g., reducing the error from 3.68 to 0.12 for damped oscillators.\n\end{itemize}\n\n\section{Background and Setup} \label{sec:2}\n\n\subsection{Predicting dynamical systems} \label{sec2:1}\nIn a dynamical system, the state evolves over time according to governing time-dependent differential equations. The state is a vector in the phase space, which consists of all possible positions and momenta of all particles in the system. If one knows the governing differential equation and the initial state of the system, the future state is predictable by solving the equation analytically or numerically.\nOn the other hand, if one does not know the exact governing equation, but has some state trajectories of the system, one can try to model the dynamical system, e.g., by using neural networks. More specifically, one can build a neural network whose input is the current state (or trajectory) and whose output is the next state, from the perspective of sequence prediction. However, such a method may overfit to short-term training trajectories and fail to predict long-term behaviors \cite{zhong19}. It is also not straightforward to predict continuous-time dynamics, because neural network models typically assume a discrete time-step between states \cite{greydanus19}.
\nNeural ODE and its applications \cite{chen18, chang19, greydanus19, zhong19, sanchez19, chen19, toth19}, collectively referred to as ODE networks (ODENs), tackle these issues by learning the governing equations, rather than the state transitions directly. Moreover, some of them use special ODE functions such as Hamilton's equations to incorporate physical properties into neural networks structurally \cite{greydanus19, zhong19, sanchez19, chen19, toth19}. In the rest of this section, we briefly review ODENs and Hamiltonian ODE networks (HODENs), which are closely related to our work. \n \n\subsection{ODE networks (ODENs) for learning and predicting dynamics} \label{sec:2.2}\nWe consider dynamics of a state $\mathbf{x}$ in phase space $\Omega$ ($=\mathbb{R}^{2n}$, in classical dynamics\footnote{For a Hamiltonian system as an example, $\mathbf{x} = (\mathbf{q}, \mathbf{p})$, where $\mathbf{q} \in \mathbb{R}^n$ and $\mathbf{p} \in \mathbb{R}^n$ are positions and momenta.}) given by:\n\begin{equation} \label{eq:1}\n\frac{d\mathbf{x}}{dt} = f(\mathbf{x}) \quad \text{for } t \in \mathbb{R},\, \mathbf{x} \in \Omega,\, f: \Omega \mapsto T\Omega.\n\end{equation}\nThe continuous time evolution between two arbitrary time points $t_i$ and $t_{i+1}$ under (\ref{eq:1}) is given by:\n\begin{equation} \label{eq:2}\n\mathbf{x}({t_{i+1}}) = \mathbf{x}({t_i}) + \int_{t_i}^{t_{i+1}}{f(\mathbf{x})dt}.\n\end{equation}\nRecent works \cite{zhong19, sanchez19, chen18, chen19} propose the ODENs, which represent the ODE function $f$ in (\ref{eq:1}) by neural networks and learn the unknown dynamics from data. For ODENs, fully-differentiable numerical ODE solvers are required to train the black-box ODE functions, e.g., the Runge-Kutta (RK) method \cite{dormand80} or symplectic integrators such as the leapfrog method \cite{leimkuhler04}.
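To make the role of such a solver concrete, the following is a minimal NumPy sketch (our illustration, not the authors' differentiable implementation) of a fourth-order RK roll-out mimicking $\mathtt{Solve}$ and the integral in (\ref{eq:2}); the names `rk4_step` and `solve` are ours.

```python
import numpy as np

def rk4_step(f, x, dt):
    """One fourth-order Runge-Kutta step for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def solve(x0, f, dt, n_steps=1):
    """Approximate x(t0 + n_steps * dt) from x(t0), in the spirit of Solve{x, f, dt}."""
    x = x0
    for _ in range(n_steps):
        x = rk4_step(f, x, dt)
    return x

# Example: simple oscillator dq/dt = p, dp/dt = -q; one full period returns the state
f_osc = lambda x: np.array([x[1], -x[0]])
x0 = np.array([1.0, 0.0])
x_T = solve(x0, f_osc, dt=2 * np.pi / 1000, n_steps=1000)
```

In the paper's setting, $f$ would be a trainable network $f_\theta$ and the solver would be differentiated through during training; here $f$ is a fixed function for illustration only.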
With an ODE solver, say $\mathtt{Solve}$, one can estimate the time evolution by ODENs:\n\begin{equation} \label{eq:3}\n\tilde{\mathbf{x}}({t_{i+1}})=\mathtt{Solve}\{\tilde{\mathbf{x}}({t_i}),f_{\theta},{\Delta}t_i\},\, \tilde{\mathbf{x}}({t_0}) = \mathbf{x}({t_0}),\n\end{equation}\nwhere $f_{\theta}$ is a $\theta$-parameterized neural network, $\tilde{\mathbf{x}}(t_i)$ is the ODEN prediction of ${\mathbf{x}}(t_i)$, ${\Delta}t_i = t_{i+1} - t_i$ is a time-step, and $\mathbf{x}(t_0)$ is a given initial value. Given an observed trajectory ${\mathbf{x}}(t_1),...,{\mathbf{x}}(t_T)$, ODENs can learn the dynamics by minimizing the loss function $\mathcal{L}_\text{ODE} \equiv \sum\nolimits_{i=0}^{T-1} \| {\mathtt{Solve}\{\tilde{\mathbf{x}}(t_i), f_\theta, \Delta{t_i}\} - \mathbf{x}}(t_{i+1}) \|_2^2$. We omit the sample-wise mean for notational simplicity.\n\n\subsection{Hamiltonian ODE networks (HODENs)} \label{sec:2.3}\nHamiltonian mechanics describes the phase space equations of motion of conservative systems via two first-order ODEs called Hamilton's equations \cite{goldstein02}:\n\begin{equation} \label{eq:4}\n\frac{d\mathbf{q}}{dt}= \frac{\partial\mathcal{H}(\mathbf{q}, \mathbf{p})}{\partial{\mathbf{p}}},\, \n\frac{d\mathbf{p}}{dt}= -\frac{\partial\mathcal{H}(\mathbf{q}, \mathbf{p})}{\partial{\mathbf{q}}},\n\end{equation}\nwhere $\mathbf{q} \in \mathbb{R}^n$, $\mathbf{p} \in \mathbb{R}^n$, and $\mathcal{H}: \mathbb{R}^{2n} \mapsto \mathbb{R}$ are the positions, momenta, and Hamiltonian of the system, respectively. Recent works \cite{zhong19, sanchez19, chen19} apply Hamilton's equations to ODENs, by parameterizing the Hamiltonian as $\mathcal{H}_\theta$ and replacing $f_\theta(\mathbf{q}, \mathbf{p})$ with the gradients $(\partial\mathcal{H}_\theta\/\partial{\mathbf{p}}, -\partial\mathcal{H}_\theta\/\partial{\mathbf{q}})$ according to (\ref{eq:4}).
Thus, the time evolution of HODENs is equal to:\n\begin{equation} \label{eq:5}\n(\tilde{\mathbf{q}}({t_{i+1}}), \tilde{\mathbf{p}}({t_{i+1}})) = \mathtt{Solve}\{(\tilde{\mathbf{q}}({t_i}), \tilde{\mathbf{p}}({t_i})), (\partial\mathcal{H}_\theta\/\partial{\mathbf{p}}, -\partial\mathcal{H}_\theta\/\partial{\mathbf{q}}), {\Delta}t_i\}.\n\end{equation}\nHODENs show better predictive performance for conservative systems. Furthermore, they can learn the underlying law of conservation of energy automatically, since they fully exploit the nature of Hamiltonian mechanics \cite{greydanus19}. However, a fundamental limitation of HODENs is that they do not work properly for non-conservative systems \cite{greydanus19}, because they always conserve the energy.\n\n\section{Time-Reversal Symmetry Inductive Bias for ODENs} \label{sec:3}\n \n\subsection{Target systems} \label{sec:3.1}\nBefore introducing the time-reversal symmetry, we briefly explain two properties of classical dynamical systems: being \emph{conservative} and being \emph{reversible}. The former refers to a system whose Hamiltonian does not depend on time explicitly, i.e., $\partial{\mathcal{H}}\/\partial{t} = 0$. \nThe latter refers to a system that possesses the time-reversal symmetry, whose mathematical details will be discussed in the following section.\n\n\textbf{Conservative and reversible systems.}\nAll conservative systems whose Hamiltonians satisfy $\mathcal{H}(\mathbf{q}, \mathbf{p})$ = $\mathcal{H}(\mathbf{q}, -\mathbf{p})$ are also reversible \cite{lamb98}.
It means that many kinds of classical dynamics are both conservative and reversible\footnote{Note that the most basic definition of the Hamiltonian is the sum of kinetic and potential energy, i.e., $\mathcal{H}(\mathbf{q}, \mathbf{p}) = \mathbf{p}^2\/2 + V(\mathbf{q})$ (if we omit the mass) \cite{goldstein02}, which naturally satisfies $\mathcal{H}(\mathbf{q}, \mathbf{p})$ = $\mathcal{H}(\mathbf{q}, -\mathbf{p})$.}. For these systems, both the Hamiltonian and the time-reversal symmetry inductive biases are appropriate. Furthermore, combining the two inductive biases can improve the sample efficiency of a learning scheme.\n\n\textbf{Non-conservative and reversible systems.}\nIt is noteworthy that reversible systems are not necessarily conservative. Some examples of non-conservative but reversible systems can be found in \cite{lamb98, roberts92}. Clearly, baselines such as HODENs that enforce the conservative property would break down in this environment. On the other hand, our scheme, named TRS-ODEN, presented in Section \ref{sec:3.3}, would accurately model the dynamics of given data by exploiting the time-reversal symmetry.\n\n\textbf{Non-conservative and irreversible systems.}\nUnder interactions with environments, dynamical systems become non-conservative and often irreversible\footnote{Consider a damped pendulum. It is irreversible, since one can distinguish the forward motion of the pendulum (amplitude decreases) from the time-reversed motion (amplitude increases).}. Depending on the intensity of such interactions, the Hamiltonian or time-reversal symmetry inductive bias can be beneficial or harmful. HODENs strictly enforce the conservation, thus they are not suitable for this case \cite{greydanus19}. On the other hand, TRS-ODENs are more flexible, since they use the inductive bias as a form of regularizer, which is easily controlled via hyper-parameter tuning \cite{schmidhuber15}.
\n\n\subsection{Time-reversal symmetry in dynamics} \label{sec:3.2}\nFirst-order ODE systems (\ref{eq:1}) are said to be \emph{time-reversal symmetric} if there is an invertible transformation $R: \Omega \mapsto \Omega$ that reverses the direction of time:\n\begin{equation} \label{eq:6}\n\frac{dR(\mathbf{x})}{dt} = -f(R(\mathbf{x})),\n\end{equation}\nwhere $R$ is called the \emph{reversing operator} \cite{lamb98}. Comparing (\ref{eq:1}) and (\ref{eq:6}), one can find that the equation is invariant under the phase space transformation $R$ combined with the time reversal $t \mapsto -t$. For notational simplicity, let us introduce a time evolution operator $U_{\tau}: \Omega \mapsto \Omega$ for (\ref{eq:1}) as follows \cite{lamb98}: \n\begin{equation} \label{eq:7}\nU_{\tau}: \mathbf{x}(t) \mapsto U_{\tau}(\mathbf{x}(t)) = \mathbf{x}(t + \tau),\n\end{equation}\nfor arbitrary $t, \tau \in \mathbb{R}$. Then, in terms of the time evolution operator (\ref{eq:7}), (\ref{eq:6}) implies:\n\begin{equation} \label{eq:8}\nR \circ U_{\tau} = U_{-\tau} \circ R,\n\end{equation}\nwhich means that \emph{the reversing of the forward time evolution of an arbitrary state should be equal to the backward time evolution of the reversed state} (see Figure \ref{fig:1}). \n\nIn classical dynamics, generally, even-order and odd-order derivatives with respect to $t$ are respectively preserved and reversed under $R$ \cite{lamb98, roberts92}. For example, consider a conservative and reversible Hamiltonian $\mathcal{H}(\mathbf{q}, \mathbf{p})$ = $\mathcal{H}(\mathbf{q}, -\mathbf{p})$, as mentioned in Section \ref{sec:3.1}. Because $\mathbf{q}$ and $\mathbf{p}$ are respectively zeroth- and first-order derivatives with respect to $t$, $R$ is simply given by $R(\mathbf{q}, \mathbf{p}) = (\mathbf{q}, -\mathbf{p})$. In this case, one can easily check that Hamilton's equations (\ref{eq:4}) are invariant under $R$ and $t \mapsto -t$.
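As a quick numerical illustration (our own sketch, not part of the paper), one can verify the operator identity (\ref{eq:8}) for the simple oscillator with $R(\mathbf{q}, \mathbf{p}) = (\mathbf{q}, -\mathbf{p})$ using any ODE integrator:

```python
import numpy as np

def evolve(f, x, dt, steps):
    """U_tau via fourth-order Runge-Kutta; a negative dt realizes U_{-tau}."""
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x

# Simple oscillator dq/dt = p, dp/dt = -q, with reversing operator R(q, p) = (q, -p)
f = lambda x: np.array([x[1], -x[0]])
R = lambda x: np.array([x[0], -x[1]])

x0 = np.array([0.7, 0.3])
lhs = R(evolve(f, x0, dt=0.01, steps=100))   # (R o U_tau)(x0)
rhs = evolve(f, R(x0), dt=-0.01, steps=100)  # (U_{-tau} o R)(x0)
# The two sides agree up to floating-point error, confirming Eq. (8) for this system
```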
{We use this classical definition $R(\mathbf{q}, \mathbf{p}) = (\mathbf{q}, -\mathbf{p})$ in the remainder of the paper, unless otherwise specified.}\n \n\subsection{Time-reversal symmetry ODE networks (TRS-ODENs)} \label{sec:3.3}\nInspired by ODENs (\ref{eq:3}) and the time-reversal symmetry (\ref{eq:8}), we here propose a novel \emph{time-reversal symmetry loss}. First, the backward time evolution of the reversed state for ODENs is given by:\n\begin{equation} \label{eq:9}\n\tilde{\mathbf{x}}_R({t_{i+1}})=\mathtt{Solve}\{\tilde{\mathbf{x}}_R({t_i}),f_{\theta},-{\Delta}t_i\},\, \tilde{\mathbf{x}}_R({t_0}) = R(\tilde{\mathbf{x}}({t_0})).\n\end{equation}\nThen, using (\ref{eq:3}) and (\ref{eq:9}), we define the time-reversal symmetry loss $\mathcal{L}_\text{TRS}$ as an ODEN version of (\ref{eq:8}):\n\begin{equation} \label{eq:10}\n\mathcal{L}_{\text{TRS}} \equiv \sum^{T-1}_{i=0} \left \| R(\mathtt{Solve}\{\tilde{\mathbf{x}}(t_i), f_\theta, {\Delta}t_i\}) - \mathtt{Solve}\{\tilde{\mathbf{x}}_R(t_i), f_\theta, -{\Delta}t_i\} \right \|^2_2. \n\end{equation}\nNote that we assume the system is autonomous. For non-autonomous systems, see Section A in the supplementary material.
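In code, (\ref{eq:9})-(\ref{eq:10}) amount to rolling out a second, time-reversed trajectory and penalizing its mismatch with the reversed forward roll-out. Below is a minimal NumPy sketch (ours; `rk4_solve` stands in for any $\mathtt{Solve}$ routine, and in practice $f_\theta$ would be a trainable network rather than a fixed function):

```python
import numpy as np

def rk4_solve(x, f, dt):
    """A stand-in for Solve{x, f, dt}: one fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def trs_loss(x_pred, f_theta, R, dt):
    """Time-reversal symmetry loss of Eq. (10) for a predicted trajectory.

    x_pred: predicted states x~(t_0), ..., x~(t_{T-1}) with uniform step dt.
    The reversed trajectory x~_R starts from R(x~(t_0)) and evolves with -dt, Eq. (9).
    """
    x_R = R(x_pred[0])
    loss = 0.0
    for i in range(len(x_pred)):
        fwd = rk4_solve(x_pred[i], f_theta, dt)  # Solve{x~(t_i), f, +dt}
        x_R = rk4_solve(x_R, f_theta, -dt)       # Solve{x~_R(t_i), f, -dt}
        loss += np.sum((R(fwd) - x_R) ** 2)
    return loss

R = lambda x: np.array([x[0], -x[1]])            # classical reversing operator

# A reversible field gives (near-)zero loss; a damped, irreversible one does not
f_rev = lambda x: np.array([x[1], -x[0]])
f_damp = lambda x: np.array([x[1], -x[0] - 0.5 * x[1]])

def rollout(f, x0, dt, T):
    xs = [x0]
    for _ in range(T - 1):
        xs.append(rk4_solve(xs[-1], f, dt))
    return np.array(xs)

x0 = np.array([1.0, 0.0])
loss_rev = trs_loss(rollout(f_rev, x0, 0.1, 10), f_rev, R, 0.1)
loss_damp = trs_loss(rollout(f_damp, x0, 0.1, 10), f_damp, R, 0.1)
```

The contrast between `loss_rev` and `loss_damp` illustrates how $\mathcal{L}_\text{TRS}$ penalizes exactly the symmetry-breaking (here, damping) part of the dynamics.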
Finally, we define the TRS-ODEN as a class of ODENs whose loss function $\mathcal{L}_\text{TRS-ODEN}$ is given by the sum of the ODE error $\mathcal{L}_\text{ODE}$ and the symmetry regularizer $\mathcal{L}_\text{TRS}$ as follows\footnote{{TRS-ODENs require approximately 2$\times$ the training time of vanilla ODENs, because the backward as well as the forward evolutions must be calculated.}}:\n\begin{equation} \label{eq:11}\n\mathcal{L}_{\text{TRS-ODEN}}(\mathbf{x}(t), \tilde{\mathbf{x}}(t), \tilde{\mathbf{x}}_R(t), R, \theta) \equiv \mathcal{L}_{\text{ODE}}(\mathbf{x}(t), \tilde{\mathbf{x}}(t), \theta) + \lambda\cdot \mathcal{L}_{\text{TRS}}(\tilde{\mathbf{x}}(t), \tilde{\mathbf{x}}_R(t), R, \theta),\n\end{equation}\nwhere $\lambda \geq 0$ is a hyper-parameter. It is noteworthy that $\lambda$ can also be a function of time $t$, especially when dealing with irreversible systems. {This follows from the heuristic that the irreversible term is itself a function of $(t, \mathbf{x}(t))$: although the target dynamics may not possess the full time-reversal symmetry, they can be partially reversible when the irreversible term becomes negligible at certain time points.}\n\n\section{Experiments} \label{sec:4}\n\textbf{Default model setting.} \nWe compare three models: the vanilla ODEN, HODEN, and TRS-ODEN. A single neural network $f_\theta(\mathbf{q}, \mathbf{p})$ is used for ODENs and TRS-ODENs, while HODENs consist of two neural networks $K_{\theta_1}(\mathbf{p})$ and $V_{\theta_2}(\mathbf{q})$, i.e., a separable $\mathcal{H}_\theta(\mathbf{q}, \mathbf{p}) = K_{\theta_1}(\mathbf{p}) + V_{\theta_2}(\mathbf{q})$ \cite{chen19}. Also, we use the leapfrog integrator for $\mathtt{Solve}$ \cite{chen19}, unless otherwise specified. The maximum allowed trajectory length in the training phase is set to 10. If training trajectories are longer than 10, we divide them accordingly.
We train models using the Adam optimizer \cite{kingma14} with an initial learning rate of $2 \times 10^{-4}$ for 5,000 epochs. We use full-batch training because the training sample sizes are quite small, except for Experiment VI.\n\n\textbf{Performance metric.}\nAs primary performance metrics, we use the mean-squared errors (MSEs) between the test ground truths and the models' predictions, for both phase space trajectories and total energies\footnote{They can be calculated from trajectories. For example, the total energy of a simple oscillator is $\mathbf{q}^2 + \mathbf{p}^2$.}. The predicted trajectories are obtained by recursively solving (\ref{eq:3}) or (\ref{eq:5}), thus errors accumulate and diverge over time if the models do not learn the accurate time evolution. \n\n\subsection{Experiment I-IV: Learning the Duffing oscillators} \label{sec:4.1}\nFirstly, we focus on the Duffing oscillator \cite{kovacic11}, a generalized model of oscillators, which is given by\footnote{Typically, the Duffing oscillator is given by a second-order ODE $\ddot{\mathbf{x}} + \alpha{\mathbf{x}} + \beta{\mathbf{x}^3} + \gamma\dot{\mathbf{x}} = \delta{\cos({t})}$. We separate this equation from the perspective of the pseudo-phase space, although the resulting variables are not in canonical coordinates.}:\n\begin{equation} \label{eq:12}\n\frac{d\mathbf{q}}{dt} = \mathbf{p},\; \frac{d\mathbf{p}}{dt} = -\alpha{\mathbf{q}} -\beta{\mathbf{q}^3} - \gamma{\mathbf{p}} + \delta{\cos({t})},\n\end{equation}\nwhere $\alpha$, $\beta$, $\gamma$, and $\delta$ are scalar parameters that determine the linear stiffness, non-linear stiffness, damping, and driving force terms, respectively. For non-zero parameters, Duffing oscillators are neither conservative nor reversible. Furthermore, they often exhibit chaotic behaviors \cite{kovacic11}. However, the characteristics of the Duffing oscillator can be changed greatly by adjusting the parameters.
Thus, by using these coupled equations, we can simulate the several classes of dynamical systems mentioned in Section \ref{sec:3.1}.\n\nUnless otherwise stated, we generate 50 trajectories each for the training and test sets. For each trajectory, the initial state $(\mathbf{q}(t_0), \mathbf{p}(t_0))$ is uniformly sampled from an annulus with radii in $[0.2, 1]$. The lengths of the training and test trajectories are 30 and 200, respectively, while the time-step is fixed at 0.1, i.e., $\Delta{t_i} = 0.1$ for all $i$. Thus, we can evaluate whether the models can mimic long-term dynamics. We add Gaussian noise $0.1{n}, n \sim \mathcal{N}(0, 1)$ to the training set. We use the fourth-order RK method to generate the trajectories.\n\nIn Experiments I and II, we consider conservative and reversible systems, where we show that TRS-ODENs are comparable with or even outperform HODENs. Moreover, we confirm that combining HODENs and the time-reversal symmetry loss can lead to further improvement for these systems. Then, we evaluate the proposed framework on a non-conservative and reversible system in Experiment III. \nFinally, in Experiment IV, we validate our proposed framework on a non-conservative and irreversible damped system. HODENs cannot learn this system because of their strong tendency to conserve the energy, as previously reported in \cite{greydanus19}. We demonstrate that TRS-ODENs can learn this system flexibly.\n\n\n\textbf{Experiment I: Simple oscillator.}\nAs a toy example, we choose a simple oscillator, i.e., $\alpha = 1$, $\beta = \gamma = \delta = 0$. We use single-hidden-layer neural networks consisting of 1,000 hidden units and $\texttt{tanh}$ activations for all models. Figure \ref{fig:2} (a-b) shows that the TRS-ODEN with $\lambda = 10$\footnote{{One can force the TRS-ODEN to be more symmetric by increasing the regularization strength $\lambda$. See Section B in the supplementary material for details.}} outperforms both the ODEN and the HODEN.
For qualitative analysis, we plot a test trajectory and its total energy (see Figure \ref{fig:2} (c-h)). It shows that the TRS-ODEN can learn the energy conservation as well as the accurate dynamics. \n\n\begin{figure}[t]\n\centering\n\includegraphics[width=0.9\linewidth]{figure2.png}\n\vspace{2pt}\n\caption{\small \textbf{Summary of Experiment I.} (a-b) Test (a) trajectory MSE and (b) energy MSE across the models. (c-h) Sampled test trajectory and its total energy for the (c-d) ODEN, (e-f) HODEN, and (g-h) TRS-ODEN.}\n\label{fig:2}\n\end{figure}\n\n\begin{figure}[t]\n\centering\n\includegraphics[width=1.0\linewidth]{figure3.png}\n\vspace{-10pt}\n\caption{\small \textbf{Summary of Experiment II.} (a-b) Test (a) trajectory MSE and (b) energy MSE across the models. (c-h) Sampled test trajectories and their total energies for the (c-d) ODEN, (e-f) HODEN, (g-h) TRS-ODEN, and (i-j) TRS-HODEN.}\n\label{fig:3}\n\end{figure}\n\n\textbf{Experiment II: Non-linear oscillator.}\nAs a more interesting problem, we choose the undamped and unforced non-linear oscillator, i.e., $\alpha = -1$, $\beta = 1$, $\gamma = \delta = 0$. We use neural networks consisting of two hidden layers with 100 units and $\texttt{tanh}$ activations. \n\n\begin{wrapfigure}{r}{0.5\textwidth}\n\centering\n\vspace{-15pt}\n\includegraphics[width=0.45\textwidth]{figure4.png}\n\vspace{5pt}\n\caption{\small Test MSE \textit{vs.} the number of training samples across models. Means and error bars are calculated by repeating the experiment 5 times, with varying datasets.}\n\vspace{-5pt}\n\label{fig:4}\n\end{wrapfigure}\n\nIn this experiment, the TRS-ODEN outperforms the HODEN in terms of the trajectory MSE, and vice versa for the total energy MSE (see Figure \ref{fig:3} (a-b)). For qualitative analysis, we sample five trajectories and their energy values (see Figure \ref{fig:3} (c-h)).
It shows that the HODEN fails to learn the time evolution especially near the origin, while the TRS-ODEN shows undesirable peaks in energy. This room for improvement leads us to combine the HODEN and TRS-ODEN into the \emph{Time-Reversal Symmetric Hamiltonian ODE Network (TRS-HODEN)}\footnote{It can be obtained straightforwardly by combining (\ref{eq:5}) and (\ref{eq:8}), similar to (\ref{eq:9}-\ref{eq:10}).}. We find that the TRS-HODEN achieves almost the same performance as the HODEN in terms of the energy MSE, and clearly outperforms the baselines in the trajectory MSE (see Figure \ref{fig:3} (a-b) and (i-j)). Furthermore, we evaluate the sample efficiency and find that the combination of the two inductive biases makes the learning process more reliable (see Figure \ref{fig:4}). For a more detailed analysis of the improvement made by TRS-HODENs, see Section C in the supplementary material. Also, we find that TRS-ODENs and TRS-HODENs can predict critical points (stable centers and homoclinic orbits) of non-linear oscillators. See Section D in the supplementary material for details. \n\n\n\n\textbf{Experiment III: Forced non-linear oscillator.}\nWe set $\alpha = -0.2$, $\beta = 0.2$, $\gamma = 0$, and $\delta = 0.15$ for the system parameters. Due to the periodic driving force $\delta\cos{t}$, this system is non-autonomous. Therefore, we use a tuple $(\mathbf{q}, \mathbf{p}, t)$ as the input of the neural networks for this experiment\footnote{In \cite{greydanus19, chen19}, the authors state that for HODENs, time dependency should be modeled separately. However, we use time-dependent HODENs here to avoid large modifications of HODENs, for a fair comparison.}. The hyper-parameters of the neural networks are the same as those for Experiment II, except for $\lambda$: $\lambda \in \{0.5, 1, 5\}$ is evaluated here.
We use the fourth-order RK method for $\texttt{Solve}$, since it is not straightforward to apply the leapfrog solver to non-autonomous systems. We generate 200 and 50 trajectories, whose lengths are 50 and 100, respectively, for the training and test sets in this experiment, considering the complexity of the target system. Also, we reduce the noise added to the training dataset to $0.05{n}, n \sim \mathcal{N}(0, 1)$.\n\nWe find that TRS-ODENs clearly outperform their baselines by a significant margin in both the trajectory and energy MSE metrics (see Figure \ref{fig:5} (a-b)). From Figure \ref{fig:5} (c-h), one can check that the dynamics predicted by the ODEN and HODEN diverge as time passes, while the TRS-ODEN shows reliable long-term behaviors. As a result, the total energy of the TRS-ODEN follows the ground truth reasonably, while that estimated by the baselines soars explosively for $t > 6$.\n\n\begin{figure}[t]\n \centering\n \includegraphics[width=0.9\linewidth]{figure5.png}\n\vspace{2pt}\n\caption{\small \textbf{Summary of Experiment III.} (a-b) Test (a) trajectory MSE and (b) energy MSE across the models. (c-h) Sampled test trajectories and their total energies for the (c-d) ODEN, (e-f) HODEN, and (g-h) TRS-ODEN.}\n\label{fig:5}\n\end{figure}\n\n\begin{figure}[t]\n \centering\n \includegraphics[width=0.9\linewidth]{figure6.png}\n\vspace{2pt}\n\caption{\small \textbf{Summary of Experiment IV.} (a-b) Test (a) trajectory MSE and (b) energy MSE across the models. (c-h) Sampled test trajectory and its total energy for the (c-d) ODEN, (e-f) HODEN, and (g-h) TRS-ODEN.}\n\label{fig:6}\n\end{figure}\n\n\n\textbf{Experiment IV: Damped oscillator.}\nWe simulate damped oscillators by setting the system parameters as follows: $\alpha = 1,\, \beta = 0,\, \gamma = 0.1,\, \delta = 0$. In this experiment, we assume the time-reversal symmetry tends to hold as $t\to \infty$, and thus evaluate the time-dependent $\lambda$ approach.
This assumption is quite reasonable for various dissipative irreversible systems, because their irreversibility typically originates from the (odd powers of) $\mathbf{p}$\footnote{It is because of the definition of the classical reversing operator $R(\mathbf{q}, \mathbf{p}) = (\mathbf{q}, -\mathbf{p})$.} in their governing ODEs, e.g., $\gamma \mathbf{p}$ in (\ref{eq:12}). Since dissipative systems lose their kinetic energy as time passes, i.e., $\mathbf{p} \to 0$ as $t \to \infty$, we can design $\lambda$ as a linearly increasing function of min-max normalized $t$. In this experiment, we evaluate four cases of $\lambda$: $\lambda \in \{0.5, 0.5t, 1, t\}$. The other hyper-parameters are the same as those of Experiment I. \n\nIt is shown that the TRS-ODENs outperform the ODEN and HODEN, except for the $\lambda = 1$ case (see Figure \ref{fig:6} (a-b)). In particular, the $\lambda = 0.5t$ case shows great predictability for both the time evolution and the total energy of the damped system, while the ODEN loses energy excessively and the HODEN conserves energy too strictly (see Figure \ref{fig:6} (c-h)). We believe this is owing to the balance between the physics-based inductive bias and the data-driven learning process. We summarize the test MSEs of all Duffing oscillator experiments in Table \ref{table:1}.\n\n\begin{table}\n\centering\n\caption{\small Summary of test MSEs of Duffing oscillator experiments.
All MSE values are multiplied by $10^2$.}\n\label{table:1}\n\begin{tabular}{llllll}\n\toprule\nMetric & Model & Experiment I & Experiment II & Experiment III & Experiment IV \\\n\midrule\n\multirow{4}{*}{Traj.} & ODEN & 4.05 $\pm$ 2.66 & 50.81 $\pm$ 26.80 & 39.21 $\pm$ 21.19 & 1.28 $\pm$ 0.82 \\\n & HODEN & 0.84 $\pm$ 0.37 & 17.40 $\pm$ 17.74 & 24.09 $\pm$ 14.29 & 3.68 $\pm$ 2.19 \\\n & TRS-ODEN & \textbf{0.31 $\pm$ 0.19} & 13.78 $\pm$ 14.86 & \textbf{6.50 $\pm$ 5.59} & \textbf{0.12 $\pm$ 0.06} \\\n & TRS-HODEN & N\/A & \textbf{10.85 $\pm$ 12.62} & N\/A & N\/A \\\n\midrule \n\multirow{4}{*}{Energy} & ODEN & 9.04 $\pm$ 10.14 & 6.14 $\pm$ 9.13 & 446.61 $\pm$ 304.91 & 1.04 $\pm$ 1.17 \\\n & HODEN & 0.08 $\pm$ 0.09 & \textbf{0.22 $\pm$ 0.17} & 211.02 $\pm$ 218.64 & 8.26 $\pm$ 9.60 \\\n & TRS-ODEN & \textbf{0.07 $\pm$ 0.09} & 0.53 $\pm$ 0.75 & \textbf{5.94 $\pm$ 4.00} & \textbf{0.03 $\pm$ 0.03} \\\n & TRS-HODEN & N\/A & 0.29 $\pm$ 0.18 & N\/A & N\/A \\\n\bottomrule \n\end{tabular}\n\end{table}\n\n\subsection{Experiment V: Learning the real-world dynamics} \label{sec:4.2}\nWe also conduct an experiment with real-world data from \cite{schmidt09} to \ntest whether the models can learn accurate dynamics and predict future behaviors in real-world problems. This dataset consists of a measured trajectory of coupled double oscillators, which is neither conservative nor reversible due to the damping, coupling, measurement errors, and other non-ideal effects. We use the first 3\/5 of the trajectory for training, and the remainder for testing. \n\n\begin{figure}[t]\n \centering\n \includegraphics[width=0.9\linewidth]{figure7.png}\n\vspace{2pt}\n\caption{\small {\textbf{Summary of Experiment V.} (a) Test trajectory MSEs across the models (left: mass 1 \/ right: mass 2).
(b-d) Test trajectories from the (b) ODEN, (c) HODEN, and (d) TRS-ODEN (solid: mass 1 \/ dashed: mass 2).}}\n\label{fig:7}\n\end{figure}\n\n\begin{wraptable}{r}{0.45\textwidth}\n\centering\n\vspace{-7pt}\n\caption{\small Summary of test MSEs of the real-world experiment (repeated 5 times). All MSE values are multiplied by $10^2$.}\n\label{table:2}\n\vspace{7pt}\n\small\n\begin{tabular}{ccc}\n\toprule\nModel & Mass 1 MSE & Mass 2 MSE \\\n\midrule\nODEN & 1.00 $\pm$ 0.22 & 0.37 $\pm$ 0.05 \\\nHODEN & 38.13 $\pm$ 2.16 & 32.60 $\pm$ 1.96 \\\nTRS-ODEN & \textbf{0.36 $\pm$ 0.06} & \textbf{0.15 $\pm$ 0.01} \\\n\bottomrule\n\vspace{-20pt}\n\end{tabular}\n\end{wraptable}\n\nWe use single-hidden-layer neural networks with 1,000 hidden units and $\texttt{tanh}$ activations for all models. Figure \ref{fig:7} and Table \ref{table:2} clearly show that the proposed TRS-ODEN outperforms the baselines, especially the HODEN. This reveals that 1) enforcing conservation may not be a good inductive bias for real-world data, and 2) guiding the time-reversal symmetry is helpful for model generalization.\n\n\subsection{Experiment VI: Learning the chaotic strange attractors} \label{sec:4.3}\nLearning and predicting strange attractors are challenging tasks due to their chaotic behaviors. The time-reversal symmetry inductive bias can be helpful for modeling strange attractors, considering that some of them are reversible when they consist of symmetric attractor\/repellor pairs \cite{sprott2015symmetric}. We evaluate our framework on such a reversible strange attractor, given by:\n\begin{equation} \label{eq:13}\n\frac{dx}{dt} = 1 + yz, \, \frac{dy}{dt} = -xz, \, \frac{dz}{dt} = y^2 + 2yz, \, x, y, z \in \mathbb{R}.\n\end{equation}\nNote that this system shows reversal symmetry with the non-trivial reversing operator $R: (x, y, z) \mapsto (-x, -y, -z)$.
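One can check this reversing operator directly: since $R$ is linear, $dR(\mathbf{x})\/dt = R\,d\mathbf{x}\/dt = -f(\mathbf{x})$, so the condition (\ref{eq:6}) reduces to $f(R(\mathbf{x})) = f(\mathbf{x})$, i.e., the vector field must be even under the sign flip. A small sketch of this check (our illustration):

```python
import numpy as np

# Vector field of Eq. (13) and the candidate reversing operator R(x, y, z) = (-x, -y, -z)
f = lambda s: np.array([1.0 + s[1] * s[2], -s[0] * s[2], s[1] ** 2 + 2.0 * s[1] * s[2]])
R = lambda s: -s

# Every term of f is even under the sign flip, so f(R(s)) == f(s) and Eq. (6) holds
s = np.array([0.3, -1.2, 2.0])
assert np.allclose(f(R(s)), f(s))
```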
We generate 1,000 and 50 trajectories of (\\ref{eq:13}) for the training and test datasets, respectively, sampling $z(t_0)$ uniformly from $[1, 3]$ while fixing $x(t_0) = y(t_0) = 0$. Even with fixed $x(t_0)$ and $y(t_0)$, this task is still an interesting challenge considering the chaotic behaviors of strange attractors, which are highly sensitive to the initial conditions \\cite{koppe2019identifying}. We set the trajectory lengths of both the training and test datasets to 400, with a regular time-step size of 0.05. We add Gaussian noise $0.05\\,n$, $n \\sim \\mathcal{N}(0, 1)$, to the training trajectories.\n\nWe use two-hidden-layer neural networks with 200 hidden units and $\\texttt{tanh}$ activations. Since it is not straightforward to set up Hamilton's equations for (\\ref{eq:13}), HODENs are not evaluated here. We set the mini-batch size to 1,024. We use the fourth-order RK method for $\\texttt{Solve}$. After the evaluation, we find TRS-ODENs consistently outperform the ODEN (see Figure \\ref{fig:8} (a) and Figure \\ref{fig:8} (c-h) for quantitative and qualitative analysis, respectively). In addition, we calculate the Lyapunov exponent \\cite{wolf1985determining}:\n\\begin{equation} \\label{eq:14}\n\\sigma_\\text{Lyapunov} = \\frac{1}{t_i - t_0} \\log \\frac{\\|\\delta\\mathbf{x}(t_i)\\|_2}{\\|\\delta\\mathbf{x}(t_0)\\|_2},\n\\end{equation}\nwhere $\\|\\delta \\mathbf{x}(t_i)\\|_2$ is the distance between two evolved states whose initial separation is infinitesimally small, i.e., $\\|\\delta\\mathbf{x}(t_0)\\|_2 \\to 0$. By definition, the Lyapunov exponent $\\sigma_\\text{Lyapunov}$ quantifies the sensitivity of a given dynamical system to its initial state. Thus, it is used to detect and investigate the characteristics of chaotic systems: generally, a larger positive $\\sigma_\\text{Lyapunov}$ means the system is more chaotic. 
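A minimal sketch (ours, not the paper's implementation) of the finite-time estimate in Eq.~(\ref{eq:14}): two nearby trajectories of Eq.~(\ref{eq:13}) are integrated with fourth-order Runge-Kutta, and the log of the separation growth is divided by the elapsed time. The initial offset $10^{-8}$ along $x$ is an arbitrary small choice.

```python
import numpy as np

def f(s):
    # Vector field of Eq. (13)
    x, y, z = s
    return np.array([1 + y * z, -x * z, y**2 + 2 * y * z])

def rk4_step(s, h):
    # One fourth-order Runge-Kutta step of size h
    k1 = f(s)
    k2 = f(s + h / 2 * k1)
    k3 = f(s + h / 2 * k2)
    k4 = f(s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def lyapunov_curve(s0, eps=1e-8, h=0.05, steps=400):
    """sigma(t_i) of Eq. (14) from two trajectories separated by eps at t_0."""
    a, b = s0, s0 + np.array([eps, 0.0, 0.0])
    sigma = []
    for i in range(1, steps + 1):
        a, b = rk4_step(a, h), rk4_step(b, h)
        sigma.append(np.log(np.linalg.norm(a - b) / eps) / (i * h))
    return sigma

# Step size and length follow the experimental setup; z(t_0) = 2, x = y = 0
sigma = lyapunov_curve(np.array([0.0, 0.0, 2.0]))
```

In practice, long-time estimates would renormalize the separation periodically (as in \cite{wolf1985determining}); this short sketch only illustrates the definition.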
Figure \\ref{fig:8} (b) shows that the time evolution of $\\sigma_\\text{Lyapunov}$ for test samples obtained from the TRS-ODEN matches well with that of the ground truth, while the ODEN underestimates $\\sigma_\\text{Lyapunov}$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{figure8.png}\n\\vspace{2pt}\n\\caption{\\small {\\textbf{Summary of Experiment VI.} (a) Test trajectory MSEs across the models. (b) Time evolutions of the Lyapunov exponents obtained from the ground truth, ODEN, and TRS-ODEN. (c-h) Sampled test trajectories in (c-d) $x$-$y$, (e-f) $x$-$z$, and (g-h) $y$-$z$ planes, predicted by the (c, e, g) ODEN and (d, f, h) TRS-ODEN.}}\n\\label{fig:8}\n\\end{figure}\n\\color{black}\n\n\\section{Conclusion} \\label{sec:5}\nIntroducing physics-based inductive biases into neural networks has been actively studied, e.g., ODEs \\cite{chen18}, Hamiltonians \\cite{greydanus19, sanchez19, toth19, zhong19, chen19}, and other domain knowledge \\cite{schutt17, schutt19, long18, nabian18}. We have proposed a simple yet effective approach to incorporate time-reversal symmetry into ODENs, coined TRS-ODEN, which has not been explored in previous works. The proposed method can learn dynamical systems accurately and efficiently. We have validated our proposed framework with various experiments, including non-conservative and irreversible systems. \n\nSeveral papers discuss the use of symmetry in neural networks. For example, rotational or reflection symmetries are frequently used in computer vision tasks \\cite{funk17, dieleman16, worrall17}. Some researchers have focused on finding symmetries using neural networks, especially in theoretical physics \\cite{decelle19, li2020, bondesan19}. Among them, \\cite{bondesan19, li2020} are closely related to our work because they discuss methods for finding a canonical transformation that satisfies the symplectic symmetry of Hamiltonian systems. 
Combining these approaches, i.e., finding symmetry, with our proposed framework, i.e., exploiting symmetry, would be an interesting direction for future work. \n\n\\section*{Broader Impact}\nIn this paper, we introduce a neural network model that is regularized by a physics-originated inductive bias: symmetry. Our proposed model can be used to identify and predict unknown dynamics of physical systems. In what follows, we summarize the expected broader impacts of our research from two perspectives.\n\n\\textbf{Use for current real-world applications.} Predicting dynamics plays an important role in various practical applications, e.g., robotic manipulation \\cite{hersch08}, autonomous driving \\cite{levinson11}, and other trajectory planning tasks. For these tasks, the predictive models should be highly reliable to prevent human and material losses due to accidents. Our proposed model has the potential to satisfy this high standard of reliability, considering its robustness and efficiency (see Figure \\ref{fig:4} as an example).\n\n\\textbf{First step toward a fundamental inductive bias.} According to the CPT theorem in quantum field theory, the CPT symmetry, which means invariance under the combined transformation of charge conjugation (C), parity transformation (P), and time reversal (T), holds exactly for all phenomena of physics \\cite{kostelecky98}. Thus, the CPT symmetry is a fundamental rule of nature: that is, it is a fundamental inductive bias of deep learning models for natural science. However, this symmetry-based bias has previously gone unnoticed. We study one such fundamental symmetry, time-reversal symmetry in classical mechanics, as a proof-of-concept in this paper. We expect our findings to encourage researchers to focus on the fundamental biases of nature and to extend the research from classical to quantum mechanics, and from time-reversal symmetry to CPT symmetry. 
Our work would also contribute to bringing together experts in physics and deep learning in order to stimulate interaction and to begin exploring how deep learning can shed light on physics.\n\n\\begin{ack}\nThe authors received no third party funding for this work.\n\\end{ack}\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Abstract}\nScale-adjusted metrics (SAMs) are a significant achievement of the urban scaling hypothesis. SAMs remove the inherent biases of \\textit{per capita} measures computed in the absence of isometric allometries. However, this approach is limited to urban areas, while a large portion of the world's population still lives outside cities and rural areas dominate land use worldwide. Here, we extend the concept of SAMs to population density scale-adjusted metrics (DSAMs) to reveal relationships among different types of crime and property metrics. Our approach allows all human environments to be considered, avoids problems in the definition of urban areas, and accounts for the heterogeneity of population distributions within urban regions. By combining DSAMs, cross-correlation, and complex network analysis, we find that crime and property types have intricate and hierarchically organized relationships leading to some striking conclusions. Drugs and burglary had uncorrelated DSAMs and, to the extent property transaction values are indicators of affluence, twelve out of fourteen crime metrics showed no evidence of specifically targeting affluence. Burglary and robbery were the most connected in our network analysis and the modular structures suggest an alternative to ``zero-tolerance'' policies by unveiling the crime and\/or property types most likely to affect each other.\n\n\n\\section*{Introduction}\nCrime is a long-standing problem for society and its understanding has challenged scientists from a wide range of disciplines. 
From a sociological perspective, crime is treated as a deviant behavior of individuals and the goal of sociologists is often to find the conditions that lead to or favor criminal behavior. There is a vast literature on the sociology of crime seeking to find such conditions. An example is the ``broken windows theory''~\\cite{Wilson1982} that correlates the incidence of crime with the existence of degraded urban environments. Despite the popularity and empirical support for this theory, there is a consensus that other factors than environment disorder are likely to affect or even have a greater influence on the incidence of crime. Situational action theory ~\\cite{Wikstrom,Wikstrom2} seeks to understand how an individual's life history and social conditions interact with settings encouraging crime. More recently, crime has been considered as a complex system~\\cite{Perc} where nonlinearities and self-organized principles create complex patterns that are difficult to understand and even harder to predict and control. This new perspective for studying crime and other social systems has been fostered by the availability of an unprecedented amount of data, making it possible to ask empirical questions that would have been considered unanswerable a few decades ago.\n\nIn the context of city-related metrics, researchers have recently promoted and made remarkable progress towards establishing the \\emph{urban scaling hypothesis}~\\cite{Pumain,Bettencourt,Arbesman,Bettencourt2,Bettencourt3,Mantovani,Gomez-Lievano,Bettencourt4,Lobo,Bettencourt5,Alves,Mantovani2,Alves2,OliveiraCO2,Alves3,Ignazzi,Louf2,Melo,RybskiCO2,Hanley,Rocha,Alves4,Hanley2,Pan,BettencourtN1,vanRaan,Youn,Fluschnik,Schlapfer,CaminhaHuman2017}. 
This theory states that cities are self-similar regarding their size as measured by population, meaning that several urban metrics (such as unemployment or a particular crime type) are expected to have a deterministic component that depends on the population of the city. The resulting scaling laws arise from only a few general assumptions about the properties of cities and should be universal across urban systems~\\cite{Bettencourt5}. A consequence of these scaling laws is that \\textit{per capita} measures are not appropriate for comparing urban units of different sizes and can exhibit biases favoring large or small cities depending on whether the relationship with the population is super or sublinear. In order to remove this bias, Bettencourt~\\textit{et al.}~\\cite{Bettencourt3,Lobo} proposed the scale-adjusted metric (SAM), which removes the deterministic component associated with the population of an urban area. The SAMs are simply defined as the residuals of the fit to a scaling relationship between indicator and population. Despite their simplicity, SAMs can capture the exceptionality of a city regardless of its size and have proved useful for unveiling relationships that are not observed in \\textit{per capita} measures~\\cite{Lobo,Gomez-Lievano,Alves2,Alves4}.\n\nThe urban scaling hypothesis is supported by a wealth of empirical evidence using a wide range of urban indicators from many countries. However, the hypothesis has also been criticized~\\cite{Masucci,Arcaute,Cottineau,Leitao} and one main criticism relates to the definition of the ``urban unit'' or city. Arcaute~\\textit{et al.}~\\cite{Arcaute} and Cottineau~\\textit{et al.}~\\cite{Cottineau} have shown that definitions of cities based on population density and commuter flows may lead to different observed scaling exponents. This challenges the idea that population size alone is responsible for the deterministic component of urban metrics and opens the possibility for other approaches. 
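Concretely, SAMs are obtained as residuals of an ordinary least-squares fit in log-log space. A minimal sketch with synthetic data (the exponent $\beta = 1.15$, prefactor, and noise level below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = rng.uniform(1e4, 1e6, size=500)  # synthetic city populations
# Synthetic superlinear indicator: Y = Y0 * N^1.15 with lognormal noise
Y = 0.01 * N**1.15 * np.exp(rng.normal(0.0, 0.3, size=500))

# Fit log Y = log Y0 + beta * log N by ordinary least squares
beta, logY0 = np.polyfit(np.log(N), np.log(Y), 1)

# SAMs: residuals around the fitted scaling law; a positive value flags
# a city above the expectation for its population size
Z = np.log(Y) - (logY0 + beta * np.log(N))
```

By construction the residuals have zero mean, so a SAM directly measures the exceptionality of a city relative to cities of comparable size.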
In a recent article~\\cite{Hanley2}, we argued that the relationship between an indicator density (\\textit{e.g.} crime per hectare) and population density can provide a far superior framework when compared with traditional population scaling. In particular, this density-based approach is capable of continuously analyzing all human environments, from the most rural to heavily urban systems, and identifies that some metrics display scaling transitions at high population density, which can enhance, inhibit or even collapse the scaling exponents.\n\nHere we further explore this density-based framework together with the scale-adjusted metrics approach to unveil relationships among different crime types and property values. Our approach extends the ideas of Bettencourt~\\textit{et al.}~\\cite{Bettencourt3,Lobo} by defining a density scale-adjusted metric (DSAM). In addition to removing the deterministic component, DSAMs enable the investigation of crime incidence and its relationships with property transaction values over the full range of human environments. Furthermore, by combining DSAMs, cross-correlation analysis, and complex network tools, we find that crime types have intricate and hierarchically organized relationships among themselves as well as with property values. Our approach reveals that these relationships are characterized by modular and sub-modular structures in which some crime types and\/or property types are more likely to affect each other. \n\n\\section*{Methods}\n\n\\subsection*{Data Sets}\nThe data set used in the present study is the same one we employed in Ref.~\\cite{Hanley2}, where it is described in detail and made freely available (it has also been provided with this paper as \\hyperref[S1_Dataset]{S1 Dataset}). Briefly, the data set consists of police-reported crimes, property transaction values, population size, and area for all 573 Parliamentary Constituencies in England and Wales. 
These data were collected on the UKCrimeStats (\\url{http:\/\/www.ukcrimestats.com\/}) data platform from different sources and subsequently reported as a snapshot since the data is regularly updated. Reported crimes are broken into 14 types while property data are categorized by 8 types (\\autoref{tab:1}). \n\n\\begin{table}[!ht]\n\\caption{\\textbf{Constituency data analyzed in this study.}}\n\\begin{tabular}{c|l}\n\\hline\n& Constituency metrics, $Y$ \\\\\n\\hline\n\\multirow{14}{*}{\\rotatebox[origin=c]{90}{\\parbox[c]{5cm}{\\centering Crime Types}}} & Anti-Social Behavior (ASB) \\cellcolor{gray!15} \\\\ \n& Bike Theft \\\\\n& Burglary \\cellcolor{gray!15} \\\\ \n& Criminal Damage and Arson (CD and A) \\\\\n& Drugs \\cellcolor{gray!15} \\\\\n& Order \\\\\n& Other Crime \\cellcolor{gray!15} \\\\\n& Other Theft \\\\\n& Robbery \\cellcolor{gray!15} \\\\\n& Shoplifting \\\\\n& Theft from the Person \\cellcolor{gray!15} \\\\\n& Vehicle Crime\\\\\n& Violence \\cellcolor{gray!15} \\\\\n& Weapons\\\\\n\\hline\n\\multirow{8}{*}{\\rotatebox[origin=c]{90}{\\parbox[c]{3cm}{\\centering Property Types}}} & Detached \\cellcolor{gray!15} \\\\\n& Flats \\\\\n& Freehold \\cellcolor{gray!15} \\\\\n& Leasehold \\\\\n& New \\cellcolor{gray!15} \\\\\n& Old \\\\\n& Semi-detached \\cellcolor{gray!15} \\\\\n& Terraced \\\\\n\\hline\n\\multirow{2}{*}{\\rotatebox[origin=c]{90}{\\parbox[c]{1cm}{\\centering }}} & Constituency population, $N$ \\cellcolor{gray!15} \\\\\n& Constituency area, $A$ \\\\\n\\hline\n\\end{tabular}\n\\label{tab:1}\n\\end{table}\n\n\\subsection*{Density Scaling Laws and Scale-Adjusted Metrics}\nWe start by revisiting the characterization of the density scaling laws previously described in Ref.~\\cite{Hanley2}. The usual approach for studying urban scaling is by investigating the relationship between a given urban indicator $Y$ and population $N$ in a system composed of several ``urban units'' (such as municipalities). 
This relationship is often well described by a power-law relationship defined as\n\\begin{equation}\\label{eq:allometry}\nY = Y_0 N^\\beta~~\\text{or its linearized version}~~\\log Y = \\log Y_0 + \\beta \\log N\\,,\n\\end{equation}\nwhere $Y_0$ is a constant and $\\beta$ is the power-law or allometric exponent. In this context, urban indicators are categorized into three classes depending on whether the value of $\\beta$ is equal (isometry), larger (superlinear allometry) or smaller (sublinear allometry) than 1. Metrics related to individual needs (\\textit{e.g.} household energy and water consumption) usually have isometric relationships with population, while sublinear allometric relationships are observed for infrastructure metrics (\\textit{e.g.} road surface and petrol stations) and superlinear allometric relationships appear for social, economic and health metrics (\\textit{e.g.} crime, unemployment, and AIDS cases)~\\cite{Bettencourt}. Thus, urban indicators have (in general) a nonlinear deterministic component associated with population. For a given city, this means that the value of a particular urban metric is expected to depend on the city's population in a nonlinear deterministic fashion.\n\nA direct consequence of these nonlinearities is that \\textit{per capita} measures are efficient in correctly removing the effect of population size in an urban metric only if the metric has an isometric relationship with the population. Otherwise, \\textit{per capita} measures will be biased towards large populations (for superlinear allometries) or small populations (for sublinear allometries)~\\cite{Alves4}. Consequently Bettencourt \\textit{et al.}~\\cite{Bettencourt3} defined the so-called \\textit{scale-adjusted metric} (SAM). 
This metric consists of calculating the logarithmic difference between the actual value of an urban indicator and the value expected from the allometric relationship with population (Eq.~\\ref{eq:allometry}); mathematically, we have (for the $i$-th city)\n\\begin{equation}\\label{eq:SAM}\nZ_i = \\log Y_i - [\\log Y_0 + \\beta \\log N_i]\\,.\n\\end{equation}\nIt is worth noting that the scale-adjusted metric, $Z_i$, is the residual of an observation with respect to the power law defined by Eq.~\\ref{eq:allometry}. The values of $Z_i$ capture the ``exceptionality'' of individual cities regarding a particular metric such that a positive\/negative SAM indicates the metric is above\/below the expectation for a city of that population. \n\nThis approach has been successfully employed in economic and social contexts~\\cite{Podobnik,Lobo,Alves2,Alves4}, revealing relationships among metrics in urban systems which cannot be properly identified by \\textit{per capita} measures alone. In spite of their success, SAMs naturally share the same limitations of urban scaling. As previously mentioned, the allometric exponent depends on the definition of the ``urban unit'', and the urban scaling hypothesis is limited to urban areas by construction. On the one hand, the proportion of the world's population living in urban areas has been systematically increasing over the past decades and is currently around 54\\%~\\cite{UN}. On the other hand, the urbanization process is not uniform across all countries: there are countries where almost all the population is urban (such as Belgium and Uruguay, where the proportion of urban population is larger than 95\\%) while others are predominantly rural (such as India with 33\\% of urban population and Trinidad and Tobago with only 9\\%)~\\cite{WorldBank}. Furthermore, in countries where most of the population is urban, rural areas may represent the vast majority of the countries' land. 
The United Kingdom is one such country with a population that is 83\\% urban but rural areas cover 85\\% of the land~\\cite{GovtStatServ}. Thus, it is important to develop a framework capable of investigating the full range of human environments.\n\nPreviously, we proposed an approach for taking these problems into account~\\cite{Hanley2}. Our idea was to analyze scaling relationships between an indicator density and population density over all 573 parliamentary constituencies of England and Wales, regions that range in population density from very rural ($0.22$~p\/ha) to heavily urban ($550.3$~p\/ha). In place of Eq.~\\ref{eq:allometry}, we considered the following generalization (see also~\\cite{hanley2016correction})\n\\begin{equation}\\label{eq:allometry_den}\n\\log y =\n\\begin{cases}\n\\log y_0 + \\beta_L \\log d & \\text{for}~~d < d_t\\,,\\\\\n\\log y_0 + (\\beta_L - \\beta_H) \\log d_t + \\beta_H \\log d & \\text{for}~~d \\geq d_t\\,,\n\\end{cases}\n\\end{equation}\nwhere $y$ is the indicator density, $d$ is the population density, $d_t$ is a transition density, and $\\beta_L$ and $\\beta_H$ are the scaling exponents below and above the transition. In analogy with Eq.~\\ref{eq:SAM}, we define the density scale-adjusted metric (DSAM) of a constituency as the residual of its indicator density with respect to this relationship. To quantify possible nonlinear dependence between pairs of DSAMs, we compare a dependence measure $\\mathcal{M}_{ij}$ with the squared Pearson correlation coefficient $\\rho_{ij}^2$, averaging the difference over all pairs of DSAMs ($i>j$). We further calculate this average after randomly shuffling the DSAMs among the constituencies and for a set of uniform random variables with size equal to the number of constituencies. \\hyperref[S4_Fig]{S4 Fig} shows that the average of the difference $\\mathcal{M}_{ij} - \\rho_{ij}^2$ for the original DSAM set is small ($0.09\\pm0.06$) and not significantly different from the averages calculated from the shuffled DSAMs and random variables. We have also tested the linearity of the DSAM relationships by comparing the AIC (Akaike information criterion~\\cite{Burnham}) values of linear models fitted to these relationships with those obtained from quadratic and cubic models. To do so, we bootstrap the AIC values among all possible pairs of DSAMs and test whether the difference is significant using the two-sample bootstrap mean test. Results show that quadratic relationships are better descriptions (compared with linear) in only 8\\% of all pairwise relationships; similarly, cubic relationships are better models in only 10\\% of cases. 
Therefore, in addition to removing the effect of population density, the DSAMs from each type of metric are also linearly correlated with each other. \n\nFigure~\\ref{fig:4} shows the correlation matrix $\\rho_{ij}$ for every possible pair of DSAMs ($i$ and $j$). In order to better understand these inter-relationships, we define the ultrametric distance matrix $d_{ij}=\\sqrt{2(1-\\rho_{ij})}$ for applying the single-linkage clustering algorithm~\\cite{mantegna1999introduction}, yielding the dendrograms shown in Figure~\\ref{fig:4}. Several conclusions are clear from inspection of this figure: \n\\begin{itemize}\n\n\\item For all property types there is a positive correlation in property transaction value DSAMs with those of all other property types. Most were very strong, with many above $0.7$ and values reaching $0.93$ (old vs. freehold). Positive correlations indicate the tendency for high values of one property type to be associated with high values in all other property types.\n\n\\item All crime types are positively correlated with all other crime types, with some strong correlations (\\textit{e.g.} 0.73 for anti-social behavior vs. criminal damage and arson -- ASB vs. CD and A). In contrast to property types, the correlations among crime metrics were not as strong and some were very weak with insignificant correlation (\\textit{e.g.} 0.02 for vehicle crime vs. drugs).\n\n\\item The only anti-correlations seen are between crime and property DSAMs. This gives rise to the blue regions in the upper right and lower left parts of Figure~\\ref{fig:4}. Anti-correlations indicate the tendency for a positive property DSAM to be associated with a negative crime DSAM (\\textit{e.g.} a high property value DSAM is associated with low crime). The majority of crime vs. property DSAMs are anti-correlated, which demonstrates a tendency for crime to be associated with depressed property transaction values. 
The three strongest predictors of depressed property value DSAMs were criminal damage and arson (CD and A), anti-social behavior (ASB), and weapons with old and freehold properties most affected. This does not prove crime as the causative agent, but does demonstrate the association over a wide range of indicators. \n\n\\item Two crime types (theft from the person and bike theft) exhibited positive crime vs. property correlations. This is a good example to illustrate that one has to be careful when trying to associate causal relationships to these correlations. If taken literally, one could absurdly think that to improve property values, we must encourage bike theft and theft from the person. A more logical explanation is that these two crime types tend to rise in regions of relative affluence, assuming that property transaction value DSAMs are metrics of relative affluence. Again, this does not prove causation, however, it does make clear that it is only these 2 (out of 14) crime types which show any evidence of being attracted to or specifically targeting affluence.\n\n\\end{itemize}\n\n\\begin{figure*}[!ht]\n\\begin{adjustwidth}{-2.25in}{0in}\n\\begin{center}\n\\includegraphics[scale=0.56]{fig4.pdf}\n\\end{center}\n\\caption{\n\\textbf{Crime and property DSAMs are cross-correlated and form a hierarchical structure.} The matrix plot shows the value of the Pearson correlation coefficient ($\\rho_{ij}$) evaluated for each combination of crime and property DSAM ($i$ and $j$). The number inside each cell is the coefficient value and the color code also refers to $\\rho_{ij}$ (blue indicates negative correlation, while red is used for positive correlations; the darker the shade, the stronger the correlation). The insets indicated by arrows show examples of relationships among crime and property DSAMs. 
Upper and right-side panels are dendrograms constructed via the hierarchical clustering algorithm (based on the distance $d_{ij}=\\sqrt{2(1-\\rho_{ij})}$).\n}\n\\label{fig:4}\n\\end{adjustwidth}\n\\end{figure*}\n\nThe hierarchical clustering behavior reinforced many of these conclusions. We note the emergence of two main clusters setting apart crime and property metrics. In the property data, new property appears isolated from the remaining property types. This is a striking result because, with the exception of old property, every property category examined can include new properties as the classifications are not exclusive. Within the crime metrics, there is a sub-cluster consisting of robbery, burglary, and vehicle crime distinct from other crime types. The remaining crime types form a separate group with an important sub-cluster consisting of anti-social behavior (ASB), criminal damage and arson (CD and A), and violence. Interestingly, despite considerable discussion of drugs and burglary in the literature~\\cite{Cromwell, Kuhns2017}, drugs and burglary crime reports are uncorrelated in our data. This discrepancy may be due to the design of many previous studies in which convicted offenders are surveyed. It is likely that drug use contributes to burglars being apprehended and convicted. Hence, the subset of all burglars composed of known offenders may not be representative of burglars in general. In our data, drugs crime reports are much more strongly associated with reports of order and weapons offenses. \n\n\\subsection*{DSAM Networks}\nAnother approach for probing patterns in the complex inter-relationships among crime and property metrics is to create a complex network representation~\\cite{newman2010networks,albert2002statistical}. The hierarchical classification was able to distinguish the difference between the crime and property metrics clearly and also identify subcategories. 
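In outline, the dendrogram construction above converts correlations to ultrametric distances and applies single-linkage clustering. A sketch with placeholder data (random values stand in for the actual DSAMs; this is not the authors' code):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Placeholder DSAM matrix: 573 constituencies x 22 metrics (14 crime + 8 property)
rng = np.random.default_rng(2)
X = rng.normal(size=(573, 22))

rho = np.corrcoef(X, rowvar=False)                # Pearson correlations among metrics
d = np.sqrt(np.maximum(2.0 * (1.0 - rho), 0.0))   # ultrametric distance d_ij

# SciPy expects the condensed upper-triangular distance vector
iu = np.triu_indices_from(d, k=1)
Z = linkage(d[iu], method="single")               # single-linkage dendrogram
```

The resulting linkage matrix can be drawn with `scipy.cluster.hierarchy.dendrogram`, which is how panels like those of Figure 4 are typically rendered.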
This representation works well for positive correlations, but fails to capture the important negative correlations between certain types of crime and property. In addition, the two-dimensional grid structure limits the number of neighbors that can be placed adjacent to a particular category, and the dendrogram does not account for the strength or significance of the correlations. Furthermore, complex networks (or spaces) have already proven quite useful for understanding how several socioeconomic phenomena are related to each other~\\cite{hidalgo2007product,brascoupe2010causes,neffke2011regions,caldarelli2012network,neffke2013skill,muneepeerakul2013urban,yildirim2014using}.\n\nIn order to build these complex networks, we bootstrap the Pearson correlation, $\\rho_{ij}$, for every pair of metrics (over one thousand realizations), identifying those that are statistically significant at the 99\\% confidence level. The significant correlations are shown in~\\hyperref[S6_Fig]{S6 Fig}, where we can individually visualize the effect of all crime and property categories on a particular one. Next, we group all pairs of metrics having significant positive correlations to create the weighted complex network of Figure~\\ref{fig:5}A. In this representation, the vertices are crime and property categories, the edges indicate the existence of significant positive correlations, and the edge weights are the correlation values. \n\n\\begin{figure*}[!ht]\n\\begin{adjustwidth}{-2.25in}{0in}\n\\begin{center}\n\\includegraphics[scale=0.22]{fig5.pdf}\n\\end{center}\n\\caption{\n\\textbf{Network of DSAMs that are positively correlated.} (a) Complex network representation of the positive connections among crime and property DSAMs. Each node is a crime or property type and the connection between two nodes occurs whenever there is a statistically significant correlation between their DSAMs (based on bootstrapping the Pearson correlation and 99\\% confidence). 
Each connection is weighted by the Pearson correlation coefficient and the thickness of the edges is proportional to the connection weight. Node sizes are proportional to their degrees and the color code also refers to node degree. A modular structure composed of two modules (one with all property metrics and a second with all crime metrics) is identified by maximizing the network modularity (yielding $M=0.47$ for the original network and $\\langle M_{\\text{rand}}\\rangle=0.12\\pm0.01$ for a set of randomizations of the original network). Edges highlighted in blue are those connecting the two modules. (b) Characterization of nodes based on the within-module connectivity ($W$) and participation coefficient ($P$). Each dot in the $W$-$P$ plane corresponds to a crime or property type. All nodes are classified as ultraperipheral ($R1$) or peripheral ($R2$); in particular, the majority of nodes has zero participation coefficient (that is, has only within-module links) and only the six nodes in the $R2$ region have between-module connections. (c) Modular structure of the sub-graph related to the crime metrics. For this case, two modules (colored in purple and green) are found by maximizing the network modularity ($M=0.14$ and $\\langle M_{\\text{rand}}\\rangle=0.06\\pm0.01$). (d) Role discrimination of crime nodes by the $W$-$P$ plane. We note that all nodes are in the peripheral region ($R2$). Drugs, order, and anti-social behavior (ASB) crime types are the most peripheral; robbery and burglary have the largest $P$, and criminal damage and arson (CD and A) has the largest $W$.\n}\n\\label{fig:5}\n\\end{adjustwidth}\n\\end{figure*}\n\nWe apply the network cartography of Guimer\u00e0 and Amaral~\\cite{guimera2005functional,guimera2005cartography} to extract the network modules and classify nodes according to their within- ($W$, in standard score units) and between-module connectivity (or participation coefficient $P$, a fraction). 
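These two quantities can be sketched compactly as follows (our illustrative implementation on a toy graph; the full cartography also includes role thresholds on the $W$-$P$ plane, omitted here):

```python
import numpy as np

def cartography(A, modules):
    """Within-module degree z-score W and participation coefficient P
    for a symmetric binary adjacency matrix A and an array of module labels."""
    A = np.asarray(A, dtype=float)
    modules = np.asarray(modules)
    k = A.sum(axis=1)                    # total node degree
    W = np.zeros(len(A))
    P = np.ones(len(A))
    for m in np.unique(modules):
        idx = modules == m
        kin = A[:, idx].sum(axis=1)      # links of each node into module m
        mu, sd = kin[idx].mean(), kin[idx].std()
        W[idx] = (kin[idx] - mu) / sd if sd > 0 else 0.0
        ratio = np.divide(kin, k, out=np.zeros_like(k), where=k > 0)
        P -= ratio ** 2                  # P_i = 1 - sum_m (k_im / k_i)^2
    P[k == 0] = 0.0                      # isolated nodes: P = 0 by convention
    return W, P

# Toy example: two triangles joined by a single bridge between nodes 2 and 3
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
W, P = cartography(A, np.array([0, 0, 0, 1, 1, 1]))
```

In the toy graph, only the two bridge nodes have nonzero $P$; all other nodes are ultraperipheral, mirroring the $R1$/$R2$ distinction used in the figure.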
This approach yields the same two main modules observed in the hierarchical clustering, that is, a crime and a property module. We assess the significance of this modular structure by comparing the network modularity $M$ (the fraction of within-module edges minus the fraction expected by random connections~\\cite{guimera2005functional,guimera2005cartography,newman2004finding,newman2004fast}) with the average modularity $\\langle M_{\\text{rand}}\\rangle$ of randomized versions of the original network~\\cite{guimera2004modularity}. For these modules, we have $M=0.47$ and $\\langle M_{\\text{rand}}\\rangle = 0.12 \\pm 0.01$, showing that the modular structure cannot be explained by chance. Figure~\\ref{fig:5}B shows a classification of the crime and property categories based on the $W$-$P$ plane (within-module connectivity vs. between-module connectivity). We note that most metrics have $P=0$, that is, these metrics only have within-module connections (ultraperipheral nodes $R1$ according to~\\cite{guimera2005functional,guimera2005cartography}). Weak positive correlations exist between the crime types bike theft and theft from the person and the property categories flats, leasehold, new, and terraced. Within each module, we find violence and other theft to be the most connected categories in the crime module, while old and freehold are the most connected types in the property module. These crime and property types are expected to have the largest positive impact on their modules, meaning that an increase\/decrease in their DSAM values correlates with an increase\/decrease in several other types within their modules.\n\nWe also ask if these modular structures can be broken into sub-modules. To answer this question, we apply the network cartography to the two sub-graphs composed of the crime and property modules. For the property module, no significant sub-modular structure could be found ($M=0.12$ and $\\langle M_{\\text{rand}}\\rangle = 0.12 \\pm 0.05$). 
For the crime module, the sub-modular structure shown in Figure~\\ref{fig:5}C is significant ($M=0.14$ and $\\langle M_{\\text{rand}}\\rangle = 0.06 \\pm 0.01$). We note the existence of two modules: one (on the left) is dominated by acquisitive types of crime and consists of theft from the person, other theft, robbery, burglary, and vehicle crime; the other contains all remaining categories. We also find that these sub-modules cannot be broken into statistically significant smaller structures. The role discrimination of crime nodes based on the $W$-$P$ plane is shown in Figure~\\ref{fig:5}D, where all nodes are classified as peripheral nodes ($R2$ -- see~\\cite{guimera2005functional,guimera2005cartography}), which reflects the entanglement among crime types. In spite of that, we find burglary and robbery to be the most interconnected categories (that is, having the largest $P$), while anti-social behavior (ASB), drugs and order are the most ``local'' categories. Naturally, correlation does not imply causation and our analysis must be viewed as a first alternative proposal for investigating the inter-relationships among different crime types. Taking these points into account, our approach suggests that policies focused on reducing burglary and robbery are more likely to ``spread'' over other crime types than those focused on categories such as anti-social behavior (ASB), drugs and order. This result suggests that actions such as ``zero-tolerance'' policies against minor crimes with lower participation and connectedness are unlikely to have a strong positive impact on reducing more serious crimes when compared with policies focused on more entangled crime types. \n\nAnalogously to the previous case, we investigated the network of negative correlations. 
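In both networks, an edge is drawn only when the correlation between two DSAM series is statistically significant under a bootstrap of the Pearson coefficient at 99\% confidence (as stated in the figure captions). A minimal sketch of such a test — an assumed, simplified version of the procedure, with illustrative function and variable names:

```python
# Bootstrap significance test for a Pearson correlation (assumed edge rule):
# an edge is kept when the 99% bootstrap confidence interval excludes zero.
import numpy as np

def significant_correlation(x, y, n_boot=2000, conf=0.99, seed=0):
    """Return (r, significant) for two metric series of equal length."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)

    def pearson(a, b):
        a, b = a - a.mean(), b - b.mean()
        return (a @ b) / np.sqrt((a @ a) * (b @ b))

    r = pearson(x, y)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)      # resample constituencies with replacement
        boots.append(pearson(x[idx], y[idx]))
    lo, hi = np.quantile(boots, [(1 - conf) / 2, (1 + conf) / 2])
    return r, bool(lo > 0 or hi < 0)     # CI excludes zero -> significant edge
```

The sign of $r$ then decides whether the edge belongs to the positive- or negative-correlation network, and $|r|$ sets its weight.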
In this representation, we connect every crime and property type displaying significant negative (or anti-) correlations and the edge weights are proportional to the absolute value of these correlations. Figure~\\ref{fig:6}A shows that this network has a very distinct structure, where crime types are never connected to each other and the same occurs among property types. This means that an increase\/decrease in DSAM for a particular crime does not correlate with a decrease\/increase in DSAM for any other crime category. The same holds for property types. Thus, an increase\/decrease of DSAMs for crime types is only correlated with a decrease\/increase of DSAMs for property categories, illustrating that criminal activities have an important role in the depreciation process of property values. Interestingly, bike theft and theft from the person deviate from this behavior and have no significant negative correlations with any other metric.\n\n\\begin{figure*}[!ht]\n\\begin{adjustwidth}{-2.25in}{0in}\n\\begin{center}\n\\includegraphics[scale=0.16]{fig6.pdf}\n\\end{center}\n\\caption{\n\\textbf{Network of DSAMs that are negatively correlated.} (a) Complex network representation of the negative correlations among crime and property DSAMs. Each node is a crime or property type and the connection between two nodes occurs whenever there is a statistically significant anti-correlation between their DSAMs (based on bootstrapping the Pearson correlation and 99\\% confidence). Node sizes are proportional to their degrees and the color code also refers to node degree. Each connection is weighted by the absolute value of the Pearson correlation coefficient and the thickness of the edges is proportional to the connection weight. (b) Modular structure of the negatively correlated network. Two modules are identified by maximizing the network modularity ($M=0.13$ and $\\langle M_{\\text{rand}}\\rangle=0.07\\pm0.02$) and are colored in purple and green. 
(c) Role discrimination of nodes by the $W$-$P$ plane (within-module connectivity versus participation coefficient). We note that all nodes are in the peripheral region ($R2$). (d) Modular structure of the sub-graphs related to the two modules of (b). One of the modules can be divided into two sub-modules that have been colored with purple shades ($M=0.15$ and $\\langle M_{\\text{rand}}\\rangle=0.06\\pm0.02$) and the other yields three sub-modules that are colored with green shades ($M=0.14$ and $\\langle M_{\\text{rand}}\\rangle=0.08\\pm0.02$). These sub-modular structures reveal that some property types have their values more depreciated by specific crime types.\n}\n\\label{fig:6}\n\\end{adjustwidth}\n\\end{figure*}\n\nWe also apply the network cartography to the network of negative correlations, finding that it can be broken into two significant modules ($M=0.13$ and $\\langle M_{\\text{rand}}\\rangle=0.07\\pm0.02$ -- Figure~\\ref{fig:6}B). One module is composed of detached, freehold, and semi-detached property types as well as seven crime categories (drugs, order, other crime, other theft, robbery, shoplifting, and violence). The other module is formed by flats, leasehold, new, old, and terraced properties surrounded by the remaining seven crime categories. Figure~\\ref{fig:6}C shows the role discrimination of nodes based on the $W$-$P$ plane. As in the sub-modular structure of crime metrics (Figures~\\ref{fig:5}C~and~\\ref{fig:5}D), all nodes in the network of negative correlations are classified as peripheral nodes ($R2$). This result reinforces the interconnectedness of this network, indicating that it is very hard to find crime types having a very uneven impact on property values. 
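The modularity-significance test used throughout this section (observed $M$ versus $\langle M_{\text{rand}}\rangle$ over degree-preserving randomizations) can be sketched as follows. This is a simplified, assumed stand-in: it scores a known (planted) partition instead of running the full cartography of Guimer\u00e0 and Amaral, and the toy graph merely mimics a two-module structure:

```python
# Sketch of the modularity-significance test: compare the modularity M of the
# observed partition with the same quantity after degree-preserving
# randomization (double-edge swaps). Pure-Python stand-in for the cartography
# method cited in the text; a planted partition replaces community detection.
import random

def modularity(edges, part):
    """Newman modularity of a partition (node -> community label)."""
    m = len(edges)
    deg, internal = {}, {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
        if part[a] == part[b]:
            internal[part[a]] = internal.get(part[a], 0) + 1
    M = 0.0
    for c in set(part.values()):
        d_c = sum(d for n, d in deg.items() if part[n] == c)
        M += internal.get(c, 0) / m - (d_c / (2 * m)) ** 2
    return M

def rewire(edges, n_swaps, seed=0):
    """Degree-preserving double-edge swaps (destroys modular structure)."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = set(frozenset(e) for e in edges)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if new1 in present or new2 in present:
            continue  # would create a duplicate edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {new1, new2}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

# Toy network: two 8-cliques (standing in for the crime and property
# modules) joined by a single bridge edge.
clique = lambda nodes: [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]
edges = clique(list(range(8))) + clique(list(range(8, 16))) + [(0, 8)]
part = {n: n // 8 for n in range(16)}

M = modularity(edges, part)
M_rand = sum(modularity(rewire(edges, 300, seed=s), part) for s in range(10)) / 10
```

A structure is deemed significant when $M$ sits well above $\langle M_{\text{rand}}\rangle$, exactly as reported for each network and sub-graph above.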
\n\nIn spite of these conditions and remembering that our analysis must be viewed as a first step toward a better understanding of the inter-relationships among crime and property types, we observe that detached, old, semi-detached and freehold property types have the largest values of $P$ and $W$. This result suggests that these properties are the most susceptible to having their values depreciated by criminal activities. We also note that anti-social behavior (ASB), criminal damage and arson (CD and A), violence, and weapons have the largest values of $P$, suggesting that these crime types exhibit a distinct influence on the property values; criminal damage and arson (CD and A) also has a large value of $W$, indicating that this crime category has an influence both over its module and over the other module. The most ``local'' crime categories are order and other theft (smallest values of $P$), indicating that they have an important impact only on the property values of their module. Similarly, flats and new properties have the smallest $P$ among property types, suggesting that these properties are most affected by crime types belonging to their module. \n\nWe tested for additional structure and found that the modules could be broken into the sub-modules shown in Figure~\\ref{fig:6}D. The sub-graph composed of the module on the left of Figure~\\ref{fig:6}B yields two sub-modules ($M=0.15$ and $\\langle M_{\\text{rand}}\\rangle=0.06\\pm0.02$), while the module on the right of Figure~\\ref{fig:6}B yields three sub-modules ($M=0.14$ and $\\langle M_{\\text{rand}}\\rangle=0.08\\pm0.02$). Each of these sub-modules is composed of one or two property types and from one (the one composed of burglary and terraced) to four crime categories (the one composed of detached, freehold, drugs, other crime, robbery, and violence). 
It is not easy to explain such groups or to claim that these sub-modular structures are very meaningful since the original network and its modular structure are very entangled (which is quantified by the small values of the modularity $M$). However, the statistical significance of these structures suggests that the depreciation process of property values caused by criminal activities is hierarchically organized.\n\n\\section*{Conclusion}\nThis study advances our understanding of the inter-relationship between police-reported crime and property transaction values using density scale-adjusted metrics. When the trend attributable to population density is removed using allometric scaling laws, the resulting metrics more effectively compare constituencies. This study reaches a number of important conclusions.\n\nIndividual categories of DSAMs may appear to exhibit no trends and be consistent with a normal distribution; however, when looking only at single indicators, important and significant correlations remain unobserved. In the current study, DSAMs were observed to exhibit significant positive and negative correlations with a host of other metrics.\n\nCorrelations between DSAMs from different crime indicators revealed universally positive correlations with every other crime indicator. Similarly, density scale-adjusted metrics for property transaction values were positively correlated with all other property types. These results indicate that at the level of parliamentary constituencies an increase in the DSAM for one type of crime predicts an increase in all other types of crime. It should be noted that DSAMs will account for general rises and falls in crime across all scales. Thus, a decrease in absolute numbers does not mean the scale-adjusted metric will decrease.\n\nWith the exceptions of bike theft and theft from the person, crime and property DSAMs are negatively correlated. 
This means that as a general rule, an increase in the DSAM of crime is associated with a decrease in the value of property transactions. Two crime categories exhibit a particularly strong effect: anti-social behavior (ASB) and criminal damage and arson (CD and A). This indicates that in our data twelve out of fourteen crime types show no evidence of crime targeting affluence. Our network approach also revealed that crime and property DSAMs form hierarchically-organized structures with statistically significant modular and sub-modular structures. These structures represent the crime and\/or property categories that are more likely to affect each other. Consequently, such groups may help policy-makers to design more effective actions for reducing crime incidence, with the advantage of having an approach that works over the full range of human environments. \n\n\\clearpage\n\n\n\\section*{Author Contributions}\nH.V.R. and Q.H. conceived and designed the study, carried out the statistical analysis and drafted the manuscript; D.L. conceived the study and helped draft the manuscript. All authors gave final approval for publication.\n\n\\section*{Competing interests}\nDan Lewis is the Chief Executive of the Economic Policy Centre (EPC) and Director of UKCrimeStats. Views discussed in the manuscript do not represent the views or positions of UKCrimeStats and EPC. There are no patents, products in development or marketed products to declare. This does not alter our adherence to PLOS ONE policies on sharing data and materials. \n\n\\section*{Funding}\nHVR acknowledges the financial support of CNPq (grant 440650\/2014-3). 
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\n\\section*{Data Availability}\nAll necessary data to fully reproduce the results of this article are public, freely available, and have been provided as a spreadsheet for publication with the paper.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSilicon spin qubits are a highly promising candidate for scalable quantum computation.\nHigh-fidelity spin measurement with integrated charge sensing up to relatively high temperature, fast ESR or EDSR manipulation for one- and two-qubit gates, and long coherence times in isotopically purified silicon all point towards the high potential of the silicon platform for solid-state qubits \\cite{yoneda2018quantum, DzurakReflecto, Huang2019, veldhorst2015two, huang2021}.\nAdditionally, industrial expertise in CMOS semiconductor fabrication provides a clear path towards mass production of nanoscale qubit devices \\cite{Veldhorst2017, schaal2019cmos}.\nSpin readout and manipulation have been demonstrated in CMOS quantum dots with high fidelity \\cite{Urdampilleta2019, corna2018, crippa2019}.\nHowever, characterization of the spin physics in these types of devices remains an open problem, with indications of local disorder and variability across similar devices \\cite{ciriano2021spin}.\n\nIn this letter, we present the measurement of spin relaxation in CMOS quantum dots fabricated on a foundry-compatible 300 mm wafer.\nWe first show how we can rapidly probe the spin using an energy-selective readout with more than $90\\%$ readout fidelity.\nSecondly, we explore the dynamics of spin relaxation in the system and the coupling of spin and valley states, by measuring the spin lifetime as a function of magnetic field strength and direction.\nFinally, we investigate the charge noise in this system by performing spin-valley 
relaxometry.\n\n\n\\begin{figure}%\n\\includegraphics[width=\\columnwidth]{FIG_1.pdf}\n \n \\caption{\n (a) Schematic of a CMOS-fabricated nanowire-type device identical in structure to the device used here.\nThe polysilicon channel (yellow) connects two electron reservoirs labelled $S$ and $D$.\nThe electrostatic environment of the channel is controlled by two front gates, $G_1$ and $G_2$, which are isolated from the channel by $\\SI{6}{nm}$ of SiO$_2$ and $\\SI{5}{nm}$ of TiN.\nA metallic top gate is positioned $\\SI{400}{nm}$ above the channel in the region indicated by the white dashed lines and is biased to $+\\SI{2}{V}$ to increase the coupling between the dots.\nFinally, the silicon bulk below the buried oxide is polarized and used as a back gate ($+\\SI{5}{V}$).\n(b) Stability diagram of the first electron transition.\nThe electron occupancy of the qubit dot under $G_1$ is indicated.\nThe bright regions of current correspond to Coulomb peaks where transport is possible through the sensor dot under $G_2$.\nThe addition of an electron to the qubit dot causes a shift in the potential of the sensor, indicated by the sharp discontinuity. 
\n(c) Representative time traces of the current through the sensor dot $I_{SD}$ during spin measurement.\nIf a spin-up electron is loaded, it is able to rapidly tunnel out of the dot, causing a transient shift in the sensor current as the dot is briefly emptied, indicated in blue.\nIf a spin-down electron is loaded, it remains in the qubit dot and the sensor current does not change, indicated in orange.\nA threshold current $I_{thr}$ is defined to distinguish between the two states.\n(d) State fidelity analysis at an optimized measurement point.\nA histogram of the maximum current of more than $1000$ measurements is binned (blue).\nThe black dashed line is the total distribution obtained from simulation of more than 10 000 sample traces using experimental parameters.\nInset: The individual state fidelity is calculated for varying threshold current levels, with $F(\\left | \\uparrow \\right \\rangle)$ in blue and $F(\\left | \\downarrow \\right \\rangle)$ in orange.\nThe product of these, the state visibility $V$, is plotted in green.\n}\n \\label{fig:device}\n \\end{figure}\n\nThe two devices we present here are depicted in Fig.~\\ref{fig:device}(a).\nThey consist of a pair of split front gates, of length $\\SI{50}{nm}$ and separated by $\\SI{50}{nm}$, which lie on a silicon nanowire of width $\\SI{90}{nm}$ and thickness $\\SI{15}{nm}$. 
\nTwo electron reservoirs are formed by ion implantation for the first device and in situ growth of degenerate n-doped silicon for the second one.\nThey have been measured at low temperature, between $200$ and $350~\\si{\\milli\\kelvin}$.\nAt this temperature, a QD is formed under each front gate, $G_1$ and $G_2$.\nWe operate the device as follows: the top quantum dot (QD) is used to trap a single charge and the bottom one as a charge sensor (CS).\nFigure \\ref{fig:device}(b) shows the stability diagram with respect to $V_{G1}$, $V_{G2}$.\nShifts in the Coulomb peaks indicate that it is possible to detect the first electron entering the top QD, using the CS.\nWhen the QD is depleted to the few-electron regime, the current passes only through the CS and electrons are loaded onto QD from CS via the inter-dot tunnel barrier.\nWe operate the device at the transition signalling the addition of the first electron in QD.\n\n\\section{Single shot readout of a single electron spin}\n\nTo read out the spin state, we exploit a spin-to-charge conversion based on energy-selective tunneling \\cite{morello2010}.\nFor this purpose, at finite magnetic field, we first load QD with a single electron and then pulse the chemical potential to a position where only the higher-energy spin state can tunnel out of the dot.\nTherefore, when the electron is in an up-spin state, we observe a signal click, characteristic of the tunneling out of an up-spin electron, followed by the tunnelling in of a down-spin electron, depicted in Fig. \\ref{fig:device}(c) (blue curve). Conversely, for a spin-down electron, the signal is constant.\nIt is important to note that similarly to Ref.~\\cite{morello2010}, in both devices, the loading and unloading of QD is achieved through the CS and not through the reservoir.\nThis single shot detection of the spin state is further analysed by setting the detection threshold at the point of maximum visibility.\nTo find this point, we build the histogram presented in Fig. 
\\ref{fig:device}(d), where we bin the maximum of each measurement trace, which lasts $1~\\si{\\milli\\second}$ at a sampling rate of $50~\\si{\\kilo\\hertz}$.\nThe visibility is obtained by using the same method as in reference \\cite{morello2010}, which consists of simulating single-shot traces using the extracted tunneling in and out times (respectively $250~\\si{\\micro\\second}$ and $411~\\si{\\micro\\second}$) to describe the current distribution.\nWe obtain an average readout fidelity above 92\\% (and 86\\% visibility) limited by the tunneling in and out rates of the single electron \\cite{Keith2019}.\n\n\\begin{comment}\nTo further characterize the readout protocol, we investigate the fidelity of initialising the spin state. \nWe prepare a state with equal probability in ground and excited states by pulsing the chemical potential below the Fermi energy to load any electron.\nWe then immediately measure its spin state and we obtain an initial spin-down population of 58\\% (instead of 50\\%). This discrepancy can be explained by different loading times for the two spin states.\nTo prepare a ground state we simply load in the position where only the spin down state is below the Fermi sea (similar position to the readout).\nIn this case, we obtain a spin down population of 92\\% (instead of 100\\%) which reflects the infidelity of the readout\n\\end{comment}\n\n\n\\begin{figure}%\n\\includegraphics[width=\\columnwidth]{FIG_2.pdf}%\n\\caption{\n(a) Spin-up population during measurement as a function of the waiting time after loading.\nIt is fitted with an exponential decay function to extract $T_1$, the spin relaxation time. 
\n(b) The average relaxation rate $T_1^{-1}$ for two similar devices is plotted as a function of the magnetic field orthogonal to the nanowire axis.\nIt is fitted (solid line) with a combination of the spin-valley contribution $\\Gamma_{SV}$ (dashed line) and the spin-orbit contributions $\\Gamma_{SO}$ to the relaxation rate.\nA single prominent increase of relaxation rate, occurring at $\\SI{2.6}{T}$ and $\\SI{1.7}{T}$ respectively, is induced by spin-valley mixing close to $E_{Z} = E_{VS}$ and coupling to the phonon bath.\nOutside of this hotspot, the spin relaxation is induced by spin-orbit mixing with higher orbital states.\n}\n\\label{fig:hotspot}\n\\end{figure}\n\n\n\n\\section{Spin-valley coupling}\nWe now move on to the characterization of the relaxation time.\nRelaxation curves are measured by first loading an electron with random spin orientation, and then probing the spin-up population after a given waiting time in the loading region.\nWe obtain the curve in Fig. \\ref{fig:hotspot}(a), which is characteristic of spin relaxation and presents a $T_1$ of $\\SI{10}{ms}$.\n\n\\subsection{Determination of valley splitting}\nTo further investigate the presence of excited states, we measure the relaxation rate as a function of the magnetic field on the two devices.\nFigure \\ref{fig:hotspot}(b) presents the results obtained on device 1 and device 2 (blue and red data points respectively).\nThe two curves feature a hotspot in relaxation at two different magnetic fields ($\\SI{2.6}{T}$ and $\\SI{1.6}{T}$ respectively).\nAt these points, the two valley states with opposite spins anticross (the valley splitting $E_{VS}$ is equal to the Zeeman splitting $E_Z$), and give rise to a relaxation channel through spin-valley mixing \\cite{huang2014}.\n\nTo support this interpretation, we fit the data points with a model comprising spin-orbit and spin-valley contributions \\cite{huang2014,zhang2020}, which is represented by the blue and red solid lines.\n\\begin{equation}\n 
T_1^{-1}=\\Gamma_{Ph,SV}+\\Gamma_{JN,SO}+\\Gamma_{Ph,SO}\n\\end{equation}\nwhere $\\Gamma_{Ph,SV}$ corresponds to the relaxation rate due to spin-valley mixing and coupling to phonons, while $\\Gamma_{Ph,SO}$ and $\\Gamma_{JN,SO}$ correspond to the relaxation rates due to spin-orbit coupling via phonons and Johnson-Nyquist noise respectively.\nFrom these fits we obtain a valley splitting energy, $E_{VS}$, of $\\SI{191 \\pm 16}{\\micro\\eV}$ and $\\SI{300 \\pm13}{\\micro\\eV}$ respectively and a gap at the spin-valley anticrossing of $\\SI{4 \\pm 0.3}{\\micro\\eV}$ and $\\SI{0.2\\pm 0.03}{\\micro\\eV}$ respectively.\nThe effect of Johnson-Nyquist noise on the spin-valley contribution has been neglected as at the two hotspots the density of phonon modes is high enough to dominate the relaxation process.\nThe $\\Gamma_{Ph,SV}$ contribution is therefore represented by the dashed blue and red curves, which are in good agreement with the experimental data around the hotspot.\nThe spin-orbit contribution (baseline) follows in both cases a similar model which accounts for Johnson-Nyquist noise at low field (where the density of phonons is small) and phonon emission at higher field \\cite{huang2014b}.\nInterestingly, the two devices present the same relaxation rates outside the hotspots.\nThis indicates that the overall structure of the quantum dots is similar in both cases.\nThis is an important result toward the realisation of reliable and low-variability quantum dots when operated far from the hotspot.\n\\subsection{Anisotropic spin-valley mixing}\n\n\\begin{figure}%\n \n \\includegraphics[width=\\columnwidth]{FIG_3_dev.pdf}%\n \n \\caption{ Evolution of the relaxation rate with rotation of the magnetic field.\n (a-c) Evolution of the relaxation rate within the $XZ$ ($XY$, $YZ$) plane (under rotation about the $y$ ($z$, $x$) axis, corresponding approximately to the $[\\bar{1}10]$ ($[001]$, $[110]$) crystallographic axis).\n Dashed curves in (a) and (b) correspond to 
sin$^2$ functions fitted to the data.\n (d) Schematic representation of the device and its spatial orientation.\n }\n \\label{fig:3D}\n \\end{figure}\n\nThe spin-valley mixing is an excellent probe of the local symmetry of the quantum dot \\cite{ciriano2021spin, zhang2020, hofmann2017, tanttu2019}.\nIt is indeed expected that the spin-valley mixing vanishes in the presence of more than one mirror plane in the structure \\cite{corna2018}.\nThe presence of a hotspot is already the sign of a lower symmetry of the system.\nThe present quantum dots are formed in the corner of the nanowire, therefore, in the absence of local disorder, we expect a single mirror plane with its normal vector along $\\bold{x}$, the nanowire axis.\n\nTo probe the presence of local disorder, we investigate the anisotropic behaviour of the spin-valley mixing.\nSuch measurements can be used to determine the direction of the spin-orbit contribution, which can be directly correlated with the local planes of symmetry experienced by the quantum dot.\nFigures \\ref{fig:3D}(a), (b), and (c) present the measurement of the relaxation rate as the field direction is rotated in three different orthogonal planes.\nTo achieve the maximum sensitivity, the magnitude of the magnetic field is set close to the hotspot where the relaxation rate is dominated by the spin-valley mixing ($\\SI{2.4}{T}$).\nWhen the magnetic field scans the XZ and XY planes, we observe a drop in the relaxation rate when it crosses the $x$ axis.\nIn contrast, the relaxation rate is constant in the YZ plane.\n\nTo interpret these data, we need to consider the Hamiltonian $H_{\\textrm{SOC}}$ that couples the spin and $v_1$, $v_2$ valley orbitals through the spin-valley mixing matrix element $\\bra{v_1\\uparrow}H_{\\textrm{SOC}}\\ket{v_2\\downarrow}$ \\cite{bourdet2018}.\nIn the presence of a (yz) mirror symmetry plane, $H_{\\textrm{SOC}}$ takes the form $H_{\\textrm{SOC}}=(\\alpha_yp_y + \\alpha_zp_z)\\sigma_x + (\\beta_y\\sigma_y + 
\\beta_z\\sigma_z)p_x$, with $p_x$, $p_y$ and $p_z$ the electron momentum along \\textit{x}, \\textit{y} and \\textit{z} directions and $\\sigma_x$, $\\sigma_y$, $\\sigma_z$ the Pauli matrices.\nThe valley orbitals $v_1$ and $v_2$ being invariant under the (yz) mirror plane, $\\bra{v_1}p_x\\ket{v_2}=\\bra{v_1}-p_x\\ket{v_2}=0$, see SI of reference \\cite{ciriano2021spin}.\nTherefore, the spin mixing term reads $\\bra{v_1\\uparrow}(\\alpha_yp_y + \\alpha_zp_z)\\sigma_x \\ket{v_2\\downarrow}$.\nWhen the magnetic field is aligned with \\textit{x}, $\\ket{\\uparrow}$ and $\\ket{\\downarrow}$ are eigenstates of $\\sigma_x$, which leads to a vanishing spin-valley mixing.\nAs the magnetic field is tilted away from \\textit{x} by an angle $\\theta$, the remaining projection leads to $|\\bra{v_1\\uparrow}H_{\\textrm{SOC}}\\ket{v_2\\downarrow}|^2\\propto$ sin$^2\\theta$ \\cite{corna2018,zhang2020}. \nFollowing this model, we obtain a quantitative agreement between the experimental data and the sin$^2$ function as shown in Fig. \\ref{fig:3D}(a) and (b). \nFrom these fits, we obtain the minimum relaxation rate along the nanowire axis ([110]), see diagram in Fig. \\ref{fig:3D}(d). \nAlong this axis, the relaxation is dominated only by the spin-orbit interaction.\nIt is important to note that there is a slight offset in the angle in the XY measurement because the nanowire axis is not perfectly aligned with the coil axes.\nMoreover, the small discrepancy between the XY and XZ measurements suggests that the Y and Z directions are not equivalent.\nThis second-order anisotropy can be explained by the overlap of the front gate being non-equivalent between the side and the top facets of the wire.\n\n\n\\section{Low-frequency charge noise on the valley splitting}\n\\begin{figure}%\n \\includegraphics[width=\\columnwidth]{FIG_4.pdf}%\n \n \\caption{\n(a) Variation in the spin-lattice relaxation rate over time. \nEach measurement point represents a $T_1$ measurement of duration $\\SI{4}{min}$. 
\n(b) Frequency-domain fluctuations of the valley splitting energy $E_{VS}$.\nThe fit is proportional to $1\/f^{1.25}$.\nExtrapolation to $\\SI{1}{Hz}$ reveals fluctuations in $E_{VS}$ of $\\SI{23 \\pm 1 }{\\micro eV^2 \/ Hz}$.\n}\n \\label{fig:noise}\n\\end{figure}\n\nThe valley splitting is sensitive to the electric field as it arises from the strong confinement against the top interface \\cite{struck2020}.\nHere, we use the relaxation at the hotspot to probe the local fluctuations in electric field induced by low-frequency charge noise.\nFor this purpose, we sit at a measurement point next to the hotspot and record the evolution of $T_1$ with time.\nTo transform this time evolution into a power spectral density for $E_{VS}$, we assume that the dependence of the valley splitting on the electric field is linear for small noise amplitude \\cite{ibberson2018}.\nFrom the hotspot fitting we extract the gradient at the measurement point, which combined with a Fourier transform of the time-domain signal, Fig. \\ref{fig:noise}(a), yields the power spectral density (PSD) in Fig. \\ref{fig:noise}(b).\n\nThe PSD follows a $1\/f$ dependence which, extrapolated to $\\SI{1}{Hz}$, gives $\\SI{23}{\\micro eV^2\/Hz}$.\nIn terms of qubit energy fluctuations, the obtained value is relatively large if we consider the operation of the qubit in a spin-valley mode. 
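The conversion from the recorded $T_1$ time series to the PSD of $E_{VS}$ can be sketched as follows. This is an assumed, minimal reconstruction: the linearized hotspot slope (`dgamma_dEvs`) and the windowed periodogram stand in for the actual analysis, and the synthetic data are illustrative only:

```python
# Sketch: infer valley-splitting fluctuations from relaxation-rate
# fluctuations via the local (linearized) slope of the hotspot fit,
# then estimate a one-sided power spectral density.
import numpy as np

def evs_psd(gamma, dt, dgamma_dEvs):
    """PSD of delta E_VS inferred from rate fluctuations.

    gamma       : array of relaxation rates 1/T1 (Hz), one per measurement
    dt          : time between measurements (s), ~4 min per T_1 point here
    dgamma_dEvs : local slope of the hotspot fit (Hz per ueV), assumed linear
    """
    dEvs = (gamma - gamma.mean()) / dgamma_dEvs     # ueV fluctuations
    n = len(dEvs)
    w = np.hanning(n)                               # reduce spectral leakage
    spec = np.fft.rfft(dEvs * w)
    psd = 2 * dt * np.abs(spec) ** 2 / np.sum(w**2)  # one-sided, ueV^2/Hz
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs[1:], psd[1:]                       # drop the DC bin

# Synthetic example: a slowly drifting rate sampled every 240 s
rng = np.random.default_rng(1)
gamma = 100.0 + np.cumsum(rng.normal(0, 0.05, 256))
freqs, psd = evs_psd(gamma, dt=240.0, dgamma_dEvs=0.5)
```

Fitting $\log$(PSD) versus $\log f$ then gives the $1/f^{\alpha}$ exponent and the extrapolated value at $\SI{1}{Hz}$.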
\nIn this mode, the spin-valley mixing is exploited to drive coherent oscillations through electric dipole spin resonance next to the hotspot \\cite{bourdet2018}.\nFluctuations of the valley splitting translate into fluctuations of the Larmor frequency $h\\delta f = \\frac{1}{2}\\delta E_{VS}$ at the hotspot~\\cite{corna2018}.\nThis corresponds to a fluctuation in the spin precession of about $\\SI{0.6}{GHz}\/\\sqrt{Hz}$, which is faster than the decoherence rate due to the hyperfine fluctuations in natural silicon, and is on the order of the decoherence rate of charge and valley qubits \\cite{kim2015, Pent2019}.\nThis could prove detrimental for EDSR exploiting the spin-valley mode, limiting the coherence time when the spin-valley coupling is turned on \\cite{culcer2012, yang2013}.\nIt may therefore be necessary to operate further away from the hotspot to avoid fast decoherence, at the cost of a slower Rabi frequency, or to lower the confinement potential to reduce the valley splitting sensitivity to electric field \\cite{bourdet2018}.\n\\\\\n\\begin{comment}\nThis value is equivalent to a fluctuation on the quantum dot potential of $\\SI{121 \\pm 8 }{\\micro eV^2\/Hz}$, which is extracted based on the valley splitting dependence on the back gate voltage and the respective lever arms.\nThis is consistent with charge noise amplitude measured via tunnel rate fluctuations in the same device \\cite{spence2021}.\nIt is a strong indication that the low-frequency fluctuations which cause the variation in valley splitting energy is charge noise-induced through trapping and de-trapping events at the material interfaces.\n\\end{comment}\n\n\n\n\\section{Conclusion}\nIn conclusion, we have demonstrated fast and reproducible spin characterization in CMOS nanowire devices.\nSimilar spin-orbit-induced relaxation rates were found across two identically patterned and fabricated devices, which is an important result toward the improvement of the CMOS fabrication process at the quantum dot 
level.\nMoreover, the spin-valley coupling was found to be highly anisotropic, with a strong symmetry plane oriented perpendicular to the channel axis, indicating the absence of strong local disorder at the quantum dot site.\nThese results are of prime importance toward the fabrication of large-scale structures containing many quantum dots with low variability.\nFinally, low-frequency fluctuations in the valley splitting were measured to cause fluctuations in the Larmor frequency at $\\SI{0.6}{GHz}\/\\sqrt{Hz}$, which would be detrimental when operating the qubit in a spin-valley or valley mode to enhance Rabi frequencies. This motivates further work on the dependence and sensitivity of the valley splitting with respect to the electric field, and the investigation of potential sweet spots.\n\n\\section{Acknowledgments}\nWe acknowledge fruitful discussions with J. Li and L. Hutin and support from P. Perrier, H. Rodenas, E. Eyraud, D. Lepoittevin, I. Pheng, T. Crozes, L. Del Rey, D. Dufeu, J. Jarreau, J. Minet and C. Guttin. \nD.J.N., B.K., and C.S. acknowledge the GreQuE doctoral program (G.A. No 754303). The device fabrication is funded through the QuCube project (G. A. 810504).\nThis work is supported by the Agence Nationale de la Recherche through the CRYMCO and MAQSi projects (ANR-21-XXX).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}