\title{On the convergence of stochastic transport equations to a deterministic parabolic one}\n\author{Lucio Galeati}\n\date{\small{Institute of Applied Mathematics\\ University of Bonn, Germany}\\[1.2ex] lucio.galeati@iam.uni-bonn.de\\[1.5ex] \today}\n\n\begin{document}\n\maketitle\n\thispagestyle{empty}\n\begin{abstract}\n\noindent A stochastic transport linear equation (STLE) with multiplicative space-time dependent noise is studied. It is shown that, under suitable assumptions on the noise, a multiplicative renormalization leads to convergence of the solutions of STLE to the solution of a deterministic parabolic equation. Existence and uniqueness for STLE are also discussed. Our method works in dimension $d\geq 2$; the case $d=1$ is also investigated but no conclusive answer is obtained. 
\n\end{abstract}\n\section{Introduction}\label{section 1 - introduction}\n\n\nThroughout this paper, we consider a stochastic transport linear equation of the form\n\begin{equation}\label{sec 1 - STLE in compact Stratonovich form}\tag{STLE}\n\diff u = b\cdot\nabla u \diff t + \circ \diff W\cdot \nabla u,\n\end{equation}\nwhere $b=b(t,x)$ is a given deterministic function and $W=W(t,x)$ is a space-time dependent noise of the form\n\begin{equation}\label{sec 1 - explicit description of the noise}\nW(t,x)=\sum_k \sigma_k(x)W_k(t).\n\end{equation}\nHere $\sigma_k$ are smooth, divergence free, mean zero vector fields, $\{W_k\}_k$ are independent standard Brownian motions and the index $k$ may range over an infinite (countable) set; by \eqref{sec 1 - STLE in compact Stratonovich form} we mean more explicitly the identity\n\begin{equation}\label{sec 1 - STLE in non compact Stratonovich form}\n\diff u = b\cdot \nabla u \diff t + \sum_k \sigma_k\cdot\nabla u \circ \diff W_k,\n\end{equation}\nwhere $\circ$ denotes the Stratonovich integral. Let us explain the reasons for studying such an equation.\n\n\nIn the case of space-independent noise, it has been shown in recent years, starting with \cite{FlaGub}, that equation \eqref{sec 1 - STLE in compact Stratonovich form} is well posed under much weaker assumptions on $b$ than its deterministic counterpart (i.e. with $W=0$), for which essentially sharp conditions are given by \cite{DiPLio}, \cite{Amb}. There is now an extensive literature on the topic of regularization by noise for transport equations, see the review \cite{Fla2} and the references in \cite{BecFla}. 
However, from the modelling point of view, space-independent noise is too simple, since formally the characteristics associated to \eqref{sec 1 - STLE in compact Stratonovich form} are given by\n\begin{equation}\nonumber\n\diff X_t = -b(t,X_t) \diff t - \diff W_t.\n\end{equation}\nNamely, if we interpret $u$ as an ensemble of ideal particles, the addition of such a multiplicative Stratonovich noise corresponds at the Lagrangian level to non-interacting particles being transported by a drift $b$ as well as a random, space independent noise $W$. There are several models, especially those arising in turbulence (see \cite{Cha} and the discussion in the introduction of \cite{CogFla}), in which it seems more reasonable to consider all the particles to be subject to the same space-dependent, environmental noise $W$, which is randomly evolving over time and is not influenced by the particles; $W$ may be interpreted as an incompressible fluid in which the particles are immersed. The formal Lagrangian description of \eqref{sec 1 - STLE in compact Stratonovich form} is\n\begin{equation}\label{sec 1 - stochastic characteristics associated, compact Stratonovich form}\n\diff X_t = -b(t,X_t)\diff t - \circ \diff W(t,X_t),\n\end{equation}\nwhere the above equation is meaningful once we consider $W$ given by \eqref{sec 1 - explicit description of the noise} and write out the series explicitly.\n\nAnother reason to consider a more structured noise is given by the fact that, in the case of nonlinear transport equations, explicit examples in which a space-independent noise doesn't regularize are known, see for instance Section 4.1 of \cite{Fla}; instead a sufficiently structured, space-dependent noise can provide a partial regularization by avoiding coalescence of particles, as in \cite{DFV}, \cite{FlaGub2}.\n\nFinally, if we expect the paradigm \textquotedblleft the rougher the noise, the better the regularization\textquotedblright\ to hold, as it has been observed 
frequently in regularization by noise phenomena, it is worth investigating the effect on equation \eqref{sec 1 - STLE in compact Stratonovich form} of a noise $W$ which has poor regularity in space.\n\nSpecifically, the main goal of this work is not to investigate well posedness of \eqref{sec 1 - STLE in compact Stratonovich form}, but rather to understand what happens when the space regularity of $W$ is so weak that it's not clear how to give meaning to \eqref{sec 1 - STLE in compact Stratonovich form} anymore. Indeed, when one writes the corresponding It\^o formulation of \eqref{sec 1 - STLE in compact Stratonovich form}, the It\^o-Stratonovich corrector that appears is finite only if $W$ satisfies a condition of the form\n\begin{equation}\label{sec 1 - condition on the regularity of the noise}\n\mathbb{E}\Big[\vert W(1,\cdot)\vert_{L^2}^2\Big]<\infty.\n\end{equation}\nIn particular, if the above condition doesn't hold, typically the corrector will be of the form \textquotedblleft $+\infty\,\Delta u$\textquotedblright\ and therefore heuristically one would expect the solution to instantaneously dissipate and become constant, independently of the initial data. A rigorous proof of this assertion, by means of a Galerkin approximation, has been given in a specific case in \cite[Theorem 1.3]{FlaLuo}, but the technique applied there seems sufficiently robust to be generalized to this setting as well. It turns out that, in order to obtain a non-trivial limit when we consider solutions of \eqref{sec 1 - STLE in compact Stratonovich form} for a sequence of noises $W^N$ whose $L^2$-norm is exploding as $N\to\infty$, a suitable sequence of multiplicative coefficients $\varepsilon^N$ must be introduced. In order to explain better what we mean and to give a rough statement of the main result, we give a brief description of the setting in which we study \eqref{sec 1 - STLE in compact Stratonovich form}. 
More details will be given in the next section.\n\n\nWe consider everything to be defined on the $d$-dimensional torus $\mathbb{T}^d = \mathbb{R}^d\/(2\pi\mathbb{Z}^d)$ with periodic boundary condition, $d\geq 2$, with suitable assumptions on $b$.\nWe denote by $\mathcal{H}$ the closed subspace of $L^2(\mathbb{T}^d;\mathbb{R}^d)$ given by divergence free, mean-zero functions (see Section \ref{subsection 2.1 - notation and functional setting} for the exact definition).\n\nWe fix an a priori given filtered probability space $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ on which an $\mathcal{H}$-cylindrical $\mathcal{F}_t$-Wiener process $\widetilde{W}$ is defined, see \cite{DaP}. We apply to $\widetilde{W}$ a Fourier multiplier $\Theta$ such that $W:= \Theta\widetilde{W}$ satisfies \eqref{sec 1 - condition on the regularity of the noise}. We consider, for this choice of $W$, the Cauchy problem given by \eqref{sec 1 - STLE in compact Stratonovich form} together with a deterministic initial condition $u_0\in L^2(\mathbb{T}^d)$; we are interested in energy solutions $u$, namely $\mathcal{F}_t$-progressively measurable processes, with weakly continuous paths, for which equation \eqref{sec 1 - STLE in compact Stratonovich form} is satisfied when interpreted in an analytically weak sense, i.e. testing against smooth functions, and a suitable energy inequality holds.\nWe stress that we consider $u$ to be a strong solution in the probabilistic sense; we can vary $W$ by considering different choices of $\Theta$, but the probability space and $\widetilde{W}$ are fixed and a priori given. 
The main result can then be loosely stated as follows.\n\n\begin{res}\label{result sec 1 - introductory formulation of main theorem}\nAssume that $b$ satisfies suitable conditions together with the following assumption:\n\begin{itemize}\n\item[(UN)] (Uniqueness for the parabolic limit equation) $b$ is such that, for any $\nu>0$, uniqueness holds in the class of weak $L^\infty(0,T;L^2(\mathbb{T}^d))$ solutions of the Cauchy problem\n\begin{equation}\label{sec 1 - parabolic limit problem}\n\begin{cases} \partial_t u = \nu\Delta u + b\cdot\nabla u\\ u(0)=u_0 \end{cases}.\n\end{equation}\n\end{itemize}\nThen for any $\nu>0$, there exists a class of sequences of Fourier multipliers $\Theta^N$ and of constants $\varepsilon^N$, with $\varepsilon^N$ depending only on $\nu$ and $\Theta^N$ for each $N$, such that, denoting $W^N=\Theta^N\widetilde{W}$, for any $u_0\in L^2(\mathbb{T}^d)$, any sequence of energy solutions $u^N$ of the STLEs\n\begin{equation}\label{sec 1 - equations for the approximating u^N}\begin{cases}\n\diff u^N = b\cdot\nabla u^N\, \diff t + \sqrt{\varepsilon^N}\circ \diff W^N\cdot \nabla u^N\\\nu^N(0)=u_0\n\end{cases}\end{equation}\nconverges in probability (in a suitable topology) as $N\to\infty$ to the unique deterministic solution $u$ of the parabolic equation \eqref{sec 1 - parabolic limit problem}.\n\end{res}\n\nA more precise statement and the proof will be given in Section \ref{section 3 - proof of the main result}; let us comment on some of the features of the result.\n\n\begin{itemize}\n\item[i)] The statement is formulated in the spirit of a multiplicative renormalization: the sequence $\varepsilon^N$ depends on the chosen $\Theta^N$, but the limit does not, up to the arbitrary choice of a one-dimensional parameter $\nu>0$. 
However, as will be discussed in Section \ref{section 3 - proof of the main result}, this is not a real renormalization due to the presence of some degeneracy: while we need to impose some conditions on $\Theta^N$, these do not imply uniqueness of the limit of $\Theta^N \widetilde{W}$ and explicit examples of choices leading to different limits, for which the above statement holds, can be given. In a sense, the result is more similar to a weak law of large numbers, as will be discussed in Section \ref{section 3 - proof of the main result}.\n\n\item[ii)] The statement provides a sequence of solutions of stochastic transport equations converging to the solution of a deterministic parabolic equation. This is rather surprising, not only for the transition from a stochastic problem to a deterministic one, but also for the change in the nature of the equation. The original STLEs are hyperbolic: whenever $W$ is regular enough, they can be solved explicitly by means of the stochastic flow associated to the characteristics \eqref{sec 1 - stochastic characteristics associated, compact Stratonovich form}; in particular the solutions in general have no better regularity than the initial data, at least at the level of trajectories. However, when considering the corresponding It\^o formulation, the It\^o-Stratonovich corrector gives rise to a Laplacian. It was intuited in \cite{FlaGub} that equation \eqref{sec 1 - STLE in compact Stratonovich form} has some parabolic features at the mean level; this has become clear in \cite{BecFla}.\n\n\item[iii)] The statement holds for \textit{any} sequence of energy solutions of \eqref{sec 1 - equations for the approximating u^N}, even when uniqueness is not known; existence of energy solutions can be shown under suitable assumptions on $b$. We only need well posedness for the limit problem \eqref{sec 1 - parabolic limit problem} and that's why we require (UN) to hold. 
In general (UN) is satisfied under very mild assumptions on $b$, much weaker than those required for the associated deterministic transport equation to be well posed. This suggests the possibility of obtaining uniqueness for the STLE under the same assumption (UN); in this direction, see the results given in \cite{Mau}, \cite{BecFla} and the references therein.\n\n\item[iv)] From the modelling, perturbative viewpoint, the result could be interpreted in this way: when a system of particles transported by a drift $b$ is subject to an environmental background noise which is very irregular but of very small intensity, in the ideal limit such a disturbance is correctly modelled by a diffusive term $\nu\Delta$. This also gives an interesting link between different selection principles for ill posed transport equations, since it hints at the fact that a vanishing viscosity limit and certain types of zero noise limits should behave similarly; observe however that this is not true in general, since in the setting of space-independent noise, examples of transport equations for which the zero noise limit and the vanishing viscosity one do not coincide are provided in \cite{AttFla}.\n\end{itemize}\n\nWe believe our main result holds on a wider class of domains and not only on the torus, but there are several technical issues which prevent a straightforward generalization and solving them is currently an open problem. Indeed, if the domain is a bounded open subset of $\mathbb{R}^d$, then a boundary condition must also be imposed and handled in the limit; in this regard, let us mention the recent work \cite{Hai}, in which it is shown that in certain scaling regimes (however different from our case) the boundary condition must also undergo a renormalization. 
If the setting is instead a compact manifold without boundary, the main challenge becomes finding examples of vector fields $\sigma_k$ for which the It\^o-Stratonovich correctors, as well as their limit once properly renormalized, can be computed explicitly. On the torus this task is greatly simplified by the presence of many symmetries, as shown in Section \ref{subsection 2.3 - SPDE in Ito form and definition of energy solutions}.\n\nLet us highlight that even though in the discussion we have adopted a perturbative approach, motivating \eqref{sec 1 - STLE in compact Stratonovich form} as a stochastic variation of an originally deterministic problem, the equation is of interest in itself even when $b=0$, as it is related to the theory of passive scalars and the celebrated Kraichnan model of turbulence, see \cite{Cha}, \cite{Fal}. From the mathematical point of view, it has been treated in a very complete but rather technical way in \cite{LeJ}, \cite{LeJ2}; in Section \ref{subsection 4.2 - proof of strong uniqueness in the case b=0} we present a simple proof in the case $b=0$ of pathwise uniqueness of $L^2(\mathbb{T}^d)$-valued solutions under very mild assumptions on the noise (basically all isotropic divergence free noises for which the equation is well defined are included). To the best of our knowledge this result is new, since even in \cite{LeJ2} it is only suggested, and not explicitly stated, whether pathwise uniqueness can be proved, see the beginning of Section \ref{subsection 4.2 - proof of strong uniqueness in the case b=0} for more details.\n\nFinally, let us mention the strong similarity between our technique and the one considered in \cite{FlaLuo2}.\n\n\n\begin{plan}\nThe paper is structured as follows: in Section \ref{section 2 - preliminaries}, we introduce our notations and basic definitions; in Section \ref{section 3 - proof of the main result} we give a more precise statement and the proof of the main result. 
In Section \ref{section 4 - discussion of existence and uniqueness}, in order for the main result to be non-vacuous, we give a proof of existence of energy solutions and we discuss the problem of their uniqueness. Finally in Section \ref{section 5 - the case d=1} we treat the case $d=1$, in which we are not able to obtain an analogue of the main result; still, from the modelling viewpoint, some interesting conclusions can be drawn.\n\end{plan}\n\n\section{Preliminaries}\label{section 2 - preliminaries}\n\n\nIn this section we provide all the notions necessary to give a meaning to \eqref{sec 1 - STLE in compact Stratonovich form} and its solutions; with this setup we will be able to prove the main result in the next section.\n\subsection{Notations and functional setting}\label{subsection 2.1 - notation and functional setting}\n\n\nWe work on the $d$-dimensional torus, $\mathbb{T}^d=\mathbb{R}^d\/(2\pi\mathbb{Z}^d)$, with periodic boundary condition. We denote by $L^2(\mathbb{T}^d;\mathbb{C})$ the set of complex-valued, square integrable functions defined on $\mathbb{T}^d$, which is a Hilbert space endowed with the (normalized) inner product\n\begin{equation}\nonumber\n\langle f, g\rangle_{L^2} = \frac{1}{(2\pi)^d}\int_{\mathbb{T}^d} f(x)\overline{g}(x)\diff x,\n\end{equation}\nwhere $\overline{z}$ denotes the complex conjugate of $z$, $\vert z\vert^2 = z\,\overline{z}$; we denote by $\vert\cdot\vert_{L^2}$ the norm induced by $\langle\cdot,\cdot\rangle_{L^2}$. Under this inner product, $\{e_k\}_{k\in\mathbb{Z}^d}$ given by $e_k(x)= e^{i\, k\cdot x}$ is a complete orthonormal system ($k\cdot x= \sum_{i=1}^d k_i x_i$ denoting the standard inner product in $\mathbb{R}^d$). 
Any element $f\\in L^2(\\mathbb{T}^d;\\mathbb{C})$ can be written uniquely in Fourier series as\n\\begin{equation}\\nonumber\nf = \\sum_{k\\in\\mathbb{Z}^d} f_k\\, e_k, \\quad f_k = \\langle f, e_k\\rangle_{L^2} ,\n\\end{equation}\nwhere the series is convergent in $L^2(\\mathbb{T}^d;\\mathbb{C})$ and it satisfies\n\\begin{equation}\\nonumber\n\\vert f\\vert_{L^2}^2 = \\sum_{k\\in\\mathbb{Z}^d} \\vert f_k\\vert^2.\n\\end{equation}\nAn element $f$ is real-valued if and only if $f_{-k}=\\overline{f}_k$ for every $k\\in\\mathbb{Z}^d$. We denote the set of square integrable, real-valued functions by $L^2(\\mathbb{T}^d)=L^2$. The formulas above hold more generally for $f\\in L^2(\\mathbb{T}^d;\\mathbb{R}^d)$ if we interpret $f_k$ as the $\\mathbb{C}^d$-valued vector with components $f_k^{(j)} = \\langle f^{(j)}, e_k\\rangle_{L^2}$.\n\nWe will always deal with real-valued functions, but for the sake of calculations it is more convenient to use complex Fourier series; for the same reason we work on $\\mathbb{T}^d$ defined as above rather than $\\mathbb{R}^d\/\\mathbb{Z}^d$. We stress however that the results are independent of this choice and can be obtained in the same way by using real Fourier series or $\\mathbb{R}^d\/\\mathbb{Z}^d$.\n\n\nWe consider the Sobolev spaces $H^\\alpha(\\mathbb{T}^d)$, $\\alpha\\in\\mathbb{R}$, given by\n\\begin{equation}\\nonumber\nH^\\alpha(\\mathbb{T}^d) = \\Big\\{f=\\sum_k f_k\\,e_k\\, \\Big\\vert\\, f_{-k}=\\overline{f}_k,\\, \\sum_k \\big(1+\\vert k\\vert^2\\big)^\\alpha \\vert f_k\\vert^2 <\\infty\\Big\\},\n\\end{equation}\nsee \\cite{Tem} for more details. Then the space of test functions $C^\\infty(\\mathbb{T}^d)$ corresponds to $\\cap_\\alpha H^\\alpha(\\mathbb{T}^d)$ and its dual $C^\\infty(\\mathbb{T}^d)'$, the space of distributions, to $\\cup_\\alpha H^\\alpha(\\mathbb{T}^d)$. 
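As a side illustration (not part of the paper's argument), the reality condition $f_{-k}=\overline{f}_k$ and the Parseval identity above can be checked numerically via a discrete Fourier transform; the grid size, the test function and the use of `numpy` are our own illustrative choices, a sketch rather than anything canonical.

```python
import numpy as np

# Discrete stand-in for the normalized Fourier expansion on T^1:
# f_k = <f, e_k>_{L^2} with the (2 pi)^{-d}-normalized inner product.
N = 64
x = 2 * np.pi * np.arange(N) / N
f = np.cos(3 * x) + 0.5 * np.sin(x) ** 2   # an arbitrary real-valued test function

fk = np.fft.fft(f) / N                     # normalized coefficients f_k

# f real  <=>  f_{-k} = conj(f_k)   (indices are taken mod N in the DFT)
assert np.allclose(fk[-1:0:-1], np.conj(fk[1:]))

# Parseval: |f|_{L^2}^2 = sum_k |f_k|^2 with the normalized norm
assert np.allclose(np.mean(np.abs(f) ** 2), np.sum(np.abs(fk) ** 2))
print("reality and Parseval checks passed")
```

The factor $1/N$ mirrors the $(2\pi)^{-d}$ normalization of the inner product, so that the discrete Parseval identity takes exactly the form displayed above.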
We denote by $\\langle\\cdot,\\cdot\\rangle$ also the duality pairing between them.\n\nGiven $f\\in L^2(\\mathbb{T}^d;\\mathbb{R}^d)$, we say that $f$ is divergence free in the sense of distributions if\n\\begin{equation}\\nonumber\n\\langle f, \\nabla\\varphi\\rangle = 0 \\qquad \\forall\\, \\varphi\\in C^\\infty(\\mathbb{T}^d).\n\\end{equation}\nIt's easy to check that $f$ is divergence free if and only if $f_k\\cdot k = 0$ for all $k\\in\\mathbb{Z}^d$. Consider the subspace\n\\begin{equation}\\nonumber\n\\mathcal{H}= \\bigg\\{f\\in L^2(\\mathbb{T}^d;\\mathbb{R}^d) \\text{ such that } \\int_{\\mathbb{T}^d} f =0 \\text{ and } f \\text{ is divergence free}\\bigg\\}.\n\\end{equation}\n$\\mathcal{H}$ is a closed linear subspace of $L^2(\\mathbb{T}^d;\\mathbb{R}^d)$ and so the orthogonal projection $\\Pi:L^2(\\mathbb{T}^d;\\mathbb{R}^d)\\to \\mathcal{H}$ is a linear continuous operator. $\\Pi$ can be represented in Fourier series by\n\\begin{equation}\\nonumber\n\\Pi: f=\\sum_{k\\in\\mathbb{Z}^d} f_k\\, e_k \\mapsto \\Pi f = \\sum_{k\\in\\mathbb{Z}^d} P_k f_k\\, e_k,\n\\end{equation}\nwhere $P_k\\in \\mathbb{R}^{d\\times d}$ is the d-dimensional projection on $k^\\perp$, $P_k = I - \\frac{k}{\\vert k\\vert}\\otimes \\frac{k}{\\vert k\\vert}$, whenever $k\\neq 0$ and we set $P_0\\equiv 0$. $\\Pi$ can be extended to a continuous linear operator from $H^\\alpha(\\mathbb{T}^d;\\mathbb{R}^d)$ to itself for any $\\alpha\\in\\mathbb{R}$. 
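A minimal numerical sketch of the Fourier symbol $P_k$ of the projection $\Pi$ (dimension, sample modes and `numpy` usage are our own illustrative choices): for each mode, $P_k f_k$ should be orthogonal to $k$, and $P_k$ should be idempotent.

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)

def P(k):
    """Fourier symbol of the projection Pi: P_k = I - (k/|k|) (x) (k/|k|)."""
    k = np.asarray(k, dtype=float)
    return np.eye(d) - np.outer(k, k) / np.dot(k, k)

for k in [(1, 0, 0), (1, 2, -1), (3, -3, 5)]:
    fk = rng.standard_normal(d)           # a generic Fourier coefficient f_k
    gk = P(k) @ fk                        # corresponding coefficient of Pi f
    assert abs(np.dot(k, gk)) < 1e-12     # divergence free: k . (Pi f)_k = 0
    assert np.allclose(P(k) @ gk, gk)     # projection property: P_k^2 = P_k
print("projection symbol checks passed")
```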
We also define the projectors $\Pi_N$ on the space of Fourier polynomials of degree at most $N$ by\n\begin{equation}\nonumber\nf=\sum_k f_k\,e_k\mapsto \Pi_Nf = \sum_{k: \vert k\vert\leq N} f_k\, e_k,\n\end{equation}\nwhere $\Pi_N:C^\infty(\mathbb{T}^d)'\to C^\infty(\mathbb{T}^d)$.\n\n\subsection{Construction of the noise $W(t,x)$}\label{subsection 2.2 - contruction of the noise}\n\n\nWe have introduced the space $\mathcal{H}$ and the projector $\Pi$ because we want to deal with an $\mathcal{H}$-valued noise $W$; the reason for this choice will become clear in Section \ref{subsection 2.3 - SPDE in Ito form and definition of energy solutions}. We are first going to construct $W$ by giving an explicit Fourier representation, but then we will also provide a more elegant, abstract construction.\n\nSet $\mathbb{Z}^d_0=\mathbb{Z}^d\setminus \{0\}$ and consider $\Lambda\subset\mathbb{Z}^d$ such that $\Lambda$ and $-\Lambda$ form a partition of $\mathbb{Z}^d_0$. Consider a collection\n\begin{equation}\nonumber\n\Big\{ B^{(j)}_k, k\in\mathbb{Z}^d_0, 1\leq j\leq d-1\Big\}\n\end{equation}\nof standard, real valued, independent $\mathcal{F}_t$-Brownian motions, defined on a filtered probability space $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$, $\{\mathcal{F}_t\}_{t\geq 0}$ being a normal filtration (see \cite{Rev}). Define\n\begin{equation}\nonumber\nW^{(j)}_k := \begin{cases} B^{(j)}_k + iB^{(j)}_{-k}\qquad & \text{if }\ k\in\Lambda\\\nB^{(j)}_k - iB^{(j)}_{-k} & \text{if }\ k\in-\Lambda \end{cases}.\n\end{equation}\nIn this way, $\{W^{(j)}_k, k\in\mathbb{Z}^d_0, 1\leq j\leq d-1\}$ is a collection of standard complex valued Brownian motions (namely complex processes with real and imaginary parts given by independent real BMs) such that $W^{(j)}_{-k}=\overline{W}^{(j)}_k$ and $W^{(j)}_k, W^{(m)}_l$ are independent whenever $k\neq \pm l$ or $j\neq m$. 
We denote by $[M,N]$ the quadratic covariation process, which is defined for any couple $M$, $N$ of square integrable real semimartingales (see for instance \cite{Rev}) and we extend it by bilinearity to the analogous complex valued processes.\nObserve that by bilinearity it holds\n\begin{equation}\nonumber\n\Big[W^{(j)}_k,W^{(j)}_k\Big]_t = 0,\quad \Big[W^{(j)}_k, W^{(j)}_{-k}\Big]_t = 2t,\n\end{equation}\nand therefore\n\begin{equation}\nonumber\n\Big[W^{(j)}_k,W^{(m)}_l\Big]_t = 2t\,\delta_{j,m}\,\delta_{k,-l} = 2t\,\delta_{j-m}\,\delta_{k+l}.\n\end{equation}\nWe omit the details, but it's easy to check that all the stochastic calculus rules, in particular the It\^o formula and the It\^o isometry, can be extended by bilinearity to the case of complex valued semimartingales.\n\nFor any $k\in\Lambda$, let $\{a_k^{(1)},\ldots, a_k^{(d-1)}\}$ be an orthonormal basis of $k^\perp$. Then $\{k\/\vert k\vert, a_k^{(1)},\ldots, a_k^{(d-1)}\}$ form an orthonormal basis of $\mathbb{R}^d$ and it holds\n\begin{equation}\nonumber\nP_k = a_k^{(1)}\otimes a_k^{(1)} + \ldots +a_k^{(d-1)}\otimes a_k^{(d-1)} \quad \forall\, k\in\Lambda;\n\end{equation}\nfor $k\in -\Lambda$ we can set $a_k^{(j)} = a_{-k}^{(j)}$ and the above identity still holds.\n\nLet $\{\theta_k, k\in\mathbb{Z}^d_0\}$ be a collection of real constants such that $\theta_k=\theta_{-k}$ and satisfying suitable conditions, which will be specified later. We set\n\begin{equation}\label{sec 2.2 - definition of the noise W(t,x)}\nW(t,x):= \sum_{k\in\mathbb{Z}^d_0} \theta_k \Bigg(\sum_{j=1}^{d-1} a_k^{(j)}\,W_k^{(j)}(t)\Bigg) e_k(x) .\n\end{equation}\nFrom now on, whenever it doesn't create confusion, we will only write the indices $k,j$ without specifying their index sets, in order for the notation not to become too burdensome. 
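The two displayed covariation identities can be reproduced by a tiny symbolic computation (a toy sketch of ours, not the paper's formalism): expand $W_k^{(j)}=B_k^{(j)}+iB_{-k}^{(j)}$ over the underlying real Brownian motions and extend the bracket bilinearly, using $[B_r,B_s]_t = t\,\delta_{r,s}$ for independent real BMs.

```python
def covariation(u, v, t=1.0):
    # Bilinear (not sesquilinear) extension of the bracket to complex
    # combinations u, v of independent real BMs: [B_r, B_s]_t = t * delta_{r,s}.
    return sum(cu * cv * t for r, cu in u.items() for s, cv in v.items() if r == s)

# W_k = B_k + i B_{-k} for k in Lambda, and W_{-k} = conj(W_k) = B_k - i B_{-k}
W_k  = {"B_k": 1, "B_-k": 1j}
W_mk = {"B_k": 1, "B_-k": -1j}

assert covariation(W_k, W_k) == 0       # [W_k, W_k]_t    = t + i^2 t = 0
assert covariation(W_mk, W_mk) == 0
assert covariation(W_k, W_mk) == 2      # [W_k, W_{-k}]_t = 2t  (here t = 1)
print("complex covariation identities verified")
```

Note the factor $i^2=-1$ in the diagonal bracket: bilinearity, not sesquilinearity, is what makes $[W_k^{(j)},W_k^{(j)}]$ vanish.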
Observe that, for fixed $t$, $W(t,\cdot)$ is already written in its Fourier decomposition and by the definitions of $a_k^{(j)}$ and $W_k^{(j)}$ it's a real, mean zero, divergence free random distribution. It only remains to show that, for fixed $t$, $W(t,\cdot)$ belongs $\mathbb{P}$-a.s. to $L^2(\mathbb{T}^d;\mathbb{R}^d)$. Indeed, denoting by $\mathbb{E}$ the expectation with respect to $\mathbb{P}$, we have\n\begin{equation}\nonumber\n\mathbb{E}\Big[\,\vert W(t,\cdot)\vert_{L^2}^2\Big]\n= \mathbb{E}\Bigg[ \sum_k \theta_k^2\, \Big\vert \sum_j a_k^{(j)}W_k^{(j)}(t)\Big\vert^2\Bigg]\n= 2t(d-1)\sum_k \theta_k^2\n\end{equation}\nand therefore, under the conditions\n\begin{equation}\label{sec 2.2 - condition on the coefficients}\tag{H1}\n\theta_{-k}=\theta_k\ \forall\, k,\quad \sum_k \theta_k^2<\infty,\n\end{equation}\n$W(t,\cdot)$ is a well defined random variable belonging to $L^2(\Omega,\mathcal{F},\mathbb{P};\mathcal{H})$. From the point of view of mathematical rigour, we should have first done the above calculation when summing over finitely many $k$ and then shown that, under condition \eqref{sec 2.2 - condition on the coefficients}, the sequence of finite sums is Cauchy; this can be easily checked and we omit it for the sake of simplicity. With similar calculations, exploiting Gaussianity and the Kolmogorov continuity criterion, it can be shown that, as an $\mathcal{H}$-valued process, up to modification $W$ has paths in $C^{\alpha}([0,T];\mathcal{H})$ for any $\alpha<1\/2$, see for instance \cite{DaP}.\n\n\nWe now show an alternative, more abstract construction of $W$. 
Let $\theta_k$ be some real coefficients satisfying \eqref{sec 2.2 - condition on the coefficients} as before and define the Fourier multiplier\n\begin{equation}\nonumber\n\Theta: f=\sum_k f_k\, e_k\mapsto \Theta f= \sum_k \theta_k f_k\, e_k.\n\end{equation}\nThen $\Theta$ is a continuous, self-adjoint operator from $H^\alpha(\mathbb{T}^d;\mathbb{R}^d)$ to itself which commutes with $\Pi$; condition \eqref{sec 2.2 - condition on the coefficients} implies that $\Theta$ is a Hilbert-Schmidt operator, namely $\Theta^\ast \Theta=\Theta^2$ is a trace class operator:\n\begin{equation}\nonumber\n\Theta^2: f=\sum_k f_k\, e_k\mapsto \Theta^2 f= \sum_k \theta_k^2\, f_k\, e_k.\n\end{equation}\nNow let $\widetilde{W}$ be a cylindrical Wiener process on $L^2(\mathbb{T}^d;\mathbb{R}^d)$ (in the sense of \cite{DaP}): a Gaussian distribution valued process with covariance\n\begin{equation}\nonumber\n\mathbb{E}\big[\langle \widetilde{W}_t,\varphi\rangle\,\langle \widetilde{W}_s,\psi\rangle \big]\n= (t\wedge s)\langle \varphi,\psi\rangle\n\quad \forall\, \varphi,\psi\in C^\infty(\mathbb{T}^d).\n\end{equation}\nThen it can be shown that, up to modifications, $\widetilde{W}$ has paths in $C^\alpha([0,T],H^\beta)$ for any $\alpha<1\/2$ and for any $\beta<-d\/2$. If we define\n\begin{equation}\nonumber\nW := \Theta\Pi \widetilde{W},\n\end{equation}\nthen $W$ is a Wiener process on $\mathcal{H}$. This construction is useful as it shows that we can consider, on a given filtered probability space with a given noise $\widetilde{W}$, several different $W$ just by varying the deterministic operator $\Theta$. 
By construction, $W$ has covariance given by\n\begin{equation}\nonumber\begin{split}\n\mathbb{E}\big[\langle W_t,\varphi\rangle\,\langle W_s,\psi\rangle \big]\n& = (t\wedge s)\langle (\Theta\Pi)^\ast\varphi,(\Theta\Pi)^\ast\psi\rangle\\\n& = (t\wedge s)\langle \Theta^2\Pi\varphi,\psi\rangle\n\qquad \quad \quad \forall\, \varphi,\psi\in C^\infty(\mathbb{T}^d).\n\end{split}\end{equation}\nIt can be checked that $W$ defined as above is space homogeneous, namely its distribution is invariant under space translations $W(t,\cdot)\mapsto W(t,x+\cdot)$ for any $x\in\mathbb{T}^d$. This is a consequence of the fact that $\widetilde{W}$ is space homogeneous and $W$ is defined by a Fourier multiplier. For our purposes, we want it to be isotropic as well.\n\nLet $E_\mathbb{Z}(d)$ denote the group of linear isometries of $\mathbb{R}^d$ into itself which leave $\mathbb{Z}^d$ invariant; it is the group generated by swaps\n\begin{equation}\nonumber\n(x_1,\ldots,x_i,\ldots,x_j,\ldots, x_d)\mapsto (x_1,\ldots,x_j,\ldots,x_i,\ldots, x_d)\n\end{equation}\nand reflections\n\begin{equation}\nonumber\n(x_1\ldots x_{i-1},x_i,x_{i+1}\ldots x_d)\mapsto (x_1\ldots x_{i-1},-x_i,x_{i+1}\ldots x_d).\n\end{equation}\nTo see this, observe that if $O\in E_\mathbb{Z}(d)$, then for any element $e_i$ of the canonical basis it holds $Oe_i\in\mathbb{Z}^d$ and $\vert Oe_i\vert=1$, which necessarily implies that $Oe_i=\pm e_j$ for another index $j$.\n$W$ is isotropic if its law is invariant under transformations $W(t,\cdot)\mapsto W(t,O\cdot)$ for all $O\in E_\mathbb{Z}(d)$. 
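The group $E_\mathbb{Z}(d)$ consists exactly of the signed permutation matrices, which for small $d$ can be enumerated and checked directly; the dimension and the sample mode $k$ below are arbitrary choices of ours, a sketch rather than part of the paper.

```python
import itertools, math
import numpy as np

d = 3
# E_Z(d) = signed permutations, generated by coordinate swaps and reflections
group = []
for p in itertools.permutations(range(d)):
    for s in itertools.product((1, -1), repeat=d):
        O = np.zeros((d, d), dtype=int)
        for i in range(d):
            O[i, p[i]] = s[i]            # O e_{p(i)} = s_i e_i, integer entries
        group.append(O)

assert len(group) == 2 ** d * math.factorial(d)   # |E_Z(d)| = 2^d * d!

k = np.array([1, 2, -2])
for O in group:
    assert (O.T @ O == np.eye(d, dtype=int)).all()   # O is a linear isometry
    assert np.dot(O @ k, O @ k) == np.dot(k, k)      # |Ok| = |k|
print("E_Z(d) enumerated:", len(group), "elements")
```

Since every $O\in E_\mathbb{Z}(d)$ preserves $|k|$, any radial choice $\theta_k = F(|k|)$ automatically satisfies (H2).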
In order to have an isotropic noise, we impose the following condition on the coefficients $\theta_k$:\n\begin{equation}\label{sec 2.2 - isotropy condition}\tag{H2}\n\theta_k = \theta_{Ok}\quad\, \forall\, O\in E_{\mathbb{Z}}(d).\n\end{equation}\nTypical choices of $\theta_k$ will be of the form $\theta_k = F(\vert k\vert)$, where $F:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ is a function with sufficient decay at infinity; for instance we can take $F$ with finite support, or $F(r)=r^{-\alpha}$ for some $\alpha>d\/2$. However, all statements in the next section hold in general as long as \eqref{sec 2.2 - condition on the coefficients} and \eqref{sec 2.2 - isotropy condition} are satisfied.\n\n\subsection{STLE in It\^o form and definition of energy solutions}\label{subsection 2.3 - SPDE in Ito form and definition of energy solutions}\n\nWe can now write explicitly \eqref{sec 1 - STLE in compact Stratonovich form} and find the corresponding It\^o formulation.\nIn order to simplify the exposition, we will do all the computations as if we were summing over a finite number of $k$ and find the right conditions under which every sum is well defined. Rigorously speaking, we should use an approximation argument and check that the finite series form a Cauchy sequence, but we skip this technical part, which can be easily verified.\n\n\nLet $W$ be given as in \eqref{sec 2.2 - definition of the noise W(t,x)}, then \eqref{sec 1 - STLE in compact Stratonovich form} can be formulated as\n\begin{equation}\label{sec 2.3 - equation in Stratonovich form}\n\diff u = b\cdot\nabla u\diff t + \sum_{j,k} \theta_k\, e_k\, a_k^{(j)}\cdot \nabla u\circ \diff W^{(j)}_k.\n\end{equation}\nEquation \eqref{sec 2.3 - equation in Stratonovich form} must be interpreted in integral form: a process $u$ is a strong (from the analytical point of view) solution if $\mathbb{P}$-a.s. 
the following identity is satisfied for every $t,x$ (and $u$ is progressively measurable and sufficiently regular for it to be meaningful):\n\begin{equation}\nonumber\begin{split}\nu(t,x)-u(0,x) = & \int_0^t b(s,x)\cdot\nabla u(s,x)\diff s\\ &+ \sum_{j,k}\theta_k \int_0^t e_k(x)\,a^{(j)}_k\cdot\nabla u(s,x)\circ \diff W_k^{(j)}(s).\n\end{split}\end{equation}\nSince in general the Stratonovich integral is not so easy to control, we prefer to pass to the equivalent formulation in It\^o form:\n\begin{equation}\nonumber\begin{split}\n\diff u\n& = b\cdot\nabla u\diff t + \sum_{j,k} \theta_k\, e_k\, a_k^{(j)}\cdot \nabla u \diff W^{(j)}_k + \frac{1}{2}\sum_{j,k} \theta_k\, e_k \diff \big[a_k^{(j)}\cdot\nabla u, W^{(j)}_k\big]\\\n& = b\cdot\nabla u\diff t + \sum_{j,k} \theta_k\, e_k\, a_k^{(j)}\cdot \nabla u\diff W^{(j)}_k + \sum_{j,k} \theta_k^2\, e_k\,a_k^{(j)}\cdot\nabla\Big( e_{-k}\,a_{-k}^{(j)}\cdot\nabla u\Big) \diff t\\\n& = b\cdot\nabla u\diff t + \sum_{j,k} \theta_k\, e_k\, a_k^{(j)}\cdot \nabla u\diff W^{(j)}_k + \sum_{k,j} \theta_k^2\, \text{Tr}\Big(a_k^{(j)}\otimes a_k^{(j)}\, D^2u\Big)\diff t\\\n& = b\cdot\nabla u\diff t + \sum_{j,k} \theta_k\, e_k\, a_k^{(j)}\cdot \nabla u\diff W^{(j)}_k + \text{Tr}\left( \Big( \sum_{k} \theta_k^2 P_k\Big) \, D^2u\right)\diff t.\n\end{split}\end{equation}\nIn the above computation we exploited many of the properties of $a^{(j)}_k$ and $W^{(j)}_k$ highlighted in the previous section: $d[W^{(j)}_k,W^{(l)}_m] = 2\delta_{j,l}\delta_{k,-m}\,dt$, $a_k^{(j)}\cdot k=0$, $a_k^{(j)}=a^{(j)}_{-k}$. 
It remains to compute more explicitly the matrix appearing in the last line on the right hand side:\n\\begin{equation}\\nonumber\n\\sum_k \\theta_k^2 P_k\n= \\sum_k \\theta_k^2 \\left(I - \\frac{k}{\\vert k\\vert}\\otimes \\frac{k}{\\vert k\\vert}\\right)\n= \\left( \\sum_k \\theta_k^2\\right) I - \\sum_k \\theta_k^2 \\frac{k}{\\vert k\\vert}\\otimes \\frac{k}{\\vert k\\vert}.\n\\end{equation}\nBy the isotropy condition \\eqref{sec 2.2 - isotropy condition} of $\\theta_k$, whenever $i\\neq j$, using the change of variables $k\\mapsto \\tilde{k}$ that switches the sign of the $i$-th component, we have\n\\begin{equation}\\nonumber\n\\left( \\sum_k \\theta_k^2 \\frac{k}{\\vert k\\vert}\\otimes \\frac{k}{\\vert k\\vert}\\right)_{ij}\n= \\sum_k \\theta_k^2\\, \\frac{k^{(i)}\\,k^{(j)}}{\\vert k\\vert^2}\n= \\sum_{\\tilde{k}} \\theta_{\\tilde{k}}^2\\, \\frac{\\tilde{k}^{(i)}\\,\\tilde{k}^{(j)}}{\\vert \\tilde{k}\\vert^2}\n= \\sum_k \\theta_k^2\\, \\frac{-k^{(i)}\\,k^{(j)}}{\\vert k\\vert^2} =0.\n\\end{equation}\nInstead, when $i=j$, using a change of variables $k\\mapsto \\tilde{k}$ that swaps the $i$-th component with the $l$-th one, we obtain\n\\begin{equation}\\nonumber\n\\left( \\sum_k \\theta_k^2 \\frac{k}{\\vert k\\vert}\\otimes \\frac{k}{\\vert k\\vert}\\right)_{ii}\n= \\sum_k \\theta_k^2\\, \\frac{{k^{(i)}}^2}{\\vert k\\vert^2}\n= \\sum_k \\theta_k^2\\, \\frac{{k^{(l)}}^2}{\\vert k\\vert^2}\n= \\left( \\sum_k \\theta_k^2 \\frac{k}{\\vert k\\vert}\\otimes \\frac{k}{\\vert k\\vert}\\right)_{ll}\n\\end{equation}\nand therefore, summing over $i$,\n\\begin{equation}\\nonumber\n\\left( \\sum_k \\theta_k^2 \\frac{k}{\\vert k\\vert}\\otimes \\frac{k}{\\vert k\\vert}\\right)_{ii}\n= \\frac{1}{d} \\sum_k \\theta_k^2 \\frac{\\vert k\\vert^2}{\\vert k\\vert^2}\n= \\frac{1}{d} \\sum_k \\theta_k^2.\n\\end{equation}\nIn conclusion, we have obtained\n\\begin{equation}\\label{sec 2.3 - definition of the constant c}\n\\sum_k \\theta_k^2\\, P_k = \\frac{d-1}{d} \\Bigg(\\sum_k 
\\theta_k^2\\Bigg) I=: c\\,I,\n\\end{equation}\nso that equation \\eqref{sec 2.3 - equation in Stratonovich form} has corresponding It\\^o formulation\n\\begin{equation}\\label{sec 2.3 - equation in Ito form}\n\\diff u = b\\cdot\\nabla u\\diff t + c\\,\\Delta u\\diff t + \\sum_{j,k} \\theta_k\\, e_k\\, a_k^{(j)}\\cdot \\nabla u\\diff W^{(j)}_k.\n\\end{equation}\nWe have actually only shown that formally \\eqref{sec 2.3 - equation in Stratonovich form} implies \\eqref{sec 2.3 - equation in Ito form}, but the same calculations done backward show that the two formulations are equivalent, whenever $u$ is a smooth solution. Observe that condition \\eqref{sec 2.2 - condition on the coefficients} on the coefficients $\\theta_k$ is necessary in order to give a meaning to equation \\eqref{sec 2.3 - equation in Stratonovich form}: otherwise, passing to the It\\^o formulation we would find a term of the form \\textquotedblleft $+\\infty\\,\\Delta u$\\textquotedblright\\ which is ill-defined even if $u$ had very good regularity.\nLet us stress that, even though the It\\^o formulation contains a diffusion term, this is actually a \\textquotedblleft fake Laplacian\\textquotedblright : the nature of the equation is still hyperbolic and it can be solved by characteristics; moreover, in the case $\\text{div}\\, b=0$, it can be checked that the energy $\\vert u\\vert_{L^2}$ is (formally) invariant, while under a real diffusion it would be dissipated.\nWe have performed the computations leading to \\eqref{sec 2.3 - definition of the constant c} explicitly, but we could also have derived it by the following reasoning: the matrix\n\\begin{equation}\\nonumber\nA= \\sum_k \\theta_k^2\\, P_k\n\\end{equation}\nis symmetric and positive semidefinite; by the isotropy condition it follows that $O^T A O = A$ for all $O\\in E_{\\mathbb{Z}}(d)$ and therefore necessarily $A=c I$ for some constant $c$. 
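Identity \\eqref{sec 2.3 - definition of the constant c} is also easy to confirm numerically. A minimal sketch, with the illustrative isotropic choice $\\theta_k = e^{-\\vert k\\vert}$ and the lattice truncated to a symmetric box (itself invariant under $E_{\\mathbb{Z}}(d)$, so the symmetrization argument applies verbatim and the identity is exact on the truncation as well):

```python
import itertools
import numpy as np

# Check sum_k theta_k^2 P_k = ((d-1)/d) (sum_k theta_k^2) I for isotropic
# coefficients; theta_k = exp(-|k|) is an illustrative choice, the lattice
# is the symmetric box {|k_i| <= R} \ {0}.
d, R = 3, 5
A = np.zeros((d, d))
total = 0.0
for k in itertools.product(range(-R, R + 1), repeat=d):
    if not any(k):
        continue
    k = np.array(k, dtype=float)
    khat = k / np.linalg.norm(k)
    theta2 = np.exp(-2 * np.linalg.norm(k))        # theta_k^2
    A += theta2 * (np.eye(d) - np.outer(khat, khat))
    total += theta2

c = (d - 1) / d * total
assert np.allclose(A, c * np.eye(d))
```

The off-diagonal cancellations and the equality of the diagonal entries are exactly the sign-flip and coordinate-swap symmetries used above.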
To see why $A$ must be a multiple of the identity, note that if $v$ is an eigenvector of $A$, then by the isotropy condition so is $Ov$, with respect to the same eigenvalue, for all $O\\in E_{\\mathbb{Z}}(d)$; this immediately implies that the associated eigenspace is the whole of $\\mathbb{R}^d$. But then taking the trace on both sides we find\n\\begin{equation}\\nonumber\n(d-1)\\sum_k\\theta_k^2 = dc,\n\\end{equation}\nwhich gives \\eqref{sec 2.3 - definition of the constant c}. This shows that the presence of $\\Delta$ is strictly related to the isotropy of the noise.\n\nSince we are interested in studying weak (in the analytical sense) solutions of equation \\eqref{sec 2.3 - equation in Ito form}, we need to rewrite it in a suitable way by testing it against functions in $C^\\infty(\\mathbb{T}^d)$. Recalling that, for any $k$ and $j$, $x\\mapsto a_k^{(j)}\\,e_k(x)$ is divergence free by construction, the weak formulation then corresponds to:\n\\begin{equation}\\label{sec 2.3 - equation in Ito form, weak formulation in differential form}\\begin{split}\n\\diff\\langle u,\\varphi\\rangle = & -\\langle u, \\text{div}(b\\varphi)\\rangle\\diff t + c\\,\\langle u,\\Delta \\varphi\\rangle\\diff t\\\\ & - \\sum_{j,k} \\theta_k\\, \\langle u, e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k\\quad \\forall\\,\\varphi\\in C^\\infty(\\mathbb{T}^d),\n\\end{split}\\end{equation}\nwhere as usual the above equation must be interpreted in the integral sense. In order for the term $\\langle u, \\text{div}(b\\varphi)\\rangle = \\langle u, \\varphi\\,\\text{div}\\,b + b\\cdot\\nabla\\varphi\\rangle$ to be well defined, we need at least to require the following assumption on $b$:\n\\begin{equation}\\tag{A1}\\label{sec 2.3 - assumption 1 on b}\nb\\in L^2(0,T;L^2),\\quad \\text{div}\\,b\\in L^2(0,T;L^2).\n\\end{equation}\nIt is natural in the definition of weak solution to require weak continuity in time of the solution. We denote by $C([0,T];L^2_w)$ the space of functions $f:[0,T]\\to L^2$ which are continuous w.r.t. 
the weak topology of $L^2$, namely $f(s)\\rightharpoonup f(t)$ as $s\\to t$. For more details on the weak topology, we refer to \\cite{Bre}. We are now ready to give the following definition.\n\\begin{definition}\\label{definition sec 2.3 - weak solution}\nLet $(\\Omega,\\mathcal{F},\\mathcal{F}_t,\\mathbb{P})$ be a filtered probability space, with normal filtration $\\{\\mathcal{F}_t\\}$, on which a collection $\\{B_k^{(j)}, k\\in\\mathbb{Z}^d_0, 1\\leq j\\leq d-1\\}$ of independent, standard $\\mathcal{F}_t$-Brownian motions is defined. Let $W$ be defined as in \\eqref{sec 2.2 - definition of the noise W(t,x)}, for given coefficients $\\{\\theta_k\\}_k$ satisfying \\eqref{sec 2.2 - condition on the coefficients}, \\eqref{sec 2.2 - isotropy condition}. We say that an $\\mathcal{F}_t$-progressively measurable, $L^2$-valued process $u$, with paths in $C([0,T];L^2_w)$, satisfying\n\\begin{equation}\\label{sec 2.3 - condition of bounded energy in the definition of weak solution}\n\\int_0^T \\mathbb{E}\\big[\\vert u(t)\\vert^2_{L^2}\\big]\\diff t <\\infty,\n\\end{equation}\nis a \\textbf{weak solution} (in the analytical sense) on the interval $[0,T]$ of the equation\n\\begin{equation}\\label{sec 2.3 - equation in Stratonovich compact form, useful repetition}\n\\diff u = b\\cdot\\nabla u\\diff t + \\circ \\diff W\\cdot\\nabla u\n\\end{equation}\nif, for every $\\varphi\\in C^\\infty(\\mathbb{T}^d)$, $\\mathbb{P}$-a.s. 
the following identity holds for all $t\\in [0,T]$:\n\\begin{equation}\\label{sec 2.3 - equation in Ito form, weak formulation in integral form}\\begin{split}\n\\langle u(t),\\varphi\\rangle - \\langle u(0),\\varphi\\rangle\n=& -\\int_0^t \\langle u(s), \\text{div}(b\\varphi)\\rangle\\diff s + c\\int_0^t \\langle u(s),\\Delta \\varphi\\rangle\\diff s\\\\\n& - \\sum_{j,k} \\theta_k\\, \\int_0^t \\langle u(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k(s).\n\\end{split}\\end{equation}\n\\end{definition}\n\nIn order to show that it is a good definition, we need to prove that equation \\eqref{sec 2.3 - equation in Ito form, weak formulation in integral form} is meaningful. By assumption \\eqref{sec 2.3 - assumption 1 on b} and condition \\eqref{sec 2.3 - condition of bounded energy in the definition of weak solution}, it holds\n\\begin{equation}\\nonumber\\begin{split}\n\\mathbb{E}&\\bigg[\\bigg\\vert \\int_0^t \\langle u(s), \\text{div}(b\\varphi)\\rangle\\diff s \\bigg\\vert\\bigg]\\\\\n&\\leq \\Vert \\varphi\\Vert_{W^{1,\\infty}} \n\\big(\\Vert b\\Vert_{L^2(0,T;L^2)} + \\Vert \\text{div}b\\Vert_{L^2(0,T;L^2)}\\big)\n\\, \\sqrt{\\int_0^T \\mathbb{E}\\big[\\vert u(t)\\vert^2_{L^2}\\big]\\diff t} <\\infty.\n\\end{split}\\end{equation}\nSince $u$ is $\\mathcal{F}_t$-progressively measurable, the real-valued process $t\\mapsto \\langle u(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle$ is also $\\mathcal{F}_t$-progressively measurable and can be integrated with respect to $W_k^{(j)}$, for any $k$ and $j$. Therefore we only need to check that the infinite series is convergent, in a suitable sense. 
By It\\^o isometry we have\n\\begin{equation}\\nonumber\\begin{split}\n\\mathbb{E}\\Bigg[\\Big\\vert \\sum_{j,k} \\theta_k & \\int_0^t \\langle u(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k(s) \\Big\\vert^2 \\Bigg]\\\\\n& = 2 \\sum_{j,k}\\theta_k^2\\, \\mathbb{E}\\Bigg[\\int_0^t\\vert \\langle u(s),e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\vert ^2\\diff s\\Bigg]\\\\\n& \\leq 2\\, \\sup_k \\theta_k^2\\, \\sum_{j,k} \\mathbb{E}\\Bigg[\\int_0^T \\vert\\langle u(s)\\,\\nabla\\varphi, a_k^{(j)}\\,e_k\\rangle\\vert^2\\diff s \\Bigg]\\\\\n& \\leq 2\\, \\sup_k \\theta_k^2\\, \\int_0^T\\mathbb{E}\\big[\\vert u(s)\\,\\nabla\\varphi\\vert_{L^2}^2\\big]\\diff s\\\\\n& \\leq 2\\, \\sup_k \\theta_k^2\\, \\Vert\\nabla\\varphi\\Vert_{\\infty}^2 \\int_0^T\\mathbb{E}\\big[\\vert u(s)\\vert_{L^2}^2\\big]\\diff s\n\\end{split}\\end{equation}\nand the last term is finite since $u$ satisfies \\eqref{sec 2.3 - condition of bounded energy in the definition of weak solution} and $\\theta_k$ satisfy \\eqref{sec 2.2 - condition on the coefficients}. In the above calculations we have exploited the fact that $\\{a_k^{(j)}\\, e_k, k\\in\\mathbb{Z}^d_0, 1\\leq j\\leq d-1\\}$ is an (incomplete) orthonormal system in $L^2(\\mathbb{T}^d;\\mathbb{C}^d)$.\n\nLet us now briefly discuss the energy balance for equation \\eqref{sec 2.3 - equation in Stratonovich compact form, useful repetition}. 
If $u$ were a classical smooth solution of the deterministic linear transport equation\n\\begin{equation}\\nonumber\n\\partial_t u = b\\cdot \\nabla u + v\\cdot\\nabla u,\n\\end{equation}\nwith $b$ as before and $v=v(t,x)$ being a divergence free vector field (both with periodic boundary condition), then we would have\n\\begin{equation}\\nonumber\\begin{split}\n\\frac{\\diff}{\\diff t}\\int_{\\mathbb{T}^d} u^2(t,x)\\diff x\n& = \\int_{\\mathbb{T}^d} 2\\,u(t,x)(b(t,x)+v(t,x))\\cdot \\nabla u(t,x)\\diff x\\\\\n& = \\int_{\\mathbb{T}^d} (b(t,x)+v(t,x))\\cdot \\nabla (u^2)(t,x)\\diff x\\\\\n& = -\\int_{\\mathbb{T}^d} (\\text{div} b)(t,x)\\,u^2(t,x)\\diff x\\\\\n& \\leq \\Vert \\text{div} b(t)\\Vert_\\infty \\int_{\\mathbb{T}^d} u^2(t,x)\\diff x,\n\\end{split}\\end{equation}\nand therefore by Gronwall's lemma we would obtain\n\\begin{equation}\\label{sec 2.3 - energy inequality, preliminary version}\n\\vert u(t)\\vert_{L^2}^2 \\leq \\vert u(0)\\vert_{L^2}^2\\,\\exp\\Big\\{\\int_0^t \\Vert \\text{div} b(s)\\Vert_\\infty\\diff s \\Big\\}.\n\\end{equation}\nIt is therefore natural to impose the following condition on $b$:\n\\begin{equation}\\tag{A2}\\label{sec 2.3 - assumption 2 on b}\n\\text{div}\\,b \\in L^1(0,T;L^\\infty).\n\\end{equation}\nUsing the properties of Stratonovich integral (or if one prefers using a Wong-Zakai approximation technique), it can be shown that, whenever $u$ is a smooth solution of (STLE), the above calculation still holds, since by construction $W(t,\\cdot)$ is divergence free. However, since equation \\eqref{sec 2.3 - equation in Stratonovich compact form, useful repetition} is hyperbolic in nature, we don't expect solutions with initial data only in $L^2$ to become more regular and in this case the above reasoning doesn't hold. 
By approximation with smooth solutions, we can still at least expect the final inequality \\eqref{sec 2.3 - energy inequality, preliminary version} to hold also for weak solutions.\n\n\nThe above observations lead to the following notion of energy solutions for the Cauchy problem given by \\eqref{sec 2.3 - equation in Stratonovich compact form, useful repetition} and an initial condition $u_0$:\n\\begin{definition}\\label{definition sec 2.3 - energy solution} Given a deterministic initial condition $u_0\\in L^2$, we say that $u$ is an \\textbf{energy solution} of the Cauchy problem\n\\begin{equation}\\nonumber\n\\begin{cases} \\diff u = b\\cdot\\nabla u\\diff t + \\circ \\diff W\\cdot\\nabla u\\\\ u(0)=u_0\n\\end{cases}\n\\end{equation}\nif $u$ is a weak solution of \\eqref{sec 2.3 - equation in Stratonovich compact form, useful repetition}, equation \\eqref{sec 2.3 - equation in Ito form, weak formulation in integral form} is satisfied with $u(0)=u_0$ and the following \\textbf{energy inequality} holds:\n\\begin{equation}\\label{sec 2.3 - energy inequality}\n\\sup_{t\\in [0,T]} \\Big\\{e^{-1\/2 \\int_0^t \\Vert \\text{div} b(s,\\cdot)\\Vert_\\infty\\diff s}\\,\\vert u(t)\\vert_{L^2} \\Big\\} \\leq \\vert u_0\\vert_{L^2} \\qquad \\mathbb{P}\\text{-a.s.}\n\\end{equation}\n\\end{definition}\n\nLet us finally define what we mean by convergence in probability in abstract topological spaces. If $X_n$ is a sequence of random variables defined on the same probability space, with values in $(E,\\tau,\\mathcal{B}(\\tau))$, where $\\tau$ is a topology and $\\mathcal{B}(\\tau)$ is the associated Borel-$\\sigma$ algebra, we say that $X_n\\to X$ in probability if any subsequence of $\\{X_n\\}_n$ contains a subsequence which converges to $X$ $\\mathbb{P}$-a.s.. 
We need this definition because we will work with convergence in probability in a non-metrizable topology.\n\n\\section{Rigorous statement and proof of the main result}\\label{section 3 - proof of the main result}\nIn this section we provide a rigorous statement of the main result and its proof. Throughout the section we consider a fixed, a priori given filtered probability space $(\\Omega,\\mathcal{F},\\mathcal{F}_t,\\mathbb{P})$ together with a collection $\\{B_k^{(j)}, k\\in\\mathbb{Z}^d_0, 1\\leq j\\leq d-1\\}$ of independent, standard $\\mathcal{F}_t$-Brownian motions. However, we consider different choices of the parameters $\\{\\theta_k\\}_k$, so that we can obtain different space-time dependent noises $W(t,x)$ constructed from $\\{B_k^{(j)}\\}_{k,j}$ by \\eqref{sec 2.2 - definition of the noise W(t,x)}. All these noises are still defined on the same probability space with respect to the same filtration; the drift $b$ is fixed. Whenever referring to energy solutions of \\eqref{sec 2.3 - equation in Stratonovich compact form, useful repetition}, we will therefore consider solutions which are strong in the probabilistic sense (i.e. progressively measurable w.r.t. $\\mathcal{F}_t$), all defined on the same probability space. 
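For intuition, a single realization of such a noise is easy to sample in $d=2$, where one can take $a_k = k^\\perp\/\\vert k\\vert$. The sketch below uses a real Fourier form with the illustrative coefficients $\\theta_k=\\mathbbm{1}_{\\{\\vert k\\vert_\\infty\\leq 8\\}}$ (not the exact complex conventions of Section 2) and checks that the sampled field is divergence free:

```python
import itertools
import numpy as np

# Sample W(1, .) = sum_k theta_k [xi_k cos(2 pi k.x) + eta_k sin(2 pi k.x)] a_k
# on a periodic grid, with a_k = k^perp/|k| (d = 2, real form, illustrative).
rng = np.random.default_rng(1)
Nmodes, Ngrid = 8, 64
x = np.arange(Ngrid) / Ngrid
X, Y = np.meshgrid(x, x, indexing="ij")

W = np.zeros((2, Ngrid, Ngrid))
for k1, k2 in itertools.product(range(-Nmodes, Nmodes + 1), repeat=2):
    if (k1, k2) == (0, 0):
        continue
    k = np.array([k1, k2], dtype=float)
    a = np.array([-k2, k1]) / np.linalg.norm(k)   # a_k . k = 0
    phase = 2 * np.pi * (k1 * X + k2 * Y)
    xi, eta = rng.standard_normal(2)              # samples of the BMs at t = 1
    W += a[:, None, None] * (xi * np.cos(phase) + eta * np.sin(phase))

# spectral divergence of the sampled field should vanish
freq = 2j * np.pi * np.fft.fftfreq(Ngrid, d=1 / Ngrid)
div = np.real(np.fft.ifft2(np.fft.fft2(W[0]) * freq[:, None]
                           + np.fft.fft2(W[1]) * freq[None, :]))
assert np.abs(div).max() < 1e-6
```

Each Fourier mode is divergence free because $a_k\\perp k$, so the whole field is; this is the property that makes the noise energy preserving.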
The main result can then be stated as follows.\n\\begin{theorem}\\label{theorem sec 3 - main result}\nLet $\\{\\theta_k^N, k\\in\\mathbb{Z}^d_0, N\\in\\mathbb{N} \\}$ be a collection of real coefficients such that:\n\\begin{itemize}\n\\item[i)] For each $N$, $\\{\\theta_k^N\\}_k$ satisfies \\eqref{sec 2.2 - condition on the coefficients} and \\eqref{sec 2.2 - isotropy condition}.\n\\item[ii)] It holds\n\\begin{equation}\\label{sec 3 - condition on the coefficients in the statement of the main theorem}\\tag{H3}\n\\lim_{N\\to\\infty} \\frac{\\sup_k (\\theta_k^N)^2}{\\sum_k (\\theta_k^N)^2} = 0.\n\\end{equation}\n\\end{itemize}\nAssume that $b$ satisfies \\eqref{sec 2.3 - assumption 1 on b}, \\eqref{sec 2.3 - assumption 2 on b} and the following:\n\\begin{itemize}\n\\item[(A3)] $b$ is such that, for any $\\nu>0$, uniqueness holds in the class of weak $L^\\infty(0,T;L^2)$ solutions of the parabolic Cauchy problem\n\\begin{equation}\\label{sec 3 - parabolic cauchy limit problem in the statement of the main theorem}\n\\begin{cases} \\partial_t u = \\nu\\Delta u + b\\cdot\\nabla u\\\\ u(0)=u_0 \\end{cases}.\n\\end{equation}\n\\end{itemize}\nLet $W^N$ denote the divergence free noises constructed from the coefficients $\\{\\theta^N_k\\}_k$ as in \\eqref{sec 2.2 - definition of the noise W(t,x)}. 
Then for any $\\nu>0$ there exists a sequence of constants $\\varepsilon^N$, depending on the coefficients $\\{\\theta_k^N\\}$, such that, for any $u_0\\in L^2$, any sequence of energy solutions $u^N$ of the Cauchy problems\n\\begin{equation}\\label{sec 3 - cauchy probelms for uN in the statement of the main theorem}\n\\begin{cases} \\diff u^N= b\\cdot\\nabla u^N\\diff t + \\sqrt{\\varepsilon^N} \\circ \\diff W^N\\cdot\\nabla u^N\\\\\nu^N(0)=u_0\n\\end{cases}\n\\end{equation}\nconverges in probability, in $L^\\infty(0,T;L^2)$ endowed with the weak-$\\star$ topology, to the unique weak solution of the deterministic Cauchy problem \\eqref{sec 3 - parabolic cauchy limit problem in the statement of the main theorem}.\nIn particular, the constants $\\varepsilon^N$ can be taken as\n\\begin{equation}\\label{sec 3 - choice of the varepsilonN in the statement of the main theorem}\n\\varepsilon^N = \\nu\\,\\frac{d}{d-1}\\Big(\\sum_k(\\theta_k^N)^2 \\Big)^{-1}.\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nThe basic idea of the proof is the following: when we rewrite the transport SPDE in It\\^o form, we can see that in general the It\\^o-Stratonovich corrector term is well defined under more restrictive conditions than the It\\^o integral. We can exploit this to our advantage by introducing a multiplicative renormalization $\\sqrt{\\varepsilon^N}$ under which the corrector term is uniformly bounded with respect to $N$, while under condition \\eqref{sec 3 - condition on the coefficients in the statement of the main theorem} the It\\^o integrals become infinitesimal. To this aim, it is fundamental to have uniform control on the energy of the solutions $u^N$, which is why we work with energy solutions. We now formalize this reasoning properly.\n\n\nBy definition of energy solutions, we know that for each $N$ inequality \\eqref{sec 2.3 - energy inequality} holds. 
In particular, it follows that there exists a constant $K$, which only depends on $b$, such that\n\\begin{equation}\\label{sec 3 - uniform energy estimate inside proof of the main theorem}\n\\sup_N \\Vert u^N(\\omega)\\Vert_{L^\\infty(0,T,L^2)}\n= \\sup_N \\sup_{t\\in [0,T]} \\vert u^N(\\omega,t)\\vert_{L^2} \\leq K\\vert u_0\\vert_{L^2} \\quad \\text{for }\\mathbb{P}\\text{-a.e. }\\omega\n\\end{equation}\nand\n\\begin{equation}\\nonumber\n\\sup_N \\int_0^T\\mathbb{E}[\\vert u^N(t)\\vert_{L^2}^2]\\diff t\\leq T\\,K^2\\vert u_0\\vert_{L^2}^2.\n\\end{equation}\nRewriting the Cauchy problem in It\\^o form, by the definition of energy solution we obtain that, for any $N$ and for any $\\varphi\\in C^{\\infty}(\\mathbb{T}^d)$, it holds\n\\begin{equation}\\nonumber\\begin{split}\n\\langle u^N(t),\\varphi\\rangle - \\langle u_0,\\varphi\\rangle\n=& -\\int_0^t \\langle u^N(s), \\text{div}(b\\varphi)\\rangle\\diff s + \\varepsilon^N c^N \\int_0^t \\langle u^N(s),\\Delta \\varphi\\rangle\\diff s\\\\\n& - \\sqrt{\\varepsilon^N}\\,\\sum_{j,k} \\theta^N_k\\, \\int_0^t \\langle u^N(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k(s),\n\\end{split}\\end{equation}\nwhere $c^N$ is defined as in \\eqref{sec 2.3 - definition of the constant c}. 
With the choice \\eqref{sec 3 - choice of the varepsilonN in the statement of the main theorem} of $\\varepsilon^N$, for which $\\varepsilon^N c^N = \\nu$, the equation becomes\n\\begin{equation}\\label{sec 3 - identity for energy solutions uN inside proof main theorem}\\begin{split}\n\\langle u^N(t),\\varphi\\rangle - \\langle u_0,\\varphi\\rangle\n=& -\\int_0^t \\langle u^N(s), \\text{div}(b\\varphi)\\rangle\\diff s + \\nu \\int_0^t \\langle u^N(s),\\Delta \\varphi\\rangle \\diff s\\\\\n& - \\sqrt{\\varepsilon^N}\\,\\sum_{j,k} \\theta^N_k\\, \\int_0^t \\langle u^N(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k(s).\n\\end{split}\\end{equation}\nUsing estimates similar to the ones of Section \\ref{subsection 2.3 - SPDE in Ito form and definition of energy solutions}, it holds\n\\begin{equation}\\nonumber\\begin{split}\n\\varepsilon^N\\, & \\mathbb{E}\\Bigg[ \\Big(\\sum_{j,k}\\theta^N_k \\int_0^T \\langle u^N(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k(s)\\Big)^2 \\Bigg]\\\\\n& \\leq 2\\varepsilon^N\\, \\sup_k(\\theta_k^N)^2\\,\\Vert \\nabla\\varphi\\Vert_{\\infty}^2 \\int_0^T\\mathbb{E}[\\vert u^N(t)\\vert_{L^2}^2] \\diff t\\\\\n& \\leq \\widetilde{K} \\Vert\\nabla\\varphi\\Vert_{\\infty}^2 \\frac{\\sup_k(\\theta_k^N)^2}{\\sum_k (\\theta^N_k)^2}\\to 0 \\text{ as } N\\to\\infty\n\\end{split}\\end{equation}\nby assumption \\eqref{sec 3 - condition on the coefficients in the statement of the main theorem}. Using the properties of the It\\^o integral and Doob's inequality, we deduce that for any fixed $\\varphi$\n\\begin{equation}\\nonumber\n\\sup_{t\\in [0,T]} \\sqrt{\\varepsilon^N}\\,\\bigg\\vert\\sum_{j,k}\\theta^N_k\\int_0^t \\langle u^N(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k(s) \\bigg\\vert\\to 0 \\text{ in probability w.r.t. 
}\\mathbb{P}.\n\\end{equation}\nLet $\\{\\varphi_n\\}_n$ be a countable dense subset of $C^\\infty(\\mathbb{T}^d)$; by a diagonal extraction argument, we can find a subsequence (which will still be denoted by $N$ for simplicity) and a set $\\Gamma\\subset \\Omega$ with $\\mathbb{P}(\\Gamma)=1$ such that: the above process converges uniformly to 0 for every $\\varphi_n$ and for every $\\omega\\in \\Gamma$; inequality \\eqref{sec 3 - uniform energy estimate inside proof of the main theorem} holds for every $\\omega\\in \\Gamma$. Now let us consider a fixed $\\omega\\in \\Gamma$ and the realizations $\\{u^N(\\omega)\\}_N$. Since they form a bounded sequence in $L^\\infty(0,T;L^2)$, we can extract a subsequence (which depends on $\\omega$) which is weak-$\\star$ convergent to some $u\\in L^\\infty(0,T;L^2)$. Taking the limits on both sides of \\eqref{sec 3 - identity for energy solutions uN inside proof main theorem}, since $\\omega\\in\\Gamma$, we find that for every $n$\n\\begin{equation}\\nonumber\n\\langle u(t),\\varphi_n\\rangle - \\langle u_0,\\varphi_n\\rangle = -\\int_0^t \\langle u(s), \\text{div}(b \\varphi_n)\\rangle\\diff s + \\nu \\int_0^t \\langle u(s),\\Delta \\varphi_n\\rangle\\diff s.\n\\end{equation}\nBy density we can extend the above identity to all $\\varphi\\in C^\\infty(\\mathbb{T}^d)$, so that $u$ is a weak solution of the Cauchy problem\n\\begin{equation}\\nonumber\\begin{cases}\n\\partial_t u = \\nu\\Delta u + b\\cdot\\nabla u\\\\ u(0)=u_0\n\\end{cases}.\\end{equation}\nBy assumption (A3), the candidate limit is therefore unique; since the argument applies to any subsequence of $\\{u^N(\\omega)\\}_N$, we conclude that the entire sequence converges weakly-$\\star$ to the unique solution of the above problem, without the need of selecting an $\\omega$-dependent subsequence. Moreover the reasoning holds for any $\\omega\\in \\Gamma$. 
Summarising, we have shown the existence of a subsequence of $\\{u^N\\}_N$ such that, for any $\\omega\\in\\Gamma$, this subsequence converges in the weak-$\\star$ topology of $L^\\infty(0,T;L^2)$ to the unique solution of the above deterministic parabolic equation. Since the reasoning holds also for any subsequence of $\\{u^N\\}$, we conclude that convergence in probability in $L^\\infty(0,T;L^2)$ endowed with the weak-$\\star$ topology holds.\n\\end{proof}\n\n\n\\begin{remark} Let us make some comments on the above result.\n\n\\begin{itemize}\n\n\\item[i)] For any $u^N$ solving \\eqref{sec 3 - cauchy probelms for uN in the statement of the main theorem}, its expectation $\\tilde{u}^N(t)=\\mathbb{E}[u^N(t)]$ solves \\eqref{sec 3 - parabolic cauchy limit problem in the statement of the main theorem}. Therefore the result can be expressed as the convergence in probability of the solutions $u^N$ to their common mean value, which is a weak law of large numbers.\n\n\\item[ii)] Observe that whenever we consider coefficients $\\{\\theta^N_k\\}$ satisfying \\eqref{sec 3 - condition on the coefficients in the statement of the main theorem} such that $\\sup_k \\vert \\theta^N_k\\vert =1$ for all $N$ (some examples will be given shortly), the sequence of noises $W^N(t,x)$ has bounded norm in some distribution spaces, like $H^\\alpha$ for $\\alpha<-d\/2$. Recalling that\n\\begin{equation}\\nonumber\n\\mathbb{E}\\big[\\vert W^N(1,\\cdot)\\vert_{L^2}^2\\big]= 2(d-1)\\sum_k(\\theta_k^N)^2,\n\\end{equation}\nwe find that the constants $\\varepsilon^N$ and $\\nu$ must satisfy the relation\n\\begin{equation}\\label{sec 3 - nu measures the irregularity\/magnitude relation}\n\\nu = C(d)\\lim_{N\\to\\infty}\\varepsilon^N\\,\\mathbb{E}\\big[\\vert W^N(1,\\cdot)\\vert_{L^2}^2\\big],\n\\end{equation}\nwhere $C(d)=1\/(2d)$ is a dimensional constant, independent of the probability space, the coefficients $\\{\\theta_k^N\\}_{k,N}$ and the noises $W^N$ considered. 
Therefore the parameter $\\nu$ appearing in the limit equation in front of the dissipation term $\\Delta$ measures the product of the spatial irregularity of the noise (in terms of its $L^2$ norm) and its magnitude.\n\n\\item[iii)] We illustrate some typical examples of coefficients $\\theta_k^N$, widely used in other contexts, which satisfy \\eqref{sec 2.2 - isotropy condition} and \\eqref{sec 3 - condition on the coefficients in the statement of the main theorem}. Let $F:\\mathbb{R}_{\\geq 0}\\to\\mathbb{R}_{\\geq 0}$ be a smooth, decreasing function of compact support with $F(0)=1$ and consider a sequence of positive real numbers $\\alpha_N\\to 0$; then we can take $\\theta_k^N:= F(\\alpha_N\\vert k\\vert)$. Other choices, for $\\alpha_N$ infinitesimal, are\n\\begin{equation}\\nonumber\n\\theta_k^N = (1+\\alpha_N\\vert k\\vert^2)^\\beta \\text{ for some }\\beta<-d\/2,\\quad \\theta_k^N = (1+\\vert k\\vert^2)^{-d\/2-\\alpha_N}.\n\\end{equation}\nWe can also take $\\theta^N_k=\\mathbbm{1}_{B(0,1)}(\\alpha_N\\vert k\\vert)$, where $\\mathbbm{1}_A$ denotes the characteristic function of $A$. These examples can also be combined to produce new ones. In terms of Fourier multipliers, some of the above examples are standard rescaled volume cutoffs in Fourier space, others correspond to operators like $(1-\\alpha_N\\Delta)^{\\beta}$ or $(1-\\Delta)^{-d\/2-\\alpha_N}$.\n\n\\item[iv)] The theorem resembles a renormalization statement: different choices of the coefficients $\\theta_k^N$, which can be spatial regularizations of a space-time white noise, require different multiplicative constants $\\varepsilon^N$, but the final limit solves an equation which is independent of $\\theta_k^N$, up to the choice of a 1-dimensional parameter $\\nu$. We have however already pointed out in the introduction the presence of some degeneracy in our result. 
Indeed, different choices of the parameters $\\theta_k^N$ can lead to very different limits for $W^N$ in terms of regularity: for instance, taking $(1+\\alpha_N\\vert k\\vert)^{-\\beta}$, the sequence will converge to white noise, while taking $(1+\\vert k\\vert^2)^{-1}\\mathbbm{1}_{B(0,1)}(\\alpha_N\\vert k\\vert)$ it will converge to a Gaussian free field (properly speaking, since we want divergence-free distributions, it will converge to the image under $\\Pi$ of the aforementioned objects). However, in both cases the multiplicative constants $\\varepsilon^N$ will still give convergence to the same limit. It is therefore unclear whether the choice of such a renormalization is too strong, in the sense that it ignores too much information on the dynamics, and whether there is some more refined way to recover it, like a \\textquotedblleft higher order expansion\\textquotedblright\\ which not only measures the $L^2$-regularity of $W^N$ but also other norms.\n\n\\item[v)] We have required $b$ to satisfy \\eqref{sec 2.3 - assumption 2 on b} in order to deal with energy solutions, but in principle the structure of the proof holds for any sequence $u^N$ of weak solutions satisfying a uniform bound of the form\n\\begin{equation}\\nonumber\n\\sup_N \\int_0^T \\mathbb{E}[\\vert u^N(s)\\vert_{L^2}^2]\\diff s \\leq K\n\\end{equation}\nat the price of restricting ourselves to a weaker notion of convergence, namely weak convergence in $L^2(\\diff \\mathbb{P}\\otimes \\diff t; L^2)$. It is possible that more refined a priori estimates on the solutions $u^N$ provide this kind of bound under milder conditions on $b$ than \\eqref{sec 2.3 - assumption 2 on b}.\n\\end{itemize}\n\\end{remark}\n\nWe now provide explicit sufficient conditions on $b$ under which assumption (A3) is satisfied. 
We give the statement in full generality, even when assumption \\eqref{sec 2.3 - assumption 2 on b} does not hold.\n\\begin{lemma}\\label{lemma sec 3 - sufficient condition for uniqueness of parabolic problem} Consider $b$ such that \\eqref{sec 2.3 - assumption 1 on b} holds, as well as the following condition:\n\\begin{equation}\\tag{A4}\\label{sec 3 - KR type condition on b}\n\\begin{cases}\nb\\in L^{p_1}(0,T;L^{q_1}(\\mathbb{T}^d))\\quad\n&\\text{with } q_1\\in (d,+\\infty],\\ p_1\\in \\big( \\frac{2 q_1}{q_1 - d},+\\infty\\big]\\\\\n\\textnormal{div}\\, b\\in L^{p_2}(0,T;L^{q_2}(\\mathbb{T}^d)) &\\text{with } q_2\\in \\big(\\frac{d}{2},+\\infty\\big],\\ p_2\\in \\big( \\frac{2 q_2}{2 q_2 - d},+\\infty\\big]\n\\end{cases}.\\end{equation}\nThen (A3) holds, i.e. we have uniqueness in the class of weak $L^\\infty(0,T;L^2(\\mathbb{T}^d))$ solutions of the Cauchy problem\n\\begin{equation}\\label{sec 3 - parabolic Cauchy problem, useful repetition}\\begin{cases}\n\\partial_t u = \\nu\\Delta u + b\\cdot \\nabla u\\\\\nu(0)=u_0 \\in L^2(\\mathbb{T}^d)\n\\end{cases}.\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWithout loss of generality we can assume $\\nu = 1$. By linearity, it suffices to show uniqueness for $u_0=0$. We first show that $u$ is also a mild solution of \\eqref{sec 3 - parabolic Cauchy problem, useful repetition}. 
Indeed, if $u$ is a weak solution of \\eqref{sec 3 - parabolic Cauchy problem, useful repetition}, then for any interval $[s,t]\\subset [0,T]$ and for any $\\varphi\\in C^\\infty([s,t]\\times\\mathbb{T}^d)$ it holds\n\\begin{equation}\\nonumber\\begin{split}\n\\langle u(t),\\varphi(t)\\rangle - \\langle u(s),\\varphi (s)\\rangle & = \\int_s^t \\langle u(r), (\\partial_t+\\Delta)\\varphi(r)\\rangle\\diff r\\\\& - \\int_s^t \\langle u(r), \\text{div}(b(r)\\varphi(r))\\rangle\\diff r.\n\\end{split}\\end{equation}\nThis can be accomplished by taking a partition $s=t_0<t_1<\\dots<t_n=t$, applying the definition of weak solution on each subinterval $[t_i,t_{i+1}]$ with the test function frozen at time $t_i$, and letting the mesh of the partition go to $0$. Testing the above identity on the interval $[0,t-\\delta]$ for small $\\delta>0$ against the convolution with the heat kernel $P_{t-s}$ and letting $\\delta\\to 0$, using $u_0=0$ we obtain the mild formulation\n\\begin{equation}\\nonumber\nu(t) = \\int_0^t P_{t-s} (\\nabla\\cdot (bu)-\\text{div} b\\, u)\\diff s \\quad \\forall\\, t\\in [0,T].\n\\end{equation}\nIn order to conclude it suffices to show that the map\n\\begin{equation}\\nonumber\nu\\mapsto \\int_0^\\cdot P_{\\cdot-s} (\\nabla\\cdot (bu)-\\text{div} b\\, u)\\diff s\n\\end{equation}\nis a contraction of $L^\\infty([0,T^\\ast],L^2(\\mathbb{T}^d))$ into itself, for $T^\\ast>0$ sufficiently small. If this is the case, then necessarily $u\\equiv 0$ on $[0,T^\\ast]$, and we can iterate the argument to cover the whole $[0,T]$. 
We treat separately the two terms\n\\begin{equation}\\nonumber\nu(t)=\\int_0^t P_{t-s}(\\nabla\\cdot(bu))\\diff s - \\int_0^t P_{t-s}(\\text{div}b\\, u)\\diff s = (I)(t) + (II)(t).\n\\end{equation}\nFor the first term, using regularity of the heat kernel and the fractional Sobolev embeddings, we have\n\\begin{equation}\\nonumber\\begin{split}\n\\vert I(t)\\vert_{L^2}\n& \\leq \\int_0^t \\vert P_{t-s}(\\nabla\\cdot(bu))\\vert_{L^2}\\diff s\\\\\n& \\leq C \\int_0^t \\Vert P_{t-s}(\\nabla\\cdot(bu))\\Vert_{W^{\\alpha,r}}\\diff s\\\\\n& \\leq C \\int_0^t (t-s)^{-(1+\\alpha)\/2} \\Vert bu\\Vert_{L^r}\\diff s\\\\\n& \\leq C \\Vert u\\Vert_{L^\\infty(0,t;L^2)} \\int_0^t (t-s)^{-(1+\\alpha)\/2} \\Vert b\\Vert_{L^{q_1}}\\diff s\n\\end{split}\\end{equation}\nwhere $\\frac{1}{r}=\\frac{1}{q_1}+\\frac{1}{2}$ and $W^{\\alpha,r}\\hookrightarrow L^{\\tilde{r}}$, $\\frac{1}{\\tilde{r}} = \\frac{1}{r}-\\frac{\\alpha}{d}$, $\\tilde{r}\\geq 2$ for some $\\alpha<1$ thanks to \\eqref{sec 3 - KR type condition on b}. Young's convolution inequality then gives\n\\begin{equation}\\nonumber\n\\Vert I\\Vert_{L^\\infty(0,T^\\ast;L^2)}\\leq C_1 \\Vert u\\Vert_{L^\\infty(0,T^\\ast;L^2)}\\, \\Vert b\\Vert_{L^{p_1}(0,T;L^{q_1})}\\, \\bigg( \\int_0^{T^\\ast} s^{-p_1^\\ast(1+\\alpha)\/2}\\diff s\\bigg)^{1\/p_1^\\ast},\n\\end{equation}\nwhere $p_1^\\ast$ denotes the conjugate exponent of $p_1$. The last quantity is finite if we can take $\\alpha$ such that $p_1^\\ast(1+\\alpha)<2$, which is guaranteed by \\eqref{sec 3 - KR type condition on b}. 
In the case of $(II)(t)$ the calculations are similar, with only a slight difference in the initial part; they lead to\n\\begin{equation}\\nonumber\\begin{split}\n\\Vert & II\\Vert_{L^\\infty(0,T^\\ast;L^2)}\\\\ \n& \\leq C_2 \\Vert u\\Vert_{L^\\infty(0,T^\\ast;L^2)}\\, \\Vert \\text{div}b\\Vert_{L^{p_2}(0,T;L^{q_2})}\\, \\bigg( \\int_0^{T^\\ast} s^{-p_2^\\ast(1+\\alpha_2)\/2}\\diff s\\bigg)^{1\/p_2^\\ast}\n\\end{split}\\end{equation}\nfor a suitable $\\alpha_2$ such that the integral is finite. In particular, this shows that for $T^\\ast$ small enough, the map is a contraction, which concludes the proof.\n\\end{proof}\n\\begin{remark}\nUp to slight modifications, it can be shown with the same type of proof that under \\eqref{sec 3 - KR type condition on b}, uniqueness holds also in the class of weak solutions $u\\in L^r(0,T;L^2(\\mathbb{T}^d))$, with the additional condition $p_1, p_2\\geq r^\\ast$. Observe that condition $p_1> 2q_1\/(q_1-d)$ may be rewritten as\n\\begin{equation}\\nonumber\n\\frac{2}{p_1}+\\frac{d}{q_1}<1,\n\\end{equation}\nwhich is known in the literature as the Krylov-R\\\"ockner condition, see \\cite{Kry}, \\cite{BecFla}.\n\\end{remark}\n\nThe proof of Lemma \\ref{lemma sec 3 - sufficient condition for uniqueness of parabolic problem} is standard (it is a slight improvement of the one contained in \\cite[Lemma 3.2]{Mau}, which is restricted to the case of time-independent $b$) but we had to provide it for two main reasons. The first is that most of the results in the literature are set in $\\mathbb{R}^d$ rather than $\\mathbb{T}^d$; the second and more important one is that uniqueness for \\eqref{sec 3 - parabolic Cauchy problem, useful repetition} is usually proved among solutions in a more regular class, typically $H^p_{2,q}:=L^p(0,T;W^{2,q})\\cap W^{1,p}(0,T;L^q)$, see \\cite{Kry2} and the appendix of \\cite{Kry}.
If $u$ belongs to this class, then $\\nabla u\\in L^\\infty([0,T];L^\\infty)$ and so there is no need to impose conditions on $\\text{div}\\,b$. Here however, since our solution $u$ is obtained as the limit of solutions of transport equations, we cannot infer that it belongs to $H^p_{2,q}$, which is why we need to impose the stronger condition \\eqref{sec 3 - KR type condition on b}. Further improvements may be possible (for instance, if both conditions \\eqref{sec 2.3 - assumption 2 on b} and \\eqref{sec 3 - KR type condition on b} are imposed, then \\eqref{sec 2.3 - assumption 1 on b} can be dropped), but we believe the result to be fairly optimal; indeed the Krylov-R\\\"ockner (KR) condition arises naturally as the subcritical regime of a scaling argument, and reaching the critical case (usually referred to as the Ladyzhenskaya-Prodi-Serrin (LPS) condition)\n\\begin{equation}\\nonumber\n\\frac{2}{p_1}+\\frac{d}{q_1}=1\n\\end{equation}\nis in general very difficult and seems out of reach in a class of functions like $L^\\infty(0,T;L^2)$. For more details on the topic (both the scaling argument and the critical regime) we refer to \\cite{BecFla} and the references therein.\n\\section{Discussion of existence and uniqueness}\\label{section 4 - discussion of existence and uniqueness}\n\nIn order for the statement of Theorem \\ref{theorem sec 3 - main result} to be non-vacuous, we discuss in this section existence and uniqueness of energy solutions, even though this is not the main aim of this paper. Existence is obtained by a standard Galerkin scheme; regarding uniqueness, several references are given, as well as a proof in the special case $b=0$, but a full answer is missing.
We stress however that the statement of the main result holds for \\textit{any} sequence of energy solutions, regardless of their uniqueness; indeed the strength of the result also relies on the fact that the limit satisfies an equation which is a priori much better posed than the approximating sequence.\\\\\nAs in the previous section, we consider an a priori given filtered probability space $(\\Omega,\\mathcal{F},\\mathcal{F}_t,\\mathbb{P})$ with an $\\mathcal{F}_t$-adapted noise $W$, namely we work in the framework of strong solutions in the probabilistic sense.\n\\subsection{Existence of energy solutions}\\label{subsection 4.1 - existence of energy solutions}\nIn this subsection, the existence of energy solutions for any initial data $u_0\\in L^2(\\mathbb{T}^d)$ is shown. The proof is standard and based on a Galerkin approximation scheme.\n\nFirst we need some preparations. Throughout the proof we will adopt the following notation: we denote by $L^2(\\diff \\mathbb{P}\\otimes \\diff t; L^2)$ the space of all $L^2$-valued, square integrable (in the Bochner sense) functions defined on $\\Omega\\times [0,T]$, endowed with the product $\\sigma$-algebra $\\mathcal{F}\\otimes \\mathcal{B}([0,T])$ and the product measure $\\diff\\mathbb{P}\\otimes \\diff t$. $L^2(\\diff \\mathbb{P}\\otimes \\diff t; L^2)$ is a separable Hilbert space with the scalar product\n\\begin{equation}\\nonumber\n\\langle f, g\\rangle = \\int_0^T\\mathbb{E}[\\langle f(t), g(t)\\rangle_{L^2}]\\diff t.\n\\end{equation}\nMoreover, it is reflexive and its closed balls are weakly compact, due to the Hilbert space structure, see \\cite[Proposition 5.1]{Bre}. Also recall that under weak continuity assumptions (which are satisfied by energy solutions by definition), $\\mathcal{F}_t$-adapted processes are actually predictable, namely measurable with respect to the sub-$\\sigma$-algebra $\\mathcal{P}$ of predictable sets, see \\cite[Proposition 3.7]{DaP}.
In particular, predictable processes form a closed subspace of $L^2(\\diff \\mathbb{P}\\otimes \\diff t; L^2)$ and therefore they are also closed with respect to weak convergence.\n\\begin{theorem}\\label{theorem sec 4.1 - existence of energy solutions}\nLet $b$ satisfy \\eqref{sec 2.3 - assumption 1 on b} and \\eqref{sec 2.3 - assumption 2 on b}, $\\{\\theta_k\\}_k$ satisfy \\eqref{sec 2.2 - condition on the coefficients} and \\eqref{sec 2.2 - isotropy condition} and $W$ be the associated divergence free noise. Then for any $u_0\\in L^2$ there exists an energy solution $u$ of \\eqref{sec 2.3 - equation in Stratonovich compact form, useful repetition}.\n\\end{theorem}\n\\begin{proof}\nFor any $N>0$, let $\\Pi_N$ denote the Fourier projector onto the modes with magnitude $\\vert k\\vert\\leq N$; define $u_0^N = \\Pi_N u_0$. For any $N$, consider the following Cauchy problem:\n\\begin{equation}\\label{sec 4.1 - Galerkin approximation system inside existence theorem}\n\\begin{cases}\n\\diff u^N = \\Pi_N (b\\cdot \\nabla u^N)\\diff t +c\\Delta u^N\\diff t + \\Pi_N\\left(\\sum_{j,k} \\theta_k\\, e_k\\, a_k^{(j)}\\cdot \\nabla u^N\\diff W^{(j)}_k \\right)\\\\\nu^N(0)=u_0^N\n\\end{cases}\n\\end{equation}\nIt can be checked, by writing explicitly the Fourier decomposition, that the above system only involves the noises $W_k^{(j)}$ with indices in a finite set. It is therefore a linear SDE defined on a finite-dimensional space (the space of Fourier polynomials of degree at most $N$) and as such it admits a unique local solution $u^N$ with continuous paths.
Moreover, in this setting the heuristic energy balance calculation of Section \\ref{subsection 2.3 - SPDE in Ito form and definition of energy solutions} is rigorous, since all the sums involved are finite, and therefore $u^N$ is defined on the whole $[0,T]$ and satisfies\n\\begin{equation}\\nonumber\n\\sup_{t\\in [0,T]}\\Big\\{ \\vert u^N(t)\\vert^2_{L^2}\\, e^{-\\int_0^t \\Vert \\text{div}b(s,\\cdot)\\Vert_\\infty\\diff s} \\Big\\}\\leq \\vert u^N_0\\vert^2_{L^2} \\quad\\mathbb{P}\\text{-a.s.};\n\\end{equation}\nin particular, for any $N$,\n\\begin{equation}\\label{sec 4.1 - energy inequality inside existence theorem}\n\\vert u^N(\\omega,t)\\vert^2_{L^2}\\leq e^{\\int_0^t \\Vert \\text{div}b(s,\\cdot)\\Vert_\\infty\\diff s} \\vert u_0\\vert^2_{L^2} \\quad\\text{ for }(\\diff\\mathbb{P}\\otimes \\diff t)\\text{-a.e. }(\\omega,t).\n\\end{equation}\nThis implies that for any $N$, equation \\eqref{sec 4.1 - Galerkin approximation system inside existence theorem} has a unique solution, globally defined on $[0,T]$, and\n\\begin{equation}\\nonumber\n\\int_0^T \\mathbb{E}\\big[\\vert u^N(t)\\vert_{L^2}^2\\big] \\diff t\n\\leq K \\vert u_0\\vert^2_{L^2},\n\\end{equation}\nfor a suitable constant $K$ which only depends on $b$. Therefore the sequence $\\{u^N\\}_N$ is uniformly bounded in $L^2(\\diff\\mathbb{P}\\otimes \\diff t; L^2)$ and we can assume, up to extracting a (not relabelled) subsequence, that it weakly converges to a process $u$. We now proceed to show that there exists a version of $u$ which is a weak solution of \\eqref{sec 2.3 - equation in Stratonovich compact form, useful repetition} with initial data $u_0$. As recalled earlier, $u$ is a predictable process since all the $u^N$ are.
Fix $\\varphi\\in C^\\infty(\\mathbb{T}^d)$, then by testing $u^N$ against $\\varphi$ we find\n\\begin{equation}\\label{sec 4.1 - weak formulation for u^N inside existence theorem}\\begin{split}\n\\langle u^N(\\cdot),\\varphi\\rangle - \\langle u^N_0,\\varphi\\rangle = & -\\int_0^\\cdot \\langle u^N(s), \\text{div}(b\\Pi_N\\varphi)\\rangle\\diff s + c\\int_0^\\cdot \\langle u^N(s),\\Delta \\varphi\\rangle\\diff s\\\\\n& - \\sum_{j,k} \\theta_k\\, \\int_0^\\cdot \\langle u^N(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\Pi_N\\varphi\\rangle\\diff W^{(j)}_k(s).\n\\end{split}\\end{equation}\nIt is clear that $u^N_0\\to u_0$ in $L^2$; the map $u(\\cdot)\\mapsto \\langle u(\\cdot),\\varphi\\rangle$, from $L^2(\\diff\\mathbb{P}\\otimes \\diff t; L^2)$ to $L^2(\\diff\\mathbb{P}\\otimes \\diff t)$ is linear and continuous and thus also weakly continuous (this is an immediate consequence of the definition of weak convergence). Similarly, the map from $L^2(\\diff\\mathbb{P}\\otimes \\diff t; L^2)$ to $L^2(\\diff\\mathbb{P}\\otimes \\diff t)$ given by\n\\begin{equation}\\nonumber\nu(\\cdot)\\mapsto \\int_0^\\cdot \\langle u(s), \\text{div}(b\\varphi)\\rangle\\diff s\n\\end{equation}\nis linear and continuous since\n\\begin{equation}\\nonumber\\begin{split}\n\\int_0^T & \\mathbb{E}\\Bigg[\\bigg\\vert \\int_0^t \\langle u(s), \\text{div}(b\\varphi)\\rangle\\diff s\\bigg\\vert^2\\Bigg]\\diff t\\\\\n& \\leq T\\int_0^T \\vert \\text{div}(b(t)\\varphi)\\vert_{L^2}^2\\diff t\\, \\int_0^T \\mathbb{E}[\\vert u(s)\\vert_{L^2}^2]\\diff s\n\\end{split}\\end{equation}\nand therefore also weakly continuous; since we also have $\\text{div}(b\\Pi_N\\varphi)\\to \\text{div}(b\\varphi)$ strongly, the above estimate shows that overall\n\\begin{equation}\\nonumber\n\\int_0^\\cdot \\langle u^N(s), \\text{div}(b\\Pi_N\\varphi)\\rangle\\diff s \\rightharpoonup \\int_0^\\cdot \\langle u(s), \\text{div}(b\\varphi)\\rangle\\diff s\\quad \\text{weakly}.\n\\end{equation}\nA similar reasoning applies to the
processes $\\int_0^\\cdot \\langle u^N(s),\\Delta\\varphi\\rangle\\diff s$. Regarding the stochastic integrals, again we have that for fixed $\\varphi$ the map from $L^2(\\diff\\mathbb{P}\\otimes \\diff t; L^2)$ to $L^2(\\diff\\mathbb{P}\\otimes \\diff t)$ given by\n\\begin{equation}\\nonumber\nu\\mapsto \\sum_{j,k} \\theta_k\\, \\int_0^\\cdot \\langle u(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k(s)\n\\end{equation}\nis linear and continuous, since by the same calculations as in Section \\ref{subsection 2.3 - SPDE in Ito form and definition of energy solutions} it holds\n\\begin{equation}\\nonumber\\begin{split}\n\\int_0^T & \\mathbb{E}\\Bigg[\\Big\\vert \\sum_{j,k} \\theta_k\\, \\int_0^t \\langle u(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k(s) \\Big\\vert^2\\Bigg]\\diff t\\\\\n& \\leq 2T\\,\\sup_k \\theta_k^2\\, \\Vert\\nabla\\varphi\\Vert_{\\infty}^2\\,\\int_0^T \\mathbb{E}[\\vert u(s)\\vert_{L^2}^2]\\diff s\n\\end{split}\\end{equation}\nand as before, using the fact that $\\nabla\\Pi_N\\varphi\\to\\nabla\\varphi$ uniformly, we also obtain weak convergence.
Taking the weak limit as $N\\to\\infty$ on both sides of \\eqref{sec 4.1 - weak formulation for u^N inside existence theorem} we conclude that $u$ satisfies \n\\begin{equation}\\label{sec 4.1 - weak formulation, useful repetition inside existence theorem}\n\\begin{split}\n\\langle u(\\cdot),\\varphi\\rangle - \\langle u_0,\\varphi\\rangle = & -\\int_0^\\cdot \\langle u(s), \\text{div}(b\\varphi)\\rangle\\diff s + c\\int_0^\\cdot \\langle u(s),\\Delta \\varphi\\rangle\\diff s\\\\\n& - \\sum_{j,k} \\theta_k\\, \\int_0^\\cdot \\langle u(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k(s),\n\\end{split}\n\\end{equation}\nso that $u$ is a candidate weak solution of \\eqref{sec 2.3 - equation in Stratonovich compact form, useful repetition}.\nWe now want to show that there exists a version of $u$ which has paths in $C([0,T];L^2_w)$ and satisfies the energy inequality. Observe that the collection of processes in $L^2(\\diff\\mathbb{P}\\otimes \\diff t; L^2)$ satisfying inequality \\eqref{sec 4.1 - energy inequality inside existence theorem} is a convex, closed subset. Therefore it is also weakly closed (see \\cite{Bre}), which implies that inequality \\eqref{sec 4.1 - energy inequality inside existence theorem} holds also for $u$.\n\n\nFor fixed $\\varphi\\in C^\\infty(\\mathbb{T}^d)$, by standard properties of Lebesgue integral we know that the processes $\\int_0^\\cdot \\langle u(s), \\text{div}(b\\varphi)\\rangle \\diff s$, $\\int_0^\\cdot \\langle u(s),\\Delta \\varphi\\rangle \\diff s$ are $\\mathbb{P}$-a.s. continuous. \nRecall that by construction of the It\\^o integral, for any $k$ and $j$ the process $\\int_0^\\cdot \\langle u(s), e_k\\, a_k^{(j)}\\cdot \\nabla \\varphi\\rangle\\diff W^{(j)}_k(s)$ is a continuous square integrable martingale; moreover, continuous square integrable martingales are closed under $L^2(\\diff \\mathbb{P}\\otimes \\diff t)$-convergence (see \\cite{Rev}) and we have already shown that the infinite series on the r.h.s. 
of \\eqref{sec 4.1 - weak formulation, useful repetition inside existence theorem} is convergent in this norm. \nTherefore we can conclude that, for a fixed $\\varphi\\in C^\\infty(\\mathbb{T}^d)$, the process appearing on the r.h.s. of \\eqref{sec 4.1 - weak formulation, useful repetition inside existence theorem} is $\\mathbb{P}$-a.s. continuous in time and it coincides up to $(\\diff \\mathbb{P}\\otimes \\diff t)$-negligible sets with $\\langle u,\\varphi\\rangle-\\langle u_0,\\varphi\\rangle$; in particular, $\\mathbb{P}$-a.s. $\\langle u,\\varphi\\rangle$ admits a continuous version.\n\nBut then we can find, also thanks to \\eqref{sec 4.1 - energy inequality inside existence theorem}, a countable dense collection $\\varphi_n$ and a subset $\\Gamma$ of $\\Omega$ with $\\mathbb{P}(\\Gamma)=1$ such that for all $\\omega\\in\\Gamma$ the following holds: $t\\mapsto\\langle u(\\omega,t),\\varphi_n\\rangle$ admits a continuous version for all $n$ and there exists a set $E_\\omega\\subset [0,T]$ of full measure on which $\\vert u(\\omega,\\cdot)\\vert_{L^2}$ is uniformly bounded by some constant $K$. In particular, for any $t\\notin E_\\omega$ and any sequence $\\{t_n\\}_n\\subset E_\\omega$ such that $t_n\\to t$, we can extract a subsequence such that $u(\\omega, t_n)$ admits a weak limit in $L^2$, denoted by $v(\\omega,t)$, whose norm is still bounded by the constant $K$. But since $\\langle u(\\omega,\\cdot),\\varphi_n\\rangle$ all admit continuous versions, the limit $v(\\omega,t)$ is uniquely determined and does not depend on the extracted subsequence, nor on the original sequence $\\{t_n\\}_n$. 
Reasoning in this way, for fixed $\\omega\\in\\Gamma$, we can define $v(\\omega,t)$ for all $t\\in [0,T]$ and it satisfies the following properties: $v(\\omega,t)=u(\\omega,t)$ for all $t$ in a set of full Lebesgue measure; $\\vert v(\\omega,t)\\vert_{L^2}\\leq K$ for all $t\\in [0,T]$; $\\langle v(\\omega,t),\\varphi_n\\rangle$ coincides with the continuous version of $\\langle u(\\omega,t),\\varphi_n\\rangle$. But then by the uniform bound and density of $\\varphi_n$ it follows that the map $t\\mapsto \\langle v(\\omega,t),\\varphi\\rangle$ is continuous for every $\\varphi\\in C^\\infty(\\mathbb{T}^d)$ and for every $\\omega\\in \\Gamma$, namely $v$ is a version of $u$ with $\\mathbb{P}$-a.s. weakly continuous paths. Since $v$ also satisfies \\eqref{sec 4.1 - weak formulation, useful repetition inside existence theorem}, we conclude that it is a weak solution.\n\nIt only remains to show that the energy inequality holds, but this is achieved similarly by using the fact that, for all $\\omega\\in \\Gamma$ and all $t\\in \\tilde{E}_\\omega$, $\\tilde{E}_\\omega$ being a full Lebesgue measure set, inequality \\eqref{sec 4.1 - energy inequality inside existence theorem} indeed holds, and therefore by using lower semicontinuity of $\\vert\\cdot\\vert_{L^2}$ and $v(\\omega,\\cdot)\\in C([0,T];L^2_w)$ it can be extended to all $(\\omega,t)\\in \\Gamma\\times[0,T]$.\n\\end{proof}\n\n\\subsection{Proof of pathwise uniqueness in the case $b=0$}\\label{subsection 4.2 - proof of strong uniqueness in the case b=0}\n\n\nIn this section we prove pathwise uniqueness of solutions in the case $b=0$; before proceeding further, let us mention the existing results in the literature.
Many of them are proved in $\\mathbb{R}^d$ but can be easily generalized to $\\mathbb{T}^d$.\n\nA main result in the topic is the already mentioned work \\cite{FlaGub}, where it is shown that for $b\\in L^\\infty(0,T;C^\\alpha(\\mathbb{R}^d;\\mathbb{R}^d))$ with $\\text{div}\\, b\\in L^p([0,T]\\times\\mathbb{R}^d)$, $\\alpha>0$ and $p\\geq 2$, in the case of a space-independent standard $d$-dimensional Brownian motion, pathwise uniqueness holds for \\eqref{sec 1 - STLE in compact Stratonovich form} for any $u_0\\in L^\\infty(\\mathbb{R}^d)$; the proof is based on the existence of a sufficiently regular flow for the associated SDE. Many other results are now available, see the references in \\cite{BecFla}, but typically the noise considered is space independent or has sufficiently good space regularity. In the stochastic fluid dynamics literature, the use of divergence free transport noise of the form\n\\begin{equation}\\nonumber\nW(t,x)=\\sum_{n\\in\\mathbb{N}} \\sigma_n(x)W_n(t)\n\\end{equation}\nappears fairly often; typical assumptions on this kind of noise are those contained in \\cite{Brz}; specifically, it is required that\n\\begin{equation}\\label{sec 4.2 brz condition}\n\\bigg\\Vert \\sum_{n\\in\\mathbb{N}} \\vert D\\sigma_n(\\cdot)\\vert^2\\bigg\\Vert_{L^\\infty(\\mathbb{T}^d)}<\\infty.\n\\end{equation}\nIn \\cite{BecFla}, \\eqref{sec 1 - STLE in compact Stratonovich form} is studied mainly in the case of space-independent noise, sufficiently regular initial data and $b\\in L^p(0,T;L^q(\\mathbb{R}^d))$, both in the subcritical (KR) and the critical (LPS) regime, using PDE arguments which do not rely on the existence of a regular flow for the associated SDE.
It is stated however in Section 1.9 therein that all the results generalize to the case of \\textquotedblleft $\\sigma_n$ of class $C^4_b$ with proper summability in $n$ \\textquotedblright\\ such that the SDE\n\\begin{equation}\\nonumber\n\\diff Y=\\sum_{n\\in\\mathbb{N}} \\sigma_n(Y)\\circ \\diff W_n\n\\end{equation}\nhas a sufficiently regular stochastic flow of diffeomorphisms. In particular we expect that at least an analogous requirement to \\eqref{sec 4.2 brz condition} is needed; in our setting, this condition is equivalent to\n\\begin{equation}\\label{sec 4.2 brz condition 2}\n\\sum_{k\\in\\mathbb{Z}^d}\\vert k\\vert^2 \\theta_k^2<\\infty.\n\\end{equation}\nHowever, if instead of pathwise uniqueness one only requires \\textit{Wiener uniqueness} of weak solutions of \\eqref{sec 1 - STLE in compact Stratonovich form}, then the problem greatly simplifies. Wiener uniqueness means uniqueness in the class of processes adapted to the Brownian filtration $\\mathcal{F}^W_t$ and can be established by Wiener chaos expansion techniques, see \\cite{LeJ}, \\cite{LeJ2} and \\cite{Mau}. In particular, even though in \\cite{Mau} only space-independent noise and time-independent drift are considered, the technique seems to adapt easily to our setting, for any $\\{\\theta_k\\}$ satisfying \\eqref{sec 2.2 - condition on the coefficients} and \\eqref{sec 2.2 - isotropy condition} and any $b$ satisfying \\eqref{sec 2.3 - assumption 2 on b}, \\eqref{sec 3 - KR type condition on b}, as it fundamentally only requires well-posedness in a suitable class for the Kolmogorov equation \\eqref{sec 3 - parabolic Cauchy problem, useful repetition}, which holds under the conditions of Lemma \\ref{lemma sec 3 - sufficient condition for uniqueness of parabolic problem}.
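To make the gap between the two summability requirements concrete (a sketch, assuming, as the notation suggests, that \eqref{sec 2.2 - condition on the coefficients} amounts to square summability of $\{\theta_k\}_k$): in $d=2$, the choice $\theta_k^2=\vert k\vert^{-2\alpha}$ with $1<\alpha\leq 2$ is square summable, while the series in \eqref{sec 4.2 brz condition 2} diverges. Partial sums over boxes $\max_i\vert k_i\vert\leq N$ illustrate this numerically:

```python
import itertools

def partial_sums(alpha, N):
    """Partial sums of sum theta_k^2 and sum |k|^2 theta_k^2 over the
    box max|k_i| <= N, with theta_k^2 = |k|^(-2 alpha), k in Z^2, k != 0."""
    s0 = s2 = 0.0
    for kx, ky in itertools.product(range(-N, N + 1), repeat=2):
        n2 = kx * kx + ky * ky
        if n2 == 0:
            continue
        theta2 = float(n2) ** (-alpha)   # theta_k^2
        s0 += theta2
        s2 += n2 * theta2                # |k|^2 theta_k^2
    return s0, s2

# alpha = 1.5: the first series stabilises, the second keeps growing in N
s0a, s2a = partial_sums(1.5, 50)
s0b, s2b = partial_sums(1.5, 100)
assert s0b - s0a < 0.05 * s0b   # convergent tail
assert s2b > 1.6 * s2a          # roughly linear growth in N
```

Of course partial sums do not prove divergence; the point is only to visualise the different behaviour of the two series for the same coefficients.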
Wiener uniqueness is however unsatisfactory, for several reasons: if the only information on a solution $u$ is that it is adapted to $\\mathcal{F}_t$, then this strategy only gives pathwise uniqueness of the process $\\tilde{u}(t)=\\mathbb{E}[u(t)\\vert \\mathcal{F}^W_t]$; Wiener uniqueness is also too weak to apply the Yamada-Watanabe theorem (which holds also in infinite dimensions, see \\cite{Roc}) and ill-suited to exploit tools like the Girsanov transform, see the discussion in Section 4.7 of \\cite{Fla}.\n\nThe problem of passing from Wiener uniqueness to pathwise uniqueness is not only technical, because if condition \\eqref{sec 4.2 brz condition} is not satisfied, the equation cannot in general be solved by means of characteristics, since phenomena like splitting and coalescence can occur, as shown in \\cite{LeJ}. In that work, Wiener uniqueness is exploited to construct Markovian statistical solutions $S_t$ which are then studied and classified; in our setting, for $b=0$ and $W$ divergence free, according to the terminology introduced in \\cite{LeJ}, the statistical solution is \\textit{diffusive without hitting} and is not a flow of maps (i.e. does not admit a representation by characteristics), see Theorem 10.1 (by the divergence free condition, in our setting the parameter $\\eta$ is always $1$). In particular splitting can occur, since the $2$-point motion $(X_t,Y_t)$ associated to $S_t$ starting from $(x,x)$ satisfies $X_t\\neq Y_t$ for all positive $t$, see Definition 6.3.
In the work \\cite{LeJ2} statistical solutions are studied in more depth and it is hinted that non-uniqueness can happen only in the \\textit{turbulent with hitting} regime, but no explicit proof of pathwise uniqueness in the other regimes is given.\n\nHere instead we adapt the strategy developed in \\cite{Barb}, which yields a relatively simple and short proof of pathwise uniqueness in the special case $b=0$, for any $\\{\\theta_k\\}_k$ satisfying \\eqref{sec 2.2 - condition on the coefficients} and \\eqref{sec 2.2 - isotropy condition}, which is a much weaker condition than \\eqref{sec 4.2 brz condition 2}.\n\n\nWe focus on the SPDE\n\\begin{equation}\\label{sec 4.2 - STLE with b=0, Ito form}\n\\diff u = c\\Delta u\\diff t + \\diff W\\cdot\\nabla u,\n\\end{equation}\nwith $W$ as usual given by \\eqref{sec 2.2 - definition of the noise W(t,x)}, $c$ defined as a function of $\\{\\theta_k\\}_k$ by \\eqref{sec 2.3 - definition of the constant c}. Given $u_0\\in L^2$, we consider a weak solution of \\eqref{sec 4.2 - STLE with b=0, Ito form} in the sense of Definition \\ref{definition sec 2.3 - weak solution}. Following \\cite{Barb}, we rewrite the equation in Fourier components.
Let $u$ be given by the Fourier series\n\\begin{equation}\\nonumber\nu(t,x)=\\sum_l u_l(t)\\,e_l(x),\n\\end{equation}\nso that\n\\begin{equation}\\nonumber\n\\nabla u = i\\sum_l l\\, u_l\\,e_l,\n\\quad\n\\Delta u = -\\sum_l \\vert l\\vert^2 u_l\\, e_l.\n\\end{equation}\nExplicit calculations give the Fourier expansion for $\\diff W\\cdot\\nabla u$:\n\\begin{equation}\\nonumber\\begin{split}\n\\diff W\\cdot\\nabla u\n& = \\Bigg(\\sum_{j,k} \\theta_k\\,e_k\\,a_k^{(j)}\\diff W_k^{(j)} \\Bigg) \\cdot\\left(i\\sum_l l\\, u_l\\,e_l\\right)\\\\\n& = \\sum_{j,k,l} i\\,\\theta_k\\, a_k^{(j)}\\cdot l\\, u_l\\, e_{k+l}\\diff W_k^{(j)}\\\\\n& = \\sum_k i\\Bigg(\\sum_{j,l}\\theta_{k-l}\\, a_{k-l}^{(j)}\\cdot l\\, u_l\\diff W^{(j)}_{k-l} \\Bigg)\\, e_k\\\\\n& = \\sum_k i\\Bigg(\\sum_{j,l}\\theta_{k-l}\\, a_{k-l}^{(j)}\\cdot k\\, u_l\\diff W^{(j)}_{k-l} \\Bigg)\\, e_k,\n\\end{split}\\end{equation}\nwhere in the last step we used the fact that $a^{(j)}_{k-l}\\perp k-l$. Uniqueness of the Fourier expansion then gives the following infinite linear system of coupled SDEs for the coefficients $u_k$:\n\\begin{equation}\\label{sec 4.2 - system of SDEs for u_k}\n\\diff u_k = -c\\vert k\\vert^2\\,u_k\\diff t + i \\sum_{j,l}\\theta_{k-l}\\, a_{k-l}^{(j)}\\cdot k\\, u_l\\diff W^{(j)}_{k-l},\n\\end{equation}\nwhere as usual the identity must be interpreted in the integral sense:\n\\begin{equation}\\nonumber\nu_k(t) - u_k(0) = -c\\vert k\\vert^2\\int_0^t u_k(s)\\diff s + i \\sum_{j,l}\\theta_{k-l}\\,a_{k-l}^{(j)}\\cdot k \\int_0^t u_l(s)\\diff W^{(j)}_{k-l}(s).\n\\end{equation}\nThe derivation of \\eqref{sec 4.2 - system of SDEs for u_k} was heuristic, but it can be checked that we would have found exactly the same expression in integral form by taking $\\varphi=e_k$ as test functions in \\eqref{sec 2.3 - equation in Ito form, weak formulation in integral form}.
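The re-indexing used above (the Fourier coefficients of a pointwise product are the convolution of the coefficients) can be sanity-checked numerically; here is a minimal one-dimensional cyclic sketch, illustrative only, with indices taken modulo $N$ rather than over all of $\mathbb{Z}^d$:

```python
import numpy as np

# Check that the coefficients of a pointwise product f*g are the
# convolution of the coefficients: (fg)^_k = sum_l f^_{k-l} g^_l,
# here on the cyclic group Z/NZ instead of Z^d.
N = 16
rng = np.random.default_rng(0)
fh = rng.standard_normal(N) + 1j * rng.standard_normal(N)
gh = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# f(x_j) = sum_k fh_k e^{2 pi i k j / N}, sampled on the grid
f = np.fft.ifft(fh) * N
g = np.fft.ifft(gh) * N

prod_hat = np.fft.fft(f * g) / N                       # coefficients of f*g
conv = np.array([sum(fh[(k - l) % N] * gh[l] for l in range(N))
                 for k in range(N)])                   # discrete convolution
assert np.allclose(prod_hat, conv)
```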
Calculations similar to those of Section \\ref{subsection 2.3 - SPDE in Ito form and definition of energy solutions} give\n\\begin{equation}\\nonumber\\begin{split}\n\\mathbb{E}\\Bigg[\\Big\\vert \\sum_{j,l}\\theta_{k-l}\\,a_{k-l}^{(j)}\\cdot k \\int_0^t u_l(s)\\diff W^{(j)}_{k-l}(s) \\Big\\vert^2\\Bigg]\n& \\leq K \\vert k\\vert^2 \\int_0^T \\sum_l \\mathbb{E}\\big[\\vert u_l(s)\\vert^2\\big] \\diff s\\\\\n& = K \\vert k\\vert^2 \\int_0^T \\mathbb{E}\\big[\\vert u(s)\\vert_{L^2}^2\\big] \\diff s\n\\end{split}\\end{equation}\nfor a suitable constant $K$, so that the infinite series in \\eqref{sec 4.2 - system of SDEs for u_k} is well defined, since $u$ satisfies \\eqref{sec 2.3 - condition of bounded energy in the definition of weak solution}.\nTo prove uniqueness, we need the following result.\n\\begin{lemma}\\label{lemma sec 4.2 - system for averaged energies}\nLet $u$ be a weak solution of \\eqref{sec 4.2 - STLE with b=0, Ito form}, with $u_k$ defined as above. Then the real functions $x_k$ defined by $x_k(t)=\\mathbb{E}[\\vert u_k(t)\\vert^2]$ satisfy the following linear infinite system of coupled ODEs:\n\\begin{equation}\\label{sec 4.2 - equation for average energies}\n\\dot{x}_k = -2c\\vert k\\vert^2 x_k + 2\\sum_l \\theta_{k-l}^2\\, \\vert P_{k-l}k\\vert^2\\,x_l.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nSince $u$ is a weak solution, we know that $\\{u_k\\}_k$ satisfy system \\eqref{sec 4.2 - system of SDEs for u_k}.
Then, applying It\\^o's formula and using the properties of $W^{(j)}_k$, we have\n\\begin{equation}\\nonumber\\begin{split}\n\\diff(\\vert u_k\\vert^2)\n& = \\diff(u_k\\overline{u}_k)\n= u_k\\diff \\overline{u}_k + \\overline{u}_k\\diff u_k + \\diff [u_k,\\overline{u}_k]\\\\\n& = -c\\vert k\\vert^2 \\vert u_k\\vert^2\\diff t + \\diff M_t -c\\vert k\\vert^2 \\vert u_k\\vert^2\\diff t + \\diff N_t \\\\ & \\quad + 2\\sum_{j,l} \\theta_{k-l}^2 \\vert a_{k-l}^{(j)}\\cdot k\\vert^2 \\vert u_l\\vert^2\\diff t\n\\end{split}\\end{equation}\nwhere $M$ and $N$ are suitable square integrable martingales starting at $0$. Taking expectations, their contribution vanishes and we obtain\n\\begin{equation}\\nonumber\n\\dot{x}_k = -2c\\vert k\\vert^2 x_k + 2\\sum_{j,l} \\theta_{k-l}^2\\, \\vert a_{k-l}^{(j)}\\cdot k\\vert^2\\,x_l.\n\\end{equation}\nObserve that, for any fixed $k$ and $l$,\n\\begin{equation}\\nonumber\n\\sum_j \\vert a_{k-l}^{(j)}\\cdot k\\vert^2\n= \\Bigg\\vert \\sum_j \\Big(a_{k-l}^{(j)}\\otimes a_{k-l}^{(j)}\\Big)k \\Bigg\\vert^2\n= \\vert P_{k-l} k\\vert^2,\n\\end{equation}\nwhich implies the conclusion.\n\\end{proof}\n\\begin{remark}\\label{remark sec 4.2 - forward equation of markov chain} System \\eqref{sec 4.2 - equation for average energies} can be written as\n\\begin{equation}\\nonumber\n\\dot{x}_k = q_{kk}\\,x_k + \\sum_l q_{kl}\\, x_l,\n\\end{equation}\nwhere $q_{kk}=-2c\\vert k\\vert^2\\in (-\\infty, 0)$, $q_{kl} = 2 \\theta_{k-l}^2\\, \\vert P_{k-l}k\\vert^2\\geq 0$ for $k\\neq l$. Moreover, for any $k$ it holds\n\\begin{equation}\\nonumber\n\\sum_l q_{kl}\n= 2 \\sum_l \\theta_{k-l}^2\\, \\vert P_{k-l}k\\vert^2\n= 2 \\sum_{\\tilde{l}} \\theta_{\\tilde{l}}^2\\, \\vert P_{\\tilde{l}}k\\vert^2\n= 2 c\\vert k\\vert^2 = -q_{kk},\n\\end{equation}\nwhere we used the change of variables $k-l=\\tilde{l}$ and the computation \\eqref{sec 2.3 - definition of the constant c} from Section \\ref{subsection 2.3 - SPDE in Ito form and definition of energy solutions}.
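The row-sum identity $\sum_l q_{kl}=-q_{kk}$ just derived can be verified numerically in a toy model (a sketch under explicit assumptions: $d=2$, $\theta_j=1$ on the modes $0<\vert j\vert^2\leq 2$ and $\theta_j=0$ otherwise, a finite set invariant under the symmetries of $\mathbb{Z}^2$ as isotropy requires, and $P_j=I-\hat{j}\hat{j}^{\,T}$ the orthogonal projection onto $j^\perp$):

```python
import numpy as np

# Toy check of sum_l q_{kl} = -q_{kk} in d = 2, taking theta_j = 1 for
# 0 < |j|^2 <= 2 and theta_j = 0 otherwise (a finite set invariant under
# the lattice symmetries of Z^2, as the isotropy condition requires).
support = [np.array(j, dtype=float) for j in
           [(1, 0), (-1, 0), (0, 1), (0, -1),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]]

def P(j):
    """Orthogonal projection onto the hyperplane j-perp."""
    jhat = j / np.linalg.norm(j)
    return np.eye(2) - np.outer(jhat, jhat)

# The constant playing the role of c is characterised here by
# sum_j theta_j^2 P_j = c * Id (isotropy makes the sum a multiple of Id).
M = sum(P(j) for j in support)
c = M[0, 0]
assert np.allclose(M, c * np.eye(2))

for k in map(np.array, [(1.0, 0.0), (2.0, 3.0), (-5.0, 1.0)]):
    q_kk = -2 * c * (k @ k)
    row_sum = sum(2 * np.linalg.norm(P(j) @ k) ** 2 for j in support)
    assert np.isclose(row_sum, -q_kk)   # off-diagonal row sum equals -q_kk
```

With this choice one finds $\sum_j\theta_j^2 P_j = 4\,I$, and the identity holds for every $k$ since $\sum_j\theta_j^2\,\vert P_j k\vert^2 = k^T M k = c\vert k\vert^2$.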
Namely, system \\eqref{sec 4.2 - equation for average energies} can be interpreted as the forward equation associated to a $Q$-matrix, which is the generator of a continuous-time Markov process on $\\mathbb{Z}^d\\setminus\\{0\\}$; note the similarity with \\cite{BarFla}. This formulation can be useful to study the long-time behaviour of solutions: for instance, we can deduce immediately that if $u$ is a stationary solution, then $x_k$ must be constant in $k$, and so the only invariant measure with support in $L^2$ is $\\delta_0$. If we expect convergence to equilibrium as $t\\to\\infty$, then all solutions should converge to $0$, even though energy is a formal invariant for equation \\eqref{sec 4.2 - STLE with b=0, Ito form}; indeed in \\cite{BarFla} anomalous dissipation of energy for a similar model is shown. Understanding whether anomalous dissipation takes place in this model will be the subject of future research.\n\\end{remark}\n\\begin{theorem}\\label{theorem sec 4.2 - uniqueness for b=0} Let $\\{\\theta_k\\}_k$ satisfy \\eqref{sec 2.2 - condition on the coefficients} and \\eqref{sec 2.2 - isotropy condition} and $W$ be the associated divergence free noise. If $u$ and $v$ are two weak solutions of \\eqref{sec 4.2 - STLE with b=0, Ito form} with the same initial data $u_0$, then\n\\begin{equation}\\nonumber\n\\mathbb{P}\\big(u(t)=v(t)\\big)=1\\quad \\forall\\,t\\in [0,T].\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nBy linearity of equation \\eqref{sec 4.2 - STLE with b=0, Ito form}, $w:= u-v$ is a weak solution with initial data $w_0=0$. In order to conclude it suffices to show that $\\mathbb{P}(w(t)=0)=1$ for every $t$. Recall that by the definition of weak solution, $u$ and $v$ satisfy \\eqref{sec 2.3 - condition of bounded energy in the definition of weak solution} and therefore also $w$ does.
By Lemma \\ref{lemma sec 4.2 - system for averaged energies}, we know that the functions $x_k=\\mathbb{E}[\\vert w_k\\vert^2]$ satisfy \\eqref{sec 4.2 - equation for average energies} with initial condition $x_k(0)=0$ for all $k$, namely\n\\begin{equation}\\nonumber\nx_k(t) = -2c \\vert k\\vert^2 \\int_0^t x_k(s)\\diff s + 2\\sum_l \\theta_{k-l}^2 \\vert P_{k-l} k\\vert^2 \\int_0^t x_l(s) \\diff s.\n\\end{equation}\nFix $t\\in [0,T]$ and define $A_k=\\int_0^t x_k(s)\\diff s$, then the above equation becomes\n\\begin{equation}\\label{sec 4.2 - proof of main theorem, internal equation}\nx_k(t) + 2c\\vert k\\vert^2 A_k= 2\\sum_l \\theta_{k-l}^2 \\vert P_{k-l} k\\vert^2 A_l.\n\\end{equation}\nCondition \\eqref{sec 2.3 - condition of bounded energy in the definition of weak solution} implies that $A_k$ is summable:\n\\begin{equation}\\nonumber\n\\sum_k A_k\n= \\sum_k \\int_0^t \\mathbb{E}[\\vert w_k(s)\\vert^2]\\diff s\n\\leq \\int_0^T \\mathbb{E}[\\vert w(s)\\vert_{L^2}^2]\\diff s<\\infty,\n\\end{equation}\nso that $A_k\\to 0$ as $\\vert k\\vert\\to\\infty$; in particular $\\{A_k\\}_k$ attains a maximum, say at some index $k_1$. Since $x_{k_1}(t)\\geq 0$ by construction, we find\n\\begin{equation}\\nonumber\\begin{split}\n2c\\vert k_1\\vert^2 A_{k_1}\n& \\leq x_{k_1}(t) + 2c\\vert k_1\\vert^2 A_{k_1}\n= 2\\sum_l \\theta_{k_1-l}^2 \\vert P_{k_1-l} k_1\\vert^2 A_l\\\\\n& \\leq 2\\max_l A_l \\sum_l \\theta_{k_1-l}^2 \\vert P_{k_1-l} k_1\\vert^2\n= 2c\\vert k_1\\vert^2 A_{k_1},\n\\end{split}\\end{equation}\nwhich implies that all the inequalities are equalities and therefore $x_{k_1}(t)=0$, $A_l=A_{k_1}$ for all $l$ such that $\\theta_{k_1-l}\\neq 0$ and $P_{k_1-l}k_1\\neq 0$ (i.e. $l\\notin \\langle k_1\\rangle$). We are now going to show that we can construct inductively a sequence $k_n$ such that $\\vert k_n\\vert\\to\\infty$ and $A_{k_n}=\\max_l A_l$.
Recall that $\\{\\theta_k\\}_k$ satisfy the isotropy condition \\eqref{sec 2.2 - isotropy condition}, so if $\\theta_{\\overline{j}}\\neq 0$ for some $\\overline{j}$, then $\\theta_{O\\overline{j}}\\neq 0$ as well. Let $\\Gamma=\\{O\\overline{j}, O\\in E_{\\mathbb{Z}}(d)\\}$. Then we can find $j\\in\\Gamma$ such that $k_2:=k_1-j$ satisfies $k_2\\notin\\langle k_1\\rangle$ and $\\vert k_2\\vert> \\vert k_1\\vert$; since $\\theta_{k_1-k_2}=\\theta_j\\neq 0$ we conclude that $A_{k_2}=\\max_l A_l$. But then we can iterate the reasoning, this time starting from $A_{k_2}$, to find $k_3$ such that $\\vert k_3\\vert>\\vert k_2\\vert$ and $A_{k_3}=\\max_l A_l$, and so on. In this way we find the desired sequence $\\{k_n\\}_n$; but $A_l\\to 0$ as $l\\to\\infty$, which implies that $\\max_l A_l=0$ and so $A_l=0$ for all $l$.\nSince\n\\begin{equation}\\nonumber\n0=\\sum_l A_l = \\int_0^t \\mathbb{E}[\\vert w(s)\\vert^2_{L^2}]\\diff s\n\\end{equation}\nand the reasoning holds for any $t\\in [0,T]$, we obtain the conclusion.\n\\end{proof}\n\\begin{remark}\\label{remark sec 4.2 - advantages and disadvantages of this line of proof}\nLet us underline both the advantages and the disadvantages of the approach we used. On the one hand, the proof could be further generalised: if we had a weaker concept of solution for which the derivation of system \\eqref{sec 4.2 - equation for average energies} in Lemma \\ref{lemma sec 4.2 - system for averaged energies} is still rigorous (in principle system \\eqref{sec 4.2 - equation for average energies} is well defined under the assumption $\\{x_k\\}_k\\in l^\\infty$), then in order for the proof to work we only need to guarantee that $\\{A_k\\}\\in c_0$ (i.e. $\\{A_k\\}\\in l^\\infty$ and $A_k$ infinitesimal), which could be deduced under milder conditions than \\eqref{sec 2.3 - condition of bounded energy in the definition of weak solution}.
The proof also holds for the inhomogeneous equation with an external forcing $f$, since the difference of two solutions of the inhomogeneous system is a solution of the homogeneous one.\\\\\nOn the other hand, the proof is not easily generalized to domains other than $\\mathbb{T}^d$ and completely breaks down when treating the case $b\\neq 0$. In fact, we are not able to obtain a closed equation for $\\mathbb{E}[\\vert u_k\\vert^2]$ as in Lemma \\ref{lemma sec 4.2 - system for averaged energies} anymore; it's still possible to find a closed system of ODEs for the terms $x_{k,l}=\\mathbb{E}[u_k\\overline{u}_l]$, but it's not as nice as \\eqref{sec 4.2 - equation for average energies}. The simplification obtained by finding a closed equation for the \\textquotedblleft diagonal\\textquotedblright\\ terms $x_{k,k}=\\mathbb{E}[\\vert u_k\\vert^2]$ is the key to our method of proof.\n\\end{remark}\n\nWe immediately obtain the following corollary.\n\n\\begin{corollary}\\label{corollary sec 4.2 - pathwise uniqueness} The following hold.\n\\begin{itemize}\n\n\\item[i)] (Pathwise uniqueness) Let $u$ be a weak solution of \\eqref{sec 4.2 - STLE with b=0, Ito form} with initial data $u_0$. Then $u$ is the unique energy solution of \\eqref{sec 4.2 - STLE with b=0, Ito form} with initial data $u_0$ (up to indistinguishability).\n\n\\item[ii)] (Stability) Let $u$ and $v$ be two weak solutions with initial data $u_0$ and $v_0$ respectively. Then\n\\begin{equation}\\nonumber\n\\Vert u-v\\Vert_{L^\\infty(0,T;L^2)} \\leq \\vert u_0-v_0\\vert_{L^2}\\quad \\mathbb{P}\\text{-a.s.}\n\\end{equation}\n\n\\end{itemize}\n\\end{corollary}\n\\begin{proof}\ni) Let $u$ be as in the hypothesis and $\\tilde{u}$ be an energy solution of \\eqref{sec 4.2 - STLE with b=0, Ito form} with initial data $u_0$.
Then by Theorem \\ref{theorem sec 4.2 - uniqueness for b=0}\n\\begin{equation}\\nonumber\n\\mathbb{P}(u(t)=\\tilde{u}(t)\\ \\ \\forall\\, t\\in [0,T]\\cap\\mathbb{Q})=1\n\\end{equation}\nand we conclude that $u$ and $\\tilde{u}$ are indistinguishable, since $u$ and $\\tilde{u}$ both have $\\mathbb{P}$-a.s. continuous paths in the weak topology.\n\nii) By linearity, $u-v$ is a weak solution with initial data $u_0-v_0$. But then it is the unique energy solution and satisfies the energy inequality \\eqref{sec 2.3 - energy inequality} with $b=0$, which gives the conclusion.\n\\end{proof}\n\nAlso observe that we are in a position to apply the Yamada-Watanabe theorem, so that not only pathwise uniqueness but also uniqueness of strong solutions in the probabilistic sense holds; this also implies uniqueness in law.\n\n\\section{The case $d=1$}\\label{section 5 - the case d=1}\nThe proof of our main result required two fundamental features: the use of an incompressible noise, namely $W=W(t,x)$ such that at any fixed time $\\text{div}_x W(t,\\cdot)=0$ in the sense of distributions, and the existence of weak solutions satisfying a uniform energy bound; the existence of such solutions was a consequence of suitable conditions on $b$ and incompressibility of $W$. However, in the case of spatial dimension $d=1$, the divergence-free condition is equivalent to $W$ being space-independent, which does not allow one to take a sequence of noises which are increasingly rough in space; it is therefore unclear whether it's possible to obtain an analogue of Theorem \\ref{theorem sec 3 - main result}. One might still look for a suitable sequence of noises which are not divergence free, for which existence of energy solutions can be shown, and such that in the limit they converge in a suitable sense to a deterministic PDE.
In this section we show that, while this program can still partially be carried out, in the limit we do not expect to find a PDE with a diffusion term.\n\n\nLet us briefly introduce the notation and setting of this section. We consider the one dimensional torus $\\mathbb{T}=\\mathbb{R}\/(2\\pi\\mathbb{Z})$, i.e. we work with periodic boundary conditions. However, since in one dimension there is no real advantage in working with complex series, we prefer in this case to restrict ourselves to the space of real valued, $2\\pi$-periodic, $L^2$ functions, with the normalised inner product\n\\begin{equation}\\nonumber\n\\langle f,g\\rangle = \\frac{1}{2\\pi}\\int_\\mathbb{T} f(x)g(x)\\diff x.\n\\end{equation}\nA complete orthonormal system for $L^2(\\mathbb{T})$ is given by the real Fourier basis $\\{e_k\\}_{k\\in\\mathbb{Z}}$:\n\\begin{equation}\\nonumber\ne_k(x)=\\begin{cases}\n\\sqrt{2}\\cos(kx)\\qquad & \\text{if } k>0\\\\\n1 & \\text{if } k=0\\\\\n\\sqrt{2}\\sin(kx) & \\text{if } k<0\n\\end{cases}.\n\\end{equation}\nThroughout this section, we will assume $\\{W_k\\}_{k\\in\\mathbb{Z}}$ to be a sequence of \\textit{real} independent Brownian motions. In order to understand which kind of noise to use, let us start by considering the deterministic transport equation\n\\begin{equation}\\label{sec 5 - starting deterministic pde}\n\\partial_t u = b\\, \\partial_x u,\n\\end{equation}\nwhere we need to assume at least that $b=b(t,x)$ belongs to $L^1(0,T;L^2)$ with $\\text{div}\\, b = \\partial_x b \\in L^1(0,T;L^\\infty)$, namely $b\\in L^1(0,T; W^{1,\\infty})$. In the 1-d framework, under such conditions on $b$, equation \\eqref{sec 5 - starting deterministic pde} is already well posed by the DiPerna-Lions theory, see \\cite{DiPLio}.
We highlight this aspect as it provides a partial explanation of the fact that in 1-d there doesn't seem to be much room for regularization by space-time dependent noise (even if some results have been obtained, see \\cite{Ges} and the references therein): the deterministic theory is already well posed under sufficiently mild conditions and therefore too \\textquotedblleft competitive\\textquotedblright\\ for the noise to perform better. In any case, observing that for a given smooth deterministic $v=v(t,x)$, the linear PDE\n\\begin{equation}\\nonumber\n\\partial_t u = 2v\\,\\partial_x u + \\partial_x v\\, u\n\\end{equation}\nis formally energy preserving, since\n\\begin{equation}\\nonumber\n\\frac{\\diff}{\\diff t}\\int_\\mathbb{T} u^2\\diff x\n= \\int_\\mathbb{T} \\big(4 v\\,u\\,\\partial_x u + 2\\partial_x v\\, u^2\\big)\\diff x\n= 2\\int_\\mathbb{T} \\partial_x (v\\, u^2)\\diff x = 0,\n\\end{equation}\nit seems reasonable to consider a perturbation of \\eqref{sec 5 - starting deterministic pde} of the form\n\\begin{equation}\\label{sec 5 - linear spde, compact stratonovich formulation}\n\\diff u = b\\, \\partial_x u\\diff t + 2\\circ \\diff W\\, \\partial_x u + \\circ \\diff(\\partial_x W)\\, u,\n\\end{equation}\nwhere $W=W(t,x)$ is a suitable noise which will be described later. As before, $\\circ \\diff W$ denotes Stratonovich integration with respect to the time parameter; observe that, if $W$ is a distribution valued Wiener process, then $\\partial_x W$ is again a distribution valued Wiener process and therefore, under proper conditions, Stratonovich integration with respect to it can also be defined. This will become more transparent once we describe $W$ explicitly.
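The formal energy computation above rests on the exact pointwise identity $4v\,u\,\partial_x u + 2\partial_x v\, u^2 = 2\partial_x(v\,u^2)$, whose integral over the torus vanishes for periodic data. A quick symbolic check (the specific $u$ and $v$ below are our own illustrative choices, not taken from the text):

```python
import sympy as sp

x = sp.symbols('x')

# Sample 2*pi-periodic fields (illustrative choices only).
u = sp.sin(x) + sp.cos(2 * x)
v = sp.cos(x)

# d/dt int u^2 dx = int (4 v u u_x + 2 v_x u^2) dx = 2 int d/dx (v u^2) dx = 0.
integrand = 4 * v * u * sp.diff(u, x) + 2 * sp.diff(v, x) * u**2

# The integrand is an exact derivative of a periodic function...
assert sp.simplify(integrand - 2 * sp.diff(v * u**2, x)) == 0
# ...hence its integral over one period vanishes.
assert sp.integrate(integrand, (x, 0, 2 * sp.pi)) == 0
```

The first assertion holds identically for any $u,v$, so the energy conservation is purely a consequence of periodicity.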
By the properties of Stratonovich integral (in particular the chain rule) and the above computation, we expect formally to obtain the same energy balance as for equation \\eqref{sec 5 - starting deterministic pde} and therefore the existence of weak solutions satisfying the energy inequality\n\\begin{equation}\\nonumber\n\\sup_{t\\in [0,T]} \\Big\\{ e^{-1\/2 \\int_0^t \\Vert \\partial_x b(s)\\Vert_\\infty \\diff s}\\, \\vert u(t)\\vert_{L^2} \\Big\\} \\leq \\vert u_0\\vert_{L^2} \\quad \\mathbb{P}\\text{-a.s.}\n\\end{equation}\nObserve however that we have already encountered a critical difference with respect to the approach of the previous sections: in equation \\eqref{sec 5 - linear spde, compact stratonovich formulation}, not only Stratonovich multiplicative noise $\\circ \\diff W$ appears, but also $\\circ \\diff(\\partial_x W)$; if the former, in order to be defined, requires $W$ to be an $L^2$-valued random variable, then the latter requires $\\partial_x W$ to be $L^2$-valued as well, and therefore $W$ to belong to $H^1$. In particular, the class of noises we can use has one additional degree of regularity with respect to the one we could use in higher dimensions. If we expect the paradigm \\textquotedblleft the rougher the noise, the better the regularization\\textquotedblright\\ to hold, then this kind of noise shouldn't be able to regularize very much.\n\n\nLet us give a more precise description of $W$, so that we can give proper meaning to equation \\eqref{sec 5 - linear spde, compact stratonovich formulation} and pass to the It\\^o formulation. Similarly to the previous sections, we consider $W$ given by\n\\begin{equation}\\nonumber\nW(t,x)=\\sum_k \\sigma_k(x) W_k(t),\n\\end{equation}\nwhere $\\sigma_k\\in C^\\infty(\\mathbb{T})$ for each $k$ and the index $k$ ranges over $\\mathbb{Z}$. In particular, we might consider either a finite or an infinite series (in the latter case, convergence is interpreted in the sense of distributions as before).
Then, $\\partial_x W$ in the sense of distributions is given by\n\\begin{equation}\\nonumber\n\\partial_x W(t,x)=\\sum_k \\sigma'_k(x) W_k(t),\n\\end{equation}\nso that we can write equation \\eqref{sec 5 - linear spde, compact stratonovich formulation} as\n\\begin{equation}\\label{sec 5 - linear spde, explicit stratonovich formulation}\n\\diff u\n= b\\, \\partial_x u\\diff t + \\sum_k \\big( 2\\sigma_k \\partial_x u + \\sigma_k'\\, u\\big)\\circ \\diff W_k\n= b\\, \\partial_x u\\diff t + \\sum_k \\mathcal{M}_k u\\circ \\diff W_k,\n\\end{equation}\nwhere we consider $\\mathcal{M}_k = 2\\sigma_k \\partial_x + \\sigma_k'$ as a linear (unbounded) operator on $L^2(\\mathbb{T})$. From \\eqref{sec 5 - linear spde, compact stratonovich formulation} we obtain the corresponding It\\^o formulation:\n\\begin{equation}\\nonumber\n\\diff u = b\\, \\partial_x u\\diff t +\\frac{1}{2} \\sum_k \\mathcal{M}_k^2 u\\diff t + \\sum_k \\mathcal{M}_k u\\diff W_k.\n\\end{equation}\nAlgebraic computations yield\n\\begin{equation}\\label{sec 5 - linear spde, general ito formulation}\\begin{split}\n\\diff u = b\\, \\partial_x u\\diff t\n& + \\Bigg(\\sum_k \\sigma_k \\sigma''_k + \\frac{1}{2} (\\sigma_k')^2\\Bigg) u\\diff t\n+ 4 \\Bigg(\\sum_k \\sigma_k \\sigma'_k \\Bigg)\\partial_x u\\diff t\\\\\n&+ 2\\Bigg(\\sum_k \\sigma_k^2 \\Bigg)\\partial_{xx} u\\diff t\n+ \\sum_k \\mathcal{M}_k u\\diff W_k.\n\\end{split}\\end{equation}\nIn particular, in order for the It\\^o-Stratonovich corrector to make sense, we need all the above series to be convergent at any fixed $x$. Moreover, observe that now the term in front of $\\partial_{xx}u$ is in \\textquotedblleft competition\\textquotedblright\\ with those in front of $\\partial_x u$ and $u$; in particular, when we renormalise the noise $W$ by dividing by the term which explodes faster, if the other terms don't grow with the same speed they will disappear in the limit. 
This is indeed what happens and what determines the failure of Theorem \\ref{theorem sec 3 - main result} in $d=1$, at least when the perturbation is performed by a linear multiplicative noise of the form \\eqref{sec 5 - linear spde, compact stratonovich formulation}.\n\n\nWe illustrate the above discussion for specific choices of $\\sigma_k$ which allow us to perform explicit calculations. Take $\\sigma_k(x) = \\lambda_k\\, e_k(x)$, where $\\lambda_k$ are some real constants (on which we need to impose conditions, see below) such that $\\lambda_k = \\lambda_{-k}$ for all $k$, $\\lambda_0=0$ and $\\{e_k\\}_k$ is the real Fourier basis introduced at the beginning of the section. Then equation \\eqref{sec 5 - linear spde, general ito formulation} becomes\n\\begin{equation}\\label{sec 5 - linear spde, specific ito formulation}\\begin{split}\n\\diff u\n& = b\\, \\partial_x u \\diff t -\\frac{1}{2}\\Bigg( \\sum_k k^2 \\lambda_k^2\\Bigg) u\\diff t + 2\\Bigg( \\sum_k \\lambda_k^2\\Bigg)\\partial_{xx}u\\diff t\\\\\n&\\quad+ \\sum_k \\lambda_k \\big( 2 e_k \\partial_x u + e_k' u\\big) \\diff W_k.\n\\end{split}\\end{equation}\nEquation \\eqref{sec 5 - linear spde, specific ito formulation} confirms the discussion above: the It\\^o-Stratonovich corrector, in order to be defined, requires the condition\n\\begin{equation}\\nonumber\n\\sum_k k^2 \\lambda_k^2 <\\infty,\n\\end{equation}\nnamely $W$ taking values in $H^1$, and the coefficient in front of $u$ is strictly bigger than the one in front of $\\partial_{xx}u$, at least whenever $\\lambda_k\\neq 0$ for some $k\\notin \\{-1,1\\}$.
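Since the passage from \eqref{sec 5 - linear spde, general ito formulation} to \eqref{sec 5 - linear spde, specific ito formulation} is purely algebraic, we record the intermediate identities for the reader's convenience (these steps are ours and use only the definitions above):

```latex
% Expansion of the corrector, with M_k = 2\sigma_k \partial_x + \sigma_k':
\begin{equation*}
\mathcal{M}_k^2 u
= 4\sigma_k^2\,\partial_{xx}u + 8\sigma_k\sigma_k'\,\partial_x u
+ \big(2\sigma_k\sigma_k'' + (\sigma_k')^2\big)\,u.
\end{equation*}
% For \sigma_k = \lambda_k e_k with \lambda_k = \lambda_{-k}, the pointwise
% identities, valid for every k > 0,
\begin{equation*}
e_k^2 + e_{-k}^2 = 2, \qquad
(e_k')^2 + (e_{-k}')^2 = 2k^2, \qquad
e_k e_k' + e_{-k} e_{-k}' = 0, \qquad
e_k'' = -k^2 e_k,
\end{equation*}
% then yield the coefficients appearing in the specific Ito formulation:
\begin{equation*}
\sum_k \Big(\sigma_k\sigma_k'' + \tfrac{1}{2}(\sigma_k')^2\Big)
= -\tfrac{1}{2}\sum_k k^2\lambda_k^2, \qquad
\sum_k \sigma_k\sigma_k' = 0, \qquad
2\sum_k \sigma_k^2 = 2\sum_k \lambda_k^2.
\end{equation*}
```

In particular the drift term $4(\sum_k\sigma_k\sigma_k')\partial_x u$ vanishes identically for this choice of coefficients.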
Let us write the weak formulation of equation \\eqref{sec 5 - linear spde, specific ito formulation} in order to understand whether the infinite series of It\\^o integrals is well defined as well, and how fast it grows as a function of the parameters $\\lambda_k$: $u$ is a weak solution if, for any $\\varphi\\in C^\\infty(\\mathbb{T})$,\n\\begin{equation}\\label{sec 5 - specific ito formulation, weak formulation}\\begin{split}\n\\diff \\langle u,\\varphi \\rangle =\n& - \\langle u,\\partial_x (b \\varphi) \\rangle\\diff t\n- \\frac{1}{2}\\Bigg( \\sum_k k^2 \\lambda_k^2\\Bigg) \\langle u,\\varphi\\rangle \\diff t\n+ 2\\Bigg( \\sum_k \\lambda_k^2\\Bigg)\\langle u,\\partial_{xx}\\varphi\\rangle\\diff t\\\\\n& - 2 \\sum_k \\lambda_k \\langle u, e_k\\partial_x \\varphi\\rangle\\diff W_k\n- \\sum_k \\lambda_k \\langle u, e'_k\\varphi\\rangle\\diff W_k.\n\\end{split}\\end{equation}\nLet us consider the last two series separately (in principle we have already committed an abuse by splitting the series, since we have not yet proved its convergence; however, once convergence is established the passage is rigorous. Otherwise, one can first consider the finite approximations, for which the splitting is legitimate, and pass to the limit once the convergence of both series is proven).
By the It\\^o isometry and the independence of $\\{W_k\\}_k$, for any $t\\in [0,T]$ we have\n\\begin{equation}\\nonumber\\begin{split}\n\\mathbb{E}\\Bigg[ \\bigg\\vert \\sum_k \\lambda_k \\int_0^t \\langle u(s), e_k\\partial_x\\varphi\\rangle\\diff W_k(s)\\bigg\\vert^2 \\Bigg]\n& = \\sum_k \\lambda_k^2 \\int_0^t \\mathbb{E}[\\langle u(s)\\partial_x\\varphi, e_k\\rangle^2]\\diff s\\\\\n& \\leq \\sup_k \\lambda_k^2 \\int_0^T \\mathbb{E}\\bigg[\\sum_k \\langle u(s)\\partial_x\\varphi,e_k\\rangle^2\\bigg]\\diff s\\\\\n& = \\sup_k \\lambda_k^2 \\int_0^T \\mathbb{E}\\big[\\,\\vert u(s)\\partial_x\\varphi\\vert_{L^2}^2\\big]\\diff s\\\\\n& \\leq \\sup_k \\lambda_k^2\\, \\Vert \\partial_x\\varphi\\Vert_\\infty^2\\, \\int_0^T \\mathbb{E}\\big[\\,\\vert u(s)\\vert_{L^2}^2\\big]\\diff s.\n\\end{split}\\end{equation}\nTherefore the first series grows as $\\sup_k \\vert \\lambda_k\\vert$ as in the case $d\\geq 2$. For the second series, analogous calculations give\n\\begin{equation}\\nonumber\n\\mathbb{E}\\Bigg[ \\bigg\\vert \\sum_k k\\lambda_k \\int_0^t \\langle u(s), e_k\\varphi\\rangle\\diff W_k(s)\\bigg\\vert^2 \\Bigg]\n\\leq \\sup_k\\big\\{ k^2 \\lambda_k^2\\big\\}\\, \\Vert \\varphi\\Vert_\\infty^2 \\int_0^T \\mathbb{E}\\big[\\,\\vert u(s)\\vert_{L^2}^2\\big]\\diff s,\n\\end{equation}\nwhich shows that the second series grows in norm as $\\sup_k \\{\\vert k\\lambda_k\\vert\\}$, which is therefore also the leading contribution to the overall martingale term appearing in equation \\eqref{sec 5 - specific ito formulation, weak formulation}. In particular observe that\n\\begin{equation}\\nonumber\n\\sum_k \\lambda_k^2 \\leq \\sup_k \\{\\vert k\\lambda_k\\vert\\}^2\\, \\sum_{k\\neq 0} \\frac{1}{k^2} = C\\,\\sup_k \\{\\vert k\\lambda_k\\vert\\}^2,\n\\end{equation}\nwhich shows that it's not possible to renormalize $W$ in such a way that in the limit the coefficient in front of $\\partial_{xx}u$ survives while the martingale term disappears.
The above inequality only holds in dimension $d=1$ and fails in higher dimensions, proving once again that dimension is playing a fundamental role and we can't infer for $d=1$ the same results as for $d\\geq 2$. However, it's still easy to find a collection $\\{\\lambda_k^N, k\\in\\mathbb{Z}, N\\in\\mathbb{N}\\}$ such that $\\lambda^N_k = \\lambda^N_{-k}$, $\\lambda_0^N=0$ for all $N$ and\n\\begin{equation}\\nonumber\n\\lim_{N\\to\\infty}\\frac{\\sup_k \\{k^2(\\lambda^N_k)^2\\}}{\\sum_k k^2(\\lambda^N_k)^2}=0,\n\\qquad \\lim_{N\\to\\infty}\\frac{\\sum_k (\\lambda_k^N)^2}{\\sum_k k^2(\\lambda^N_k)^2}=0,\n\\end{equation}\nwhere we are of course assuming that for fixed $N$ all the quantities appearing are finite. For a given such sequence, if we define the noises $W^N=W^N(t,x)$ as\n\\begin{equation}\\nonumber\nW^N(t,x)= \\sum_k \\lambda^N_k e_k(x) W_k(t)\n\\end{equation}\nand for a fixed $\\nu>0$ we define\n\\begin{equation}\\nonumber\n\\varepsilon_N = 2\\nu \\bigg( \\sum_k k^2(\\lambda^N_k)^2\\bigg)^{-1},\n\\end{equation}\nthen $\\varepsilon_N\\to 0$ as $N\\to\\infty$ and, going through the same proof as in Theorem \\ref{theorem sec 3 - main result}, we obtain that any weak energy solution of equation\n\\begin{equation}\\nonumber\n\\begin{cases}\n\\diff u^N = b\\, \\partial_x u^N\\diff t + 2\\sqrt{\\varepsilon_N}\\circ \\diff W^N\\, \\partial_x u^N + \\sqrt{\\varepsilon_N}\\circ \\diff (\\partial_x W^N)\\, u^N\\\\\nu^N(0)=u_0\n\\end{cases}\n\\end{equation}\nwill converge in probability, as $N$ goes to infinity, to the deterministic solution $u$ of\n\\begin{equation}\\label{sec 5 - expected deterministic pde limit}\n\\begin{cases}\n\\partial_t u = b\\, \\partial_x u - \\nu\\, u\\\\\nu(0)=u_0\n\\end{cases},\n\\end{equation}\nas long as the weak solution of \\eqref{sec 5 - expected deterministic pde limit} is unique on the interval $[0,T]$.
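A concrete family satisfying the two limit conditions above is easy to exhibit; for instance (our illustrative choice, not taken from the text) $\lambda^N_k = 1/\vert k\vert$ for $1\le \vert k\vert\le N$ and $\lambda^N_k=0$ otherwise, for which $\sup_k k^2(\lambda^N_k)^2 = 1$ while $\sum_k k^2(\lambda^N_k)^2 = 2N$ and $\sum_k(\lambda^N_k)^2$ stays bounded. A short numerical check of both vanishing ratios:

```python
import numpy as np

def ratios(N):
    # Illustrative choice: lambda_k^N = 1/|k| for 1 <= |k| <= N, 0 otherwise.
    k = np.arange(1, N + 1, dtype=float)
    lam_sq = 1.0 / k**2            # (lambda_k^N)^2 for k > 0
    k2lam_sq = k**2 * lam_sq       # k^2 (lambda_k^N)^2 = 1 for each such k
    total = 2.0 * k2lam_sq.sum()   # sum over k in Z \ {0}, equals 2N
    r_sup = k2lam_sq.max() / total       # sup_k k^2 (lambda^N_k)^2 / sum -> 0
    r_l2 = 2.0 * lam_sq.sum() / total    # sum (lambda^N_k)^2 / sum -> 0
    return r_sup, r_l2

# Both ratios decrease towards zero as N grows.
r_sup_small, r_l2_small = ratios(100)
r_sup_big, r_l2_big = ratios(100000)
assert r_sup_big < r_sup_small and r_l2_big < r_l2_small
assert r_sup_big < 1e-4 and r_l2_big < 1e-4
```

With this choice $\varepsilon_N = \nu/N$, so the renormalised noise indeed has vanishing intensity.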
If uniqueness of \\eqref{sec 5 - expected deterministic pde limit} fails, then the proof of Theorem \\ref{theorem sec 3 - main result} breaks down as well; we may still however extract a (not relabelled) subsequence $u^N$ which converges weakly in $L^2(\\Omega\\times[0,T]\\times\\mathbb{T},\\diff\\mathbb{P}\\otimes \\diff t\\otimes \\diff x)$ to a stochastic solution of \\eqref{sec 5 - expected deterministic pde limit}, usually referred to as a \\textit{superposition solution}, see \\cite{Amb}, \\cite{Fla3}; moreover, by the properties of weak convergence, this solution still satisfies the energy inequality. We have however lost the main advantage of Theorem \\ref{theorem sec 3 - main result}, where the sequence $u^N$ converges to the solution of a deterministic PDE which is in principle much better posed than the approximating equations, due to the presence of the Laplacian.\n\n\nSuch a result may still be seen, from the modelling point of view, as a mathematical justification of the presence of a friction term $-\\nu u$ in one dimensional PDEs, as the ideal limit of the action of a suitable noise $W$ which is very irregular but of very small intensity; the coefficient $\\nu$ is proportional to the product between the magnitude and the spatial irregularity of the noise (measured by its $H^1$ norm). We underline however that a noise of the form \\eqref{sec 5 - linear spde, compact stratonovich formulation} has been introduced only for mathematical convenience (it allows us to obtain an energy inequality for the solutions) and has not been justified from the physical point of view, nor has equation \\eqref{sec 5 - linear spde, compact stratonovich formulation} been derived from first principles (namely from a Lagrangian formulation).
Therefore there is still the possibility that the addition of a different multiplicative noise, with a more robust modelling justification, allows one to obtain an analogue of Theorem \\ref{theorem sec 3 - main result} also in dimension $d=1$.\n\n\n\\begin{acknowledgements}\nThis work stems from my master's thesis (see \\cite{Gal}, Section 4.4) which was developed under the supervision of Prof. David Barbato, to whom I'm deeply indebted and to whom I want to express my gratitude. I also want to thank Prof. Franco Flandoli for the very useful discussions and for encouraging me to write this work, as well as Prof. Dejun Luo for pointing out a mistake in early calculations and showing me a very simple and elegant way to find the explicit expression for the It\\^o-Stratonovich term. I'm also grateful to Prof. Massimiliano Gubinelli for reviewing the early draft of this work and to Immanuel Zachhuber and Lorenzo Dello Schiavo for the useful suggestions for the proof of Lemma \\ref{lemma sec 3 - sufficient condition for uniqueness of parabolic problem}.\n\\end{acknowledgements}\n\n\n\\section{Introduction}\nVortices play significant roles in many fields of theoretical physics including superconductivity theory, cosmology, condensed-matter physics, electroweak theory, and the quantum Hall effect. A large body of research on vortex equations has been carried out; see, for example, \\cite{CI, T, TY, WY, Y} and the references therein. Recently, Han, Lin and Yang \\cite{HLY2} investigated a system of relativistic non-Abelian Chern-Simons-Higgs vortex equations whose Cartan matrix $K$ is that of an arbitrary simple Lie algebra, and they established a general existence result for the doubly periodic solutions of the Chern-Simons-Higgs vortex equations.\n \nIn recent years, equations on graphs have attracted extensive attention; see, for example, \\cite{ALY, Hu, Hub, HWY, HLY, LP} and the references therein.
\n\n Recently, Huang, Lin and Yau \\cite{HLY} proved the existence of solutions to the mean field equations \n$$\n\\Delta u+e^{u}=\\rho \\delta_{0}\n$$\nand\n$$\n\\Delta u=\\lambda e^{u}\\left(e^{u}-1\\right)+4 \\pi \\sum_{j=1}^{M} \\delta_{p_{j}}\n$$\non graphs.\n\nLet $G=(V,E)$ be a connected finite graph, where $V$ denotes the vertex set and $E$ denotes the edge set.\n\nInspired by the work of Huang-Lin-Yau \\cite{HLY}, we study the relativistic Chern-Simons-Higgs equations \n\\begin{equation}\\label{1}\n\t\\Delta u_{i}=\\lambda\\left(\\sum_{j=1}^{n} \\sum_{k=1}^{n} K_{k j} K_{j i} \\mathrm{e}^{u_{j}} \\mathrm{e}^{u_{k}}-\\sum_{j=1}^{n} K_{j i} \\mathrm{e}^{u_{j}}\\right)+4 \\pi \\sum_{j=1}^{N_{i}} \\delta_{p_{i j}}(x), \\quad i=1, \\ldots, n,\n\\end{equation}\non $G$, where $K=(K_{ij})$ is the Cartan matrix of a finite dimensional semisimple Lie algebra $L$, $n\\ge1$ is the rank of $L$, i.e. the dimension of the Cartan subalgebra of $L$, $p_{ij}$, $j=1,...,N_{i}$, $i=1,...,n$, are arbitrarily chosen distinct vertices on the graph, and $\\delta_{p_{ij}}$ is the Dirac mass at $p_{ij}$. With a view to handling the system in a unified framework, we need some suitable assumptions on the matrix $K$.
We suppose that\n\\begin{equation}\\label{41}\n\tK^{T}=PS,\n\\end{equation}\nwhere $P$ is a diagonal matrix satisfying \n\\begin{equation}\\label{33}\n\tP:=diag\\{P_1,...,P_n\\},~P_{i}>0,~i=1,...,n,\n\\end{equation}\nand $S$ is a positive definite matrix of the form\n\\begin{equation}\\label{43}\n\tS \\equiv\\left(\\begin{array}{cccccc}\n\t\t\\alpha_{11} & -\\alpha_{12} & \\cdots & \\cdots & \\cdots & -\\alpha_{1 n} \\\\\n\t\t\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n\t\t-\\alpha_{i 1} & -\\alpha_{i 2} & \\cdots & \\alpha_{i i} & \\cdots & -\\alpha_{i n} \\\\\n\t\t\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n\t\t-\\alpha_{n 1} & -\\alpha_{n 2} & \\cdots & \\cdots & -\\alpha_{n n-1} & \\alpha_{n n}\n\t\\end{array}\\right),\n\\end{equation}\nwith\n\\begin{equation}\\label{44}\n\t\\alpha_{i i}>0, \\quad i=1, \\ldots, n, \\quad \\alpha_{i j}=\\alpha_{j i} \\geq 0, \\quad i \\neq j=1, \\ldots, n,\n\\end{equation}\nand\n\\begin{equation}\\label{45}\n\t\\text{all the entries of } S^{-1} \\text{ are positive}.\n\\end{equation}\nBy \\eqref{45}, we conclude that \n\\begin{equation}\\label{46}\n\tR_{i} := \\sum_{j=1}^{n}\\left(\\left(K^{T}\\right)^{-1}\\right)_{i j}>0\n\t\\end{equation}\nfor $i=1,...,n$. \n\n We now state our main results as follows.\n \n \\begin{Theorem}\\label{31}\n \tAssume that the matrix $K$ satisfies \\eqref{41}-\\eqref{45}. Then we have the following conclusions: \n \t\t\n \t$(\\mathrm{i})$~Suppose that \\eqref{1} has a solution. Then we have \n \t\\begin{equation}\\label{47}\n \t\t\\lambda>\\lambda_{0} \\equiv \\frac{16 \\pi}{|V|} \\frac{\\sum \\limits_{i=1}^{n} \\sum\\limits_{j=1}^{n} P_{i}^{-1}\\left(K^{-1}\\right)_{j i} N_{j}}{\\sum\\limits_{i=1}^{n} \\sum\\limits_{j=1}^{n} P_{i}^{-1}\\left(K^{-1}\\right)_{j i}}.\n \t\\end{equation}\n \n \t$(\\mathrm{ii})$~There exists a constant $\\lambda_{1}>\\lambda_{0}$ such that if $\\lambda>\\lambda_{1}$, then \\eqref{1} admits a solution $(u^{\\lambda}_{1},...,u^{\\lambda}_{n})$.
\n \\end{Theorem}\n\n\nThe rest of the paper is organized as follows. In Section 2, we present some results that will be used frequently in the sequel. Sections 3 and 4 are devoted to the proof of Theorem \\ref{31}.\n\n\\section{Preliminary results}\n\nFor each edge $xy \\in E$, we suppose that its weight $w_{xy}>0$ and that $w_{xy}=w_{yx}$. Let $\\mu: V \\to (0,+\\infty)$ be a finite measure. For any function $u: V \\to \\mathbb{R}$, the Laplacian of $u$ is defined by \n\\begin{equation}\\label{l1}\n\t\\Delta u(x)=\\frac{1}{\\mu(x)} \\sum_{y \\sim x} w_{y x}(u(y)-u(x)),\n\\end{equation}\nwhere $y \\sim x$ means $xy \\in E$. The gradient $\\nabla$ of a function $f$ is defined as the vector \n$$\\nabla f (x):=\\left( \\left[ f(y)-f(x)\\right] \\sqrt{\\frac{w_{xy}}{2\\mu (x)}} \\right)_{y\\sim x} .$$ The gradient form of $u$ and $v$ reads \n\\begin{equation}\n\t\\Gamma(u, v)(x)=\\frac{1}{2 \\mu(x)} \\sum_{y \\sim x} w_{x y}(u(y)-u(x))(v(y)-v(x)).\n\\end{equation}\nWe denote the length of the gradient of $u$ by\n\\begin{equation*}\n\t|\\nabla u|(x)=\\sqrt{\\Gamma(u)(x)}=\\left(\\frac{1}{2 \\mu(x)} \\sum_{y \\sim x} w_{x y}(u(y)-u(x))^{2}\\right)^{1 \/ 2}.\n\\end{equation*}\nFor any function $u: V \\rightarrow \\mathbb{R}$, denote the integral of $u$ over $V$ by $\\int \\limits_{V} u d \\mu=\\sum\\limits_{x \\in V} \\mu(x) u(x)$. Denote by $|V|= \\text{Vol}(V)=\\sum \\limits_{x \\in V} \\mu(x)$ the volume of $V$. For $p > 0$, denote $|| u ||_{p}:=||u||_{L^{p}(V)}=(\\int \\limits_{V} |u|^{p} d \\mu)^{\\frac{1}{p}}$.
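As a minimal illustration of definition \eqref{l1} (the triangle graph with unit weights and unit measure is our own toy choice, not from the text), note that the graph Laplacian always integrates to zero over $V$, since each difference $u(y)-u(x)$ appears twice with opposite signs:

```python
# Triangle graph: V = {0,1,2}, all edges present, unit weights and measure.
V = [0, 1, 2]
w = {(x, y): 1.0 for x in V for y in V if x != y}
mu = {x: 1.0 for x in V}

def laplacian(u):
    # (Delta u)(x) = (1/mu(x)) * sum_{y ~ x} w_{yx} (u(y) - u(x))
    return {x: sum(w[(y, x)] * (u[y] - u[x]) for y in V if y != x) / mu[x]
            for x in V}

u = {0: 1.0, 1: 2.0, 2: 4.0}
Lu = laplacian(u)

# The Laplacian integrates to zero: sum_x mu(x) (Delta u)(x) = 0.
assert abs(sum(mu[x] * Lu[x] for x in V)) < 1e-12
```

This discrete integration-by-parts property is what makes identities such as \eqref{515} below work on graphs.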
Define a Sobolev space and a norm on it by \n\\begin{equation*}\n\tW^{1,2}(V)=\\left\\{u: V \\rightarrow \\mathbb{R}: \\int \\limits_{V} \\left(|\\nabla u|^{2}+u^{2}\\right) d \\mu<+\\infty\\right\\},\n\\end{equation*}\nand \\begin{equation*}\n\t\\|u\\|_{H^{1}(V)}=\t\\|u\\|_{W^{1,2}(V)}=\\left(\\int \\limits_{V}\\left(|\\nabla u|^{2}+u^{2}\\right) d \\mu\\right)^{1 \/ 2}.\n\\end{equation*}\n\nTo apply the variational method, we need the following Sobolev embedding, Trudinger-Moser inequality and interpolation inequality on graphs.\n\n\\begin{Lemma}\\label{21}\n\t{\\rm (\\cite[Lemma 5]{ALY})} Let $G=(V,E)$ be a finite graph. The Sobolev space $W^{1,2}(V)$ is precompact. Namely, if $\\{u_j\\}$ is bounded in $W^{1,2}(V)$, then there exists some $u \\in W^{1,2}(V)$ such that up to a subsequence, $u_j \\to u$ in $W^{1,2}(V)$.\n\\end{Lemma}\n\n\\begin{Lemma}\\label{2.2}\n\t{\\rm (\\cite[Lemma 6]{ALY})}\tLet $G = (V, E)$ be a finite graph. There exists some constant $C$ depending only on $G$ such that for all functions $u : V \\to \\mathbb{R}$ with $\\int \\limits_{V} u d\\mu = 0$, it holds that $\\int \\limits_{V} u^2 d\\mu \\le C \\int \\limits_{V} |\\nabla u|^2 d\\mu$.\n\\end{Lemma} \n\n\\begin{Lemma}\\label{mt}\n\t{\\rm (\\cite[Lemma 7]{ALY})}\n\tLet $G=(V,E)$ be a finite graph. For any $\\beta\\in \\mathbb{R}$, there exists a constant $C$ depending only on $\\beta$ and $G$ such that for all functions $v$ with $\\int\\limits_{V} |\\nabla v|^{2} d \\mu\\le 1$ and $\\int\\limits_{V} v d \\mu =0$, there holds \n\t\\begin{equation}\n\t\t\\int\\limits_{V} e^{\\beta v^{2}} d\\mu \\le C.\n\t\\end{equation}\n\\end{Lemma} \n\n\\begin{Lemma}\\label{i}\n\t{\\rm (Interpolation inequality for $L^{r}$-norms on graphs)} Suppose that $\\theta \\in (0,1) $, $0<\\theta r\\le s$, $0<(1-\\theta)r \\le t$ and $\\frac{1}{r}=\\frac{\\theta}{s}+\\frac{1-\\theta}{t}$.
Then we have \n\t\\begin{equation}\n\t\t||u||_{L^{r}(V)}\\le ||u||^{\\theta}_{L^{s}(V)} ||u||^{1-\\theta}_{L^{t}(V)}.\n\t\\end{equation}\n\\end{Lemma}\n\\begin{proof}\nBy the H$\\ddot{\\text{o}}$lder inequality, we see that \n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\t\t\\int\\limits_{V}|u|^{r} d \\mu &=\\int\\limits_{V}|u|^{\\theta r}|u|^{(1-\\theta) r} d \\mu \\\\\n\t\t\t& \\leq\\left(\\int\\limits_{V}|u|^{\\theta r \\frac{s}{\\theta r}} d \\mu \\right)^{\\frac{\\theta r}{s}}\\left(\\int\\limits_{V}|u|^{(1-\\theta) r \\frac{t}{(1-\\theta) r}} d \\mu \\right)^{\\frac{(1-\\theta) r}{t}} .\n\t\t\\end{aligned}\n\t\\end{equation}\n\\end{proof}\n\nIn order to establish Theorem \\ref{31}, we need the following result due to Huang-Lin-Yau.\n\\begin{Theorem}\\label{ly}\n\t{\\rm (\\cite[Theorem 2.2]{HLY})}\n\tThere is a critical value $\\lambda_{c}$ depending on $G$ satisfying\t\n\t$$\\lambda_{c}\\ge \\frac{16\\pi M}{|V|},$$\n\tsuch that when $\\lambda>\\lambda_{c}$, the equation\n\t\\begin{equation}\\label{13a}\n\t\t\\Delta u=\\lambda e^{u}\\left(e^{u}-1\\right)+4 \\pi \\sum_{j=1}^{M} \\delta_{p_{j}},~x\\in G,\n\t\\end{equation}\n\thas a solution $u_{\\lambda}$ on $G$, and when $\\lambda<\\lambda_{c}$, the equation \\eqref{13a} has no solution, where $M>0$ is an integer.\n\\end{Theorem}\n\nIn fact, we may establish the following more precise result. \n\\begin{Proposition}\n\n\tThe solution $u_{\\lambda}$ obtained in Theorem \\ref{ly} is maximal in the sense that if $u$ is any other solution of \\eqref{13a}, then \n\t\\begin{equation}\n\t\tu\\le u_{\\lambda}.\n\t\\end{equation} \n\\end{Proposition}\n\n\\begin{proof}\n\t Suppose that $u$ is any other solution of \\eqref{13a}. Then it is clear that $u$ is a subsolution and hence \n\t\\begin{equation}\\label{b}\n\t\tu\\le v_{n}\n\t\\end{equation} \n\tas in the proof of Lemma 4.2 in \\cite{HLY} for every $n\\in \\mathbb{N}$, where $v_{n}$ is defined by the iterative scheme (4.5) in \\cite{HLY}.
Letting $n\\to +\\infty$ in \\eqref{b}, we deduce that $u\\le u_{\\lambda}$ and hence that $u_{\\lambda}$ is a maximal solution of \\eqref{13a}. \n\t\n\tWe now complete the proof.\n\\end{proof}\n\nFurthermore, we have the following propositions.\n\n\\begin{Proposition}\n\tIf $\\lambda_{1}>\\lambda_{2}>\\lambda_{c}$, then $u_{\\lambda_{1}}\\ge u_{\\lambda_{2}}$.\n\\end{Proposition}\n\\begin{proof}\n\tBy Lemma 4.4 of \\cite{HLY}, we deduce that $u_{\\lambda_{2}}<0$, and hence that \n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\t\t\\Delta u_{\\lambda_{2}}&=\\lambda_{2} e^{u_{\\lambda_{2}}}(e^{u_{\\lambda_{2}}}-1)+4\\pi\\sum_{j=1}^{M} \\delta_{p_{j}} \\\\\n\t\t\t& >\\lambda_{1} e^{u_{\\lambda_{2}}}(e^{u_{\\lambda_{2}}}-1) +4\\pi\\sum_{j=1}^{M} \\delta_{p_{j}}.\n\t\t\\end{aligned}\n\t\\end{equation}\nThus $u_{\\lambda_{2}}$ is a subsolution of \\eqref{13a} with $\\lambda=\\lambda_{1}$. By the sub-supersolution argument as in the proof of Lemma 4.2 in \\cite{HLY}, and the maximality of $u_{\\lambda_{1}}$, it follows that $u_{\\lambda_{2}} \\le u_{\\lambda_{1}}$.\n\\end{proof}\n\n\\begin{Proposition}\\label{f2}\n\n\t Let $u_{\\lambda}$ be the maximal solution of \\eqref{13a} for $\\lambda>\\lambda_{c}$. We have \n\t\\begin{equation}\n\t\tu_{\\lambda} \\to 0\\text{~as~}\\lambda\\to +\\infty\\text{~uniformly~on~}V.\n\t\\end{equation}\n\\end{Proposition}\n\\begin{proof}\nFix $\\lambda_{0}>\\lambda_{c}$. Since $u_{\\lambda}$ is monotone increasing in $\\lambda$, by Lemma 4.4 of \\cite{HLY} we deduce that $u_{\\lambda_{0}} \\le u_{\\lambda}\\le 0$ and\n\t\\begin{equation}\n\t\t\\int\\limits_{V} e^{u_{\\lambda}} (1-e^{ u_{\\lambda}}) d \\mu= \\frac{4\\pi M}{\\lambda},\n\t\\end{equation} \nfor $\\lambda\\ge \\lambda_{0}$.\n\tLet $\\bar{v}(x):=\\sup\\limits_{\\lambda>\\lambda_{0}} u_{\\lambda} (x) $, $x\\in V$.
We deduce that $u_{\\lambda_{0}} \\le \\bar{v} \\le 0$ in $V$ and, by letting $\\lambda\\to +\\infty$ and using the dominated convergence theorem, \n\t\\begin{equation}\n\t\t\\int\\limits_{V} e^{ \\bar{v}} (1-e^{ \\bar{v}}) d \\mu= 0.\n\t\\end{equation} \nIt follows that $\\bar{v}\\equiv 0$ on $V$.\n\nWe now complete the proof.\n\\end{proof}\n\n\n\\section{The constraints}\n\nFor the sake of convenience, by applying the translation\n\\begin{equation}\\label{51}\n\tu_{i} \\to u_{i}+ \\ln R_{i},~i=1,...,n,\n\\end{equation}\nto equations \\eqref{1}, we obtain\n\\begin{equation}\n\t\\Delta u_{i}=\\lambda\\left(\\sum_{j=1}^{n} \\sum_{k=1}^{n} \\tilde{K}_{j k} \\tilde{K}_{i j} \\mathrm{e}^{u_{j}} \\mathrm{e}^{u_{k}}-\\sum_{j=1}^{n} \\tilde{K}_{i j} \\mathrm{e}^{u_{j}}\\right)+4 \\pi \\sum_{j=1}^{N_{i}} \\delta_{p_{i j}}(x)\n\\end{equation}\nfor $i=1,...,n$.\nLet $u_{i}^{0}$ be the unique solution to \n\\begin{equation}\n\t\\Delta u_{i}^{0}=4 \\pi \\sum_{s=1}^{N_{i}} \\delta_{p_{i s}}-\\frac{4 \\pi N_{i}}{|V|}, \\quad \\int\\limits_{V} u_{i}^{0} \\mathrm{~d} \\mu=0.\n\\end{equation}\nDenote $u_{i}=u_{i}^{0}+v_{i}$, $i=1,...,n$; then $v_{i}$ $(i=1,...,n)$ satisfy\n\\begin{equation}\\label{59}\n\t\\Delta v_{i}=\\lambda\\left(\\sum_{j=1}^{n} \\sum_{k=1}^{n} \\tilde{K}_{j k} \\tilde{K}_{i j} \\mathrm{e}^{u_{j}^{0}+v_{j}} \\mathrm{e}^{u_{k}^{0}+v_{k}}-\\sum_{j=1}^{n} \\tilde{K}_{i j} \\mathrm{e}^{u_{j}^{0}+v_{j}}\\right)+\\frac{4 \\pi N_{i}}{|V|},\n\\end{equation}\nor\n\\begin{equation}\\label{510}\n\t\\Delta \\mathbf{v}=\\lambda \\tilde{K} U \\tilde{K}(\\mathbf{U}-\\mathbf{1})+\\frac{4 \\pi \\mathbf{N}}{|V|},\n\\end{equation}\nwhere $\\mathbf{v}=(v_1,...,v_n)^{T},$ $\\mathbf{N}=(N_1,...,N_n)^{T}$, $U=\\operatorname{diag}\\{e^{u_{1}^{0}+v_{1}},...,e^{u_{n}^{0}+v_{n}}\\},$ $\\mathbf{U}=(e^{u_{1}^{0}+v_{1}},...,e^{u_{n}^{0}+v_{n}})^{T},$\n\\begin{equation}\n\t\\tilde{K}:=K^{T}R=PSR,~R:=\\operatorname{diag}\\{R_1,...,R_n \\}.\n\\end{equation}\n\nWe next establish a necessary condition for the existence of solutions of \\eqref{1}, and then the conclusion (i) of Theorem \\ref{31} 
follows.\n\n\\begin{Lemma}\n\tSuppose that \\eqref{1} admits a solution. Then \n\t\\begin{equation}\\label{4.7}\n\t\t\\lambda>\\lambda_{0} := \\frac{16 \\pi}{|V|} \\frac{\\sum \\limits_{i=1}^{n} \\sum\\limits_{j=1}^{n} P_{i}^{-1}\\left(K^{-1}\\right)_{j i} N_{j}}{\\sum\\limits_{i=1}^{n} \\sum\\limits_{j=1}^{n} P_{i}^{-1}\\left(K^{-1}\\right)_{j i}}.\n\t\\end{equation} \n\\end{Lemma}\n\\begin{proof}\nDenote \\begin{equation}\\label{57}\n\tA = P^{-1} S^{-1} P^{-1} \\quad \\text { and } \\quad Q = R S R.\n\\end{equation}\t\nSince $S$ is positive definite, we see that $A$ and $Q$ are positive definite. By \\eqref{45}, we deduce that \\begin{equation}\n\t\\mathbf{b} \\equiv\\left(b_{1}, \\ldots, b_{n}\\right)^{T} := 4 \\pi A \\mathbf{N}=4 \\pi P^{-1} S^{-1} P^{-1} \\mathbf{N}>0.\n\\end{equation} \nFrom \\eqref{510}, we deduce that \n\\begin{equation}\n\t\\Delta A \\mathbf{v}=\\lambda U Q(\\mathbf{U}-\\mathbf{1})+\\frac{\\mathbf{b}}{|V|}.\n\\end{equation} \nIt follows that \\begin{equation}\\label{515}\n\t\\int\\limits_{V} U Q(\\mathbf{U}-\\mathbf{1}) \\mathrm{d} \\mu +\\frac{\\mathbf{b}}{\\lambda}=\\mathbf{0},\n\\end{equation}\nwhere $\\mathbf{1}:=(1,...,1)^{T}$.\nMultiplying both sides of \\eqref{515} by $\\mathbf{1}^{T}$, we deduce that \n\\begin{equation}\\label{516}\n\t\\int\\limits_{V} \\mathbf{U}^{T} Q(\\mathbf{U}-\\mathbf{1}) \\mathrm{d} \\mu+\\frac{\\mathbf{1}^{T} \\mathbf{b}}{\\lambda}=0.\n\\end{equation}\nFrom \\eqref{41} and \\eqref{46}, we conclude that \n\\begin{equation}\\label{518}\n\t\\left(K^{T}\\right)^{-1} \\mathbf{1}=S^{-1} P^{-1} \\mathbf{1}=R \\mathbf{1}.\n\\end{equation}\nCombining \\eqref{516} and \\eqref{518}, we deduce that\n\\begin{equation}\\label{517}\t\n\t\t\\int\\limits_{V}\\left(\\mathbf{U}-\\frac{\\mathbf{1}}{2}\\right)^{T} Q\\left(\\mathbf{U}-\\frac{\\mathbf{1}}{2}\\right) \\mathrm{d} \\mu \n\t\t=\\frac{|V|}{4} \\mathbf{1}^{T} P^{-1}\\left(K^{T}\\right)^{-1} \\mathbf{1}-\\frac{4 \\pi \\mathbf{1}^{T} 
P^{-1}\\left(K^{T}\\right)^{-1} \\mathbf{N}}{\\lambda}.\n\\end{equation}\nSince $Q$ is positive definite, it follows from \\eqref{517} that\n\\begin{equation}\n\t\\frac{|V|}{4} \\mathbf{1}^{T} P^{-1}\\left(K^{T}\\right)^{-1} \\mathbf{1}-\\frac{4 \\pi \\mathbf{1}^{T} P^{-1}\\left(K^{T}\\right)^{-1} \\mathbf{N}}{\\lambda}>0,\n\\end{equation}\nwhich implies that \\eqref{4.7} holds. \n\\end{proof}\n\n\\section{The proof of Theorem \\ref{31}}\nIn this section, we formulate a variational solution of equations \\eqref{1} by using an equality-type constraint. \nDefine the energy functional by \n\\begin{equation}\\label{19}\n\tI(\\mathbf{v})=\\frac{1}{2} \\sum_{j,k=1}^{n} \\int\\limits_{V} b_{kj} \\Gamma(v_{k},v_{j}) d \\mu+\\frac{\\lambda}{2} \\int\\limits_{V}(\\mathbf{U}-\\mathbf{1})^{T} Q(\\mathbf{U}-\\mathbf{1}) \\mathrm{d} \\mu+\\int\\limits_{V} \\frac{\\mathbf{b}^{T} \\mathbf{v}}{|V|} \\mathrm{d} \\mu,\n\\end{equation}\nwhere $(b_{ij})_{n\\times n}:=A$ is the matrix defined in \\eqref{57}. Since $Q$ and $A$ are symmetric, we know that if $\\mathbf{v}$ is a critical point of $I$, then it is a solution of \\eqref{510}.\n\n We work in the standard space $H^{1}(V):=W^{1,2}(V)$. Denote \n\\begin{equation}\n\tH^{0}:=\\{v\\in W^{1,2}(V) | \\int\\limits_{V} v d \\mu=0 \\}.\n\\end{equation}\nClearly, for any $f\\in H^{1}(V)$, there exists a unique $c\\in \\mathbb{R}$ and $f'\\in H^{0}$ such that \n\\begin{equation}\\label{29}\n\tf=c+f'.\n\\end{equation}\nIn the sequel, we use $H^{1}(V)$ to denote the spaces of both scalar and vector-valued functions. 
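The decomposition \eqref{29} can be made explicit by averaging over $V$; the following display is a routine remark recording the formulas (the notation is that of \eqref{29}):

```latex
% Explicit form of the decomposition f = c + f' in \eqref{29}:
% c is the mean value of f over V, and f' = f - c has zero mean, so f' belongs to H^{0}.
\begin{equation}
	c=\frac{1}{|V|} \int\limits_{V} f \,\mathrm{d} \mu, \qquad
	f'=f-\frac{1}{|V|} \int\limits_{V} f \,\mathrm{d} \mu \in H^{0}.
\end{equation}
```

Uniqueness of the decomposition follows since a constant function with zero mean over $V$ must vanish.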
\n\nIf $\\mathbf{v}=\\mathbf{w}+\\mathbf{c} \\in H^{1}(V)$, decomposed as in \\eqref{29}, satisfies \\eqref{515}, then \\begin{equation}\\label{61}\n\t\\operatorname{diag}\\left\\{\\mathrm{e}^{c_{1}}, \\ldots, \\mathrm{e}^{c_{n}}\\right\\} \\tilde{Q}\\left(\\begin{array}{c}\n\t\t\\mathrm{e}^{c_{1}} \\\\\n\t\t\\vdots \\\\\n\t\t\\mathrm{e}^{c_{n}}\n\t\\end{array}\\right)-P^{-1} R \\operatorname{diag}\\left\\{a_{1}, \\ldots, a_{n}\\right\\}\\left(\\begin{array}{c}\n\t\t\\mathrm{e}^{c_{1}} \\\\\n\t\t\\vdots \\\\\n\t\t\\mathrm{e}^{c_{n}}\n\t\\end{array}\\right)+\\frac{\\mathbf{b}}{\\lambda}=\\mathbf{0},\n\\end{equation} \nwhere \\begin{equation}\\label{64}\n\t\ta_{i} := a_{i}\\left(w_{i}\\right) = \\int\\limits_{V} \\mathrm{e}^{u_{i}^{0}+w_{i}} \\mathrm{~d} \\mu ,\n\t\\end{equation}\n\t\t \\begin{equation}\\label{65}\n\t\ta_{i j} := a_{i j}\\left(w_{i}, w_{j}\\right) = \\int\\limits_{V} \\mathrm{e}^{u_{i}^{0}+u_{j}^{0}+w_{i}+w_{j}} \\mathrm{~d} \\mu, \\quad i, j=1, \\ldots, n,\n\\end{equation}\n\\begin{equation}\\label{62}\n\t\\tilde{Q} := \\tilde{Q}(\\mathbf{w}) = R \\tilde{S} R,\n\\end{equation}\nand\n\\begin{equation}\\label{63}\n\t\\tilde{S} \\equiv\\left(\\begin{array}{cccccc}\n\t\t\\alpha_{11} a_{11} & -\\alpha_{12} a_{12} & \\cdots & \\cdots & \\cdots & -\\alpha_{1 n} a_{1 n} \\\\\n\t\t\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n\t\t-\\alpha_{i 1} a_{i 1} & -\\alpha_{i 2} a_{i 2} & \\cdots & \\alpha_{i i} a_{i i} & \\cdots & -\\alpha_{i n} a_{i n} \\\\\n\t\t\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n\t\t-\\alpha_{n 1} a_{n 1} & -\\alpha_{n 2} a_{n 2} & \\cdots & \\cdots & \\cdots & \\alpha_{n n} a_{n n} \n\t\\end{array}\\right).\n\\end{equation}\nBy \\eqref{62} and \\eqref{63}, we deduce that \n\\begin{equation}\n\t\\tilde{Q}~\\text{ is~positive~definite}. 
\n\\end{equation}\nWe now write \\eqref{61} in component form:\n\\begin{equation}\\label{68}\n\t\\mathrm{e}^{2 c_{i}} R_{i}^{2} \\alpha_{i i} a_{i i}-\\mathrm{e}^{c_{i}}\\left(\\frac{R_{i} a_{i}}{P_{i}}+\\sum_{j \\neq i} \\mathrm{e}^{c_{j}} R_{i} R_{j} \\alpha_{i j} a_{i j}\\right)+\\frac{b_{i}}{\\lambda}=0, \\quad i=1, \\ldots, n.\n\\end{equation}\nFor each $i$, \\eqref{68} is a quadratic equation in $\\mathrm{e}^{c_{i}}$ which admits a solution if and only if \n\\begin{equation}\\label{67}\n\t\\left(\\frac{R_{i} a_{i}}{P_{i}}+\\sum_{j \\neq i} \\mathrm{e}^{c_{j}} R_{i} R_{j} \\alpha_{i j} a_{i j}\\right)^{2} \\geq \\frac{4 R_{i}^{2} b_{i} \\alpha_{i i} a_{i i}}{\\lambda}, \\quad i=1, \\ldots, n.\n\\end{equation}\nIt is clear that \\eqref{67} follows from the following inequalities\n\\begin{equation}\\label{69}\n\t\\frac{a_{i}^{2}}{a_{i i}} \\geq \\frac{4 \\alpha_{i i} P_{i}^{2} b_{i}}{\\lambda}, \\quad i=1, \\ldots, n.\n\\end{equation}\nDenote \n\\begin{equation}\\label{A}\n\t\\mathscr{A} \\equiv\\left\\{\\mathbf{w} \\mid \\mathbf{w} \\in {H}^{0}(V) \\text { such that } \\eqref{69} \\text { holds }\\right\\}.\n\\end{equation}\nIn this case, we may select $\\mathbf{c}=\\mathbf{c}(\\mathbf{w}):=(c_1,...,c_n)^{T}$ in \\eqref{68} to satisfy \n\\begin{equation}\\label{611}\n\t\\begin{aligned}\n\t\t\\mathrm{e}^{c_{i}}=& \\frac{1}{2 R_{i}^{2} \\alpha_{i i} a_{i i}}\\left\\{\\left(\\frac{R_{i} a_{i}}{P_{i}}+\\sum_{j \\neq i} \\mathrm{e}^{c_{j}} R_{i} R_{j} \\alpha_{i j} a_{i j}\\right)\\right.\\\\\n\t\t&+\\left.\\sqrt{\\left(\\frac{R_{i} a_{i}}{P_{i}}+\\sum_{j \\neq i} \\mathrm{e}^{c_{j}} R_{i} R_{j} \\alpha_{i j} a_{i j}\\right)^{2}-\\frac{4 b_{i} R_{i}^{2} \\alpha_{i i} a_{i i}}{\\lambda}}\\right\\} \\\\\n\t\t& =: f_{i}\\left(\\mathrm{e}^{c_{1}}, \\ldots, \\mathrm{e}^{c_{n}}\\right), \\quad i=1, \\ldots, n .\n\t\\end{aligned}\n\\end{equation}\n\n\nTo prove Lemmas \\ref{u8} and \\ref{f1}, we first give some a priori estimates.\n\n\\begin{Lemma}\n\tFor any $\\mathbf{w} \\in \\mathscr{A}$ and 
$\\epsilon \\in [0,1]$, if $\\mathbf{t}$ satisfies the following equations\n\t\\begin{equation}\\label{72}\n\t\t\\mathbf{F}(\\epsilon, \\mathbf{t}) \\equiv \\mathbf{t}-\\mathbf{f}(\\epsilon, \\mathbf{t})=\\mathbf{0}, \\quad \\mathbf{t} \\in \\mathbb{R}_{+}^{n}, \\quad \\epsilon \\in[0,1],\n\t\\end{equation}\nwhere \n\\begin{equation}\\label{73}\n\t\\mathbf{f}(\\epsilon, \\mathbf{t}) \\equiv\\left(f_{1}(\\epsilon, \\mathbf{t}), \\ldots, f_{n}(\\epsilon, \\mathbf{t})\\right)^{T},\n\\end{equation}\nand\n\\begin{equation}\\label{74}\n\t\\begin{aligned}\n\t\tf_{i}(\\epsilon, \\mathbf{t}) \\equiv & \\frac{1}{2 R_{i}^{2} \\alpha_{i i} a_{i i}}\\left\\{\\left(\\frac{R_{i} a_{i}}{P_{i}}+\\sum_{j \\neq i} t_{j} R_{i} R_{j} \\alpha_{i j} a_{i j}\\right)\\right.\\\\\n\t\t+&\\left.\\sqrt{\\left(\\frac{R_{i} a_{i}}{P_{i}}+\\sum_{j \\neq i} t_{j} R_{i} R_{j} \\alpha_{i j} a_{i j}\\right)^{2}-\\frac{4 \\epsilon b_{i} R_{i}^{2} \\alpha_{i i} a_{i i}}{\\lambda}}\\right\\} \\\\\n\t\ti &=1, \\ldots, n,\n\t\\end{aligned}\n\\end{equation}\nthen \n\\begin{equation}\\label{76}\n\t0<a_{i} t_{i} \\leq|V|, \\quad i=1, \\ldots, n,\n\\end{equation}\nand\n\\begin{equation}\\label{77}\n\t0<t_{i} \\leq \\mathrm{e}^{-\\frac{1}{|V|} \\int\\limits_{V}\\left(u_{i}^{0}+w_{i}\\right) \\mathrm{d} \\mu}, \\quad i=1, \\ldots, n.\n\\end{equation}\n\\end{Lemma}\n\\begin{proof}\nSince the square root in \\eqref{74} does not exceed the first term in the braces, it follows from \\eqref{72} that\n\\begin{equation}\\label{713}\n\t\\tilde{Q} \\mathbf{t} \\leq P^{-1} R \\mathbf{a},\n\\end{equation}\nwhere $\\mathbf{a}:=\\left(a_{1}, \\ldots, a_{n}\\right)^{T}$. Since $a_{i j} \\leq a_{i i}^{\\frac{1}{2}} a_{j j}^{\\frac{1}{2}}$ and $t_{i}>0$ for $i=1,...,n$, we conclude that \n \\begin{equation}\\label{714}\n \t\\operatorname{diag}\\left\\{a_{11}^{\\frac{1}{2}}, \\ldots, a_{n n}^{\\frac{1}{2}}\\right\\} Q \\operatorname{diag}\\left\\{a_{11}^{\\frac{1}{2}}, \\ldots, a_{n n}^{\\frac{1}{2}}\\right\\} \\mathbf{t} \\leq \\tilde{Q} \\mathbf{t}.\n \\end{equation}\nFrom \\eqref{45} and \\eqref{57}, we conclude that \\begin{equation}\\label{715}\n\t(Q^{-1})_{ij}>0,~i,j=1,...,n.\n\\end{equation}\n Thus, combining \\eqref{713}, \\eqref{714} and \\eqref{715}, we conclude that \n \\begin{equation}\n \t\\begin{aligned}\\label{716}\n \t\t\\mathbf{t} & \\leq \\operatorname{diag}\\left\\{a_{11}^{-\\frac{1}{2}}, \\ldots, a_{n n}^{-\\frac{1}{2}}\\right\\} Q^{-1} \\operatorname{diag}\\left\\{a_{11}^{-\\frac{1}{2}}, \\ldots, a_{n n}^{-\\frac{1}{2}}\\right\\} P^{-1} R \\mathbf{a} \\\\\n \t\t&=\\operatorname{diag}\\left\\{a_{11}^{-\\frac{1}{2}}, \\ldots, a_{n 
n}^{-\\frac{1}{2}}\\right\\} Q^{-1} \\operatorname{diag}\\left\\{a_{1} a_{11}^{-\\frac{1}{2}}, \\ldots, a_{n} a_{n n}^{-\\frac{1}{2}}\\right\\} P^{-1} R \\mathbf{1} .\n \t\\end{aligned}\n \\end{equation}\n Therefore, by \\eqref{518}, \\eqref{713}, \\eqref{715} and \\eqref{716}, we deduce that \n \\begin{equation}\\label{717}\n \t\\begin{aligned}\n \t\t&\\operatorname{diag}\\left\\{a_{1}, \\ldots, a_{n}\\right\\} \\mathbf{t} \\\\\n \t\t&\\leq \\operatorname{diag}\\left\\{a_{1} a_{11}^{-\\frac{1}{2}}, \\ldots, a_{n} a_{n n}^{-\\frac{1}{2}}\\right\\} Q^{-1} \\operatorname{diag}\\left\\{a_{1} a_{11}^{-\\frac{1}{2}}, \\ldots, a_{n} a_{n n}^{-\\frac{1}{2}}\\right\\} P^{-1} R \\mathbf{1} \\\\\n \t\t&\\leq|V| Q^{-1} P^{-1} R \\mathbf{1} \\\\\n \t\t&=|V| \\mathbf{1}.\n \t\\end{aligned}\n \\end{equation}\n Thus the right-hand side of \\eqref{76} holds. It follows from Jensen's inequality that $$\\frac{a_i}{|V|}\\ge e^ { \\frac{1}{|V|} \\int\\limits_{V} \\left(u_{i}^{0}+w_{i}\\right) d \\mu },~i=1,...,n.$$ Then the right-hand side of \\eqref{77} follows from this and \\eqref{717}.\n\t\n\tWe now complete the proof.\n\\end{proof}\n\nThe following result implies that the constraints \\eqref{611}, and hence \\eqref{68}, can be solved.\n\\begin{Lemma}\\label{434}\n\tFor any $\\mathbf{w} \\in \\mathscr{A}$, the equations \n\t\\begin{equation}\\label{82}\n\t\t\\mathbf{F}(\\mathbf{t}) \\equiv \\mathbf{t}-\\mathbf{f}(\\mathbf{t})=\\mathbf{0}, \\quad \\mathbf{t} \\equiv\\left(t_{1}, \\ldots, t_{n}\\right)^{T} \\in \\mathbb{R}_{+}^{n},\n\t\\end{equation}\n\t admit a solution $\\mathbf{t} \\in (0,\\infty)^{n}$, where $\\mathbb{R}_{+}^{n} \\equiv\\left(\\mathbb{R}_{+}\\right)^{n}, \\mathbf{f}(\\mathbf{t}) \\equiv\\left(f_{1}(\\mathbf{t}), \\ldots, f_{n}(\\mathbf{t})\\right)^{T}$.\t\n\\end{Lemma}\n\\begin{proof}\n\tFor the sake of convenience, we write \n\t\\begin{equation}\n\t\t\\left(\\alpha_{1}, \\ldots, \\alpha_{n}\\right)^{T}<(\\leq)\\left(\\beta_{1}, \\ldots, 
\\beta_{n}\\right)^{T} \\text { if } \\alpha_{i}<(\\leq) \\beta_{i}, i=1, \\ldots, n,\n\t\\end{equation}\nand we use the same notation for matrices. We next find a solution to \\eqref{82}, i.e.\\ to \\eqref{72} with $\\epsilon=1$.\n\n By \\eqref{77}, we conclude that $\\mathbf{F}(\\epsilon, \\mathbf{t})$ has no zero on the boundary of $\\Omega$ for all $\\mathbf{w} \\in \\mathscr{A}$ and $\\epsilon \\in[0,1]$, where $\\Omega:=(0,r_0)^{n}$ and $r_0>1$ is a constant. Thus, we can define the Brouwer degree $\\operatorname{deg}(\\mathbf{F}(\\epsilon, \\mathbf{t}),\\Omega, \\mathbf{0}).$ Clearly, \n\\begin{equation}\\label{429}\n\t\\mathbf{F}(0, \\mathbf{t})=\\mathbf{0}\n\\end{equation}\nis equivalent to \n\\begin{equation}\\label{237}\n\tt_{i}-\\frac{\\frac{R_{i} a_{i}}{P_{i}}+\\sum\\limits_{j \\neq i} t_{j} R_{i} R_{j} \\alpha_{i j} a_{i j}}{R_{i}^{2} \\alpha_{i i} a_{i i}}=0, \\quad i=1, \\ldots, n.\n\\end{equation}\nWe write \\eqref{237} in its vector form\n\\begin{equation}\n\t\\tilde{Q} \\mathbf{t}=P^{-1} R \\mathbf{a}.\n\\end{equation}\nSince $\\tilde{Q}$ is invertible, we know that \\eqref{429} admits a unique solution \\begin{equation}\n\t\\mathbf{t}=\\tilde{Q}^{-1} P^{-1} R \\mathbf{a}.\n\\end{equation}\nBy \\eqref{77}, we see that it belongs to the interior of $\\Omega$. Since $\\tilde{Q}$ is positive definite, the Jacobian determinant of $\\mathbf{F}(0, \\mathbf{t})$ is positive everywhere, and hence $\\operatorname{deg}(\\mathbf{F}(0, \\mathbf{t}),\\Omega, \\mathbf{0})=1$. It is easy to check that $\\mathbf{F}(\\epsilon, \\mathbf{t})$ is continuous in $(\\epsilon, \\mathbf{t})$. Thus by homotopy invariance, we deduce that \n\\begin{equation}\n\t\\operatorname{deg}(\\mathbf{F}(1, \\mathbf{t}), \\Omega, \\mathbf{0})=\\operatorname{deg}(\\mathbf{F}(0, \\mathbf{t}), \\Omega, \\mathbf{0})=1,\n\\end{equation}\nand hence $\\mathbf{F}(1, \\cdot)$ has a zero in $\\Omega$, which is the desired solution of \\eqref{82}.\n \n Now we complete the proof.\n\\end{proof}\n The following Lemma follows from Lemma \\ref{434} immediately. 
\n \\begin{Lemma}\\label{4}\n \tFor any $\\mathbf{w} \\in \\mathscr{A}$, \\eqref{68} admits a solution $\\mathbf{c}(\\mathbf{w})=(c_{1}(\\mathbf{w}),...,c_{n}(\\mathbf{w}))^{T}$ which satisfies \\eqref{611}, so that $\\mathbf{v}=\\mathbf{w}+\\mathbf{c}(\\mathbf{w})=(w_1+c_{1}(\\mathbf{w}),...,w_n+c_{n}(\\mathbf{w}))^{T}$ satisfies \\eqref{515}.\n \\end{Lemma} \n\nDefine the constrained functional \n\\begin{equation}\\label{J}\n\tJ(\\mathbf{w}) := I(\\mathbf{w}+\\mathbf{c}(\\mathbf{w})), \\quad \\mathbf{w} \\in \\mathscr{A}.\n\\end{equation}\n For all $\\mathbf{w} \\in \\mathscr{A}$, since $\\mathbf{v}=\\mathbf{w}+\\mathbf{c}(\\mathbf{w})$ satisfies \\eqref{515}, we conclude that \n \\begin{equation}\n \t\\begin{aligned}\n \t\t\\int\\limits_{V}(\\mathbf{U}-\\mathbf{1})^{T} Q(\\mathbf{U}-\\mathbf{1}) \\mathrm{d} \\mu &=\\int\\limits_{V} \\mathbf{1}^{T} Q(\\mathbf{1}-\\mathbf{U}) \\mathrm{d} \\mu-\\frac{\\mathbf{1}^{T} \\mathbf{b}}{\\lambda} \\\\\n \t\t&=\\int\\limits_{V} \\mathbf{1}^{T} P^{-1} R(\\mathbf{1}-\\mathbf{U}) \\mathrm{d} \\mu-\\frac{\\mathbf{1}^{T} \\mathbf{b}}{\\lambda}.\n \t\\end{aligned}\n \\end{equation}\nBy \\eqref{19}, we deduce that \n\\begin{equation}\n\t\\begin{aligned}\n\t\tJ(\\mathbf{w})=& \\frac{1}{2} \\sum_{j,k=1}^{n} \\int\\limits_{V} b_{kj} \\Gamma(v_{k},v_{j}) d \\mu+\\frac{\\lambda}{2} \\mathbf{1}^{T} P^{-1} R \\int\\limits_{V}(\\mathbf{1}-\\mathbf{U}) \\mathrm{d} \\mu+\\mathbf{b}^{T} \\mathbf{c}-\\frac{\\mathbf{1}^{T} \\mathbf{b}}{2} \\\\\n\t\t=& \\frac{1}{2} \\sum_{j,k=1}^{n} \\int\\limits_{V} b_{kj} \\Gamma(v_{k},v_{j}) d \\mu+\\frac{\\lambda}{2} \\sum_{i=1}^{n} \\frac{R_{i}}{P_{i}} \\int\\limits_{V}\\left(1-\\mathrm{e}^{c_{i}} \\mathrm{e}^{u_{i}^{0}+w_{i}}\\right) \\mathrm{d} \\mu \\\\\n\t\t&+\\sum_{i=1}^{n} b_{i} c_{i}-\\frac{1}{2} \\sum_{i=1}^{n} b_{i}.\n\t\\end{aligned}\n\\end{equation}\n\n\nTo prove Lemma \\ref{x4}, we need the following result.\n\n\\begin{Lemma}\\label{jian}\n\tSuppose that $\\mathbf{w} \\in \\mathscr{A}$ 
and $\\tau \\in(0,1)$. Then \n\t\\begin{equation}\n\t\t\\int\\limits_{V} \\mathrm{e}^{u_{i}^{0}+w_{i}} \\mathrm{~d} \\mu \\leq\\left(\\frac{\\lambda}{4 P_{i}^{2} b_{i} \\alpha_{i i}}\\right)^{\\frac{1-\\tau}{\\tau}}\\left(\\int\\limits_{V} \\mathrm{e}^{\\tau u_{i}^{0}+\\tau w_{i}} \\mathrm{~d} \\mu \\right)^{\\frac{1}{\\tau}}, \\quad i=1, \\ldots, n.\n\t\\end{equation}\n\\end{Lemma}\n\\begin{proof}\nLet $a=\\frac{1}{2-\\tau}$. By \\eqref{69} and Lemma \\ref{i}, we conclude that \n\t\\begin{equation}\n\t\t\\begin{aligned}[]\n\t\t\t\t\\int\\limits_{V} \\mathrm{e}^{u_{i}^{0}+w_{i}} \\mathrm{~d} \\mu &\\le \\left[ \\int\\limits_{V} \\left( \\mathrm{e}^{u_{i}^{0}+w_{i}} \\right) ^{\\tau}\\mathrm{~d} \\mu\\right] ^{a} \\left[ \\int\\limits_{V} \\mathrm{e}^{2(u_{i}^{0}+w_{i})} \\mathrm{~d} \\mu\\right] ^{1-a} \\\\\n\t\t\t\t&\\le\\left[ \\int\\limits_{V} \\left( \\mathrm{e}^{u_{i}^{0}+w_{i}} \\right) ^{\\tau}\\mathrm{~d} \\mu\\right] ^{a}\n\t\t\t\t\\left[ \\frac{\\lambda}{4 P_{i}^{2} b_{i} \\alpha_{i i}} \\left( \\int\\limits_{V} \\mathrm{e}^{u_{i}^{0}+w_{i}} d \\mu \\right)^{2} \\right] ^{1-a},~i=1,...,n,\n\t\t\\end{aligned}\n\t\\end{equation}\nand hence that \n\n\n\\begin{equation}\n\t\\begin{aligned}\n\t\t\\int\\limits_{V} \\mathrm{e}^{u_{i}^{0}+w_{i}} \\mathrm{~d} \\mu &\\le \\left( \\frac{\\lambda}{4 P_{i}^{2} b_{i} \\alpha_{i i}} \\right) ^{\\frac{1-a}{2a-1}} \\left( \\int\\limits_{V} \\mathrm{e}^{\\tau(u_{i}^{0}+w_{i})} \\mathrm{~d} \\mu \\right) ^{\\frac{a}{2a-1}} \\\\\n\t&\t= \\left( \\frac{\\lambda}{4 P_{i}^{2} b_{i} \\alpha_{i i}} \\right) ^{\\frac{1-\\tau}{\\tau}} \\left( \\int\\limits_{V} \\mathrm{e}^{\\tau(u_{i}^{0}+w_{i})} \\mathrm{~d} \\mu \\right) ^{\\frac{1}{\\tau}} ,~i=1,...,n.\t\t\n\t\\end{aligned}\n\\end{equation}\n\\end{proof}\n\n\n\\begin{Lemma}\\label{x4}\n\tSuppose that $\\gamma$ is the smallest eigenvalue of $A$. 
Then \n\t\\begin{equation}\\label{x}\n\t\tJ(\\mathbf{w}) \\geq \\frac{\\gamma}{4} \\sum_{i=1}^{n} \\int\\limits_{V} \\Gamma(w_{i},w_{i}) d \\mu-C(\\ln \\lambda+1),\n\t\\end{equation}\nfor all $\\mathbf{w} \\in \\mathscr{A}$, where $C>0$ is a constant independent of $\\lambda$. \n\\end{Lemma}\n\\begin{proof}\n\tFrom \\eqref{57} and \\eqref{J}, we deduce that \n\t\\begin{equation}\\label{b7}\n\t\tJ(\\mathbf{w}) \\geq \\frac{\\gamma}{2} \\sum_{i=1}^{n}\\left\\|\\nabla w_{i}\\right\\|_{2}^{2}+\\sum_{i=1}^{n} b_{i} c_{i}.\n\t\\end{equation}\nFrom \\eqref{69} and \\eqref{611}, we deduce that \n\\begin{equation}\n\t\\mathrm{e}^{c_{i}} \\geq \\frac{a_{i}}{2 R_{i} P_{i} \\alpha_{i i} a_{i i}}\\ge\n\t\t\\frac{2 b_{i} P_{i} }{\\lambda R_{i} a_{i}}=\\frac{2 P_{i} b_{i}}{\\lambda R_{i} \\int\\limits_{V} \\mathrm{e}^{u_{i}^{0}+w_{i}} \\mathrm{~d} \\mu}, \\quad i=1, \\ldots, n.\n\\end{equation}\nIt follows that \n\\begin{equation}\\label{b11}\n\tc_{i} \\geq \\ln \\frac{2 P_{i} b_{i}}{R_{i}}- \\ln \\left(\\lambda \\int\\limits_{V} \\mathrm{e}^{u_{i}^{0}+w_{i}} \\mathrm{~d} \\mu\\right), \\quad i=1, \\ldots, n.\n\\end{equation}\nBy the Cauchy inequality with $\\epsilon>0$ and \\eqref{mt},\nwe deduce that \n\\begin{equation}\\label{g}\n\t\\begin{aligned}\n\t\t\\int\\limits_{V} e^{w} d\\mu &= \\int\\limits_{V} e^{\\frac{w}{||\\nabla w||_{2}} ||\\nabla w||_{2} } d\\mu\\\\\n\t\t&\\le \\int\\limits_{V} e^{\\frac{w^{2}}{ 4\\epsilon||\\nabla w||^{2}_{2}}} d\\mu\\, e^{ {\\epsilon}||\\nabla w||_{2}^{2} }\\\\\n\t\t&\\le C(\\epsilon,G) e^{ \\epsilon||\\nabla w||^{2}_{2} },\n\t\\end{aligned}\n\\end{equation}\nwhere $C(\\epsilon,G)$ is the constant given by \\eqref{mt}.\nIt is easy to check that \n\\begin{equation}\\label{446}\n\t\\begin{aligned}\n\t\t||\\nabla u^{0}_{i}||^{2}_{2} &= -\\int\\limits_{V} u_{i}^{0} \\Delta u_{i}^{0} d\\mu \\\\\n\t\t&= -\\int\\limits_{V} u^{0}_{i} 4\\pi \\sum_{s=1}^{N_{i}} \\delta_{p_{i s}} d \\mu\\\\\n\t\t&=- 4\\pi \\sum_{s=1}^{N_{i}} u_{i}^{0} (p_{i s }) \\\\\n\t\t&\\le 4\\pi N_{i} \\max_{V} |u_{i}^{0}|, \\quad i=1, \\ldots, n 
.\n\t\\end{aligned}\n\\end{equation}\nBy Lemma \\ref{jian}, \\eqref{446} and \\eqref{g}, we conclude that \n\\begin{equation}\\label{b1}\n\t\\begin{aligned}\n\t\t\\ln \\int\\limits_{V} \\mathrm{e}^{u_{i}^{0}+w_{i}} \\mathrm{~d} \\mu \\leq & \\frac{1-\\tau}{\\tau}\\left\\{\\ln \\lambda-\\ln \\left(4 P_{i}^{2} b_{i} \\alpha_{i i}\\right)\\right\\}+\\frac{1}{\\tau} \\ln \\int\\limits_{V} \\mathrm{e}^{\\tau u_{i}^{0}+\\tau w_{i}} \\mathrm{~d} \\mu \\\\\n\t\t\\leq & 2\\epsilon \\tau\\left\\|\\nabla w_{i}\\right\\|_{2}^{2}+ 2\\epsilon\\tau 4\\pi N_{i} \\max_{V} |u_{i}^{0}|+ \\frac{1-\\tau}{\\tau}\\left\\{\\ln \\lambda-\\ln \\left(4 P_{i}^{2} b_{i} \\alpha_{i i}\\right)\\right\\} \\\\\n\t\t&+\\frac{\\ln C}{\\tau}, \\quad i=1, \\ldots, n .\n\t\\end{aligned}\n\\end{equation}\nIt follows from \\eqref{b1}, \\eqref{b11} and \\eqref{b7} that \n\\begin{equation}\\label{448}\n\t\\begin{aligned}\n\t\tJ(\\mathbf{w}) \\geq &\\left(\\frac{\\gamma}{2}- 2\\epsilon \\tau \\max _{1 \\leq i \\leq n}\\left\\{b_{i}\\right\\} \\right) \\sum_{i=1}^{n}\\left\\|\\nabla w_{i}\\right\\|_{2}^{2}-\\frac{1}{\\tau} \\sum_{i=1}^{n} b_{i}\\left\\{\\ln \\lambda-\\ln \\left(4 P_{i}^{2} b_{i} \\alpha_{i i}\\right)+\\ln C\\right\\} \\\\\n\t\t&-\\sum_{i=1}^{n} b_{i}\\left\\{\\ln \\left(2 R_{i} P_{i} b_{i} \\alpha_{i i}\\right)+ 2\\epsilon\\tau 4\\pi N_{i} \\max_{V} |u_{i}^{0}| \\right\\}.\n\t\\end{aligned}\n\\end{equation}\nTaking $\\tau$ sufficiently small in \\eqref{448}, we get the desired conclusion \\eqref{x}.\n\\end{proof}\n\nBy Lemma \\ref{x4}, we can select a minimizing sequence $\\{ \\mathbf{w}^{k} \\}:=\\{(w_{1}^{(k)},...,w_{n}^{(k)}) \\}$ of the constrained minimization problem \n\\begin{equation}\n\t\\eta=\\inf\\{J(\\mathbf{w})| \\mathbf{w}\\in \\mathscr{A} \\}.\n\\end{equation}\nBy Lemma \\ref{x4} and Lemma \\ref{21}, we see that there exists $$\\mathbf{w}^{0}:=(w_{1}^{(0)},...,w_{n}^{(0)}) \\in H^{0}(V)$$ such that, by passing to a subsequence, denoted still by $\\{\\mathbf{w}^{k} \\}$, 
\n$${w}_{i}^{(k)} \\to {w}_{i}^{(0)} $$ uniformly for $x\\in V$ as $k\\to +\\infty$ for $i=1,...,n$. Since $$\\lim\\limits_{k\\to +\\infty} J(\\mathbf{w}^{k})=J(\\mathbf{w}^{0}),$$ we deduce that $J(\\mathbf{w}^{0})=\\eta$. Thus $\\mathbf{w}^{0}$ is a minimizer of $J$. Next, we prove that the\nminimizer belongs to the interior of $\\mathscr{A}$.\n\n\\begin{Lemma}\\label{u8}\n\tThere holds the following inequality\n\t\\begin{equation}\\label{4d}\n\t\t\\inf _{\\mathbf{w} \\in \\partial \\mathscr{A}} J(\\mathbf{w}) \\geq \\frac{|V| \\lambda}{2} \\min _{1 \\leq i \\leq n}\\left\\{\\frac{R_{i}}{P_{i}}\\right\\}-C(1+\\ln \\lambda+\\sqrt{\\lambda}),\n\t\\end{equation}\nwhere $C$ is a constant independent of $\\lambda$.\n\\end{Lemma}\n\\begin{proof}\nFor any $\\mathbf{w}\\in \\partial \\mathscr{A}$,\tfrom \\eqref{A}, we know that at least one of the following equalities \n\\begin{equation}\\label{450}\n\t\\frac{a_{i}^{2}}{a_{i i}}=\\frac{4 \\alpha_{i i} P_{i}^{2} b_{i}}{\\lambda}, \\quad i=1, \\ldots, n\n\\end{equation}\nholds.\n\nSuppose the case $i=1$ occurs; from \\eqref{713} and \\eqref{717}, we deduce that \n\\begin{equation}\\label{q}\n\t\\begin{aligned}\n\t\ta_{1} \\mathrm{e}^{c_{1}} & \\leq \\frac{R_{1}}{P_{1}}\\left(Q^{-1}\\right)_{11} a_{1}^{2} a_{11}^{-1}+\\sum_{j=2}^{n} \\frac{R_{j}}{P_{j}}\\left(Q^{-1}\\right)_{1 j} a_{1} a_{j} a_{11}^{-\\frac{1}{2}} a_{j j}^{-\\frac{1}{2}} \\\\\n\t\t& \\leq \\frac{R_{1}}{P_{1}}\\left(Q^{-1}\\right)_{11} a_{1}^{2} a_{11}^{-1}+|V|^{\\frac{1}{2}} \\sum_{j=2}^{n} \\frac{R_{j}}{P_{j}}\\left(Q^{-1}\\right)_{1 j} a_{1} a_{11}^{-\\frac{1}{2}}.\n\t\\end{aligned}\n\\end{equation}\nBy \\eqref{q} and \\eqref{450}, we deduce that \n\\begin{equation}\\label{115}\n\ta_{1} \\mathrm{e}^{c_{1}} \\le \\left(Q^{-1}\\right)_{11} \\frac{4 P_{1} R_{1} b_{1} \\alpha_{11}}{\\lambda}+\\frac{2 P_{1} \\sum_{j=2}^{n} \\frac{R_{j}}{P_{j}}\\left(Q^{-1}\\right)_{1 j}}{\\sqrt{\\lambda}} \\sqrt{b_{1}|V| \\alpha_{11}}.\n\\end{equation}\nIf other 
cases occur, we can establish a similar estimate.\nFrom \\eqref{76} and \\eqref{115}, we deduce that \n\\begin{equation}\\label{b15}\n\t\\begin{aligned}\n\t\t&\\frac{\\lambda}{2} \\sum_{i=1}^{n} \\frac{R_{i}}{P_{i}} \\int\\limits_{V}\\left(1-\\mathrm{e}^{c_{i}} \\mathrm{e}^{u_{i}^{0}+w_{i}}\\right) \\mathrm{d} \\mu \\\\\n\t\t&\\geq \\frac{|V| \\lambda R_{1}}{2 P_{1}}-2\\left(Q^{-1}\\right)_{11} P_{1} R_{1} b_{1} \\alpha_{11}-P_{1} \\sum_{j=2}^{n} \\frac{R_{j}}{P_{j}}\\left(Q^{-1}\\right)_{1 j} \\sqrt{b_{1} \\lambda|V| \\alpha_{11}}.\n\t\\end{aligned}\n\\end{equation}\nFrom \\eqref{b1}, \\eqref{b11} and \\eqref{b15}, we obtain the desired conclusion \\eqref{4d}.\n\\end{proof}\n\n\\begin{Lemma}\\label{f1}\n\tFor all sufficiently large $\\lambda$, there exists $\\mathbf{w}_{r_{\\varepsilon}}\\in \\operatorname{int} \\mathscr{A}$ such that\n\t\\begin{equation}\n\t\tJ\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)-\\inf _{\\mathbf{w} \\in \\partial \\mathscr{A}} J(\\mathbf{w})<-1.\n\t\\end{equation}\n\\end{Lemma}\n\\begin{proof}\n \tBy Proposition \\ref{f2}, we see that, for all sufficiently large $r>0$, the problem \n \t\\begin{equation}\n \t\t\\Delta v=r \\mathrm{e}^{u_{i}^{0}+v}\\left(\\mathrm{e}^{u_{i}^{0}+v}-1\\right)+\\frac{4 \\pi N_{i}}{|V|}, \\quad i=1, \\ldots, n,\n \t\\end{equation}\n has solutions $v_{i,r}$ $(i=1,...,n)$ such that $v_{i,r} \\to -u_{i}^{0}$ as $r\\to +\\infty$ uniformly for $x\\in V$. Let $c_{i,r}:=\\frac{1}{|V|}\\int\\limits_{V} v_{i,r} d\\mu$. 
It follows that $w_{i,r}:=v_{i,r}-c_{i,r} \\to -u_{i}^{0}$ as $r\\to +\\infty$, $i=1,...,n.$ Thus we have \n \t\\begin{equation}\\label{456}\n \t\t\\lim _{r \\rightarrow \\infty} a_{i}(w_{i,r})=|V|, \\quad \\lim _{r \\rightarrow \\infty}a_{ij}( w_{i,r},w_{j,r} )=|V|, \\quad i, j=1, \\ldots, n.\n \t\\end{equation}\n \tBy \\eqref{62}, we obtain \n \t\\begin{equation}\\label{457}\n \t\t\\lim _{r \\rightarrow \\infty} \\tilde{Q}\\left(\\mathbf{w}_{r}\\right)=|V| Q .\n \t\\end{equation}\n By \\eqref{457} and \\eqref{456}, we can find $\\sigma>0$ such that for any $\\varepsilon\\in (0,1)$, there exists $r_{\\varepsilon}>0$ so that \n \\begin{equation}\n \t\\mathbf{w}_{r_{\\varepsilon}}=\\left(w_{1,r_{\\varepsilon}}, \\ldots, w_{n,r_{\\varepsilon}}\\right)^{T} \\in \\text { int } \\mathscr{A}\n \\end{equation}\t\n \tfor all $\\lambda>\\sigma$, and \\begin{equation}\\label{459}\n \t\t\\begin{aligned}\n \t\t\t&a_{i j}\\left(w_{i,r_{\\varepsilon}}, w_{j,r_{\\varepsilon}}\\right)<(1+\\varepsilon)|V|<2|V|, \\quad i, j=1, \\ldots, n, \\\\\n \t\t\t&\\frac{(1-\\varepsilon)}{|V|} Q^{-1}<\\tilde{Q}^{-1}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)<\\frac{(1+\\varepsilon)}{|V|} Q^{-1}<\\frac{2}{|V|} Q^{-1}.\n \t\t\\end{aligned}\n \t\\end{equation} Since $\\mathbf{w}_{r_{\\varepsilon}}\\in \\text { int } \\mathscr{A}$, by \\eqref{67} and the fact that\n $$\\sqrt{1-2x}\\ge 1-2x\\text{~for~}x\\in[0,\\frac{1}{2}] ,$$ we deduce that \\begin{equation}\n \t\\begin{aligned}\n \t\t&\\mathrm{e}^{c_{i}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)}=\\frac{\\frac{R_{i} a_{i}}{P_{i}}+\\sum\\limits_{j \\neq i} \\mathrm{e}^{c_{j}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)} R_{i} R_{j} \\alpha_{i j} a_{i j}}{2 R_{i}^{2} \\alpha_{i i} a_{i i}}\\\\\n \t\t&\\times\\left(1+\\sqrt{1-\\frac{4 b_{i} R_{i}^{2} \\alpha_{i i} a_{i i}}{\\lambda\\left(\\frac{R_{i} a_{i}}{P_{i}}+\\sum\\limits_{j \\neq i} \\mathrm{e}^{c_{j}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)} R_{i} R_{j} \\alpha_{i j} a_{i j}\\right)^{2}}}\\right)\\\\\n \t\t&\\geq 
\\frac{\\frac{R_{i} a_{i}}{P_{i}}+\\sum_{j \\neq i} \\mathrm{e}^{c_{j}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)} R_{i} R_{j} \\alpha_{i j} a_{i j}}{R_{i}^{2} \\alpha_{i i} a_{i i}}-\\frac{2 b_{i}}{\n \t\t\t\\lambda\\left(\\frac{R_{i} a_{i}}{P_{i}}+\\sum\\limits_{j \\neq i} \\mathrm{e}^{c_{j}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)} R_{i} R_{j} \\alpha_{i j} a_{i j}\\right)\n \t\t}.\n \t\\end{aligned}\n \\end{equation} \n By \\eqref{k1} and Jensen's inequality, we conclude that \t\n \\begin{equation}\\label{w}\n \t\\mathrm{e}^{c_{i}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)}\n \t\\geq \\frac{\\frac{R_{i}|V|}{P_{i}}+\\sum\\limits_{j \\neq i} \\mathrm{e}^{c_{j}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)} R_{i} R_{j} \\alpha_{i j} a_{i j}}{R_{i}^{2} \\alpha_{i i} a_{i i}}-\\frac{2 P_{i} b_{i}}{\\lambda|V| R_{i}}, \\quad i=1, \\ldots, n.\n \\end{equation}\t\n From now on, we understand\n \\begin{equation}\n \ta_{i}=a_{i}\\left(w_{i,r_{\\varepsilon}} \\right), \\quad a_{i j}=a_{i j}\\left(w_{i,r_{\\varepsilon}}, w_{j,r_{\\varepsilon}}\\right), \\quad i, j=1, \\ldots, n.\n \\end{equation}\t\n\nThen by \\eqref{w} and \\eqref{459}, we deduce that \n\\begin{equation}\\label{eb}\n\t\\begin{aligned}\n\t\tR_{i}^{2} \\alpha_{i i} a_{i i} \\mathrm{e}^{c_{i}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)}-\\sum_{j \\neq i} \\mathrm{e}^{c_{j}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)} R_{i} R_{j} \\alpha_{i j} a_{i j} & \\geq \\frac{R_{i}|V|}{P_{i}}-\\frac{2 P_{i} R_{i} b_{i}}{\\lambda|V|} \\alpha_{i i} a_{i i} \\\\\n\t\t& \\geq \\frac{R_{i}|V|}{P_{i}}-\\frac{4 \\alpha_{i i} P_{i} R_{i} b_{i}}{\\lambda}, \\quad i=1, \\ldots, n.\n\t\\end{aligned}\n\\end{equation}\nBy \\eqref{459} and \\eqref{eb}, we conclude that \n\\begin{equation}\\label{l}\n\t\\begin{aligned}\n\t\t&\\left(\\mathrm{e}^{c_{1}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)}, \\ldots, \\mathrm{e}^{c_{n}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)}\\right)^{T} \\\\\n\t\t&\\geq|V| 
\\tilde{Q}^{-1}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right) P^{-1} R \\mathbf{1}-\\frac{4 \\tilde{Q}^{-1}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right) P R}{\\lambda} \\operatorname{diag}\\left\\{\\alpha_{11}, \\ldots, \\alpha_{n n}\\right\\} \\mathbf{b} \\\\\n\t\t&\\geq(1-\\varepsilon) Q^{-1} P^{-1} R \\mathbf{1}-\\frac{8 Q^{-1} P R}{\\lambda|V|} \\operatorname{diag}\\left\\{\\alpha_{11}, \\ldots, \\alpha_{n n}\\right\\} \\mathbf{b} \\\\\n\t\t&=(1-\\varepsilon) \\mathbf{1}-\\frac{8 Q^{-1} P R}{\\lambda|V|} \\operatorname{diag}\\left\\{\\alpha_{11}, \\ldots, \\alpha_{n n}\\right\\} \\mathbf{b} .\n\t\\end{aligned}\n\\end{equation}\nIt follows that \n\\begin{equation}\n\t\\int\\limits_{V}\\left(1-\\mathrm{e}^{c_{i}\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)} \\mathrm{e}^{u_{i}^{0}+w_{i,r_{\\varepsilon}}}\\right) \\mathrm{d} \\mu \\leq|V| \\varepsilon+\\frac{8}{\\lambda} \\sum_{j=1}^{n}\\left(Q^{-1}\\right)_{i j} P_{j} R_{j} b_{j} \\alpha_{j j}, \\quad i=1, \\ldots, n.\n\\end{equation}\nBy \\eqref{77}, there exists a constant $C_{\\varepsilon}$ such that\n\t\\begin{equation}\n\t\tJ\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right) \\leq \\frac{|V| \\lambda \\varepsilon}{2} \\sum_{i=1}^{n} \\frac{R_{i}}{P_{i}}+C_{\\varepsilon}.\n\t\\end{equation}\nBy Lemma \\ref{u8}, this implies that there exists $C$ independent of $\\lambda$ so that\n\\begin{equation}\\label{f}\n\tJ\\left(\\mathbf{w}_{r_{\\varepsilon}}\\right)-\\inf _{\\mathbf{w} \\in \\partial \\mathscr{A}} J(\\mathbf{w}) \\leq \\frac{|V| \\lambda}{2}\\left(\\sum_{i=1}^{n} \\frac{R_{i}}{P_{i}} \\varepsilon-\\min _{1 \\leq i \\leq n}\\left\\{\\frac{R_{i}}{P_{i}}\\right\\}\\right)+C(\\sqrt{\\lambda}+\\ln \\lambda+1).\n\\end{equation}\nWe can get the desired conclusion by taking $\\varepsilon$ suitably small and $\\lambda$ sufficiently large in \\eqref{f}.\n\nWe now complete the proof.\n\\end{proof}\nFrom Lemmas \\ref{x4} and \\ref{f1}, there exists $\\lambda_{2}:= \\max\\{ \\sigma,\\lambda_{0} \\}$ such that for all 
$\\lambda>\\lambda_{2}$, we can find $\\mathbf{w}_{0} \\in int \\mathscr{A}$ such that $\\mathbf{w}_{0}$ is a minimizer of $J$. It is easy to check that $\\mathbf{v}_{0}:= \\mathbf{w}_{0}+ \\mathbf{c}({\\mathbf{w}_{0}})$ is a critical point of $I$, which implies that $\\mathbf{v}_{0}$ is a solution of equations \\eqref{510}. Thus, we could establish the conclusion (ii) of Theorem \\ref{31}.\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nSpontaneous chiral symmetry breaking (S$\\chi$SB) is fundamental to our\nunderstanding of low energy hadronic phenomena and it is thus\nimportant to demonstrate quantitatively that it is a consequence of\nQCD. A natural candidate for such investigations is the numerical\nsimulation of QCD on a spacetime lattice. S$\\chi$SB, however, presents\nthe lattice approach with a twofold challenge.\n\nThe first is that spontaneous symmetry breaking does not occur in a\nfinite volume. In QCD, a possible signal of S$\\chi$SB is the presence\nof a non-vanishing quark condensate defined as:\n\\begin{equation}\n-\\Sigma\\equiv\\langle\\bar qq\\rangle =\\lim_{\\mbox{\\tiny $m\\to 0$}}\n\\lim_{\\mbox{\\tiny $V\\to\\infty$}} \\langle\\bar\nqq\\rangle_{m,V}\n\\ ,\n\\label{eq:conddef}\n\\end{equation}\nwhere $\\langle\\bar qq\\rangle_{m,V}$ is the condensate for finite volume $V$\nand mass $m$. The double limit in \\eq{eq:conddef} is rather\nchallenging numerically! To get around this problem, we resort to a\nfinite-size scaling analysis. This involves studying the scaling of\nthe condensate with $V$ and $m$ as the limit of restoration of $\\chi$S\nis approached ($m\\to 0$, $V$ finite).\n\nSuch a study requires good control over the chiral properties of the\ntheory, which is the second challenge. 
Indeed, at finite lattice\nspacing, ``reasonable'' discretizations of fermions either break\ncontinuum $\\chi$S explicitly or lead to extraneous fermion species\n\\cite{Nielsen:1981hk}. To minimize this problem, we resort to recently\nrediscovered \\cite{Hasenfratz:1997ft,Neuberger:1998wv} Ginsparg-Wilson\n(GW) fermions \\cite{Ginsparg:1982bj} which break continuum $\\chi$S in\na very mild and controlled fashion and actually have a slightly\ngeneralized $\\chi$S even at finite lattice spacing\n\\cite{Luscher:1998pq}.\n\n\\vskip -9.5cm\n\\rightline{CERN-TH\/99-273}\n\\rightline{CPT-99\/PE.3886}\n\\vskip +9.1cm\n\n\n\n\\section{Light quarks on a torus}\n\nIn a large periodic box of volume $V=L^4$ such that $F_\\pi L\\gg 1$,\nfor small quark masses and assuming the standard pattern of S$\\chi$SB\nwith $N_f\\ge 2$, the QCD partition function is dominated by the\nnearly massless pions; the system can be described with the first\nfew terms of a chiral lagrangian \\cite{Gasser:1988zq}. If, in addition,\n$m\\to 0$~\\footnote{We assume here for simplicity that the $N_f$\nflavors all have mass $m$.} so that $M_\\pi\nL\\simeq\\frac{\\sqrt{2m\\Sigma}}{F_\\pi}L\\ll 1$, the global mode of\nthe chiral lagrangian field $U\\in SU(N_f)$ dominates the partition\nfunction, leading to a regime of restoration of\n$\\chi$S \\cite{Gasser:1987ah}.\n\nIn the quenched approximation to which we restrict here, topological\nzero modes of the Dirac operator induce $1\/m$ singularities in\n$\\langle\\bar qq\\rangle_{m,V}$ as $m\\to 0$. To subtract these contributions, we\nwork in sectors of fixed topological charge. 
Generalizing the line of\nargument given above, the partition function\n$Z_\\nu$, in a sector of topological charge $\\nu$, was recently\nevaluated \\cite{Osborn:1998qb} for the quenched case\\footnote{The\n original unquenched treatment is given in \\cite{Leutwyler:1992yt}.}.\nThe quark condensate in sector $\\nu$, proportional to the derivative\nof $\\ln {Z_\\nu}$ w.r.t.\\ $m$, is then $-\\Sigma_\\nu\\equiv\\langle\\bar\nqq\\rangle_{m,V,\\nu}$, such that \\cite{Osborn:1998qb}\n\\begin{equation}\n\\frac{\\Sigma_\\nu}{\\Sigma} = z \\; [ I_\\nu(z) K_\\nu(z) + I_{\\nu+1}(z)\nK_{\\nu-1}(z)]+\\frac{\\nu}{z},\n\\label{qpt}\n\\end{equation}\nwhere $z \\equiv m \\Sigma V$ and $I_\\nu(z)$, $K_\\nu(z)$ are the\nmodified Bessel functions. As advertised, there is a divergence $\\sim\n1\/m$ in sectors with topology. These terms, however, are independent\nof $\\Sigma$.\n\n\\eq{qpt} summarizes the scaling of the quark condensate with the\nvolume and quark mass in the global mode regime, as a function\nof only one non-perturbative parameter: $\\Sigma$. Thus, by fitting the\ndependence of the finite-volume condensate in quark mass and volume to\nMonte Carlo data, we can extract $\\Sigma$ in a perfectly controlled\nmanner.\n\n\n\\section{Ginsparg-Wilson fermions}\n\nTo perform the finite-volume scaling analysis outlined above, we need\nto be able to reach the chiral restoration regime without excessive\nfine-tuning and we need an index theorem to control the contribution\nof topological zero modes. Both these requirements are satisfied by\nGW fermions \\cite{Hasenfratz:1998jp}. In particular, \nthe leading cubic UV divergence of the condensate is known\nanalytically for GW fermions and can thus be subtracted {\\rm\nexactly}. The resulting subtracted condensate,\n$\\Sigma^{sub}_{\\nu}$, however, is still divergent:\n\\begin{eqnarray}\n\\Sigma^{sub}_{\\nu}(a) = C_2 \\frac{m}{a^2} + \\cdots +\n\\Sigma_{\\nu}, \\label{eq:sigsub}\n\\end{eqnarray}\nwhere $a$ is the lattice spacing. 
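As an illustration of how the scaling law \eqref{qpt} is used, the finite-volume ratio $\Sigma_\nu/\Sigma$ can be evaluated with a few lines of self-contained Python (a sketch only, not part of our analysis code; it assumes integer $\nu$ and computes the modified Bessel functions from their standard integral representations):

```python
import math

def bessel_i(nu, z, n=4000):
    # I_nu(z) = (1/pi) * integral_0^pi exp(z cos t) cos(nu t) dt  (integer nu)
    h = math.pi / n
    return sum(math.exp(z * math.cos((j + 0.5) * h)) * math.cos(nu * (j + 0.5) * h)
               for j in range(n)) * h / math.pi

def bessel_k(nu, z, tmax=30.0, n=4000):
    # K_nu(z) = integral_0^inf exp(-z cosh t) cosh(nu t) dt  (integer nu)
    h = tmax / n
    return sum(math.exp(-z * math.cosh((j + 0.5) * h)) * math.cosh(nu * (j + 0.5) * h)
               for j in range(n)) * h

def condensate_ratio(z, nu):
    """Sigma_nu / Sigma of Eq. (qpt), with z = m * Sigma * V."""
    return z * (bessel_i(nu, z) * bessel_k(nu, z)
                + bessel_i(nu + 1, z) * bessel_k(nu - 1, z)) + nu / z
```

For $|\nu|=1$ the $\nu/z$ term reproduces the quenched $1/m$ divergence at small $z$, while for $z\to\infty$ the ratio tends to 1, i.e. the infinite-volume condensate is recovered.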
The coefficients of the divergences\nare not known a priori and have to be determined, preferably\nnon-perturbatively. For the values of $m$ and $a$ considered below,\nhowever, only the quadratic divergence is important numerically,\nweaker divergences being suppressed by higher powers of $m$. A final\nmultiplicative renormalization is still required to eliminate a\nresidual logarithmic UV divergence in $\\Sigma_{\\nu}$.\n\nIn the present work, we use Neuberger's implementation of\nGW fermions encoded in the Dirac\noperator \\cite{Neuberger:1997fp,Neuberger:1998wv}:\n\\begin{equation}\naD_N=(1+s)\\left[\n1-A\/\\sqrt{A^{\\dagger}A}\\right]\n\\ ,\\label{eq:Dndef}\n\\end{equation}\nwith $A=1+s - aD_W$ where $D_W$ is the standard Wilson-Dirac operator. The\nparameter $s$ must satisfy $|s|<1$.\n\n\\section{Numerical results}\n\nWe work in the quenched approximation on hypercubic lattices with\nperiodic boundary conditions for gauge and fermion\nfields. We choose $\\beta=5.85$, which corresponds to $a^{-1}\\simeq\n1.5\\mbox{ GeV}$ \\cite{Edwards:1997xf}, and use standard methods to obtain\ndecorrelated gauge-field configurations.\n\nTo evaluate $1\/\\sqrt{A^\\dagger A}$ in \\eq{eq:Dndef}, we use a\nChebyshev approximation, $P_{n,\\epsilon}(A^\\dagger A)$, where\n$P_{n,\\epsilon}$ is a polynomial of degree $n$, which gives an\nexponentially converging approximation to $1\/\\sqrt{x}$ for $x\\in\n[\\epsilon,1]$ \\cite{Hernandez:1998et}. The cost of a multiplication by\n$D_N$ is linear in $n$ and becomes rapidly high. To reduce $n$\nsubstantially, we perform the improvements described in\n\\cite{Hernandez:1999cu}. We take $s=0.6$, a value at which\nNeuberger's operator is nearly optimally local for $\\beta=5.85$\n\\cite{Hernandez:1998et}.\n\nTo determine whether a gauge configuration belongs to the $\\nu=0$ or\n$\\pm 1$ sectors, we compute the few lowest eigenvalues of $D_N^\\dagger\nD_N$ by minimizing the relevant Ritz functional \\cite{ritz}. 
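To make the construction concrete, the exponential convergence of such a Chebyshev approximation to $1/\sqrt{x}$ on $[\epsilon,1]$ can be checked with a short self-contained Python sketch (illustrative scalar version only; function names are ours):

```python
import math

def cheb_coeffs(f, n, lo, hi):
    # Chebyshev coefficients of the degree-n interpolant of f on [lo, hi]
    N = n + 1
    nodes = [math.cos(math.pi * (j + 0.5) / N) for j in range(N)]
    vals = [f(0.5 * (hi + lo) + 0.5 * (hi - lo) * t) for t in nodes]
    return [2.0 / N * sum(vals[j] * math.cos(k * math.pi * (j + 0.5) / N)
                          for j in range(N)) for k in range(N)]

def cheb_eval(c, x, lo, hi):
    # Clenshaw recurrence; the approximant is c[0]/2 + sum_k c[k] T_k(t)
    t = (2.0 * x - lo - hi) / (hi - lo)
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + ck, b1
    return t * b1 - b2 + 0.5 * c[0]

def max_rel_err(n, eps=0.05):
    # maximal relative error of the approximation to 1/sqrt(x) on [eps, 1]
    f = lambda x: 1.0 / math.sqrt(x)
    c = cheb_coeffs(f, n, eps, 1.0)
    xs = [eps + (1.0 - eps) * i / 500.0 for i in range(501)]
    return max(abs(cheb_eval(c, x, eps, 1.0) - f(x)) * math.sqrt(x) for x in xs)
```

The error decreases roughly geometrically with the degree $n$, but the rate degrades as $\epsilon\to 0$, consistent with the need to reduce $n$ mentioned above.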
As pointed\nout in \\cite{Edwards:1998wx}, it is advantageous for this computation,\nas well as for the inversion of $(D_N^\\dagger+m)(D_N+m)$, to stay in a\ngiven chiral subspace. Having determined the topological charge of a\nconfiguration, we then obtain the condensate of \\eq{eq:sigsub} in\nthree volumes ($8^4$, $10^4$ and $12^4$) by computing\n\\begin{equation}\n\\Sigma^{sub}_\\nu= \\frac{1}{V}\\left\\langle\\mathrm{Tr}'\\left\\{\n\\frac{1}{D_N+m}+{\\mathrm h.c.}-\\frac{a}{1+s}\\right\\}\\right\\rangle_\\nu, \n\\label{eq:condtrace}\n\\end{equation}\nwhere the trace is taken in the chiral sector opposite to that\nwith the zero modes \\cite{Edwards:1998wx} and the gauge average is\nperformed in a sector of fixed topology $\\nu$. With this definition, terms\n$\\sim 1\/m$ in \\eq{qpt} are absent\\footnote{Though not shown\n explicitly in \\protect\\eq{eq:condtrace}, we correctly account for\n the real eigenvalues of $D_N$ at $2(1+s)\/a$.}. Three gaussian\nsources and a multimass solver \\cite{Jegerlehner:1997rn} were used\nto compute the trace in \\eq{eq:condtrace} for seven values of $m$.\n\nWe show in \\fig{fig:fss} our results for $a^3\\Sigma^{sub}_{\\nu=\\pm\n 1}\/am$ as a function of bare quark mass. We have 15, 10 and 7 gauge\nconfigurations on our $8^4$, $10^4$ and $12^4$ lattices,\nrespectively\\footnote{For the larger volumes, $\\nu=0$ configurations\n are rare while the calculation of $\\Sigma^{sub}_{\\nu=0}$ requires\n large statistics \\cite{Hernandez:1999cu}. $|\\nu|>1$ configurations,\n on the other hand, are rare in the smaller volumes.}. The solid\nlines are a fit of the data to \\eqs{eq:sigsub}{qpt} for all volumes\nand masses. This fit has only two parameters, namely $\\Sigma$ and the\ncoefficient of the quadratic divergence. 
We find\n$a^3\\Sigma=0.0032(4)$ and $C_2=-0.914(8)$.\n\n\n\\begin{figure}[htb]\n\\epsfxsize=7cm\\epsffile[18 270 480 580]{fss_fig.ps} \\caption{\\small\\it\nMass dependence of the condensate for the $8^4$ (circles),\n$10^4$ (squares) and $12^4$ (triangles) lattices. The\ncurves result from a fit to \\protect\\eqs{eq:sigsub}{qpt}.\n\\vspace{-0.5cm}}\n\\label{fig:fss}\n\\end{figure}\n\n\nClearly, the formulae derived in quenched $\\chi$PT give a very good\ndescription of the numerical data. The value of $\\Sigma$ that we\nextract is, in physical units, $\\Sigma(\\mu\\sim\n1.5\\mbox{ GeV})=(221^{+8}_{-9}\\mbox{ MeV})^3$, up to a multiplicative\nrenormalization constant, which has not been computed yet for\nNeuberger's operator. The quoted error on $\\Sigma$ is purely\nstatistical and the statistics are rather small. Quenching and\ndiscretization errors, for instance, as well as possible contributions\nfrom higher orders in $\\chi$PT are not accounted for. Nevertheless,\nthe value obtained and the agreement with q$\\chi$PT support the\nstandard scenario of S$\\chi$SB.\n\nWe further consider the mean value of the lowest non-zero eigenvalue\nof $\\sqrt{D^\\dagger_N D_N}$ in different topological sectors. In\nRandom Matrix Theory the distributions of these eigenvalues are given\nsolely in terms of $\\Sigma$ \\cite{lmin}. Our determination of $\\Sigma$\ntherefore yields predictions for the mean values. These can then be\ncompared to the average values obtained in simulation. 
With our $8^4$\nresults, we find agreement within roughly one standard deviation for\n$|\\nu|=1$ (29 configurations) and two for $\\nu=0$ (41 configurations)\n\\cite{Hernandez:1999cu}.\n\n\n{\\small\\it Note: Related work with Neuberger's operator can be found\n in \\cite{Edwards:1998wx,Edwards:1999ra,Damgaard:1999tk}.}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Network Architectures}\n\\label{app:arch}\n\n\\subsection{Autoencoders}\nThe policy and dynamics autoencoders are parameterised by Transformers using stacked self-attention and point-wise, fully connected layers for the encoder, and a fully connected feed-forward network for the decoder. \n\n\\textbf{Encoders:} The encoders consist of one layer composed of two sublayers, followed by another fully connected layer. The first sublayer, is a single-head self-attention mechanism, and the second is a simple fully connected feed-forward network. We employ a residual connection around each of the two sub-layers, followed by layer normalization and dropout. We use a dropout of 0.1 for all experiments. To facilitate these residual connections, all sublayers in the model, as well as the embedding layers, produce outputs of dimension $d_{model} = 64$. The second layer of the encoder projects the output of the first layer into the embedding space (from $d_{model}$ to $d_{emb}$). \n\nThe \\textbf{policy encoder} takes as input a set of state-action pairs $(s_t, a_t)$ from an full trajectory and outputs an embedding for the policy. 
\n\nThe \\textbf{dynamics encoder} takes as input a set of state-action-next-state tuples $(s_t, a_t, s_{t+1})$ from a full trajectory and outputs an embedding for the dynamics.\n\nThe dimension of both the policy and dynamics embedding is $d_{emb} = 8$ for all environments, with the exception of Swimmer, which uses a dynamics embedding of dimension 2.\n\n\\textbf{Decoders:} The decoder is a simple fully connected feed-forward network with three layers and ReLU activations after the first two layers.\n\nThe \\textbf{policy decoder} takes as input the state of the environment and the policy embedding (outputted by the policy encoder) and outputs an action (i.e. the predicted action taken by the agent). \n\nThe \\textbf{environment decoder} takes as input the state of the environment, an action, and the dynamics embedding (outputted by the dynamics encoder) and outputs a state (i.e. the predicted next state in the environment). \n\nThe dimensions of the states and actions depend on the given environment. \n\n\\subsection{The Policy-Dynamics Value Function}\n\\label{app:pdvf}\nThe Policy-Dynamics Value Function (PD-VF) takes as inputs the initial state of the environment, as well as a policy embedding and a dynamics embedding, and outputs a scalar representing the predicted expected return.\n\nPD-VF is parameterised by a fully connected feed-forward network. First, the environment state and dynamics embedding are concatenated and passed through a linear layer with output dimension 64, followed by a ReLU nonlinearity. The second layer also has output dimension 64 but is followed by a hyperbolic tangent nonlinearity. The output of the second layer is then passed through another linear layer with output dimension equal to the square of the policy embedding dimension, which is 64 in this case. Then, this output is rearranged in the form of a lower triangular matrix $L$. 
This matrix is used to construct a Hermitian positive-definite matrix $A$ using the Cholesky decomposition, $A = L L^T$. Finally, the value outputted by the network is obtained by computing $z_{\\pi}^T A z_{\\pi}$, where $z_{\\pi}$ is the policy embedding. \n\n\\subsection{Baselines}\n\\label{app:baselines}\nAll the pretrained PPO policies, as well as all the baselines (except for CondPolicy, as explained below) and the ablations, use the same actor-critic network architecture. Both the actor and the critic are parameterised as two-layer fully connected networks with hidden size 64 and hyperbolic tangent nonlinearities after each layer. Note that the weights are not shared by the two networks. The critic network has another linear layer on top that outputs the estimated value. The actor network also has a linear layer on top that outputs a vector with the same number of dimensions as the action space. The actions are sampled from a Gaussian distribution with diagonal covariance matrix and means defined by the vector outputted by the actor network. The CondPolicy baseline has a similar architecture. The only difference is the first layer of both the actor and the critic, which has a larger input dimension due to the fact that these networks also take as input the policy embedding (along with the environment state). \n\n\n\\section{Training Details}\n\\label{app:training}\nFor experiments on Spaceship and Swimmer, we use only $N_d = 1$ step to infer the dynamics embedding, while for Ant-wind we use $N_d = 2$ and for Ant-legs we use $N_d = 4$. Note that in all four domains, we only need a few steps to infer the environment dynamics, which allows us to quickly find a good policy for acting during the rest of the episode. Consequently, this results in good performance when evaluated on a single episode. 
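The quadratic-form output of the PD-VF described in the previous section ($A = LL^T$, value $z_\pi^T A z_\pi$) can be sketched in a few lines (a minimal pure-Python illustration with made-up inputs; in the actual model the flat vector is produced by the network and $z_\pi$ by the policy encoder):

```python
def pdvf_value(raw, z_pi):
    # `raw` is the flat network output of length d*d, rearranged into a
    # lower-triangular matrix L; A = L L^T is positive semi-definite by
    # construction, so z^T A z = ||L^T z||^2 >= 0.
    d = len(z_pi)
    assert len(raw) == d * d
    L = [[raw[i * d + j] if j <= i else 0.0 for j in range(d)] for i in range(d)]
    Lt_z = [sum(L[i][j] * z_pi[i] for i in range(d)) for j in range(d)]
    return sum(v * v for v in Lt_z)
```

For example, `pdvf_value([1.0, 2.0, 3.0, 0.5], [1.0, 2.0])` builds $L = \begin{pmatrix}1 & 0\\ 3 & 0.5\end{pmatrix}$ and returns 50.0.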
\n\nFirst, we have the \\textbf{reinforcement learning phase}, in which we pretrain 5 different initializations of PPO policies in each of the 20 environments in our distribution (both those used for training and those used for evaluation). We train all policies for $3\\times 10^6$ environment interactions, which we have found to be enough for all of them to converge to a stable expected return. \n\nThen, in the \\textbf{self-supervised learning phase}, we use the policies pretrained on the training environments (75 policies for each domain) to generate trajectories through the training environments. For each policy-environment pair, we generate 200 trajectories, half of which are used for training the policy and dynamics autoencoders and the rest are used for evaluation. We train the autoencoders on this data for a maximum of 200 epochs and we save the models with the lowest evaluation loss. Note that the autoencoders are never trained on trajectories generated in the evaluation environments or by policies pretrained on those environments, but only on data produced by interactions with the training environments. \n\nOnce we have the pretrained policy and dynamics autoencoders, we use them for learning the policy-dynamics value function in the \\textbf{supervised training phase}. To do this, we again generate 40 trajectories in the training environments (using only the policies pretrained on those environments). Half of these trajectories are used for training the PD-VF, while the rest are used for evaluation. In our experiments, we have found 20 trajectories from each policy-environment pair to be enough for training the model. For each trajectory, a policy embedding is obtained by passing the full trajectory through the policy encoder. Similarly, a corresponding dynamics embedding is obtained for each trajectory by passing the first $N_d$ transitions of that trajectory through the dynamics encoder. The initial state and the return of that trajectory are also recorded. 
Now we have all the data needed for training the PD-VF with supervision. The PD-VF takes as inputs the initial state, the policy and dynamics embeddings and outputs a prediction for the expected return (corresponding to acting with that policy in the given environment). It is trained with an $\\ell_2$ loss using the observed return. For the initial training stage of the PD-VF, we use 200 epochs, while for the second stage that includes data aggregation for the value function and policy decoder, we use 100 epochs. The second stage is repeated a maximum of 20 times (each training for 100 epochs). We select the model that obtains the lowest loss on the evaluation data (out of all the models after each stage). We use this model for probing performance on the evaluation environments. \n\n\\section{Hyperparameters}\n\\label{app:hyper}\nFor training the PPO policies, as well as the baselines and ablations, we searched for the learning rate in [0.0001, 0.0003, 0.0005, 0.001] \nand found 0.0003 to work best across the board. The entropy coefficient was set to 0.0, value loss coefficient 0.5, number of PPO epochs 10, number of PPO steps 2048, number of mini batches 32, gamma 0.99, and generalized advantage estimator coefficient 0.95. We also linearly decay the learning rate.\nThese values were not searched over since they have been previously optimized for MuJoCo domains and have been shown to be robust across these environments.\n\nFor MAML, we used the best hyperparameters found in the original paper for MuJoCo, namely a meta batch size of 20, 10 batches, and 8 workers.\n\nFor the autoencoders, we did a grid search over the learning rate in [0.0001, 0.001, 0.01] and found 0.001 to be best for the dynamics and 0.01 to be best for the policy. We also searched for the right batch size in [8, 32, 256, 2048] and found 8 to work best for the dynamics and 2048 for the policy. 
We also did grid searches over $d_{emb} \\in [2, 8, 32]$ and found $d_{emb} = 8$ for the policy autoencoders and $d_{emb} = 2$ for the dynamics autoencoders (except for Ant, in which we use $d_{emb} = 8$. We also searched for the hidden dimension of the transformers $d_{model} \\in [32, 64, 128]$ and found $d_{model} = 64$ to work best for both the policy and the dynamics embeddings.\n\nFor the value function, we tried different values for the number of epochs for the initial training phase $N_{ep, 1} \\in $ [1000, 500, 200, 100] and for the second training phase $N_{ep, 2} \\in $ [500, 200, 100] and we found 200 and 100 (i.e. each of the 20 data aggregation stages has 100 epochs) to work best, respectively. We also tried different learning rates from [0.0005, 0.001, 0.005, 0.01] and found 0.005 to be the best. Similarly, we tried batch sizes in [64, 128, 256] and found 128 to be the best. \n\nAll the results shown in this paper are obtained using the best hyperparameters found in our grid searches.\n\n\\section{Environments}\n\\label{app:envs}\n\n\\subsection{Spaceship Environment}\nThe source code contains the Spaceship domain that we designed, which is wrapped in a \\textit{gym} environment, so it can be easily integrated with any RL algorithm and used to evaluate agents. \n\nThe task consists of moving a spaceship with a unit point charge from one end of a 2D room through a door at the other end. The action space consists of a fixed-magnitude force vector that is applied at each timestep. The room contains two fixed electric charges that deflect\/attract the ship as it moves through the environment. \n\nAt the beginning of each episode, the agent\\'s location is initialized at the center-bottom of the room with coordinates (2.5, 0.2). The target door is always located at the center-top of the room with coordinates (2.5, 5.0). The size of the room is 5, the size of the door is 1, and the temporal resolution is 0.3 (i.e. 
the time interval used to compute the next position of the spaceship given the current location and the applied force). The observation consists of the spaceship\\'s 2D location in the room (whose coordinates can be any real number between 0 and 5, the size of the room) and the action consists of the 2D force applied by the agent. The episode ends either when the agent hits a wall, exits the room through the door, or has taken more than 50 steps in the environment. At the end of an episode, the agent receives a reward that decreases exponentially with its distance to the target door. The decay factor is set to 3.0. For all other steps, the agent receives no reward. \nThe distribution of dynamics is a centered circle with radius 1.5.\n\n\\subsection{MuJoCo Environments}\nFor Swimmer we use a circle with radius 0.1 to sample the environment dynamics, while Ant-wind uses a radius of 4.0. For all three domains with continuous distribution of dynamics (i.e. Spaceship, Swimmer, and Ant-wind), we sample 15 environments for training and we hold out 5 for evaluation. The evaluation environments have dynamics covering a closed interval from the distribution, thus testing the ability of the model to extrapolate (rather than interpolate) to different dynamics. The Ant-wind domain has a total of 16 environments, 4 of which are used for evaluation.\n\n\n\\section{Evaluation}\n\\label{app:eval}\nIn this section, we describe in detail the evaluation method and how the results reported here are obtained. For each trained model (i.e. PD-VF, a baseline or an ablation) and for each (unseen) test environment, we use that model to obtain a full trajectory through the given environment. This is repeated 10 times and the average return of the 10 runs is recorded. Then, we compute the mean and standard deviation (of this average return) across 5 different seeds for each model. 
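In code, this aggregation amounts to the following (an illustrative sketch; we assume the sample standard deviation is the one reported):

```python
import statistics

def evaluate(returns_by_seed):
    # returns_by_seed[s] holds the 10 episode returns obtained with seed s
    # on a single (unseen) test environment.
    per_seed_avg = [statistics.mean(runs) for runs in returns_by_seed]
    # mean and standard deviation across seeds of the per-seed average return
    return statistics.mean(per_seed_avg), statistics.stdev(per_seed_avg)
```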
These are the statistics shown in Figures 4 and 5.\n\nTo generate the t-SNE plots, we generated 10 trajectories for each policy-environment pair, including both the training and the evaluation ones. The encoders are used to obtain policy and dynamics embeddings corresponding to each trajectory. Then, t-Distributed Stochastic Neighbor Embedding (t-SNE) with perplexity 30 is applied to produce Figures 6 and 7. Figure 6 shows the t-SNE for the dynamics embedding, where each point is colored by the environment in which the corresponding trajectory (used to obtain that dynamics embedding) was collected. Conversely, Figure 7 shows the t-SNE for the policy embedding, where each point is colored by the policy which generated the corresponding trajectory (used to obtain that policy embedding). \n\n\n\\begin{figure*}[h!]\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/ablations_train_swim_8.png}}\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/ablations_train_space_8.png}}\n \n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/ablations_train_ant_8.png}}\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/ablations_train_ant_legs_8.png}}\n \\caption{\\textbf{Train Performance.} Average return on train environments in Swimmer (top-left), Spaceship (top-right), Ant-wind (bottom-left), and Ant-legs (bottom-right) obtained by PD-VF{} and a few ablations, namely NoDaggerPolicy, NoDaggerValue, Kmeans, and NN. PD-VF{} is comparable with or outperforms the ablations on the train environments. While some of these ablations perform reasonably well on the environments they are trained on, they generalize poorly to unseen dynamics.}\n \\label{fig:train_ablations}\n\\end{figure*}\n\n\n\\section{Analysis of Learned Embeddings}\n\\label{app:analysis}\n\nFigure~\\ref{fig:policy_embeddings_all} shows a t-SNE plot of the learned policy embeddings for Spaceship, Swimmer, and Ant (from left to right). 
The top and bottom rows color the embeddings by the policy and environment that generated the corresponding trajectory, respectively. Trajectories produced by the same policy have similar embeddings, while those generated in the same environment are not necessarily close in this embedding space. This shows that the policy embedding preserves information about the policy while disregarding elements of the environment (that generated the corresponding embedded trajectory).\n\n\\begin{figure}[h!]\n \\subfigure{\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_space_env_env.pdf}}\n \\subfigure{\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_swim_env_env.pdf}}\n \\subfigure{\\includegraphics[width=0.37\\columnwidth]{fig\/tsne_ant_env_env.pdf}}\n \n \\subfigure{\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_space_env_pi.pdf}}\n \\subfigure{\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_swim_env_pi.pdf}}\n \\subfigure{\\includegraphics[width=0.34\\columnwidth]{fig\/tsne_ant_env_pi.pdf}}\n \\caption{t-SNE plots of the learned environment embeddings $z_{d}$ for Spaceship, Swimmer, and Ant-wind (from left to right). The points are colored by the \\textit{environment} (top) and \\textit{policy} (bottom) used to generate the trajectory of the corresponding dynamics embedding.}\n \\label{fig:env_embeddings_all}\n\\end{figure}\n\n\nSimilarly, Figure~\\ref{fig:env_embeddings_all} shows a t-SNE plot of the learned dynamics embeddings on the three continuous control domains used for evaluating our method. The top row colors each point by the corresponding environment used to generate the trajectory (from which the embedding is inferred), while the bottom row colors each point by the corresponding policy. 
One can see that the embedding space retrieves the true dynamics distribution and preserves the smoothness of the 1D manifold.\n\n\\begin{figure}[ht!]\n \\subfigure{\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_space_pi_pi.pdf}}\n \\subfigure{\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_swim_pi_pi.pdf}}\n \\subfigure{\\includegraphics[width=0.34\\columnwidth]{fig\/tsne_ant_pi_pi.pdf}}\n \n \\subfigure{\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_space_pi_env.pdf}}\n \\subfigure{\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_swim_pi_env.pdf}}\n \\subfigure{\\includegraphics[width=0.37\\columnwidth]{fig\/tsne_ant_pi_env.pdf}}\n \\caption{t-SNE plots of the learned policy embeddings $z_{\\pi}$ for Spaceship, Swimmer, and Ant-wind (from left to right). The points are colored by the \\textit{policy} (top) and \\textit{environment} (bottom) used to generate the trajectory of the corresponding policy embedding.}\n \\label{fig:policy_embeddings_all}\n\\end{figure}\n\n\n\nImportantly, this analysis shows that the learned policy and dynamics embeddings are generally disentangled (i.e. information about the dynamics is not contained in the policy space and vice versa). This is important as we want the dynamics space to mostly capture information about the transition function and similarly, we want the policy space to capture variation in the agent behavior. The only exception is the dynamics space of Ant-wind, which contains information about both the environment and the policy. This is because in this environment, the policy is dominated by the force applied to the body of the ant, whose goal is to move forward (while incurring a penalty proportional to the applied force). Thus, depending on the wind direction in the training environment, the agent learns to apply a force of a certain magnitude, a characteristic captured in the embedding space. When evaluated on environments with different dynamics, that policy will still apply a similar force. 
Our experiments indicate that even if the dynamics space is not fully disentangled (i.e. it contains information about the policy as well as the environment), the PD-VF is still able to make effective use of the embeddings to find good policies for unseen environments and even outperform other state-of-the-art RL methods. \n\n\\section{The Challenge of Transfer}\n\\label{app:transfer}\nIn this section, we emphasize the fact that the family of environments we designed poses a significant challenge to current state-of-the-art RL methods. To do this, we train PPO policies on each of the environments in our set (until convergence) and evaluate them on all other environments. The results show that any of the policies trained in this way can drastically fail in other environments (with different dynamics) from our training and test sets. This demonstrates that our set of environments provides a wide range of dynamics and that a single policy trained in any of these environments does not generalize well to the other ones. Moreover, when evaluated on a single environment, the performance across the pretrained policies varies greatly, illustrating the diversity of collected behaviors (both optimal and suboptimal). This analysis further supports the need for learning about multiple policies (and their performance in various environments) in order to generalize across widely different scenarios (or dynamics in this case). 
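For instance, taking a few entries from the Ant-wind tables below, no single pretrained policy dominates across dynamics (an illustrative snippet; the numbers are copied verbatim from the first table):

```python
# Mean episode returns of four pretrained policies on environments E1 and E10
# (entries taken from the first Ant-wind table).
returns = {
    "P1":  {"E1": 598.0, "E10": -299.0},
    "P5":  {"E1": 935.0, "E10": -52.9},
    "P8":  {"E1": 500.0, "E10": 124.0},
    "P12": {"E1": 266.0, "E10": 240.0},
}

def best_policy(env):
    # policy with the highest mean episode return in the given environment
    return max(returns, key=lambda p: returns[p][env])
```

Here P5 is best on E1 yet obtains a negative return on E10, where P12 is best, illustrating that no single policy transfers across the whole range of dynamics.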
\n\n\n\\begin{table*}[t!]\n \\centering\n \\small\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}\n \\toprule\n & E1 & E2 & E3 & E4 & E5 & E6 & E7 & E8 & E9 & E10 \\\\ [0.5ex]\n \\midrule\n P1 & 598 & 354 & 128 & 372 & 291 & 95.9 & -51.8 & -50.9 & -246 & -299 \\\\ \n P2 & 512 & 461 & 503 & 349 & 228 & 135 & -1.44 & -46.8 & -28.1 & -271 \\\\ \n P3 & 689 & 620 & 593 & 500 & 334 & 80 & 0.4 & -152 & -51 & -258 \\\\ \n P4 & 654 & 665 & 557 & 519 & 29.9 & 180 & 177 & -9.41 & -90.5 & -218.5 \\\\ \n P5 & 935 & 962 & 947 & 930 & 853 & 648 & 429 & 287 & 155 & -52.9 \\\\ \n P6 & 811 & 838 & 794 & 778 & 710 & 600 & 386 & 247 & 123 & -5.14 \\\\ \n P7 & 624 & 659 & 408 & 451 & 351 & 474 & 394 & 82.9 & 151 & 55.9 \\\\ \n P8 & 500 & 470 & 442 & 393 & 321 & 468 & 410 & 315 & 209 & 124 \\\\ \n P9 & 303 & 326 & 297 & 295 & 265 & 254 & 243 & 238 & 11.1 & 223 \\\\ \n P10 & 293 & 54.8 & 294 & 293 & 250 & 226 & 200 & 200 & 180 & -1.28 \\\\ \n P11 & 473 & 236 & 212 & 218 & 243 & 144 & 83.9 & 132 & 107 & 136 \\\\ \n P12 & 266 & 264 & 268 & 242 & 214 & 181 & 55.6 & 72.7 & 239.4 & 240 \\\\ \n P13 & 422 & 669 & 612 & 527 & 401 & 270 & 205 & 128 & 68.5 & 103 \\\\ \n P14 & 436 & 362 & 424 & 366 & 259 & 296 & 97.3 & 55.7 & 24.9 & -2.44 \\\\ \n P15 & 420 & 484 & 264 & 125 & 131 & 66.7 & 44.5 & 13.1 & 5.43 & 35.1 \\\\ \n P16 & 671 & 769 & 573 & 270 & 189 & 212 & 153 & 96.4 & 19.7 & 0.290 \\\\ \n P17 & 784 & 793 & 683 & 600 & 56.9 & 200 & 56.8 & 12.4 & 4.4 & 19.8 \\\\ \n P18 & 755 & 703 & 564 & 213 & 170 & 129 & 58.1 & 2.17 & -103 & -43.7 \\\\ \n P19 & 182 & 593 & 415 & 65.5 & 250 & 112 & 25.8 & 37.1 & -94.9 & -19.2 \\\\ \n P20 & 297 & 589 & 518 & 350 & 134 & 76.3 & 6.55 & -51.5 & -185.4 & -11.2 \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{\\textbf{Performance of PPO policies on the Ant-wind domain.} A row shows the mean episode return of a single policy on all environments, while a column shows the mean episode return of all policies on a single environment. 
This table contains performance on the first 10 environments.}\n \\label{tab:ppo}\n\\end{table*}\n\n\\begin{table*}[t!]\n \\centering\n \\small\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}\n \\toprule\n & E11 & E12 & E13 & E14 & E15 & E16 & E17 & E18 & E19 & E20 \\\\ [0.5ex]\n \\midrule\n P1 & -336 & -136 & -55.2 & -7.47 & -8.11 & 168 & 262 & 473 & 498 & 603 \\\\ \n P2 & -42.2 & -101 & -28.3 & -112 & 14.2 & 132 & 109 & 345 & 510 & 545 \\\\ \n P3 & -54.1 & -10.1 & -232 & -6.25 & 59.9 & 120 & 207 & 136 & 563 & 372 \\\\ \n P4 & -279 & -278 & -135 & -4.81 & 5.49 & 113 & 276 & 150 & 484 & 196 \\\\ \n P5 & -264 & -236 & -69.8 & 14.0 & 77.6 & 248 & 457 & 602 & 813 & 634 \\\\ \n P6 & 20.9 & -112 & 98.7 & 147 & 14.9 & 194 & 321 & 524 & 611 & 639 \\\\ \n P7 & 0.700 & -5.74 & -61.6 & 9.51 & 89.0 & 267 & 212 & 412 & 579 & 536 \\\\ \n P8 & 39.6 & -33.5 & 11.7 & 53.9 & 21.5 & 206 & 218 & 29.9 & 469 & 509 \\\\ \n P9 & 7.05 & 216 & 225 & 224 & 248 & 253 & 284 & 266 & 314 & 281 \\\\ \n P10 & 176 & -159 & 5.98 & 5.03 & 57.8 & 20.6 & 77.0 & 287 & 249 & 539 \\\\ \n P11 & 92.8 & 148 & 206 & 244 & 277 & 351 & 280 & 485 & 556 & 582 \\\\ \n P12 & 235 & 235 & 232 & 122 & 139 & 151 & 165 & 299 & 272 & 265 \\\\ \n P13 & 84.6 & 171 & 230 & 285 & 312 & 300 & 328 & 523 & 640 & 504 \\\\ \n P14 & 45.2 & 159 & 126 & 302 & 369 & 363 & 537 & 457 & 512 & 396 \\\\ \n P15 & -4.98 & -70.8 & 41.3 & 360 & 571 & 654 & 687 & 546 & 393 & 537 \\\\ \n P16 & -7.77 & 19.7 & 63.4 & 248 & 457 & 600 & 740 & 493 & 731 & 804 \\\\ \n P17 & -20.0 & -102 & -8.87 & 69.8 & 254 & 444 & 515 & 658 & 749 & 703 \\\\ \n P18 & -93.7 & -59.9 & -11.6 & 125 & 275 & 434 & 577 & 679 & 781 & 781 \\\\ \n P19 & -10.1 & -16.9 & 1.38 & 18.6 & 86.5 & 126 & 372 & 508 & 489 & 652 \\\\ \n P20 & -118 & -62.4 & -125 & -81.7 & 20.1 & 82.9 & 131 & 195 & 341 & 392 \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{\\textbf{Performance of PPO policies on the Ant-wind domain.} A row shows the mean episode return of a single policy on all environments, 
while a column shows the mean episode return of all policies on a single environment. This table contains performance on the last 10 environments.}\n \\label{tab:ppo}\n\\end{table*}\n\n\n\n\\begin{table*}[t!]\n \\centering\n \\small\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}\n \\toprule\n & E1 & E2 & E3 & E4 & E5 & E6 & E7 & E8 & E9 & E10 \\\\ [0.5ex]\n \\midrule\n P1 & 139.17 & 133.61 & 119.29 & 96.78 & 69.8 & 38.3 & 7.88 & -19.59 & -41.18 & -54.88 \\\\\n P2 & 197.01 & 191.23 & 176.68 & 154.75 & 125.97 & 95.2 & 64.06 & 36.33 & 13.79 & -1.1 \\\\\n P3 & 140.38 & 135.81 & 122.1 & 100.15 & 72.56 & 41.97 & 11.69 & -15.47 & -37.79 & -51.59 \\\\ \n P4 & 139.01 & 134.82 & 121.15 & 98.86 & 71.07 & 41.13 & 10.26 & -17.24 & -39.25 & -54.68 \\\\\n P5 & 215.38 & 209.66 & 196.33 & 174.77 & 147.62 & 115.54 & 84.68 & 59.61 & 36.2 & 22.31 \\\\\n P6 & 207.08 & 201.55 & 186.96 & 165.49 & 138.86 & 107.64 & 77.12 & 49.94 & 28.6 & 14.55 \\\\\n P7 & 210.29 & 205.73 & 191.61 & 169.42 & 141.29 & 112.15 & 81.69 & 53.65 & 32.87 & 18.08 \\\\\n P8 & 213.98 & 209.7 & 198.61 & 179.11 & 152.56 & 124.49 & 95.18 & 67.78 & 45.47 & 32.82 \\\\\n P9 & 206.51 & 201.71 & 190.56 & 170.55 & 142.16 & 114.04 & 84.64 & 56.6 & 33.8 & 19.84 \\\\\n P10 & 204.85 & 201.04 & 186.64 & 168.29 & 141.0 & 113.45 & 80.69 & 53.39 & 33.46 & 19.9 \\\\\n P11 & 197.34 & 193.67 & 182.94 & 164.05 & 137.93 & 109.44 & 80.39 & 52.76 & 32.46 & 18.77 \\\\\n P12 & 204.92 & 201.11 & 186.85 & 166.49 & 138.03 & 108.22 & 79.32 & 50.95 & 29.97 & 16.23 \\\\\n P13 & 202.86 & 204.16 & 174.7 & 174.07 & 139.94 & 130.12 & 88.65 & 45.72 & 24.26 & 15.94 \\\\\n P14 & 205.89 & 201.64 & 187.07 & 166.63 & 138.54 & 108.87 & 77.48 & 51.66 & 29.76 & 15.68 \\\\\n P15 & 209.19 & 186.31 & 190.71 & 168.87 & 142.23 & 114.88 & 82.87 & 56.37 & 35.82 & 20.97 \\\\\n P16 & 214.29 & 204.26 & 188.0 & 165.51 & 140.3 & 109.66 & 80.35 & 56.25 & 37.05 & 23.41 \\\\\n P17 & 202.78 & 197.79 & 183.82 & 160.93 & 133.64 & 103.02 & 71.89 & 44.94 & 22.01 & 9.32 \\\\\n P18 & 
202.66 & 204.15 & 190.16 & 167.73 & 139.81 & 109.03 & 77.82 & 51.18 & 29.4 & 14.83 \\\\\n P19 & 208.89 & 204.35 & 191.05 & 168.31 & 139.9 & 109.16 & 79.06 & 51.99 & 28.93 & 14.33 \\\\\n P20 & 141.37 & 136.28 & 122.22 & 100.69 & 73.13 & 42.51 & 12.4 & -15.19 & -37.6 & -51.74 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{\\textbf{Performance of PPO policies on the Swimmer domain.} A row shows the mean episode return of a single policy on all environments, while a column shows the mean episode return of all policies on a single environment. This table contains performance on the first 10 environments.}\n \\label{tab:ppo}\n\\end{table*}\n\n\n\\begin{table*}[t!]\n \\centering\n \\small\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}\n \\toprule\n & E11 & E12 & E13 & E14 & E15 & E16 & E17 & E18 & E19 & E20 \\\\ [0.5ex]\n \\midrule\n P1 & -59.39 & -54.24 & -40.67 & -19.18 & 9.2 & 39.2 & 69.81 & 97.3 & 119.73 & 134.49 \\\\\n P2 & -6.11 & -0.47 & 13.55 & 36.02 & 65.06 & 96.16 & 126.34 & 154.61 & 177.55 & 191.34 \\\\\n P3 & -57.27 & -52.51 & -39.14 & -16.39 & 10.56 & 40.38 & 71.77 & 98.78 & 121.09 & 135.31 \\\\\n P4 & -58.16 & -53.38 & -39.35 & -18.19 & 9.62 & 40.21 & 70.66 & 98.47 & 120.61 & 134.47 \\\\\n P5 & 16.13 & 20.32 & 32.89 & 56.59 & 85.07 & 114.87 & 144.05 & 173.91 & 196.76 & 210.27 \\\\\n P6 & 9.8 & 13.8 & 27.56 & 49.58 & 76.67 & 107.47 & 137.03 & 164.94 & 188.26 & 201.71 \\\\\n P7 & 14.63 & 19.66 & 34.54 & 56.04 & 82.39 & 114.16 & 143.69 & 171.16 & 190.65 & 205.65 \\\\\n P8 & 25.48 & 33.05 & 45.05 & 68.32 & 94.65 & 124.24 & 153.37 & 179.63 & 199.71 & 210.72 \\\\\n P9 & 15.39 & 19.33 & 31.88 & 54.41 & 79.43 & 110.51 & 139.44 & 165.05 & 186.49 & 200.29 \\\\\n P10 & 12.91 & 16.72 & 32.59 & 52.4 & 79.08 & 107.42 & 138.15 & 161.44 & 184.87 & 199.23 \\\\\n P11 & 12.84 & 17.28 & 30.41 & 50.77 & 76.78 & 104.17 & 132.33 & 156.83 & 178.78 & 191.66 \\\\\n P12 & 11.56 & 16.06 & 29.37 & 51.48 & 78.29 & 108.94 & 136.79 & 164.79 & 187.09 & 201.86 \\\\\n P13 & 3.07 & -8.63 & 30.02 
& 59.98 & 72.13 & 110.56 & 128.87 & 148.73 & 173.19 & 192.25 \\\\\n P14 & 10.65 & 16.23 & 30.17 & 52.24 & 79.07 & 109.76 & 139.88 & 166.59 & 187.43 & 202.78 \\\\\n P15 & 14.82 & 21.48 & 35.28 & 57.38 & 82.5 & 114.19 & 144.17 & 169.13 & 190.07 & 203.95 \\\\\n P16 & 18.92 & 23.92 & 36.31 & 57.05 & 88.57 & 116.78 & 147.6 & 172.97 & 196.18 & 209.08 \\\\\n P17 & 4.5 & 9.35 & 23.38 & 45.9 & 72.68 & 103.65 & 133.52 & 161.88 & 183.18 & 197.69 \\\\\n P18 & 10.91 & 14.36 & 29.04 & 49.94 & 78.39 & 109.27 & 140.76 & 167.58 & 189.9 & 202.89 \\\\\n P19 & 9.76 & 14.41 & 27.79 & 49.65 & 76.61 & 109.48 & 140.12 & 167.97 & 188.71 & 203.91 \\\\\n P20 & -56.61 & -52.49 & -38.36 & -16.25 & 11.95 & 42.14 & 72.88 & 100.43 & 122.74 & 136.87 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{\\textbf{Performance of PPO policies on the Swimmer domain.} A row shows the mean episode return of a single policy on all environments, while a column shows the mean episode return of all policies on a single environment. This table contains performance on the last 10 environments.}\n \\label{tab:ppo}\n\\end{table*}\n\n\n\\begin{table*}[t!]\n \\centering\n \\small\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}\n \\toprule\n & E1 & E2 & E3 & E4 & E5 & E6 & E7 & E8 & E9 & E10 \\\\ [0.5ex]\n \\midrule\n P1 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n P2 & 0.97 & 0.97 & 0.92 & 0.92 & 0.90 & 0.91 & 0.93 & 0.95 & 0.99 & 0.95 \\\\\n P3 & 0.97 & 0.92 & 0.85 & 0.85 & 0.86 & 0.86 & 0.86 & 0.91 & 0.94 & 0.94 \\\\\n P4 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n P5 & 0.61 & 0.61 & 0.60 & 0.58 & 0.57 & 0.55 & 0.53 & 0.51 & 0.50 & 0.49 \\\\\n P6 & 0.96 & 0.90 & 0.87 & 0.87 & 0.88 & 0.90 & 0.91 & 0.92 & 0.96 & 0.95 \\\\\n P7 & 0.83 & 0.89 & 0.93 & 0.96 & 0.97 & 0.96 & 0.94 & 0.89 & 0.84 & 0.80 \\\\\n P8 & 0.84 & 0.84 & 0.84 & 0.82 & 0.81 & 0.79 & 0.78 & 0.77 & 0.75 & 0.74 \\\\\n P9 & 0.97 & 0.93 & 0.90 & 0.87 & 0.86 & 0.86 & 0.87 & 0.87 & 0.87 & 0.86 \\\\\n 
P10 & 0.91 & 0.90 & 0.88 & 0.87 & 0.87 & 0.88 & 0.89 & 0.92 & 0.96 & 0.98 \\\\\n P11 & 0.77 & 0.78 & 0.79 & 0.80 & 0.82 & 0.78 & 0.72 & 0.76 & 0.79 & 0.80 \\\\\n P12 & 0.67 & 0.56 & 0.39 & 0.46 & 0.58 & 0.41 & 0.39 & 0.46 & 0.49 & 0.47 \\\\\n P13 & 0.38 & 0.36 & 0.32 & 0.26 & 0.30 & 0.25 & 0.32 & 0.32 & 0.33 & 0.40 \\\\\n P14 & 0.82 & 0.79 & 0.76 & 0.74 & 0.73 & 0.72 & 0.72 & 0.73 & 0.75 & 0.76 \\\\\n P15 & 0.67 & 0.66 & 0.65 & 0.63 & 0.62 & 0.62 & 0.61 & 0.61 & 0.61 & 0.61 \\\\\n P16 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n P17 & 0.86 & 0.82 & 0.80 & 0.78 & 0.77 & 0.78 & 0.79 & 0.82 & 0.85 & 0.87 \\\\\n P18 & 0.69 & 0.68 & 0.66 & 0.65 & 0.64 & 0.63 & 0.63 & 0.63 & 0.63 & 0.63 \\\\\n P19 & 0.80 & 0.81 & 0.81 & 0.73 & 0.75 & 0.78 & 0.81 & 0.79 & 0.78 & 0.90 \\\\\n P20 & 0.96 & 0.94 & 0.90 & 0.88 & 0.87 & 0.86 & 0.86 & 0.86 & 0.85 & 0.83 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{\\textbf{Performance of PPO policies on the Spaceship domain.} A row shows the mean episode return of a single policy on all environments, while a column shows the mean episode return of all policies on a single environment. 
This table contains performance on the first 10 environments.}\n \\label{tab:ppo}\n\\end{table*}\n\n\\begin{table*}[t!]\n \\centering\n \\small\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}\n \\toprule\n & E11 & E12 & E13 & E14 & E15 & E16 & E17 & E18 & E19 & E20 \\\\ [0.5ex]\n \\midrule\n P1 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n P2 & 0.89 & 0.85 & 0.82 & 0.79 & 0.79 & 0.79 & 0.81 & 0.84 & 0.87 & 0.92 \\\\\n P3 & 0.93 & 0.91 & 0.86 & 0.83 & 0.82 & 0.82 & 0.84 & 0.86 & 0.91 & 0.95 \\\\\n P4 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n P5 & 0.49 & 0.49 & 0.49 & 0.50 & 0.51 & 0.53 & 0.55 & 0.57 & 0.59 & 0.61 \\\\\n P6 & 0.91 & 0.85 & 0.82 & 0.79 & 0.78 & 0.78 & 0.80 & 0.82 & 0.86 & 0.92 \\\\\n P7 & 0.74 & 0.70 & 0.67 & 0.65 & 0.64 & 0.65 & 0.66 & 0.70 & 0.75 & 0.79 \\\\\n P8 & 0.73 & 0.71 & 0.70 & 0.70 & 0.70 & 0.71 & 0.73 & 0.76 & 0.79 & 0.82 \\\\\n P9 & 0.84 & 0.82 & 0.80 & 0.79 & 0.80 & 0.81 & 0.84 & 0.87 & 0.93 & 0.97 \\\\\n P10 & 0.94 & 0.90 & 0.86 & 0.84 & 0.83 & 0.83 & 0.85 & 0.88 & 0.90 & 0.92 \\\\\n P11 & 0.81 & 0.93 & 0.90 & 0.87 & 0.84 & 0.82 & 0.80 & 0.78 & 0.77 & 0.77 \\\\\n P12 & 0.46 & 0.83 & 0.61 & 0.44 & 0.77 & 0.74 & 0.64 & 0.59 & 0.59 & 0.63 \\\\\n P13 & 0.43 & 0.73 & 0.64 & 0.81 & 0.67 & 0.69 & 0.80 & 0.63 & 0.51 & 0.41 \\\\\n P14 & 0.79 & 0.80 & 0.82 & 0.85 & 0.86 & 0.89 & 0.91 & 0.91 & 0.89 & 0.85 \\\\\n P15 & 0.61 & 0.62 & 0.62 & 0.63 & 0.64 & 0.65 & 0.66 & 0.67 & 0.68 & 0.68 \\\\\n P16 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n P17 & 0.90 & 0.91 & 0.92 & 0.93 & 0.94 & 0.96 & 0.98 & 0.98 & 0.94 & 0.90 \\\\\n P18 & 0.63 & 0.63 & 0.64 & 0.64 & 0.65 & 0.67 & 0.68 & 0.69 & 0.70 & 0.70 \\\\\n P19 & 0.91 & 0.89 & 0.87 & 0.93 & 0.89 & 0.86 & 0.83 & 0.81 & 0.80 & 0.80 \\\\\n P20 & 0.80 & 0.78 & 0.77 & 0.76 & 0.76 & 0.77 & 0.80 & 0.85 & 0.90 & 0.93 \\\\\n\n \\bottomrule\n \\end{tabular}\n \\caption{\\textbf{Performance of PPO policies on the Spaceship 
domain.} A row shows the mean episode return of a single policy on all environments, while a column shows the mean episode return of all policies on a single environment. This table contains performance on the last 10 environments.}\n \\label{tab:ppo}\n\\end{table*}\n\n\n\n\n\\section{Discussion and Future Work}\nIn this work, we propose policy-dynamics value functions (PD-VF{}), a novel framework for fast adaptation to new environment dynamics. The key idea is to learn a value function conditioned on both a policy and a dynamics embedding which are learned in a self-supervised way. At test time, the environment embedding can be inferred from only a few interactions, which allows the selection of a policy that maximizes the learned value function. PD-VF{} has a number of desirable properties: it leverages the structure in both the policy and the dynamics space to estimate the expected return, it only needs a small number of steps to adapt to unseen dynamics, it does not update any parameters at test time, and it does not require dense reward or long rollouts to find an effective policy in a new environment. Empirical results on a set of continuous control domains show that PD-VF{} outperforms other methods on unseen dynamics, while being competitive on training environments. \n\nPD-VF{} opens up many promising directions for future research. First of all, the formulation can be extended to estimate the value function not only for a family of policies and environment dynamics, but also for a family of reward functions. Another avenue for future research is to use a more general class of function approximators (such as neural networks) to parameterise the value estimator instead of a quadratic form. The PD-VF{} framework can, in principle, also be used to evaluate a family of policies and environments on other metrics of interest besides the expected return, such as, for example, reward variance, agent prosociality, deviation from expert behavior, and so on. 
Another interesting direction is to integrate additional constraints (or prior knowledge) to the optimization problem (\\textit{e.g.}{} maximize expected return while only using policies in a certain region of the policy space). As noted by \\citet{precup2001off}, \\citet{Sutton2011HordeAS}, and \\citet{white2012scaling}, learning about multiple policies in parallel via general value functions can be useful for lifelong learning. Similarly, PD-VF{} can be a useful tool for an agent to continually gather knowledge about various policies and dynamics in the world. Finally, PD-VF{} can also be applied to multi-agent settings for adapting to different opponents or teammates whose behaviors determine the environment dynamics. \n\n\n\\label{conclusion}\n\n\\section{Experiments}\n\\label{experiments}\n\n\\begin{figure}[ht!]\n \\subfigure[Spaceship]{\\label{fig:space}\\includegraphics[width=0.32\\columnwidth]{fig\/spaceship_diagram.pdf}}\n \\subfigure[Swimmer]{\\label{fig:swim}\\includegraphics[width=0.32\\columnwidth]{fig\/swimmer.pdf}}\n \\subfigure[Ant-wind]{\\label{fig:antwind}\\includegraphics[width=0.32\\columnwidth]{fig\/ant_wind.pdf}}\n\n \\subfigure[Dynamics]{\\label{fig:dynamics}\\includegraphics[width=0.32\\columnwidth]{fig\/dynamics_diagram.pdf}}\n \\subfigure[Ant-legs-v1]{\\label{fig:antleg1}\\includegraphics[width=0.32\\columnwidth]{fig\/ant_legs_v1.pdf}}\n \\subfigure[Ant-legs-v2]{\\label{fig:antleg2}\\includegraphics[width=0.32\\columnwidth]{fig\/ant_legs_v2.pdf}}\n \\caption{(a) - (c) illustrate the continuous control domains used for testing adaptation to unseen environment dynamics. In Spaceship, Swimmer, and Ant-wind, the train and test distribution of the dynamics is continuous as illustrated in (d). 
(e) and (f) show two instances of the Ant-legs task in which limb lengths sampled from a discrete distribution determine the dynamics.}\n \\label{fig:diagrams}\n\\end{figure}\n\n\n\\begin{figure*}[ht!]\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/results_eval_swim_8.pdf}}\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/results_eval_space_8.pdf}}\n \n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/results_eval_ant_8.pdf}}\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/results_eval_ant_legs_8.pdf}}\n \\caption{\\textbf{Test Performance.} Average return on test environments with unseen dynamics in Swimmer (top-left), Spaceship (top-right), Ant-wind (bottom-left), and Ant-legs (bottom-right) obtained by PD-VF{}, the upper bound PPOenv, and baselines $RL^2$, MAML, PPOdyn, and PPOall. PD-VF{} outperforms these baselines on most test environments and, in some cases, it is comparable with PPOenv (which was trained directly on the test environments).}\n \\label{fig:eval_results}\n\\end{figure*}\n\n\\subsection{Experimental Setup}\nWe evaluate PD-VF{} on four continuous control domains, and compare it with an upper bound, four baselines, and four ablations. For each domain, we create a number of environments with different dynamics. Then, we split the set of environments into training and test subsets, so that at test time, the agent has to find a policy that behaves well on unseen dynamics. For all our experiments, we show the mean and standard deviation of the average return (over 100 episodes) across 5 different seeds of each model. The dynamics embeddings are inferred using at most $N_d = 4$ interactions with the environment. \n\n\\subsection{Environments}\n\n\\textbf{Spaceship} is a new continuous control domain designed by us. The task consists of moving a spaceship with a unit point charge from one end of a 2D room through a door at the other end. 
The action space consists of a fixed-magnitude force vector that is applied at each timestep. The room contains two fixed electric charges that deflect \/ attract the ship as it moves through the environment (see Figure~\\ref{fig:space}). The polarity and magnitude of these charges are parameterised by $d$ and determine the environment dynamics. The distribution of dynamics $\\mathcal{D}$ is chosen to be circular and centered (see Figure~\\ref{fig:dynamics}). Samples $d$ are drawn at intervals of $\\pi\/10$, each forming a different environment instance with charge configuration $(\\cos(d), \\sin(d))$. The 5 samples in the range $[\\frac{3}{2}\\pi, 2\\pi]$ are held out as evaluation environments, and the rest are used for training. \n\n\\begin{figure*}[ht!]\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/ablations_eval_swim_8.png}}\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/ablations_eval_space_8.png}}\n \n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/ablations_eval_ant_8.png}}\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/ablations_eval_ant_legs_8.png}}\n \\caption{\\textbf{Test Performance.} Average return in Swimmer (top-left), Spaceship (top-right), Ant-wind (bottom-left), and Ant-legs (bottom-right) obtained by PD-VF{}, NoDaggerPolicy, NoDaggerValue, Kmeans, and NN. PD-VF{} is better than these ablations overall.}\n \\label{fig:eval_ablations}\n\\end{figure*}\n\n\\textbf{Swimmer} is a family of environments with varying dynamics based on MuJoCo's Swimmer-v3 domain \\citep{Todorov2012MuJoCoAP}. The goal is to control a three-link robot in a viscous fluid to swim forward as fast as possible (Figure~\\ref{fig:swim}). The dynamics are determined by a 2D current within the fluid, whose direction changes between environments (but has fixed magnitude). 
The current direction is determined by an angle $d$, which is sampled in the same manner as for Spaceship above, \\textit{i.e.}{} train on $3\/4$ of all possible directions and hold out the other $1\/4$ for evaluation. \n\n\n\\textbf{Ant-wind} is a family of environments based on MuJoCo's Ant-v3 domain in which the goal is to make a four-legged creature walk forward as fast as possible (Figure~\\ref{fig:antwind}). The environment dynamics are determined by the direction of a wind $d$, which is sampled from a continuous distribution in the same way as for Swimmer. \n\n\n\\textbf{Ant-legs} is a second task based on MuJoCo's Ant-v3 domain, in which the dynamics are sampled from a discrete distribution. The training environments are generated by fixing three ankle lengths (short, medium, and long) and enumerating all possible assignments of these lengths to the four legs. The leg length itself is fixed to medium across all training environments. Symmetries in the training environments are removed by considering ants with the same number of short, medium, and long ankles to be equivalent and choosing one ant from each equivalence class. There are four test environments with both the leg and ankle lengths being either short or long. Note that the test environments are significantly different from all the training ones, thus making Ant-legs a challenging setting for our method. Figures~\\ref{fig:antleg1} and~\\ref{fig:antleg2} show two instances of this environment. \n\n\n\n\\subsection{Baselines}\nWe use PPO \\citep{schulman2017proximal} as the base RL algorithm for all the baselines and for the reinforcement learning phase of training the PD-VF{} (Sec.~\\ref{sec:rltrain}). We use Adam \\citep{kingma2014adam} for optimization. All models use the same network architecture for the policy and value functions. For a given environment, all methods use the same number of steps $N_d$ (at the beginning of each episode) to infer the embedding of the environment dynamics. 
Then, they each use a single policy network to act in the environment until the end of the episode. We report the cumulative reward obtained by each method over a single episode, in which it first infers the environment dynamics and then uses the selected policy to act until the episode ends. We compare with the following baselines: \n\n\n\n\n\\begin{figure*}[h!]\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/results_train_swim_8.png}}\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/results_train_space_8.png}}\n \n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/results_train_ant_8.png}}\n \\subfigure{\\includegraphics[width=0.50\\textwidth]{fig\/results_train_ant_legs_8.png}}\n \\caption{\\textbf{Train Performance.} Average return on train environments in Swimmer (top-left), Spaceship (top-right), Ant-wind (bottom-left), and Ant-legs (bottom-right) obtained by PD-VF{}, the upper bound PPOenv, and baselines $RL^2$, MAML, PPOdyn, and PPOall. PD-VF{} outperforms the baselines on most training environments and, in some cases, is comparable with PPOenv (which was trained directly on these environments). While other methods also perform reasonably well on the training environments, they generalize poorly to new environments with unseen dynamics.}\n \\label{fig:train_results}\n\\end{figure*}\n\n\n\n\\textbf{PPOenv} trains a PPO policy for each environment in our set. This is used as an upper bound for the other models. \n\n\\textbf{MAML} is the meta-learning algorithm from \\citet{finn2017model}. MAML{} generally requires some amount of training on the test environments, so to make it more comparable to our method and the other baselines, we allow one gradient step using a trajectory of length $N_d$ (\\textit{i.e.}{} the same length as the one used by PD-VF{} to infer the embedding of the environment dynamics). 
Thus, MAML{} has an advantage over PD-VF{} which does not make any parameter updates at test time.\n\n$\\textbf{RL}^2$ is the meta-learning algorithm from \\citet{Wang2016LearningTR} and \\citet{Duan2016RL2FR}, which uses a recurrent policy that takes as input the previous action and reward.\n\n\\textbf{PPOdyn} trains (using PPO) a single policy network conditioned on the dynamics embedding. At test time, it first infers the dynamics embedding and then conditions the pretrained policy network on that vector. This is a close implementation of the approach in \\citet{yang2019single}\\footnote{An exact match was not feasible as code for \\citet{yang2019single} was not available.}. \n\n\\textbf{PPOall} trains a single PPO policy on all the training environments and uses it on the test environments without any additional fine-tuning. \n\nWe also compare PD-VF{} with four ablations:\n\n\\textbf{NN} finds the environment that is closest (in Euclidean metric) to the test environment's embedding and uses the PPOenv{} policy trained on that environment to act. This ablation aims to tease out the effect of using both the learned space of policies and that of dynamics to adapt to new environments, from that of only using the learned dynamics space.\n\n\n\\textbf{Kmeans} clusters the environment embeddings (using trajectories collected in Section~\\ref{sec:rltrain}) into $K$ clusters. Then, for each cluster, we train a new PPO policy on all the environments assigned to that cluster. At test time, we find the closest cluster for the given environment embedding and use the policy corresponding to that cluster to act in the environment. 
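The NN ablation above reduces to a nearest-neighbour lookup in the dynamics embedding space. A minimal sketch, under the illustrative assumption that the training embeddings are stored as rows of an array:

```python
import numpy as np

def nearest_policy(z_test, train_embeddings, policies):
    """Return the PPOenv policy whose training environment has the
    dynamics embedding closest (in Euclidean distance) to z_test."""
    dists = np.linalg.norm(train_embeddings - z_test, axis=1)
    return policies[int(np.argmin(dists))]
```

The Kmeans ablation follows the same selection rule, but with the individual training embeddings replaced by $K$ cluster centroids, each associated with a policy trained on the environments of that cluster.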
\n\n\n\\textbf{NoDaggerValue} trains a PD-VF{} without using dataset aggregation for the value function (see Section~\\ref{sec:suptrain}).\n\n\\textbf{NoDaggerPolicy} uses PD-VF{} without using dataset aggregation for the policy decoder (see Section~\\ref{sec:suptrain}).\n\n\n\\section{Introduction}\n\nDeep reinforcement learning (RL) has achieved impressive results on a wide range of complex tasks \\citep{mnih2015human, silver2016mastering, silver2017mastering, silver2018general, jaderberg2019human, Berner2019Dota2W, vinyals2019grandmaster}. However, recent studies have pointed out that RL agents trained and tested on the same environment tend to overfit to that environment's idiosyncrasies and are unable to generalize to even small perturbations \\citep{Whiteson2011ProtectingAE, rajeswaran2017towards, zhang2018study, zhang2018dissection, henderson2018deep, Cobbe2019QuantifyingGI, Raileanu2020RIDERI, Song2020ObservationalOI}.\nMoreover, the test environments often not only differ from the training ones, but also involve costly interactions, scarce or unavailable feedback, and irreversible consequences. For example, a self-driving car might have to adjust its behavior depending on weather conditions, or a prosthetic control system might have to adapt to a new human. In such cases, it is crucial for RL agents to find and execute appropriate policies as quickly as possible.\n\n\nOur approach is inspired by \\citet{Sutton2011HordeAS}, who introduced the notion of general value functions (GVFs), which can be used to gather knowledge about the world in the form of predictions. A GVF estimates the expected return of an arbitrary policy on a certain task (as defined by a reward function, a termination function, and a terminal-reward function). 
Similarly, in this work, we aim to learn a value function conditioned on elements of a space of policies and tasks, but here, a ``task'' is specified by the transition function of the MDP instead of the reward function. \n\nMore specifically, we propose PD-VF{}, a novel framework for rapid adaptation to new environment dynamics. PD-VF{} consists of four phases: \n\\begin{enumerate*}[label=(\\roman*)] \n \\item a \\textit{reinforcement learning phase} in which individual policies are learned for each environment in our training set using standard RL algorithms,\n \\item a \\textit{self-supervised phase} in which trajectories generated by these policies are used to learn embeddings for both policies and environments,\n \\item a \\textit{supervised training phase} in which a neural network is used to learn the value function of a certain policy acting in some environment. The network takes as inputs the initial state of the environment, as well as the corresponding policy and environment embeddings (as learned in the previous phase) and is trained with supervision of the cumulative reward obtained during an episode, and finally \n \\item an \\textit{evaluation phase} in which, given a new environment, its dynamics embedding is inferred using the first few steps of an episode. Then, a policy is selected by finding the policy embedding that maximizes the learned value function. The selected policy is used to act in the environment until the episode ends.\n\\end{enumerate*}\n\nOur framework uses self-supervised interactions with the environment to learn an embedding space of both dynamics and policies. By learning a value function in the policy-dynamics space, PD-VF{} can discover useful patterns in the complex relation between a family of environment dynamics, various behaviors, and the expected return. 
The value function is designed to model non-optimal policies along with optimal policies in given environments so that it can understand how changes in dynamics relate to changes in the return of different policies. PD-VF{} uses the learned space of dynamics to rapidly embed a new environment in that space using only a few interactions. At test time, PD-VF{} can evaluate or rank policies (from a certain family) on unseen environments without the need of full rollouts (\\textit{i.e.}{} it does not require full trajectories or rewards to update the policy). We evaluate our method on a set of continuous control tasks (with varying dynamics) in MuJoCo \\citep{Todorov2012MuJoCoAP}. The dynamics of each task instance are determined by physical parameters such as wind direction or limb length and can be sampled from a continuous or discrete distribution. Performance is evaluated on a single episode at test time to emphasize rapid adaptation. We show that PD-VF{} outperforms other meta-learning and transfer learning approaches on new environments with unseen dynamics. \n\n\\label{introduction}\n\\section{Policy-Dynamics Value Functions}\n\\label{method}\n\nIn this work, we aim to design an approach that can quickly find a good policy in an environment with new and unknown dynamics, after being trained on a family of environments with related dynamics.\nThe problem can be formalized as a family of Markov decision processes (MDPs) defined by $(\\mathcal{S}, \\mathcal{A}, \\mathcal{T}, \\mathcal{R}, \\gamma)$, where $(\\mathcal{S}, \\mathcal{A}, \\mathcal{R}, \\gamma)$ are the corresponding state space, action space, reward function, and discount factor. Each instance of the family is a stationary MDP with transition function $\\mathcal{T}_d(s'|s, a) \\in \\mathcal{T}$. Each $\\mathcal{T}_d$ has a hidden parameter $d$ that is sampled once from a distribution $\\mathcal{D}$ and held constant for that instance (\\textit{i.e.}{} episode). 
$\\mathcal{T}_d$ can be continuous or discrete in $d$. By design, the latent variable $d$ that defines the MDP's dynamics cannot be observed from individual states. \n\nWe present Policy-Dynamics Value Functions (PD-VF{}), a novel framework for rapid adaptation across such MDPs with different dynamics. PD-VF{} extends the conventional value function by conditioning not only on a state, but also on a policy and a transition function. \n\nA conventional value function $V: \\mathcal{S} \\rightarrow \\mathbb{R}$ is defined as the expected future return of policy $\\pi$ from state $s$: \n\\begin{equation*}\n V(s) = \\mathbb{E} \\left[ G_{t} \\,|\\, S_t = s \\right] = \\mathbb{E} \\left[ \\sum_{k = t+1}^{T} \\gamma^{k-t-1} r_{k} \\,\\Big|\\, S_t = s \\right]. \n\\end{equation*}\n\nFormally, we define a \\textit{policy-dynamics value function} or PD-VF{} as a function $W: \\mathcal{S} \\times \\Pi \\times \\mathcal{T} \\rightarrow \\mathbb{R}$ with two auxiliary inputs representing the policy $\\pi$ and the dynamics $d$:\n\\begin{equation*}\n W(s, \\pi, d) = \\mathbb{E} \\left[ G_{t} \\,|\\, S_t = s, A_t \\sim \\pi, S_{t+1} \\sim \\mathcal{T}_d \\right].\n\\end{equation*}\n\n\n\\subsection{Problem setup}\nThe dynamics distribution $\\mathcal{D}$ is partitioned into two disjoint sets $\\mathcal{D}_{train}$ and $\\mathcal{D}_{test}$. These are used to generate the training and test environments, each having a different transition function drawn from the respective distribution. \n\nOur model is learned on the training environments in three stages:\n\\begin{enumerate*}[label=(\\roman*)] \n \\item a reinforcement learning phase,\n \\item a self-supervised phase, and \n \\item a supervised phase.\n\\end{enumerate*}\nThe resulting PD-VF model is evaluated on the test environments, where it experiences only a single episode in each. This evaluation setting probes PD-VF's ability to adapt very quickly to previously unseen dynamics. 
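For the circular dynamics distributions used in Spaceship, Swimmer, and Ant-wind, this partition amounts to splitting sampled angles into a training arc and a held-out arc. A minimal sketch, assuming (as in our Spaceship setup) 20 angles sampled at intervals of $\pi/10$ with the last quarter of the circle held out:

```python
import numpy as np

# Dynamics parameters d sampled at intervals of pi/10 (20 environments,
# matching E1-E20); the last quarter of the circle is held out for testing.
k = np.arange(1, 21)
angles = k * np.pi / 10
test_mask = k > 15                      # 5 held-out evaluation environments
d_train, d_test = angles[~test_mask], angles[test_mask]

# Each angle induces one environment instance, e.g. the Spaceship
# charge configuration (cos d, sin d).
configs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
```

The same integer-index split transfers directly to the Swimmer and Ant-wind current/wind directions.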
\n\n\\subsection{Reinforcement Learning Phase}\n\\label{sec:rltrain}\nThe first phase of training uses standard model-free RL algorithms to acquire experience in the training environments. An ensemble of $N$ policies are trained, each with a different random seed on one of the training environments. For each policy, we save a number of checkpoints at different stages throughout training. Then, we collect trajectories using each of these checkpoints in each of our training environments. This results in experience from a diverse set of policies (some good, some bad) across environments with different dynamics. Importantly, this dataset contains the behaviors of policies in environments they haven't been trained on. In the next section, we describe how the collected trajectories are used to learn policy and dynamics embeddings. \n\n\\subsection{Self-Supervised Learning Phase}\nThe goal of this phase is to learn an embedding space of the dynamics that captures variations in the transition function, as well as an embedding space of the policies that captures variations in the agent behavior. \n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{fig\/emb_diagram.pdf}\n \\caption{In the \\textbf{self-supervised learning phase}, a pair of autoencoders is trained using transitions generated by a diverse set of policies in a set of environments with different dynamics. By exploiting the Markov property of the environment, distinct latent embeddings of the dynamics $z_d$ and policy $z_\\pi$ are produced.} \n \\label{fig:emb_diagram}\n\\end{figure}\nThe space of dynamics is learned using an encoder $E_{d}$ parameterised as a Transformer \\cite{vaswani2017attention}, and a decoder $D_{d}$ parameterised as a feed-forward network. The encoder takes as input a {\\it set} of transitions $\\{(s_t, a_t, s_{t+1})\\}$ from the first $N_d$ steps in each episode and outputs a vector embedding for the dynamics $z_{d}$. 
The decoder takes as inputs the state $s_t$, action $a_t$ and dynamics embedding $z_{d}$, and predicts the next state $\\hat{s}_{t+1}$. The parameters $\\theta_d$ and $\\phi_d$ of the encoder and decoder are trained to minimize the $\\ell_2$ error between $\\hat{s}_{t+1}$ and ${s}_{t+1}$. Formally,\n\\begin{equation*}\n z_{d} = E_{d}(\\{(s_t, a_t, s_{t+1})\\} ; \\; \\theta_{d})\n\\end{equation*}\n\\begin{equation*}\n \\hat{s}_{t+1} = D_{d}(s_t, a_t, z_{d} ; \\; \\phi_{d}).\n\\end{equation*}\nThis arrangement exploits the inductive bias that, conditioned on $d$, the environment is Markovian. By using no positional encoding in the Transformer, the input transitions lack any temporal ordering, thus preserving the Markov property. The decoder receives no historical information (since it is unnecessary in a Markovian setting), so it is forced to embed information about the dynamics into $z_d$ to make good predictions. Because the input set contains the actions in each triple, the encoder has no incentive to encode policy information into $z_d$. This modeling choice encourages $z_d$ to only contain information about the dynamics, rather than the policy used to generate the transitions.\n\nSimilarly, the space of policies is learned using an encoder $E_{\\pi}$ parameterised as a Transformer and a decoder $D_{\\pi}$ parameterised as a feed-forward network. The encoder takes as input a set (again using the Markov property as an inductive bias) of state-action pairs $\\{(s_t, a_t)\\}$ from a full episode and outputs a vector embedding for the policy $z_{\\pi}$. The decoder takes as inputs the state $s_t$ and the policy embedding $z_{\\pi}$ to predict the action taken by the policy $\\hat{a}_{t}$. Since the policy encoder does not have direct access to full environment transitions, $z_{\\pi}$ is constrained to capture information about the policy without elements of the dynamics. 
The parameters $\\theta_{\\pi}$ and $\\phi_{\\pi}$ of the encoder and decoder are trained to minimize the $\\ell_2$ error between $\\hat{a}_{t}$ and ${a}_{t}$. Formally, \n\\begin{equation*}\n z_{\\pi} = E_{\\pi}(\\{(s_t, a_t)\\} ; \\; \\theta_{\\pi})\n\\end{equation*}\n\\begin{equation*}\n \\hat{a}_t = D_{\\pi}(s_t, z_{\\pi} ; \\; \\phi_{\\pi}).\n\\end{equation*}\n\nBoth the policy and the dynamics embeddings are normalized to have unit $\\ell_2$-norm. \n\nSee Figure~\\ref{fig:emb_diagram} for an overview of the self-supervised learning phase.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{fig\/pdvf_diagram.pdf}\n \\caption{In the \\textbf{supervised learning phase}, a parametric value function $W$ is trained to predict the expected return $G$ for an entire space of policies and dynamics. $W$ takes as inputs the initial state $s_0$, policy embedding $z_\\pi$, and dynamics embedding $z_d$ (estimated from a small set of transitions). We train $W$ in a supervised fashion, using Monte-Carlo estimates of the expected return $G$ for policy $\\pi$ in an environment with dynamics $\\mathcal{T}_d$. At test time, $z_\\pi$ is optimized to maximize $\\hat{G}$ (red dashed arrow), resulting in $z^*_\\pi$ which is then decoded to an actual policy via $D_\\pi$.}\n\\label{fig:pdvf_diagram}\n\\end{figure}\n\n\\subsection{Supervised Learning Phase}\n\\label{sec:suptrain}\n\nIn this phase, the goal is to train an estimator $W$ of the expected return $\\hat{G}$ for a space of policies and dynamics. More specifically, $W$ is a function approximator conditioned on the learned policy and dynamics embeddings, $z_\\pi$ and $z_d$.\n\nA central idea of our PD-VF framework is that $W$ provides a scoring function over the policy embedding space. 
It thus provides a mechanism to allow on-the-fly optimization of $z_\\pi$ with respect to the estimated return $\\hat{G}$, without the need for any environment interaction, given an estimate (or embedding) of the environment's dynamics. This is key to PD-VF's ability to rapidly find an effective policy in a new environment, only requiring enough environment interaction to give a reliable estimate of the dynamics embedding $z_d$ (just a few steps in practice). We choose $W$ to have a quadratic form to permit easy optimization with respect to $z_\\pi$:\n\\begin{equation*}\n \\hat{G} = W(s_0, z_{\\pi}, z_{d}) = z_{\\pi}^T \\, A(s_0, z_{d}; \\psi) \\, z_{\\pi}.\n\\end{equation*}\nThe matrix $A(s_0, z_d; \\psi)$ is a function of the initial environment state $s_0$ as well as the dynamics embedding $z_d$. Note that $A$ only needs to model the initial state $s_0$ rather than an arbitrary state $s$ since the optimization w.r.t.\\ $z_\\pi$ occurs only once, at the start of an episode. Since $A$ must be symmetric positive semi-definite, a feed-forward network with parameters $\\psi$ is first used to obtain a lower triangular matrix $L(s_0, z_d; \\psi)$. Then $A$ is constructed as $L L^T$. \n\n{\\noindent \\bf Optimizing the policy embedding $z_\\pi$}: The optimization of the policy embedding $z_{\\pi}$ has a closed-form solution, obtained by performing a singular value decomposition, $A = U S V^T$, and taking the top singular vector of this decomposition as $z_{\\pi}^{*}$. Unit $\\ell_2$ normalization is then applied to $z_{\\pi}^{*}$. We refer to this vector $z_{\\pi}^{*}$ as the {\\em optimal policy embedding} (OPE) of the PD-VF{}. \n\n\n{\\noindent \\bf Learning $\\psi$ -- Initial stage}: We collect training data for the PD-VF{} in the following manner. First, we randomly select a policy and an environment from our training set (described in Section~\\ref{sec:rltrain}). 
Second, we generate full trajectories of that policy in the selected environment and cache the average return obtained across all episodes. This gives us a Monte-Carlo estimate for the expected return of the corresponding policy in that particular environment. Then, we use the first $N_{d}$ steps of that trajectory to infer the dynamics embedding. Similarly, we use the full trajectory to infer the policy embedding (via $E_\\pi$, not the above optimization procedure). After collecting this data into a buffer, we train the estimator $W$ in a supervised fashion by predicting the expected return $G$ given an initial state $s_0$, a policy embedding $z_{\\pi}$ and a dynamics embedding $z_{d}$. \n\n{\\noindent \\bf Learning $\\psi$ -- Data Aggregation for the Value Function}:\nFor the method to work well, it is important that the learned value function $W$ makes accurate predictions for the entire policy space, and especially for the OPEs $z_{\\pi}^{*}$ (which correspond to the policies selected to act in the environment). One way to ensure that these estimates are accurate is by adding the OPEs to the training data. After initial training of the PD-VF{} on the original dataset of policy and dynamics embeddings, we use an iterative algorithm that alternates between collecting a new dataset of OPEs and training the PD-VF{} on the aggregated data (including the original data as well as data added from all previous iterations). We use early stopping to select the best value function (\\textit{i.e.}{} the one with the lowest loss) to be used at test time.\n\n{\\noindent \\bf Learning $\\psi$ -- Data Aggregation for the Policy Decoder}:\nSimilarly, the policy decoder may poorly estimate an agent's actions in states not seen during training. Thus, we iteratively train the policy decoder using a combination of the original set of states as well as new states generated by the policy embeddings that maximize the current value function. 
More specifically, we use the current OPEs (corresponding to the policies that PD-VF{} thinks are best) as inputs to the policy decoder to generate actions and interact with the environment. Then, we add the states visited by this policy to the data. The policy decoder is trained using the aggregated collection of states which includes both the states visited by the original collection of policies as well as the states visited by the current OPEs selected by the PD-VF{}.\n\nSee Figure~\\ref{fig:pdvf_diagram} for an overview of the supervised learning phase.\n\n\\subsection{Evaluation Phase}\n\nAt test time, we want to find a policy that performs well on a single episode of an environment with unseen dynamics. This proceeds as follows: (i) the agent uses one of the pretrained RL policies to act for $N_{d}$ steps; (ii) the generated transitions are then used to infer the dynamics embedding $z_d$; (iii) once an estimate of the dynamics is obtained, the matrix $A(s_0, z_{d};\\; \\psi)$ can be computed; (iv) we employ the closed-form optimization described above to compute the optimal policy embedding $z_{\\pi}^{*}$; (v) the policy decoder, conditioned on the $z_{\\pi}^{*}$ embedding, is then used to take actions in the environment until the end of the episode. Note that only a small number of interactions with a new environment is needed in order to adapt, the policy selection being performed internally within the PD-VF model. Performance is evaluated on a single trajectory of each environment instance. 
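The closed-form optimization at the heart of steps (iii)-(iv) can be sketched in a few lines. This is a hedged illustration under stated assumptions, not the released code: `L` stands in for the lower-triangular output of the (here untrained) value network, and the SVD of the symmetric matrix $A = LL^T$ yields the unit-norm embedding maximizing $z_{\pi}^T A z_{\pi}$.

```python
import numpy as np

def optimal_policy_embedding(L):
    """Form A = L L^T from the lower-triangular factor produced by the
    value network and return the unit-norm z_pi maximizing z^T A z,
    together with A itself."""
    A = L @ L.T                       # symmetric positive semi-definite
    U, S, Vt = np.linalg.svd(A)       # A = U S V^T, S in descending order
    z_star = U[:, 0]                  # top singular vector
    return z_star / np.linalg.norm(z_star), A

def estimated_return(A, z_pi):
    # hat{G} = W(s0, z_pi, z_d) = z_pi^T A(s0, z_d; psi) z_pi
    return z_pi @ A @ z_pi
```

Because $A$ is positive semi-definite, its top singular vector coincides with its top eigenvector, so `estimated_return(A, z_star)` equals the largest eigenvalue of $A$, the maximum of $\hat{G}$ over unit-norm policy embeddings.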
\n\n\n\\section{Related Work}\nOur work draws inspiration from multiple research areas such as transfer learning \\citep{taylor2009transfer, Higgins2017DARLAIZ}, skill and task embedding \\citep{Devin2016LearningMN, Zhang2018DecouplingDA, Hausman2018LearningAE, Petangoda2019DisentangledSE}, and general value functions \\citep{precup2001off, Sutton2011HordeAS, white2012scaling}.\n\n\\textbf{Multi-Task and Transfer Learning.}\n\\citet{taylor2009transfer} presents an overview of transfer learning methods in RL. A popular approach for transfer in RL is multi-task learning \\citep{taylor2009transfer, teh2017distral}, a paradigm in which an agent is trained on a family of related tasks. By simultaneously learning about different tasks, the agent can exploit their common structure, which can lead to faster learning and better generalization to unseen tasks from the same family \\citep{taylor2009transfer, lazaric2012transfer, ammar2012reinforcement, ammar2014automated, parisotto2015actor, borsa2016learning, gupta2017learning, andreas2017modular, oh2017zero, hessel2019multi}. A large body of work has been inspired by the Horde architecture \\citep{Sutton2011HordeAS}, which consists of a number of RL agents with different policies and goals. Each agent is tasked with estimating the value function of a particular policy on a given task, thus collectively representing knowledge about the world. Building on these ideas, other methods leverage the shared dynamics of the tasks \\citep{barreto2017successor, zhang2017deep, madjiheurem2019state2vec} or the similarity among value functions and the associated optimal policies \\citep{schaul2015universal, borsa2018universal, hansen2019fast, siriwardhana2019vusfa}. These approaches assume the same underlying transition function for all tasks. 
In contrast, we focus on transferring knowledge across tasks with different dynamics.\n\n\\textbf{Meta-Learning and Robust Transfer.}\nA popular approach for fast adaptation to new environments is meta reinforcement learning (meta RL) \\citep{Cully2015RobotsTC, finn2017model, Wang2017RobustIO, Duan2016RL2FR, Xu2018MetaGradientRL, Houthooft2018EvolvedPG, saemundsson2018meta, nagabandi2018learning, humplik2019meta, rakelly2019efficient}. Meta RL methods have been designed to work well with dense reward and recent work has shown that they struggle to learn from a limited number of interactions and optimization steps at test time \\citep{yang2019single}. In contrast, our framework is capable of rapid adaptation to new environment dynamics and does not require dense reward or a large number of interactions to find a good policy. Moreover, PD-VF{} does not update the model parameters at test time, which makes it less computationally expensive than meta RL. Another common approach for transfer across dynamics is model-based RL, which uses Gaussian processes (GPs) or Bayesian neural networks (BNNs) to estimate the transition function \\cite{DoshiVelez2013HiddenPM, Killian2017RobustAE}. However, such methods require fictional rollouts to train a policy from scratch at test time, which makes them computationally expensive and limits their applicability for real-world tasks. \\citet{Yao2018DirectPT} uses a fully-trained BNN to further optimize latent variables during a single test episode, but requires an optimal policy for each training instance, which makes it harder to scale. Robust transfer methods either require a large number of interactions at test time \\citep{rajeswaran2017towards} or assume that the distribution over hidden variables is known or controllable \\citep{Paul2018FingerprintPO}. 
An alternative approach was proposed by \\citet{Pinto2017RobustAR} who use an adversary to perturb the system, achieving robust transfer across physical parameters such as friction or mass. \n\n\n\\textbf{Skill and Task Embeddings.}\nA large body of work proposes the use of learned skill and task embeddings for transfer in RL~\\cite{da2012learning, sahni2017learning, oh2017zero, gupta2017learning, Hausman2018LearningAE, he2018zero}. For example, \\citet{Hausman2018LearningAE} use approximate variational inference to learn a latent space of skills. Similarly, \\citet{Arnekvist2018VPEVP} learn a stochastic embedding of optimal Q-functions for various skills and train a universal policy conditioned on this embedding. In both \\citet{Hausman2018LearningAE} and \\citet{Arnekvist2018VPEVP}, adaptation to a new task is done in the latent space with no further updates to the policy network. \\citet{CoReyes2018SelfConsistentTA} learn a latent space of low-level skills that can be controlled by a higher-level policy, in the context of hierarchical reinforcement learning. This embedding is learned using a variational autoencoder \\citep{Kingma2013AutoEncodingVB}\nto encode state trajectories and decode states and actions. \\citet{Zintgraf2018FastCA} use a meta-learning approach to learn a deterministic task embedding. \\citet{Wang2017RobustIO} and \\citet{Duan2017OneShotIL} learn embeddings of expert demonstrations to aid imitation learning using variational and deterministic methods, respectively. More recently, \\citet{Perez2018EfficientTL} learn dynamic models with auxiliary latent variables and use them for model-predictive control. \\citet{Zhang2018DecouplingDA} use separate dynamics and reward modules to learn a task embedding. They show that conditioning a policy on this embedding helps transfer to changes in transition or reward function. 
While the above approaches might learn embeddings of skills or tasks, none of them leverage \\textit{both} the latent space of policies and that of the environments for estimating the expected return and using it to select an effective policy at test time.\n\nMore similar to our work is that of \\citet{yang2019single}, who also focus on fast adaptation to new environment dynamics and evaluate performance on a single episode at test time. \\citet{yang2019single} train an inference model and a probe to estimate the underlying latent variables of the dynamics, which are then used as input to a universal control policy. While similar in scope, our approach is significantly different from that of \\citet{yang2019single}. Importantly, \\citet{yang2019single} does not learn a latent space of policies and instead trains a universal policy on all the environments. Learning a value function in a space of policies and dynamics allows the function approximator to capture relations among dynamics, behaviors (both optimal as well as non-optimal), and rewards that a universal policy cannot learn. Moreover, the learned structure can aid transfer to new dynamics.\n\n\\label{related}\n\\section{Results}\n\\label{results}\n\n\\subsection{Adaptation to New Environment Dynamics}\nAs seen in Figures~\\ref{fig:eval_results} and~\\ref{fig:eval_ablations}, PD-VF{} outperforms all other methods on test environments with new dynamics. In some cases (particularly on Spaceship and Swimmer), our approach is comparable to the PPOenv upper bound which was directly trained on the respective test environment (in contrast, PD-VF{} has never interacted with that environment before). While the strength of PD-VF{} lies in quickly adapting to new dynamics, its performance on training environments is still comparable to that of the other baselines, as shown in Figure~\\ref{fig:train_results}. 
This result is not surprising since current state-of-the-art RL algorithms such as PPO can generally learn good policies for the environments they are trained on, given enough interactions, updates, and the right hyperparameters. However, as predicted, standard model-free RL methods such as the baseline PPOall do not generalize well to environments with dynamics different from the ones experienced during training. Even meta-learning approaches like MAML or $RL^2$ struggle to adapt when they are allowed to use only a short trajectory for updating the policy at test time, as is the case here. \n\nBut most importantly, PD-VF{} also outperforms the approaches that use the dynamics embedding such as NN, Kmeans, and PPOdyn. This supports our claim that learning a value function for an entire space of policies (rather than for a single optimal policy as standard RL methods do) can be beneficial for adapting to unseen dynamics. By simultaneously estimating the return of a collection of policies in a family of environments with different but related dynamics, PD-VF{} can learn how variations in dynamics relate to differences in the performance of various policies. This allows the model to rank different policies and understand that sub-optimal behaviors in certain environments might be optimal in others. Thus, at least in theory, PD-VF{} has the ability to find policies that are better than the ones seen during training. Our empirical results indicate that this might also hold true in practice. Overall, PD-VF{} proves to be more robust to changes in dynamics relative to the other methods, especially in completely new environments.\n\n\n\\subsection{Analysis of Learned Embeddings}\n\nThe performance of PD-VF{} relies on learning useful policy and dynamics embeddings that capture variations in agent behaviors and transition functions, respectively. In this section, we analyze the learned embeddings. 
\n\n\\begin{figure}[ht!]\n \\subfigure[]{\\label{fig:env_space}\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_space_env_env.pdf}}\n \\subfigure[]{\\label{fig:env_swim}\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_swim_env_env.pdf}}\n \\subfigure[]{\\label{fig:env_ant}\\includegraphics[width=0.36\\columnwidth]{fig\/tsne_ant_env_env.pdf}}\n \\caption{t-SNE plots of the learned \\textbf{environment embeddings} $z_{d}$ for Spaceship (a), Swimmer (b), and Ant-wind (c). The color corresponds to the \\textit{environment} that generated the transitions used to encode the corresponding dynamics embeddings. The plot contains embeddings of both train and test environments.}\n \\label{fig:env_embeddings}\n\\end{figure}\n\n\\begin{figure}[ht!]\n \\subfigure[]{\\label{fig:pi_space}\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_space_pi_pi.pdf}}\n \\subfigure[]{\\label{fig:pi_swim}\\includegraphics[width=0.31\\columnwidth]{fig\/tsne_swim_pi_pi.pdf}}\n \\subfigure[]{\\label{fig:pi_ant}\\includegraphics[width=0.34\\columnwidth]{fig\/tsne_ant_pi_pi.pdf}}\n \\caption{t-SNE plots of the learned \\textbf{policy embeddings} $z_{\\pi}$ for Spaceship (a), Swimmer (b), and Ant-wind (c). The color corresponds to the \\textit{policy} that generated the transitions used to encode the corresponding policy embeddings. The plot contains embeddings of policies trained on both train and test environments.}\n \\label{fig:policy_embeddings}\n\\end{figure}\n\nFigure~\\ref{fig:env_embeddings} shows a t-SNE plot \\citep{Maaten2008VisualizingDU} of the learned dynamics embeddings on the three continuous control domains used for evaluating our method. Environment $i$ corresponds to dynamics defined by $d = i \\times \\pi \/ 10$ (\\textit{i.e.}{} the direction of the wind in Swimmer's environment 1 is at an angle of $\\pi\/10$ radians). Environments 1 - 15 are used for training, while 16 - 20 are used for evaluation. 
The latent space captures the continuous nature of the distribution used to generate the environment dynamics. For example, in Figure~\\ref{fig:env_ant}, one can see the wind direction corresponding to a particular environment, indicating that the learned embedding space uncovers the manifold structure of the true dynamics distribution. Even though, during training, the dynamics model never sees trajectories through the test environments, it is still able to embed them within the 1D manifold, thus preserving smoothness in the latent space. \n\nSimilarly, Figure~\\ref{fig:policy_embeddings} shows the corresponding t-SNE plot \\citep{Maaten2008VisualizingDU} of the learned policy embeddings for Spaceship, Swimmer, and Ant. The embeddings are clustered according to the policy that generated them. \n\n\\section*{Acknowledgements}\nRoberta and Max were supported by the DARPA L2M grant. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \nLet $X$ be a (reduced, irreducible) smooth projective curve over an algebraically closed field $k$ of characteristic $p \\ge 0$ and let $k(X)$ be its function field. \nWe consider a morphism $\\varphi: X \\rightarrow \\mathbb{P}^2$, which is birational onto its image.\nIn this situation, Hisao Yoshihara introduced the notion of a Galois point. \nA point $P \\in \\mathbb{P}^2$ is called a {\\it Galois point}, if the field extension $k(\\varphi(X))\/\\pi_P^*k(\\mathbb{P}^1)$ of function fields induced by the projection $\\pi_P$ from $P$ is a Galois extension (\\cite{miura-yoshihara, yoshihara}). \nFurthermore, a Galois point $P$ is said to be inner (resp. outer), if $P \\in \\varphi(X) \\setminus {\\rm Sing}(\\varphi(X))$ (resp. if $P \\in \\mathbb{P}^2 \\setminus \\varphi(X)$). \n\nA criterion for the existence of a birational embedding with two Galois points was described by the present author (\\cite{fukasawa2}). 
\nIt is a natural problem to find a condition for the existence of {\\it three} Galois points (see also \\cite{open}). \nNon-collinear Galois points were considered in \\cite{fukasawa3}. \nIn this article, (three) collinear Galois points are studied. \nThe associated Galois group is denoted by $G_P$, when $P$ is a Galois point.\nThe following criterion is presented. \n\n\\begin{theorem} \\label{main} \nLet $G_1$, $G_2$ and $G_3 \\subset {\\rm Aut}(X)$ be finite subgroups of order at least three, and let $P_1$, $P_2$ and $P_3$ be different points of $X$. \nThen, four conditions\n\\begin{itemize}\n\\item[(a)] $X\/{G_i} \\cong \\Bbb P^1$ for $i=1, 2, 3$, \n\\item[(b)] $G_i \\cap G_j=\\{1\\}$ for any $i, j$ with $i \\ne j$, \n\\item[(c)] there exists a divisor $D$ such that $D=P_i+\\sum_{\\sigma \\in G_i}\\sigma(P_j)$ for any $i, j$ with $i \\ne j$, and\n\\item[(d)] $\\dim \\Lambda \\le 2$, for the smallest sublinear system $\\Lambda$ of $|D|$ such that $D, P_i+\\sum_{\\sigma \\in G_i}\\sigma(P_i) \\in \\Lambda$ for $i=1, 2, 3$\n\\end{itemize}\nare satisfied, if and only if there exists a birational embedding $\\varphi: X \\rightarrow \\mathbb P^2$ of degree $|G_1|+1$ such that $\\varphi(P_1)$, $\\varphi(P_2)$ and $\\varphi(P_3)$ are three collinear inner Galois points for $\\varphi(X)$ and $G_{\\varphi(P_i)}=G_i$ for $i=1, 2, 3$. \n\\end{theorem}\n\n\n\\begin{theorem} \\label{main-outer}\nLet $G_1$, $G_2$ and $G_3 \\subset {\\rm Aut}(X)$ be finite subgroups, and let $Q$ be a point of $X$. 
\nThen, four conditions\n\\begin{itemize}\n\\item[(a)] $X\/{G_i} \\cong \\Bbb P^1$ for $i=1, 2, 3$, \n\\item[(b)] $G_i \\cap G_j=\\{1\\}$ for any $i, j$ with $i \\ne j$, \n\\item[(c')] there exists a divisor $D$ such that $D=\\sum_{\\sigma \\in G_i}\\sigma(Q)$ for $i=1, 2, 3$, and\n\\item[(d')] $\\dim \\Lambda \\le 2$, for the smallest sublinear system $\\Lambda$ of $|D|$ such that $\\Lambda_1 \\cup \\Lambda_2 \\cup \\Lambda_3 \\subset \\Lambda$, where $\\Lambda_i$ is the base-point-free linear system induced by the covering map $X \\rightarrow X\/G_i \\cong \\mathbb{P}^1$ for $i=1, 2, 3$\n\\end{itemize}\nare satisfied, if and only if there exists a birational embedding $\\varphi: X \\rightarrow \\mathbb P^2$ of degree $|G_1|$ and three collinear outer Galois points $P_1, P_2$ and $P_3$ exist for $\\varphi(X)$ such that $G_{P_i}=G_i$ for $i=1, 2, 3$, and $\\overline{P_1P_2} \\ni \\varphi(Q)$, where $\\overline{P_1P_2}$ is the line passing through $P_1$ and $P_2$. \n\\end{theorem} \n\n\n\nThe uniqueness of the birational embedding constructed in \\cite{fukasawa2} is also proved. \n\n\\begin{proposition} \\label{uniqueness} \nAssume that the orders of groups $G_1$ and $G_2$ in Facts \\ref{criterion1} and \\ref{criterion2} are at least three. \nThen, a morphism $\\varphi$ described in Fact \\ref{criterion1} (resp. in Fact \\ref{criterion2}) is uniquely determined by a $4$-tuple $(G_1, G_2, P_1, P_2)$ (resp. by a $3$-tuple $(G_1, G_2, Q)$), up to a projective equivalence. \n\\end{proposition} \n\nUsing (the proof of) this Proposition, the following criterion for the extendability of an automorphism $\\sigma \\in G_P$ for an inner Galois point $P$ is presented. \n\n\\begin{proposition} \\label{extendable} \nLet $\\deg \\varphi(X) \\ge 4$, let $\\varphi(P_1)$ and $\\varphi(P_2) \\in \\varphi(X) \\subset \\mathbb{P}^2$ be different inner Galois points, and let $\\sigma \\in G_{\\varphi(P_1)}$ satisfy $P_3=\\sigma(P_2)$. 
\nThen, there exists a linear transformation $\\tilde{\\sigma}$ of $\\mathbb{P}^2$ such that $\\varphi^{-1}\\circ\\tilde{\\sigma}\\circ\\varphi=\\sigma$, if and only if three conditions\n\\begin{itemize}\n\\item[(a)] $\\sigma(P_1)=P_1$, \n\\item[(b)] $\\varphi(P_3)$ is an inner Galois point, and \n\\item[(c)] $\\sigma^*(P_3+\\sum_{\\gamma \\in G_{\\varphi(P_3)}}\\gamma(P_3))=P_2+\\sum_{\\tau \\in G_{\\varphi(P_2)}}\\tau(P_2)$\n\\end{itemize}\nare satisfied. \n\\end{proposition}\n\n\\begin{corollary} \\label{total flexes}\nLet $\\varphi(P_1), \\varphi(P_2)$ and $\\varphi(P_3)$ be different inner Galois points, and let $\\sigma \\in G_{\\varphi(P_1)}$ satisfy $\\sigma(P_2)=P_3$. \nIf $\\varphi(P_1)$, $\\varphi(P_2)$ and $\\varphi(P_3)$ are total inflection points, then there exists a linear transformation $\\tilde{\\sigma}$ of $\\mathbb{P}^2$ such that $\\varphi^{-1}\\circ\\tilde{\\sigma}\\circ\\varphi=\\sigma$. \n\\end{corollary}\n\n\\section{Preliminaries} \n\nWe recall the criterion presented in \\cite{fukasawa2} for two Galois points. \n\n\\begin{fact} \\label{criterion1} \nLet $G_1$ and $G_2$ be finite subgroups of ${\\rm Aut}(X)$ and let $P_1$ and $P_2$ be different points of $X$.\nThen, three conditions\n\\begin{itemize}\n\\item[(a)] $X\/{G_i} \\cong \\Bbb P^1$ for $i=1, 2$, \n\\item[(b)] $G_1 \\cap G_2=\\{1\\}$, and\n\\item[(c)] $P_1+\\sum_{\\sigma \\in G_1} \\sigma (P_2)=P_2+\\sum_{\\tau \\in G_2} \\tau (P_1) $\n\\end{itemize}\nare satisfied, if and only if there exists a birational embedding $\\varphi: X \\rightarrow \\mathbb P^2$ of degree $|G_1|+1$ such that $\\varphi(P_1)$ and $\\varphi(P_2)$ are different inner Galois points for $\\varphi(X)$ and $G_{\\varphi(P_i)}=G_i$ for $i=1, 2$. 
\n\\end{fact}\n\n\\begin{fact} \\label{criterion2} \nLet $G_1$ and $G_2$ be finite subgroups of ${\\rm Aut}(X)$ and let $Q$ be a point of $X$.\nThen, three conditions\n\\begin{itemize}\n\\item[(a)] $X\/{G_i} \\cong \\Bbb P^1$ for $i=1, 2$, \n\\item[(b)] $G_1 \\cap G_2=\\{1\\}$, and\n\\item[(c')] $\\sum_{\\sigma \\in G_1} \\sigma (Q)=\\sum_{\\tau \\in G_2} \\tau (Q) $\n\\end{itemize}\nare satisfied, if and only if there exists a birational embedding $\\varphi: X \\rightarrow \\mathbb P^2$ of degree $|G_1|$ and two outer Galois points $P_1$ and $P_2$ exist for $\\varphi(X)$ such that $G_{P_i}=G_i$ for $i=1, 2$, and $\\overline{P_1P_2} \\ni \\varphi(Q)$. \n\\end{fact}\n\nAccording to \\cite[Lemma 2.5]{fukasawa1}, the following holds. \n\n\\begin{fact} \\label{inner-lemma}\nAssume that $\\deg \\varphi(X) \\ge 4$, and points $\\varphi(P_1)$ and $\\varphi(P_2)$ are distinct inner Galois points for $\\varphi(X)$. \nThen, the line $\\overline{\\varphi(P_1)\\varphi(P_2)}$ is different from the tangent line at $\\varphi(P_1)$. \nIn particular, $\\sigma(P_1) \\ne P_2$ for each automorphism $\\sigma \\in G_{\\varphi(P_1)}$. \n\\end{fact}\n\\section{Proof of Theorems \\ref{main} and \\ref{main-outer}}\n\n\n\\begin{proof}[Proof of Theorem \\ref{main}]\nWe consider the if-part. \nIt follows from conditions (a) and (b) in Fact \\ref{criterion1} that conditions (a) and (b) are satisfied. \nBy Fact \\ref{criterion1}(c), since $\\varphi(P_1), \\varphi(P_2)$ and $\\varphi(P_3)$ are collinear Galois points, condition (c) is satisfied. \nLet $\\Lambda' \\subset |D|$ be the (base-point-free) linear system induced by $\\varphi$. \nSince $\\varphi(P_i)$ is inner Galois, $P_i+\\sum_{\\sigma \\in G_i}\\sigma(P_i) \\in \\Lambda'$, for $i=1, 2, 3$. \nTherefore, $\\dim \\Lambda \\le 2$. \nCondition (d) is satisfied. \n\nWe consider the only-if part. 
\nBy conditions (a), (b) and (c) and Fact \\ref{criterion1}, for each $i, j$ with $i \\ne j$, there exists a birational embedding $\\varphi_{ij}: X \\rightarrow \\mathbb{P}^2$ such that $\\varphi_{ij}(P_i)$ and $\\varphi_{ij}(P_j)$ are inner Galois points for $\\varphi_{ij}(X)$, $G_{\\varphi_{ij}(P_i)}=G_i$ and $G_{\\varphi_{ij}(P_j)}=G_j$. \nIt follows from Fact \\ref{inner-lemma} that \n$$ G_1P_1 \\ne G_1 P_2, \\mbox{ and } \\sum_{\\sigma \\in G_1}\\sigma(P_1) \\ne \\sum_{\\sigma \\in G_1} \\sigma(P_2). $$\nThen, by condition (a), there exists a function $f \\in k(X) \\setminus k$ such that \n$$ k(X)^{G_1}=k(f), \\ (f)=\\sum_{\\sigma \\in G_1}\\sigma(P_1)-\\sum_{\\sigma \\in G_1}\\sigma(P_2)$$\n(see also \\cite[III.7.1, III.7.2, III.8.2]{stichtenoth}). \nNote that, by condition (c), $(f)_{\\infty}=D-P_1$. \nSimilarly, there exist $g, h \\in k(X) \\setminus k$ such that \n$$ k(X)^{G_2}=k(g), \\ (g)=\\sum_{\\tau \\in G_2}\\tau(P_2)-(D-P_2) $$\nand \n$$ k(X)^{G_3}=k(h), \\ (h)=\\sum_{\\gamma \\in G_3}\\gamma(P_3)-(D-P_3).$$ \nThen, $\\varphi_{12}$ is represented by $(f:g:1)$ (see \\cite[Proofs of Proposition 1 and of Theorem 1]{fukasawa2}). \nLet $\\Lambda \\subset |D|$ be as in condition (d), and let $\\Lambda' \\subset |D|$ be the sublinear system corresponding to $\\langle f, g, 1 \\rangle$.\nSince $D, (f)+D, (g)+D \\in \\Lambda$, it follows that $\\Lambda' \\subset \\Lambda$. \nBy condition (d), $\\Lambda'=\\Lambda$. \nThis implies that $P_3+\\sum_{\\gamma \\in G_3}\\gamma(P_3) \\in \\Lambda'$. \nTherefore, $h \\in \\langle f, g, 1\\rangle$. \nSince the covering map $X \\rightarrow X\/G_3$ is represented by $\\langle h, 1 \\rangle$, this covering map coincides with the projection from some smooth point of $\\varphi_{12}(X)$. \nSuch a center of projection coincides with $\\varphi_{12}(P_3)$, since the center is determined by ${\\rm supp}(D) \\cap {\\rm supp}((h)+D)$. \nThis implies that $\\varphi_{12}(P_3)$ is an inner Galois point. 
\nBy condition (c), points $\\varphi_{12}(P_1)$, $\\varphi_{12}(P_2)$ and $\\varphi_{12}(P_3)$ are collinear. \n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{main-outer}]\nWe consider the if-part. \nIt follows from conditions (a) and (b) in Fact \\ref{criterion2} that conditions (a) and (b) are satisfied. \nBy Fact \\ref{criterion2}(c'), since $P_1, P_2$ and $P_3$ are collinear outer Galois points, condition (c') is satisfied. \nLet $\\Lambda' \\subset |D|$ be the (base-point-free) linear system induced by $\\varphi$. \nSince $P_i$ is outer Galois, the linear system corresponding to $X \\rightarrow X\/G_i \\cong \\mathbb{P}^1$ is contained in $\\Lambda'$, for $i=1, 2, 3$. \nTherefore, $\\dim \\Lambda \\le 2$. \nCondition (d') is satisfied. \n\nWe consider the only-if part. \nBy condition (a), there exists a function $f \\in k(X) \\setminus k$ such that \n$$ k(X)^{G_1}=k(f), \\ (f)_{\\infty}=\\sum_{\\sigma \\in G_1}\\sigma(Q)$$\n(see also \\cite[III.7.1, III.7.2, III.8.2]{stichtenoth}). \nNote that, by condition (c'), $(f)_{\\infty}=D$. \nThe sublinear system corresponding to $\\langle 1, f \\rangle \\subset \\mathcal{L}(D)$ coincides with $\\Lambda_1 \\subset |D|$ as in condition (d'). \nSimilarly, there exist $g, h \\in k(X) \\setminus k$ such that \n$$ k(X)^{G_2}=k(g), \\ k(X)^{G_3}=k(h), \\mbox{ and } \\ (g)_{\\infty}=(h)_{\\infty}=D. $$\nFurthermore, the subspaces $\\langle 1, g \\rangle$ and $\\langle 1, h \\rangle$ correspond to the linear systems $\\Lambda_2$ and $\\Lambda_3$ as in condition (d'), respectively. Then, by conditions (b) and (c'), the morphism $\\varphi$ represented by $(f:g:1)$ is birational onto its image and outer Galois points $P_1$ and $P_2$ exist for $\\varphi(X)$ such that $G_{P_i}=G_i$ for $i=1, 2$ (see \\cite[Proofs of Proposition 1 and of Theorem 1]{fukasawa2}). \nLet $\\Lambda \\subset |D|$ be as in condition (d'), and let $\\Lambda' \\subset |D|$ be the sublinear system corresponding to $\\langle f, g, 1 \\rangle$. 
\nSince $\\Lambda_1, \\Lambda_2 \\subset \\Lambda$, it follows that $\\Lambda' \\subset \\Lambda$. \nBy condition (d'), $\\Lambda'=\\Lambda$. \nThis implies that $\\Lambda_3 \\subset \\Lambda'$. \nTherefore, $h \\in \\langle f, g, 1\\rangle$. \nSince the covering map $X \\rightarrow X\/G_3$ is represented by $\\langle h, 1 \\rangle$, this covering map coincides with the projection from some outer point $P_3 \\in \\mathbb{P}^2 \\setminus \\varphi(X)$. \nThen, $P_3$ is an outer Galois point. \nBy condition (c'), points $P_1, P_2$ and $P_3$ are collinear. \n\\end{proof}\n\n\n\n\\section{Proof of Propositions \\ref{uniqueness} and \\ref{extendable}}\n\n\\begin{proof}[Proof of Proposition \\ref{uniqueness}]\nWe consider inner Galois points. \nAssume that condition (c) in Fact \\ref{criterion1} is satisfied. \nLet $D:=P_1+\\sum_{\\sigma \\in G_1}\\sigma(P_2)=P_2+\\sum_{\\tau \\in G_2}\\tau(P_1)$. \nNote that, by Fact \\ref{inner-lemma}, $P_1+\\sum_{\\sigma \\in G_1}\\sigma(P_1) \\ne D$ and $P_1 \\not\\in {\\rm supp}(P_2+\\sum_{\\tau \\in G_2}\\tau(P_2))$. \nThe uniqueness of the linear system corresponding to a birational embedding follows, since a (base-point-free) linear system $\\Lambda \\subset |D|$ of dimension two such that \n$$D, \\ P_1+\\sum_{\\sigma \\in G_1}\\sigma(P_1), \\ P_2+\\sum_{\\tau \\in G_2}\\tau(P_2) \\in \\Lambda$$\nis uniquely determined. \n\nWe consider outer Galois points. \nAssume that condition (c') in Fact \\ref{criterion2} is satisfied. \nLet $D:=\\sum_{\\sigma \\in G_1}\\sigma(Q)=\\sum_{\\tau \\in G_2}\\tau(Q)$, and let $\\Lambda_i$ be the (base-point-free) linear system corresponding to the covering map $\\pi_i: X \\rightarrow X\/G_i \\cong \\mathbb{P}^1$ for $i=1, 2$. \nThen, $D \\in \\Lambda_i$ and $\\Lambda_i \\subset |D|$. 
\nIf $\\pi_1$ and $\\pi_2$ are realized as the projections from different outer Galois points for a birational embedding $\\varphi: X \\rightarrow \\mathbb{P}^2$, then $\\varphi$ is determined by a sublinear system $\\Lambda \\subset |D|$ such that $\\dim \\Lambda=2$ and $\\Lambda_1 \\cup \\Lambda_2 \\subset \\Lambda$, up to projective equivalence. \nTherefore, the uniqueness follows. \n\\end{proof} \n\n\n\\begin{proof}[Proof of Proposition \\ref{extendable}] \nLet $C:=\\varphi(X)$. \nWe consider the only-if part. \nAssume that there exists a linear transformation $\\tilde{\\sigma}$ of $\\mathbb{P}^2$ such that $\\varphi^{-1}\\circ\\tilde{\\sigma}\\circ\\varphi=\\sigma$. \nFor a general line $\\ell \\ni \\varphi(P_1)$, $C \\cap \\ell$ contains at least two points (since $\\deg C \\ge 3$), and $\\tilde{\\sigma}((C \\cap \\ell)\\setminus \\{\\varphi(P_1)\\}) \\subset \\ell$. \nSince $\\tilde{\\sigma}$ is a linear transformation, it follows that $\\tilde{\\sigma}(\\ell)=\\ell$. \nThis implies that $\\tilde{\\sigma}(\\varphi(P_1))=\\varphi(P_1)$. \nCondition (a) is satisfied. \nSince $\\varphi(P_3)=\\varphi(\\sigma(P_2))=\\tilde{\\sigma}(\\varphi(P_2))$, condition (b) is satisfied. \nSince the divisor $P_2+\\sum_{\\tau \\in G_{\\varphi(P_2)}}\\tau(P_2)$ corresponds to the tangent line of $\\varphi(X)$ at $\\varphi(P_2)$, condition (c) is also satisfied. \n\nWe consider the if part. \nLet $\\Lambda$ be the linear system corresponding to the birational embedding $\\varphi: X \\rightarrow \\mathbb{P}^2$. \nAs in the proof of Proposition \\ref{uniqueness}, it follows from condition (b) that $\\Lambda$ is the smallest linear system containing the divisors \n$$D, \\ P_1+\\sum_{\\sigma \\in G_{\\varphi(P_1)}}\\sigma(P_1), \\ P_3+\\sum_{\\gamma \\in G_{\\varphi(P_3)}}\\gamma(P_3), $$ \nwhere $D:=P_1+\\sum_{\\sigma \\in G_{\\varphi(P_1)}}\\sigma(P_3)=P_3+\\sum_{\\gamma \\in G_{\\varphi(P_3)}}\\gamma(P_1)$. 
\nBy condition (a), the divisors $D$ and $P_1+\\sum_{\\sigma \\in G_{\\varphi(P_1)}}\\sigma(P_1)$ are invariant under the action of $\\sigma^*$. \nSince $P_2+\\sum_{\\tau \\in G_{\\varphi(P_2)}}\\tau(P_2) \\in \\Lambda$, by condition (c), it follows that $\\sigma^*\\Lambda=\\Lambda$. \n\\end{proof} \n\n\\begin{proof}[Proof of Corollary \\ref{total flexes}] \nWe prove that conditions (a), (b) and (c) in Proposition \\ref{extendable} are satisfied. \nSince $\\varphi(P_1)$ is a total inflection point, by \\cite[III.8.2]{stichtenoth}, condition (a) is satisfied. \nCondition (b) is satisfied by assumption. \nSince $\\varphi(P_3)$ is a total inflection point, it follows from \\cite[III.8.2]{stichtenoth} that \n$$ P_3+\\sum_{\\gamma \\in G_{\\varphi(P_3)}}\\gamma(P_3)=(|G_3|+1) P_3. $$\nTherefore, \n$$ \\sigma^*\\left(P_3+\\sum_{\\gamma \\in G_{\\varphi(P_3)}}\\gamma(P_3)\\right)=(|G_2|+1)P_2= P_2+\\sum_{\\tau \\in G_{\\varphi(P_2)}}\\tau(P_2). $$\nCondition (c) is satisfied. \n\\end{proof}\n\n\\begin{center} {\\bf Acknowledgements} \\end{center} \nThe author is grateful to Dr. Kazuki Higashine for helpful discussions. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}