\section*{Part I: Conceptual Framework}

\section{Introduction}
\label{Intro}

In this work we study the homogenization (asymptotic
limit as $\varepsilon \to 0$) of the anisotropic
Schr\"odinger equation in the following Cauchy problem
\begin{equation}
\label{jhjkhkjhkj765675233}
	\left\{
	\begin{aligned}
&	i\displaystyle\frac{\partial u_\varepsilon}{\partial t} - {\rm div} {\big(A( \Phi^{-1} {\big( \frac{x}{\varepsilon}, \omega \big)},\omega) \nabla u_\varepsilon \big)}
 + \frac{1}{\varepsilon^2} V( \Phi^{-1} {\big( \displaystyle\frac{x}{\varepsilon}, \omega \big)},\omega) \; u_\varepsilon
\\[5pt]
	&\hspace{100pt}	+ U( \Phi^{-1} {\big( \frac{x}{\varepsilon}, \omega \big)},\omega) \; u_\varepsilon = 0,
		\quad \text{in $\mathbb{R}^{n+1}_T \! \times \! \Omega$},
\\[3pt]
		& u_\varepsilon= u_\varepsilon^0, \quad \text{in $\mathbb{R}^n \! \times \! \Omega$},
	\end{aligned}
	\right.
\end{equation}
where $\mathbb{R}^{n+1}_T := (0,T) \times \mathbb{R}^n$, for any real number $T> 0$, $\Omega$ is a probability space,
and the unknown function $u_\varepsilon(t,x,\omega)$ is complex-valued.

\medskip
The coefficients in \eqref{jhjkhkjhkj765675233}, that is, the matrix-valued
function $A$ and the real-valued (potential) functions $V$, $U$,
are random perturbations of stationary functions accomplished by
stochastic diffeomorphisms $\Phi: \mathbb R^n \times \Omega \to \mathbb R^n$ (called stochastic deformations).
The stationarity property of random
functions will be precisely defined in Section \ref{628739yhf}, together with the
definition of stochastic deformations, which were introduced by
X. Blanc, C. Le Bris, P.-L. 
Lions (see \cite{BlancLeBrisLions1,BlancLeBrisLions2}).
In those papers they consider the homogenization
problem of an elliptic operator whose coefficients are periodic or stationary functions perturbed by
stochastic deformations.

\medskip
In particular, we assume that $A = (A_{k \ell})$, $V$ and $U$ are measurable and
bounded functions, i.e. for $k, \ell= 1,\ldots,n$
\begin{equation}
\label{ASSUM1}
 A_{k \ell}, \; V, \; U \in L^\infty(\mathbb{R}^n \times \Omega).
\end{equation}
Moreover, the matrix $A$ is symmetric and
uniformly positive definite, that is, there exists $a_0> 0$, such that, for a.a.
$(y, \omega) \in \mathbb{R}^n \times \Omega$, and each $\xi \in \mathbb R^n$
\begin{equation}
\label{ASSUM2}
	\sum_{k,\ell=1}^n A_{k\ell}(y,\omega)\, \xi_k \, \xi_\ell \geqslant a_0 {\vert \xi \vert}^2.
\end{equation}

\medskip
This paper is the second part of the project initiated in
T. Andrade, W. Neves, J. Silva \cite{AndradeNevesSilva}
(homogenization of Liouville equations
beyond the stationary ergodic setting),
concerning the study of moving electrons in non-crystalline matter,
which justifies the form considered for the coefficients
in \eqref{jhjkhkjhkj765675233}.
We recall that crystalline materials, also called perfect materials, are described
by periodic functions. Thus any homogenization result for
Schr\"odinger equations with periodic coefficients is restricted to
crystalline matter.
Moreover, perfect materials are rare in Nature; non-crystalline materials
are far more abundant than crystalline ones.
For instance, there exists a huge class called quasi-perfect materials
(see Section \ref{6775765ff0090sds}, also \cite{AndradeNevesSilva}), which are closer to
perfect ones. 
Indeed, the concept of stochastic deformations
is very suitable to describe interstitial defects in
materials science
(see Cances, Le Bris \cite{CancesLeBris},
and Myers \cite{Myers}).

\medskip
One remarks that the homogenization of the Schr\"odinger equation
in \eqref{jhjkhkjhkj765675233},
when the stochastic deformation $\Phi(y,\omega)$
is the identity mapping and the coefficients are periodic,
was studied by Allaire, Piatnitski \cite{AllairePiatnitski}.
Notably, that paper presents a discussion of the
differences between the scaling considered in \eqref{jhjkhkjhkj765675233}
and the one called the semi-classical limit.
We are not going to rephrase
this point here, and address the reader to Chapter 4 in \cite{BensoussanLionsPapanicolaou}
for more general considerations about it. It should be mentioned that, to the best of
our knowledge, the present work is the first to study the homogenization
of Schr\"odinger equations beyond the periodic setting by applying two-scale
limits with the wave function expanded in the Bloch basis. Therefore, we have extended
the Bloch Theory, which until now was restricted to periodic potentials.

\medskip
Last but not least, one observes that the initial data $u_\varepsilon^0$
shall be considered well-prepared, see equation \eqref{WellPreparedness}.
This assumption is fundamental for the abstract homogenization result
established in Theorem \ref{876427463tggfdhgdfgkkjjlmk}, where the
limit function obtained from $u_\varepsilon$
satisfies a simpler Schr\"odinger equation, called the effective mass equation,
with effective constant coefficients, namely the matrix $A^*$ and the potential $V^*$.
This homogenization procedure is well known in solid state physics through the
Effective Mass Theorems, see Section \ref{HomoSchEqu}. 
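Schematically, and with normalization constants suppressed (our shorthand here, not the precise statement proved later), the effective mass equation satisfied by the limit profile $v$ has the constant-coefficient form

```latex
i\,\frac{\partial v}{\partial t} - {\rm div}\big(A^* \nabla v\big) + V^* v = 0,
\quad \text{in } \mathbb{R}^{n+1}_T,
```

where $A^*$ plays the role of an inverse effective mass tensor and, as recalled in Remark \ref{REMCOSTCOEFF}, need not be coercive.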


\medskip
Finally, we highlight Section \ref{6775765ff0090sds}, which is
devoted to the homogenization of the Schr\"odinger equation for quasi-perfect materials,
and is also an important part of this paper.
Indeed, a very special case occurs in situations where
the amount of randomness is small, more specifically where the disorder in the
material is limited. In particular, this section is interesting
for numerical applications, where specific computationally efficient techniques,
already designed to deal with the homogenization of the Schr\"odinger equation in the periodic setting,
can be employed to treat the case of quasi-perfect materials.

\subsection{Contextualization}

Let us briefly recall that the homogenization problem for \eqref{jhjkhkjhkj765675233} has been treated for the periodic case
($A_{\rm per}(y)$, $V_{\rm per}(y)$, $U_{\rm per}(y)$),
and $\Phi(y,\omega) = y$ by several authors. Besides the paper by G. Allaire, A. Piatnitski \cite{AllairePiatnitski} already mentioned,
we address the following papers for the case $A_{\rm per}= I_{n \times n}$, i.e. the isotropic
Schr\"odinger equation in \eqref{jhjkhkjhkj765675233}: G. Allaire, M. Vanninathan \cite{AllaireVanninathan}, L. Barletti, N. Ben Abdallah \cite{BarlettiBenAbdallah},
V. Chabu, C. Fermanian-Kammerer, F. Marcià \cite{ChabuFermanianMarcia}, and we observe that this list is by no means exhaustive.
In \cite{AllaireVanninathan}, the authors study a semiconductor model excited by an external potential $U_{\rm per}(t,x)$, which depends on
the time $t$ and the macroscopic variable $x$.
In \cite{BarlettiBenAbdallah} the authors
treat the homogenization problem when the external
potential $U_{\rm per}(x,y)$ also depends on the macroscopic variable $x$.
Finally, in \cite{ChabuFermanianMarcia}
an external potential $U_{\rm per}(t,x)$ was considered,
which models the effects of impurities on otherwise perfect matter. 


\medskip
All the references cited above treat the homogenization problem
for \eqref{jhjkhkjhkj765675233} by studying the spectrum of the associated
Bloch spectral cell equation, that is, for each $ \theta \in \mathbb{R}^n$,
find the eigenvalue-eigenfunction pair $(\lambda,\psi)$, satisfying
\begin{equation}
\label{8756trg}
\left\{
\begin{aligned}
L_{\rm per}(\theta) {\big[ \psi \big]}&= \lambda \, \psi,
\quad \text{in $[0,1)^n$},
\\[5pt]
\psi(y)&\not= 0, \quad \text{periodic function},
\end{aligned}
\right.
\end{equation}
where $L_{\rm per}(\theta)$ is the Hamiltonian given by
$$
L_{\rm per}(\theta){\big[ f \big]}= -{\big( {\rm div}_{\! y} + 2i\pi \theta \big)} {\big[ A_{\rm per}(y) {( \nabla_{\!\! y}
+ 2i \pi \theta)} f \big]} + V_{\rm per}(y) f.
$$
The above eigenvalue problem is precisely stated (in the more general context studied in this paper) in
Section \ref{877853467yd56rtfe5rtfgeds76ytged}.
Here, concerning mathematical solutions to \eqref{8756trg} in the periodic setting, we address the reader to
C. H. Wilcox \cite{Wilcox} (see in particular Section 2 therein: a discussion of related literature). Then,
once this eigenvalue problem is solved, the goal is to pass to the limit as $\varepsilon \to 0$. One remarks that
there does not exist a uniform estimate in $H^1(\mathbb R^n)$ for the family of solutions $\{u_\varepsilon\}$ of \eqref{jhjkhkjhkj765675233},
due to the scale $\varepsilon^{-2}$ multiplying the
internal potential $V_{\rm per}(y)$. To accomplish the desired asymptotic limit, under this lack of compactness,
a nice strategy is to use two-scale convergence; see for instance the proof of Theorem 3.2 in \cite{AllairePiatnitski}. 
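To make the spectral problem \eqref{8756trg} concrete, the following minimal numerical sketch (our illustration, not taken from the cited references) computes the Bloch bands of the one-dimensional isotropic operator $L_{\rm per}(\theta)$ with $A_{\rm per}= 1$ and the hypothetical potential $V_{\rm per}(y)= 2q\cos(2\pi y)$, discretized in the plane-wave basis $e^{2i\pi m y}$:

```python
import numpy as np

def bloch_bands(theta, M=32, q=1.0):
    # Plane-wave (Fourier) discretization of
    #   L_per(theta) f = -(d/dy + 2*i*pi*theta)^2 f + 2*q*cos(2*pi*y) f
    # in the basis e^{2*i*pi*m*y}, m = -M..M.  The kinetic part is diagonal
    # with entries 4*pi^2*(m+theta)^2; the potential couples neighboring
    # Fourier modes with weight q (its only Fourier coefficients are at +-1).
    m = np.arange(-M, M + 1)
    H = np.diag((2.0 * np.pi * (m + theta)) ** 2)
    idx = np.arange(2 * M)
    H[idx, idx + 1] += q
    H[idx + 1, idx] += q
    return np.linalg.eigvalsh(H)   # sorted real eigenvalues: the bands at theta

b0 = bloch_bands(0.0)       # center of the Brillouin zone
b_half = bloch_bands(0.5)   # edge of the Brillouin zone
print(b0[:2], b_half[:2])
```

For this potential the lowest band increases from $\theta=0$ to the zone edge $\theta=1/2$, and a spectral gap opens there, which is the qualitative picture used throughout the periodic theory.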


\medskip
Let us now focus on the stochastic setting proposed in this paper, more precisely when the coefficients of
the Schr\"odinger equation in \eqref{jhjkhkjhkj765675233} are the composition of
stationary functions with stochastic deformations. Hence we have the following
natural questions:

\medskip
$(Q.1)$ Is it possible to obtain an analogous Bloch spectral cell equation in this stochastic setting?

\medskip
$(Q.2)$ Can this new stochastic spectral problem be solved in such a way that the eigenvalues do not depend
on $\omega \in \Omega$ (see Remark \ref{GROUPNECE})?

\medskip
$(Q.3)$ Is it feasible to adapt two-scale convergence to this new proposed stochastic setting?
We remark that the approaches of stochastic two-scale convergence developed by
Bourgeat, Mikelic, Wright \cite{BourgeatMikelicWright}, and by
Zhikov, Pyatnitskii \cite{ZhikovPyatnitskii}, do not
fit the present context
because of the presence of the stochastic deformation $\Phi$.

\medskip
The first question $(Q.1)$ is answered in Section
\ref{683926ruesszs}. Indeed, assuming that the solution of
equation \eqref{jhjkhkjhkj765675233} is given by a plane wave,
the stochastic Bloch spectral cell equation
\eqref{92347828454trfhfd4rfghjls}
is obtained by applying the
WKB asymptotic expansion method
(developed by Wentzel, Kramers, and Brillouin;
see G. Allaire \cite{AllaireArnoldDegondHou}).
More specifically, the Hamiltonian in \eqref{92347828454trfhfd4rfghjls}
is given by
$$
 L^\Phi(\theta)\big[ F \big]\! \! = -\big( {\rm div}_{\! z} + 2i\pi \theta \big){\left[ A{( \Phi^{-1}(z,\omega),\omega)} {\big( \nabla_{\!\! z} + 2i\pi\theta \big)} F \right]}
 + V( \Phi^{-1}(z,\omega),\omega) F,
$$
for each $F(z,\omega)= f\left( \Phi^{-1}(z,\omega),\omega \right)$, where $f(y,\omega)$ is a stationary function. 
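For orientation, the WKB ansatz behind this derivation can be sketched as follows (one standard two-scale form, stated here only schematically; the precise normalizations and sign conventions are those fixed in Section \ref{683926ruesszs}):

```latex
u_\varepsilon(t,x) \,\sim\,
e^{\, i \lambda t/\varepsilon^{2}} \, e^{\, 2i\pi\theta \cdot x/\varepsilon}
\sum_{j \geqslant 0} \varepsilon^{\, j} \,
v_j\Big(t, x, \Phi^{-1}\big(\tfrac{x}{\varepsilon},\omega\big), \omega\Big).
```

Inserting this ansatz into \eqref{jhjkhkjhkj765675233} and matching the leading order $\varepsilon^{-2}$ forces the pair $(\lambda, v_0)$ to solve the cell problem driven by $L^\Phi(\theta)$.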


\medskip
To answer $(Q.2)$, we have to study the spectrum of the operator $L^\Phi(\theta)$, for each $\theta \in \mathbb R^n$ fixed.
The first idea is to follow the techniques applied in the periodic setting, that is, for the operator
$L_{\rm per}(\theta)$ in \eqref{8756trg}, where the fundamental tool is the compact embedding of
 $H^1_{ \rm per}([0,1)^n)$ in $L^2([0,1)^n)$.
However, since $\omega \in \Omega$ cannot be treated as a fixed parameter, we have to consider the more general theory of
Sobolev spaces on locally compact Abelian groups, which is developed in Section \ref{9634783yuhdj6ty}.
In fact, we have established in detail a Rellich--Kondrachov type theorem (see Theorem \ref{7864876874}),
which together
with the study of continuous dynamical systems on compact Abelian groups enables us to answer
this question positively, at least when $\Omega$ has some structure. The second strategy applied here to answer $(Q.2)$ is
Perturbation Theory, that is to say, taking advantage of the well-known spectrum of $L_{\rm per}(\theta)$.
To this end, we first consider that the coefficients of
the Schr\"odinger equation in \eqref{jhjkhkjhkj765675233} are the composition of the periodic
functions $A_{\rm per}$, $V_{\rm per}$ and $U_{\rm per}$ with a special case of stochastic deformations,
namely a stochastic perturbation of the identity (see Definition \ref{37285gdhddddddddddd}),
which is given by
$$
 \Phi_\eta(y,\omega) := y + \eta \, Z(y,\omega) + \mathrm{O}(\eta^2),
$$
where $Z$ is some stochastic deformation and $\eta \in (0,1)$. This concept was introduced
by X. Blanc, C. Le Bris, P.-L. Lions \cite{BlancLeBrisLions2}, and applied for the first time to
evolutionary equations in T. Andrade, W. Neves, J. Silva \cite{AndradeNevesSilva}. 
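As a quick illustration (ours, with a hypothetical smooth displacement field $Z$, and dropping the $\mathrm{O}(\eta^2)$ term), the following sketch checks numerically that a one-dimensional perturbation of the identity $y \mapsto y + \eta Z(y,\omega)$ remains strictly increasing, hence a diffeomorphism, for small $\eta$, while monotonicity may already fail when $\eta$ is large:

```python
import numpy as np

rng = np.random.default_rng(1)
omega = rng.uniform(0, 2 * np.pi)   # one sample of the randomness

def Z(y, omega):
    # hypothetical smooth, bounded displacement field standing in for the
    # first-order term of a stochastic deformation
    return np.sin(2 * np.pi * y + omega)

def phi_eta(y, eta, omega):
    # stochastic perturbation of the identity (O(eta^2) term dropped)
    return y + eta * Z(y, omega)

y = np.linspace(0.0, 10.0, 2001)
# Phi_eta'(y) = 1 + 2*pi*eta*cos(...): positive for eta < 1/(2*pi),
# but it changes sign once eta is large enough.
small = bool(np.all(np.diff(phi_eta(y, 0.05, omega)) > 0))  # invertible
large = bool(np.all(np.diff(phi_eta(y, 0.9, omega)) > 0))   # not monotone
print(small, large)   # -> True False
```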

Then, taking this special case
$\Phi_\eta$, the operator $L^{\Phi_\eta}(\theta)$ has the following expansion
in a neighborhood of $(0,\theta_0) \in \mathbb{R}^{n+1}$,
$$
 L^{\Phi_\eta}(\theta) = L_{\rm per}(\theta_0) + \sum_{{\vert \varrho \vert} = 1}^{3} ((\eta,\theta)-(0,\theta_0))^{\varrho}L_{\varrho} + \mathrm{O}(\eta^2),
$$
where $\varrho= (\varrho_1,\ldots,\varrho_n,\varrho_{n+1}) \in \mathbb{N}^{n+1}$, ${\vert \varrho \vert} = \sum_{k=1}^{n+1} \varrho_k$,
and $L_{\varrho}$ is a bounded operator, see Section \ref{6775765ff0090sds}.
From the above equation, it follows that the point spectrum
(i.e. the set of eigenvalues) of $L^{\Phi_\eta}(\theta)$
is not empty in a neighborhood of $(0,\theta_0)$, when
$\lambda_{\rm per}(\theta_0)$ is an isolated eigenvalue with finite multiplicity.
This last property is studied in detail in Section \ref{0239786gfhgdf},
see Theorem \ref{768746hughjg576}.

\medskip
The question $(Q.3)$ is answered positively in
Section \ref{pud63656bg254v2v5}; that is, in that section we establish
a two-scale convergence in a stochastic setting which goes beyond the classical stationary
ergodic setting. Indeed, the main difference from the earlier stochastic extensions of the periodic setting is
that the test functions used are random
perturbations of stationary functions, accomplished by
stochastic deformations. These compositions lie beyond the stationary class, so this
kind of test function lacks the stationarity property (see the introduction of \cite{AndradeNevesSilva}
for a deeper discussion of this subject). We introduce a compactification argument that preserves the ergodic
nature of the setting involved and allows us to overcome these difficulties.

	
\subsection{Summary of the main results}

In this section we summarize the main results of this paper. 

Since some of the theorems (cited below) are of independent interest,
we briefly describe the main issue of each one.

\medskip
First, Theorem \ref{Compacification} allows us to overcome the lack of topological structure of a given probability space,
reducing it to
a separable compact space whose topological basis is dictated by the coefficients of problem~\eqref{jhjkhkjhkj765675233}.

\smallskip
Then, Theorem \ref{TwoScale} uses the topological features provided by Theorem \ref{Compacification} in order to give a
two-scale convergence result in which the test functions are random perturbations of stationary functions accomplished by
stochastic diffeomorphisms. It is worth
mentioning that this result generalizes the corresponding one for the deterministic case in~\cite{DiazGayte} and for the stochastic case in~\cite{BourgeatMikelicWright}.

\smallskip
Theorem \ref{768746hughjg576} considers a sequence of bounded operators in a Hilbert space which defines a symmetric
operator via a power series in several complex variables. It states that,
if the zeroth-order coefficient operator of this series has isolated eigenvalues of finite multiplicity, then
the operator so defined inherits a similar point-spectrum structure.

\smallskip
Theorem \ref{876876876GG} establishes a necessary condition for the
Rellich--Kondrachov Theorem on compact Abelian groups to hold true: the dual group must be a countable set.

\smallskip
A complete characterization of the Rellich--Kondrachov Theorem on compact Abelian groups
is given by Theorem \ref{7864876874}.
Moreover, as a byproduct of this characterization, we provide
a proof of the Rellich--Kondrachov Theorem in a precise context.

\smallskip
Theorem \ref{876427463tggfdhgdfgkkjjlmk} is one of the main results of this paper. 
It is an abstract homogenization result for Schr\"odinger equations that
encompasses the corresponding one given by Allaire and Piatnitski~\cite{AllairePiatnitski} in the periodic context.

\smallskip
Theorem \ref{873627yuhfdd} shows how the periodic setting can be used to deal with the homogenization of equation~\eqref{jhjkhkjhkj765675233} for
materials in which the amount of randomness is small. This has important numerical implications.

\smallskip
Theorem \ref{THM511} reveals an interesting splitting property of the solution of the homogenized equation associated with~\eqref{jhjkhkjhkj765675233} in the
specific case of quasi-perfect materials.

\section{Preliminaries and Background}
\label{PrelmBackg}

This section introduces the basic theory which will be used throughout the paper.
To begin, we fix some notation and collect some preliminary results. Material which is well known or a direct extension
of existing work is given without proof; otherwise, proofs are presented.

\medskip
We denote by $\mathbb{G}$ the group $\mathbb{Z}^n$ (or $\mathbb{R}^n$), with $n \in \mathbb{N}$.
The set $[0,1)^n$
denotes the unit cube, which is also called the unit cell and will be used
as the reference period for periodic functions.
The symbol $\left\lfloor x \right\rfloor$ denotes the
unique element of $\mathbb{Z}^n$ such that $x - \left\lfloor x \right\rfloor \in [0,1)^n$.
Given a complex Hilbert space $H$, we denote by $\mathcal{B}(H)$
the Banach space of bounded linear operators from $H$ to $H$.

\medskip
Let $U \subset \mathbb R^{n}$ be an open set, $p \geqslant 1$, and $s \in \mathbb{R}$.
We denote by
$L^p(U)$ the set of (real or complex) $p$-summable functions
with respect to the Lebesgue measure (vector-valued ones should be understood
componentwise). 
Given a Lebesgue measurable set
$E \subset \mathbb R^n$,
$|E|$ denotes its $n$-dimensional Lebesgue measure.
Moreover, we will use the standard notations for the
Sobolev spaces $W^{s,p}(U)$ and $H^{s}(U)\equiv W^{s,2}(U)$.

\subsection{Anisotropic Schr\"odinger equations}
\label{SchEq}

The aim of this section is to present the well-posedness of the Schr\"odinger equation,
together with some properties of its solutions.
Most of the material can be found in
Cazenave, Haraux \cite{CazenaveHaraux}.

\medskip
First, let us consider the following Cauchy problem, which is driven by a linear anisotropic
Schr\"odinger equation, that is
\begin{equation}
\label{87644343}
 \left\{
 \begin{aligned}
 &i \; \partial_t u(t,x) - {\rm div} \big(A(x) \nabla u(t,x) \big)
 + V(x) \, u(t,x) = 0 \quad \text{in $\mathbb{R}^{n+1}_T$},
 \\[5pt]
 & u(0,x)=u_0(x) \quad \text{in $\mathbb{R}^n$},
 \end{aligned}
 \right.
\end{equation}
where the unknown $u(t,x)$ is a complex-valued function, and $u_0$
is a given initial datum. The coefficient $A(x)$ is a symmetric real $n \times n$ matrix-valued
function, and the potential $V(x)$ is a real-valued function. We always assume that
\begin{equation}
\label{CONDITAV}
 A(x), V(x) \quad \text{are measurable bounded functions}.
\end{equation}
Recall that a matrix $A$ is called (uniformly) coercive when there exists $a_0> 0$
such that, for each $\xi \in \mathbb{R}^n$ and almost all $x \in \mathbb{R}^n$,
$A(x) \xi \cdot \xi \geqslant a_0 \vert \xi \vert^2$.

\medskip
The following definition tells us in which sense a complex function $u(t,x)$ is a mild solution to \eqref{87644343}.
\begin{definition}
\label{MildSol}
Let $A, V$ be coefficients satisfying \eqref{CONDITAV}. 

Given $u_0 \in H^1(\mathbb{R}^n)$, a function
$$
 u \in C( [0,T]; H^1(\mathbb{R}^n)) \cap C^1((0,T); H^{-1}(\mathbb{R}^n))
$$
is called a mild solution to the Cauchy problem \eqref{87644343}, when for each $t \in (0,T)$, it follows that
\begin{equation}
\label{DEFSOLSCH}
 i \partial_t u(t) -{\rm div} \big(A \nabla u(t) \big) + V u(t) = 0 \quad \text{in $H^{-1}(\mathbb{R}^n)$},
\end{equation}
and $u(0)= u_0$ in $H^1(\mathbb{R}^n)$.
\end{definition}	

Then, we state the following
\begin{proposition}
\label{PROPEUSCHEQ}
Let $A$ be a coercive matrix-valued function, $V$ a potential, and
$u_0 \in H^1(\mathbb{R}^n)$ a given initial datum.
Assume that $A, V$ satisfy \eqref{CONDITAV}. Then, there exists a unique
mild solution of the Cauchy problem \eqref{87644343}.
\end{proposition}

\begin{proof}
The proof follows by applying Lemma 4.1.5 and Corollary 4.1.2 in \cite{CazenaveHaraux}.
\end{proof}

\medskip
\begin{remark}
\label{REMCOSTCOEFF}
In the homogenization procedure for the Schr\"odinger equation, it is very important
to consider the case where the coefficients $A$ and $V$ in \eqref{87644343} are constant,
the matrix $A$ is not necessarily coercive, and the initial datum
$u_0 \in L^2(\mathbb{R}^n)$. Then, a function $u \in L^2(\mathbb{R}^{n+1}_T)$ is called a
weak solution to \eqref{87644343}, if it satisfies
$$
 i \partial_t u - {\rm tr}(A D^2 u) + V u = 0 \quad \text{in the sense of distributions}.
$$
Since $A, V$ are constant, we may apply the Fourier transform and
obtain the existence of a unique solution $u \in H^1((0,T); L^2(\mathbb{R}^{n}))$.
Therefore, after being redefined on a set of measure zero, the solution satisfies
$u \in C([0,T]; L^2(\mathbb{R}^{n}))$ and $u(0)= u_0$ in $L^2(\mathbb{R}^n)$.
\end{remark}

Now, let us recall the standard a priori estimates for the solutions of the
Cauchy problem \eqref{87644343}. 
First, under the conditions of Proposition \ref{PROPEUSCHEQ}, a function
$u \in C( [0,T]; H^1(\mathbb{R}^n)) \cap C^1((0,T); H^{-1}(\mathbb{R}^n))$, which is the mild solution of \eqref{87644343},
satisfies for each $t \in [0,T]$
\begin{equation}
\begin{aligned}
		&(i) \ \int_{\mathbb{R}^n} |u(t)|^2 dx = \int_{\mathbb{R}^n} |u_0|^2 dx,
		\\[5pt]
		&(ii) \ \int_{\mathbb{R}^n} |\nabla u(t)|^2 dx \leqslant C \ \big(\int_{\mathbb{R}^n} |\nabla u_0|^2 dx
		+ \int_{\mathbb{R}^n} |u_0|^2 dx \big),
\end{aligned}
\end{equation}
where $C= C(\|V\|_{L^\infty}, \|A\|_{L^\infty}, a_0)$ is a positive constant. Clearly, in the constant-coefficient case,
with $A$ possibly non-coercive and $u_0 \in L^2(\mathbb{R}^n)$, the weak solution
$u \in C([0,T]; L^2(\mathbb{R}^{n}))$ of \eqref{87644343} satisfies only item $(i)$ above. These estimates follow by a
density argument.

\subsection{Stochastic configuration}
\label{628739yhf}

Here we present the stochastic context, which will be used throughout the paper.
To begin, let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. 
For each random variable
$f$ in $L^1(\Omega; \mathbb P)$ ($L^1(\Omega)$ for short),
we denote its expectation by
$$
 \mathbb{E}[f]= \int_\Omega f(\omega) \ d\mathbb P(\omega).
$$

A mapping $\tau: \mathbb{G} \times \Omega \to \Omega$ is said to be an $n$-dimensional dynamical
system if:
\begin{enumerate}
\item[(i)](Group Property) $\tau(0,\cdot)=id_{\Omega}$ and $\tau(x+y,\omega)=\tau(x,\tau(y,\omega))$ for all $x,y \in \mathbb{G}$
and $\omega\in\Omega$.
\item[(ii)](Invariance) The mappings $\tau(x,\cdot):\Omega\to \Omega$ are $\mathbb P$-measure preserving, that is, for each $x \in \mathbb{G}$ and
every $E\in \mathcal{F}$, we have
$$
\tau(x,E)\in \mathcal{F},\qquad \mathbb P(\tau(x,E))=\mathbb P(E).
$$
\end{enumerate}
For simplicity, we shall use $\tau(k)\omega$ to denote $\tau(k,\omega)$. Moreover, it is usual to say that
$\tau(k)$ is a discrete (continuous) dynamical system if $k \in \mathbb Z^n$ ($k \in \mathbb R^n$), but we only stress
this when it is not obvious from the context.

\medskip
A measurable function $f$ on $\Omega$ is called $\tau$-invariant, if for each $k \in \mathbb{G}$
$$
 f(\tau(k) \omega)= f(\omega) \quad \text{for almost all $\omega \in \Omega$}.
$$
Hence a measurable set $E \in \mathcal{F}$ is $\tau$-invariant, if its characteristic function $\chi_E$ is $\tau$-invariant.
In fact, it is straightforward to show that a $\tau$-invariant set $E$ can be equivalently characterized by
$$
 \tau(k) E= E \quad \text{for each $k \in \mathbb{G}$}.
$$
Moreover, we say that the dynamical system $\tau$ is ergodic when
every $\tau$-invariant set $E$ has measure $\mathbb P(E)$ equal to either zero or one.
Equivalently, we may characterize an ergodic dynamical system
in terms of invariant functions. 
Indeed, a dynamical system is ergodic if
each $\tau$-invariant function is constant almost everywhere, that is to say
$$
 \Big( f(\tau(k) \omega)= f(\omega) \quad \text{for each $k \in \mathbb{G}$ and a.e. $\omega \in \Omega$} \Big)
 \Rightarrow \text{ $f(\cdot)= const.$ a.e.}.
$$

\medskip
\begin{example}
\label{NDT}
Let $\Omega= [0,1)^n$ be a sample space, $\mathcal{F}$ the appropriate $\sigma$-algebra on
$\Omega$, and $\mathbb P$ the probability measure, i.e. the Lebesgue measure restricted to $\Omega$.
Then, we consider the $n$-dimensional
dynamical system $\tau: \mathbb R^n \times \Omega \to \Omega$, defined by
$$
 \tau(x) \omega:= x + \omega - \left\lfloor x+\omega \right\rfloor.
$$
The group property for $\tau(x)$ follows from the properties of the greatest integer function,
and its invariance from the translation invariance of the Lebesgue measure.
\end{example}

\begin{example}
\label{EXTJING}
Let $(\Omega_0,\mathscr{F}_0,\mathbb{P}_0)$ be a probability space.
For $m \in \mathbb{N}$ fixed, we consider the set $S= \{0,1,2,\ldots,m\}$
and the real numbers
$$
 \text{$p_0, p_1, p_2, \ldots, p_m$ in $(0,1)$, such that $\sum_{\ell= 0}^m p_\ell=1$}.
$$
If $ \{X_k:\Omega_0 \to S \}_{k\in\mathbb{Z}^n}$
is a family of random variables, then it induces a probability measure on the measurable space
$\big( S^{\mathbb{Z}^n}, \bigotimes_{k\in\mathbb{Z}^n}2^{S} \big)$. 
Indeed, we may define the probability
measure
$$
\mathbb{P}(E):= \mathbb{P}_0{\left\{ X \in E \right\}}, \;\; E \in \bigotimes_{k\in\mathbb{Z}^n}2^{S},
$$
where the mapping $X: \Omega_0 \to S^{\mathbb{Z}^n}$ is given by
$X(\omega_0)= (X_k(\omega_0))_{k\in\mathbb{Z}^n}$.

\medskip
Now, we denote for convenience $\Omega= S^{\mathbb{Z}^n}$
and $\mathscr{F}= \bigotimes_{k\in\mathbb{Z}^n}2^{S}$,
that is, $\mathscr{F}= \sigma(\mathscr{A})$,
where $\mathscr{A}$ is the algebra given by finite unions of sets (cylinders with finite base)
of the form
\begin{equation}
\label{356}
 \prod_{k \in \mathbb{Z}^n} E_k,
\end{equation}
where $E_k \in 2^S$ differs from $S$ only for a finite number of indices $k$. Additionally, we assume that
the family 	$\{X_k\}_{k\in\mathbb{Z}^n}$ is independent and, for each $k \in \mathbb{Z}^n$,
\begin{equation}
\label{243}
	\mathbb{P}_0{\{ X_k=0 \}}= p_0, \,\, \mathbb{P}_0{\{ X_k=1 \}}= p_1, \,\, \ldots, \,\, \mathbb{P}_0{\{ X_k=m \}}= p_m.
\end{equation}
Then, we may define an ergodic dynamical system $\tau: \mathbb{Z}^n \times \Omega \to \Omega$, by
$$
	{\left( \tau (\ell) \omega \right)}(k) := \omega(k + \ell), \quad \text{for any $k,\ell \in \mathbb{Z}^n$},
$$
where $\omega= (\omega(k))_{k \in \mathbb{Z}^n}$.

\medskip
$(i)$ The group property follows from the definition. 
Indeed,
for each $\omega \in \Omega$ and $\ell_1,\ell_2 \in \mathbb{Z}^n$, it follows that
\begin{equation*}
	{\big( \tau (\ell_1 + \ell_2) \omega \big)}(k) = \omega(k + \ell_1 + \ell_2) = {\big( \tau (\ell_1) \tau(\ell_2) \omega \big)}(k),
\end{equation*}
for any $k \in \mathbb{Z}^n$.

\medskip
$(ii)$ The mappings $\tau(\ell,\cdot):\Omega\to \Omega$ are $\mathbb P$-measure preserving.
First, we observe from \eqref{356} that, for all $\ell \in \mathbb{Z}^n$
\begin{equation*}
	\tau(\ell) \big( \prod_{k \in \mathbb{Z}^n} E_k \big)= \prod_{k \in \mathbb{Z}^n} E_{k+\ell}.
\end{equation*}
Therefore, for any $\ell \in \mathbb{Z}^n$
$$
\begin{aligned}
 \mathbb{P}{\Big( \tau(\ell) {\big( \prod_{k \in \mathbb{Z}^n} E_k \big)} \Big)}&=
 \mathbb{P}{\big( \prod_{k \in \mathbb{Z}^n} E_{k+\ell} \big)}
 = \mathbb{P}_0 {\big( \bigcap_{k\in\mathbb{Z}^n} \{X_k \in E_{k+\ell}\} \big)}
 \\[5pt]
 &= \prod_{k\in \mathbb{Z}^n} \mathbb{P}_0 {\left\{ X_k \in E_{k+\ell} \right\}}
 \\[5pt]
 &= \prod_{k\in \mathbb{Z}^n} \mathbb{P}_0 {\left\{ X_{k+\ell} \in E_{k+\ell} \right\}}
 = \prod_{k\in \mathbb{Z}^n} \mathbb{P}_0 {\left\{ X_k \in E_k \right\}},
\end{aligned}
$$
where we have used, in the second line, that the family of random variables is
independent and, in the third line, that the variables share the same distribution, equation \eqref{243}.
Thus, measure preservation holds for each element of the algebra
$\mathscr{A}$, and hence for each element of $\mathscr{F}$.

\medskip
$(iii)$ The ergodicity. 
Given the cylinders ${ \prod_{k \in \mathbb{Z}^n} E_k }$ and ${ \prod_{k \in \mathbb{Z}^n} F_k }$,
there exists $\ell_0 \in \mathbb{Z}^n$, such that
$$
 \mathbb{P}{\Big( \tau(\ell_0) {\big( \prod_{k \in \mathbb{Z}^n} E_k \big)} \cap {\big( \prod_{k \in \mathbb{Z}^n} F_k \big)} \Big)}
 = \mathbb{P}{\big( \prod_{k \in \mathbb{Z}^n} E_k \big)} \, \mathbb{P} {\big( \prod_{k \in \mathbb{Z}^n} F_k \big)}.
$$
Indeed, let us define
	\begin{equation*}
		e_0:= {\rm max}{\{ {\vert k \vert} \, ; \, k \in \mathbb{Z}^n, \, E_k \not= S \}}, \,
		\quad f_0:= {\rm max}{\{ {\vert k \vert} \, ; \, k \in \mathbb{Z}^n, \, F_k \not= S \}},
	\end{equation*}
and observe that, if $\ell_0 \in \mathbb{Z}^n$ satisfies ${ {\vert \ell_0 \vert} > e_0 + f_0 }$, then
\begin{equation*}
E_{k+\ell_0} \cap F_k = \left\{
\begin{array}{ll}
	F_k & \text{if} \; {\vert k \vert} \leqslant f_0,
	\\[5pt]
	E_{k+\ell_0} & \text{if} \; {\vert k \vert} > f_0,
\end{array}
\right.
\end{equation*}
since $F_k= S$ for ${\vert k \vert} > f_0$, while $E_{k+\ell_0}= S$ for ${\vert k \vert} \leqslant f_0$
(indeed, in this case ${\vert k+\ell_0 \vert} \geqslant {\vert \ell_0 \vert} - {\vert k \vert} > e_0$).
Therefore, we have
\begin{eqnarray*}
\mathbb{P}{\big( \tau(\ell_0) {\big( \prod_{k \in \mathbb{Z}^n} E_k \big)} \cap {\big( \prod_{k \in \mathbb{Z}^n} F_k \big)} \big)}
& = & \mathbb{P}{\big( {\big( \prod_{k \in \mathbb{Z}^n} E_{k+\ell_0} \big)} \cap {\big( \prod_{k \in \mathbb{Z}^n} F_k \big)} \big)}
\\[5pt]
		& = & \mathbb{P}{\big( \prod_{k \in \mathbb{Z}^n} {\big( E_{k+\ell_0} \cap F_k \big)} \big)}
\\[5pt]
		& = & \prod_{k \in \mathbb{Z}^n} \mathbb{P}_0 {\left\{ X_k \in E_{k+\ell_0} \cap F_k \right\}}
\\[5pt]
		& = & \mathbb{P}{\big( \prod_{k \in \mathbb{Z}^n} E_k \big)} \mathbb{P}{\big( \prod_{k \in \mathbb{Z}^n} F_k \big)}.
\end{eqnarray*}
The above property extends to finite unions of cylinders, that is to say, given $E_1,
E_2 \\in \\mathscr{A}$, \nthere exists $\\ell_0 \\in \\mathbb{Z}^n$, such that \n\\begin{equation*}\n \\mathbb{P}{\\left( \\tau(\\ell_0) {E}_1 \\cap {E}_2 \\right)}= \\mathbb{P}({E}_1) \\, \\mathbb{P}({E}_2).\n\\end{equation*}\n\t\n\\medskip\nNow, let $E \\in \\mathscr{F}$ be a $\\tau$-invariant set. \nFor each $\\varepsilon> 0$, there exists ${E}_0 \\in \\mathscr{A}$ such that,\n$\\mathbb{P} {\\left({E} \\Delta \\, {E}_0 \\right)} < \\varepsilon$. \nThen, since $E$ is $\\tau$-invariant we have for each \n$ \\ell \\in \\mathbb{Z}^n$\n\\begin{equation}\n\\label{684}\n\\begin{aligned}\n \\mathbb{P}{\\big( \\tau(\\ell) {E}_0 \\, \\Delta \\, {E}_0 \\big)}\n &\\leq \\mathbb{P}{\\big( \\tau(\\ell) {E}_0 \\, \\Delta \\, \\tau(\\ell) {E} \\big)} + \\mathbb{P}{\\big( \\tau(\\ell) {E} \\, \\Delta \\, {E} \\big)} \n + \\mathbb{P}{\\big({E} \\Delta \\, {E}_0 \\big)} \n\\\\[5pt]\n &= 2 \\, \\mathbb{P}{\\big({E} \\Delta \\, {E}_0 \\big)} \\leq 2 \\varepsilon.\n\\end{aligned}\n\\end{equation}\nOn the other hand, since ${E}_0 \\in \\mathscr{A}$, it follows that, for some $\\ell_0 \\in \\mathbb{Z}^n$, \n\\begin{equation*}\n\t\t\\mathbb{P}{\\left( \\tau(\\ell_0){E}_0 \\cap {E}_0^c \\right)}= \\mathbb{P}({E}_0)\\mathbb{P}({E}_0^c) \n\t\t\\quad \\text{and} \\quad\n\t\t\\mathbb{P}{\\left( \\tau(\\ell_0) {E}_0^c \\cap {E}_0 \\right)}= \\mathbb{P}({E}_0^c)\\mathbb{P}({E}_0),\n\\end{equation*}\nand thus\n\\begin{equation}\n\\label{ABA}\n\\begin{aligned}\n \\mathbb{P}{\\big( \\tau(\\ell_0) {E}_0 \\, \\Delta \\, {E}_0 \\big)}&= \\mathbb{P} {\\left( \\tau(\\ell_0) {E}_0 \\cap {E}_0^c \\right)} \n + \\mathbb{P} {\\left( \\tau(\\ell_0) {E}_0^c \\cap {E}_0 \\right)} \n \\\\[5pt]\n &= 2 \\mathbb{P}({E}_0) (1-\\mathbb{P}({E}_0)).\n\\end{aligned}\n\\end{equation}\nFrom \\eqref{684} and \\eqref{ABA}, it follows that, for each $\\varepsilon> 0$, \n\t\\begin{equation*}\n\t\t\\mathbb{P}({E}_0) (1-\\mathbb{P}({E}_0))< \\varepsilon.\n\t\\end{equation*}\nConsequently, observing that ${\\vert \\mathbb{P}({E}) - \\mathbb{P}({E}_0) \\vert} \\leqslant \\mathbb{P}({E} \\Delta \\, {E}_0) < \\varepsilon$, we obtain that $\\mathbb{P}({E})= 0$ or 
$\\mathbb{P}({E})= 1$.\n\\end{example}\n\n\n\\medskip\nNow, let $(\\Gamma, \\mathcal{G}, \\mathbb{Q})$ be a given probability space. We say that a\nmeasurable function $g: \\mathbb R^n \\times\\Gamma \\to \\mathbb R$ is stationary, if for any finite set \nconsisting of points $x_1,\\ldots,x_j\\in \\mathbb R^n$, and any $k \\in \\mathbb{G}$, the distribution of the random vector \n$$\n \\Big(g(x_1+k,\\cdot),\\cdots,g(x_j+k,\\cdot)\\Big)\n$$\nis independent of $k$. Further, subjecting the stationary function $g$ to some natural conditions\nit can be showed that, there exists other probability space $(\\Omega, \\mathcal{F}, \\mathbb P)$, a $n-$dimensional dynamical system \n$\\tau: \\mathbb{G} \\times \\Omega \\to \\Omega$ and a measurable function $f: \\mathbb R^n \\times \\Omega \\to \\mathbb R$ satisfying \n\\begin{itemize}\n\\item For all $x \\in \\mathbb R^n$, $k \\in \\mathbb{G}$ and $\\mathbb P-$almost every $\\omega \\in \\Omega$ \n\\begin{equation}\n\\label{Stationary}\n f(x+k, \\omega)= f(x, \\tau(k) \\omega).\n\\end{equation} \n\n\\item For each $x \\in \\mathbb R^n$ the random variables $g(x,\\cdot)$ and $f(x,\\cdot)$ have the same \nlaw. We recall that, the equality almost surely implies \nequality in law, but the converse is not true. \n\n\\end{itemize} \n\nOne remarks that, the set of stationary functions forms an algebra, and\nalso is stable by limit process. \nFor instance, the product of two\nstationaries functions is a stationary one, and the derivative of a \nstationary function is stationary. \nMoreover, the stationarity concept is the most general extension of the \nnotions of periodicity and almost periodicity for a function to have some \"self-averaging\" behaviour. \n\t\n\\begin{example}\nUnder the conditions of Example \\ref{NDT}, let $F \\! : \\Omega \\! \\to \\mathbb{C}$ be a\nmeasurable function. Then, the function $f \\! :\\! \\mathbb{R}^n \\! \\times \\Omega \\! \\to \\! 
\\mathbb{C}$, \ndefined by \n$$ \n f(x,\\omega):= F(\\tau(x)\\omega)\n$$ \nis a stationary function. In fact, considering continuous dynamical systems, \nany stationary function can be written in this way. Therefore, \neven if $f(\\cdot,\\omega)$ is just a measurable function, it makes sense to \nwrite, for instance, $f(0,\\cdot)$ due to the stationarity property. \n\n\\end{example}\n\n\\begin{example}\n\\label{8563249tyudh}\nUnder the conditions of Example \\ref{EXTJING}, we take $m= 1$, set $\\varphi_0 = 0$, and\nlet $\\varphi_1$ be a Lipschitz vector field, \nsuch that \n$\\varphi_1$ is periodic and ${\\rm supp} \\, \\varphi_1 \\subset (0,1)^n$. \nConsequently, the function \n\\begin{equation*}\n f(y,\\omega) := \\varphi_{\\omega({\\lfloor y \\rfloor})} (y), \\;\\; (y,\\omega) \\in \\mathbb{R}^n \\times \\! \\Omega\n\\end{equation*}\nsatisfies: ${ f(y,\\cdot) }$ is ${ \\mathscr{F} }$-measurable, ${ f(\\cdot,\\omega) }$ is continuous, and \nfor each $k \\in \\mathbb{Z}^n$, \n$$\n f(y+k,\\omega)= f(y,\\tau(k)\\omega).\n$$\nTherefore, ${ f }$ is a stationary function. 
\n\\end{example}\n\n\\medskip\nNow, we present the precise definition of a stochastic deformation, as presented in \\cite{AndradeNevesSilva}.\n\\begin{definition}\n\\label{GradPhiStationary}\nA mapping $\\Phi: \\mathbb R^n \\times \\Omega \\to \\mathbb R^n, (y,\\omega) \\mapsto z= \\Phi(y,\\omega)$, is called a stochastic deformation (for short $\\Phi_\\omega$), when it satisfies:\n\\begin{itemize}\n\\item[i)] For $\\mathbb{P}-$almost every $\\omega \\in \\Omega$, $\\Phi(\\cdot,\\omega)$ is a bi-Lipschitz diffeomorphism.\n\n\\item[ii)] There exists $\\nu> 0$, such that\n$$\n\\underset{\\omega \\in \\Omega, \\, y \\in \\mathbb R^n}{\\rm ess \\, inf} \n\\big({\\rm det} \\big(\\nabla \\Phi(y,\\omega)\\big)\\big) \\geq \\nu.\n$$\n\\item[iii)] There exists $M> 0$, such that\n$$\n \\underset{\\omega \\in \\Omega, \\, y \\in \\mathbb R^n}{\\rm ess \\, sup}\\big(|\\nabla \\Phi(y,\\omega)|\\big) \\leq M< \\infty.\n$$\n\\item[iv)]\nThe gradient of $\\Phi$, i.e. $\\nabla\\Phi(y,\\omega)$, is stationary in the sense~\\eqref{Stationary}.\n\\end{itemize}\n\\end{definition}\n\t\nHere, we first recall from \\cite{AndradeNevesSilva} a general example of \nstochastic deformations $\\Phi: \\mathbb R^n \\times \\Omega \\to \\mathbb R^n$ associated to\na dynamical system $T: \\mathbb R^n \\times \\Omega \\to \\Omega$, where the sample\nspace $\\Omega$ is arbitrary. Then, following the idea of Example \\ref{8563249tyudh},\nwe present an example of stochastic deformation $\\Phi: \\mathbb R^n \\times \\Omega \\to \\mathbb R^n$ associated to\na dynamical system $T: \\mathbb Z^n \\times \\Omega \\to \\Omega$, where $\\Omega$ is prescribed. \n\nFor the former, let $(\\Omega_i, \\mathcal{F}_i, \\mathbb P_i)_{i=1}^n$ be \nprobability spaces, and $f_i:\\Omega_i\\to\\mathbb R$ be measurable functions, such that \n$0< \\nu \\leqslant f_i \\leqslant M< \\infty$; from these data the general example is constructed, and \nwe refer to \\cite{AndradeNevesSilva} for the detailed construction. \n\n\\begin{example}\nUnder the conditions of Example \\ref{8563249tyudh}, we consider (for $\\eta> 0$)\nthe following map \n$$ \n \\Phi(y,\\omega):= y + \\eta \\, \\varphi_{\\omega({\\lfloor y \\rfloor})} (y), \\;\\; (y,\\omega) \\in \\mathbb{R}^n \\times \\Omega.\n$$\nThen, $\\nabla_{\\!\\! 
y} \\Phi(y,\\omega) = I_{\\mathbb{R}^{n\\times n}} + \\eta \\, \\nabla \\varphi_{\\omega({\\lfloor y \\rfloor})} (y)$,\nand for $\\eta$ sufficiently small all the conditions in the Definition \\ref{GradPhiStationary} are satisfied.\nThen, ${ \\Phi }$ is a stochastic deformation.\n\\end{example}\n\n\\bigskip\nGiven a stochastic deformation $\\Phi$, \nlet us consider the following spaces\n\\begin{equation}\n\t\\mathcal{L}_\\Phi := {\\big\\{F(z,\\omega)= f( \\Phi^{-1} (z, \\omega), \\omega); f \\in L^2_{\\rm loc}(\\mathbb{R}^n; L^2(\\Omega)) \\;\\; \\text{stationary} \\big\\}}\n\\end{equation}\nand\n\\begin{equation}\n\\label{SPACEHPHI}\n\t\t\\mathcal{H}_\\Phi := {\\big\\{F(z,\\omega)= f( \\Phi^{-1} (z, \\omega), \\omega); \\; f\\in H^1_{\\rm loc}(\\mathbb{R}^n; L^2(\\Omega)) \\;\\; \\text{stationary} \\big\\}}\n\\end{equation}\nwhich are Hilbert spaces, endowed respectively with the inner products \n$$\n\\begin{aligned}\n {\\langle F, G \\rangle}_{\\mathcal{L}_\\Phi}&:= \\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} \\!\\! F(z, \\omega) \\, \\overline{ G(z, \\omega) } \\, dz \\, d\\mathbb{P}(\\omega),\n\\\\[5pt]\n{\\langle F, G \\rangle}_{\\mathcal{H}_\\Phi}&:= \\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} \\!\\! F(z, \\omega) \\, \\overline{ G(z, \\omega) } \\, dz \\, d\\mathbb{P}(\\omega)\n\\\\\n&\\; \\quad +\\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} \\!\\! \\nabla_{\\!\\! z} F(z, \\omega) \\cdot \\overline{ \\nabla_{\\!\\! z} G(z, \\omega) } \\, dz \\, d\\mathbb{P}(\\omega). 
\n\\end{aligned}\n$$\n\\begin{remark}\n\\label{REMFPHI}\nUnder the above notations, \nwhen $\\Phi= Id$ we denote $\\mathcal{L}_\\Phi$ and $\\mathcal{H}_\\Phi$ by $\\mathcal{L}$ and $\\mathcal{H}$ respectively.\nMoreover, a function $F \\in \\clg{H}_\\Phi$ if, and only if, $F \\circ \\Phi \\in \\clg{H}$, and \nthere exist constants $C_1, C_2> 0$, such that \n$$\n C_1 \\|F \\circ \\Phi \\|_{\\clg{H}} \\leq \\|F \\|_{\\clg{H}_\\Phi} \\leq C_2 \\|F \\circ \\Phi \\|_{\\clg{H}}.\n$$\nAnalogously, $F \\in \\clg{L}_\\Phi$ if, and only if, $F \\circ \\Phi \\in \\clg{L}$, and \nthere exist constants $C_1, C_2> 0$, such that \n$$\n C_1 \\|F \\circ \\Phi \\|_{\\clg{L}} \\leq \\|F \\|_{\\clg{L}_\\Phi} \\leq C_2 \\|F \\circ \\Phi \\|_{\\clg{L}}.\n$$ \nIndeed, let us show the former equivalence.\nApplying a change of variables, we obtain \n$$\n\\begin{aligned}\n \\|F\\|^2_{\\clg{H}_\\Phi}&= \\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} \\!\\! |F(z, \\omega)|^2 \\, dz \\, d\\mathbb{P}(\\omega)\n +\\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} \\!\\! |\\nabla_{\\!\\! z} F(z, \\omega)|^2 \\, dz \\, d\\mathbb{P}(\\omega)\n\\\\[5pt] \n &= \\int_\\Omega \\int_{[0,1)^n} \\!\\! |f(y, \\omega)|^2 \\det [\\nabla \\Phi(y,\\omega)] \\, dy \\, d\\mathbb{P}(\\omega)\n\\\\[5pt] \n &\\quad +\\int_\\Omega \\int_{[0,1)^n} \\!\\! | [\\nabla \\Phi(y,\\omega)]^{-T} \\nabla_{\\!\\! y} f(y, \\omega)|^2 \\det [\\nabla \\Phi(y,\\omega)] \\, dy \\, d\\mathbb{P}(\\omega).\n\\end{aligned}\n$$\nThe equivalence then follows from items ii) and iii) in Definition \\ref{GradPhiStationary}. \n\\end{remark}\n\n\n\\subsubsection{Ergodic theorems}\n\\label{ErgThm}\n\nWe begin this section with the concept of mean value, which is \nclosely connected with the notion of stationarity. \nA function $f \\in L^1_{\\loc}(\\mathbb R^n)$ is said to possess a mean value if the \nsequence $\\{f(\\cdot\/\\varepsilon){\\}}_{\\varepsilon>0}$ converges, in the duality with compactly supported \n$L^{\\infty}$-functions, to a constant $M(f)$. 
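For instance, in the periodic setting the mean value can be computed explicitly: if $f \\in L^1_{\\loc}(\\mathbb R^n)$ is $[0,1)^n$-periodic, then $f$ possesses a mean value and \n$$\n M(f)= \\int_{[0,1)^n} f(y) \\, dy.\n$$\n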
This convergence is equivalent to\n\\begin{equation}\n\\label{MeanValue}\n\\lim_{t\\to\\infty}\\frac1{t^n|A|}\\int_{A_t}f(x)\\,dx=M(f),\n\\end{equation}\nwhere $A_t:=\\{x\\in\\mathbb R^n\\,:\\, t^{-1}x\\in A\\}$, for $t>0$ and any $A \\subset \\mathbb R^n$, with $|A| \\ne0$.\n\n\n\\begin{remark}\n\\label{REMERG}\nUnless otherwise stated, we assume that the dynamical system $\\tau: \\mathbb{G} \\times \\Omega\\to\\Omega$ is ergodic \nand we will also use the notation \n$$\n \\Medint_{\\mathbb R^n} f(x) \\ dx \\quad \\text{for $M(f)$}.\n$$\n\\end{remark}\n\nNow, we state the result due to Birkhoff, which connects all the notions \nconsidered before, see \\cite{Krengel}. \n\n\\begin{theorem}[Birkhoff Ergodic Theorem]\\label{Birkhoff}\nLet $f \\in L^1_\\loc(\\mathbb R^n; L^1(\\Omega))$, $($also $f \\in L^\\infty(\\mathbb{R}^n; L^1(\\Omega)) )$, be a stationary random variable. \nThen, for almost every $\\widetilde{\\omega} \\in \\Omega$ the function \n$f(\\cdot,\\widetilde{\\omega})$ possesses a mean value in the sense of~\\eqref{MeanValue}. Moreover, the mean value \n$M\\left(f(\\cdot,\\widetilde{\\omega})\\right)$ as a function of $\\widetilde{\\omega} \\in\\Omega$ satisfies\nfor almost every $\\widetilde{\\omega} \\in \\Omega$: \n\n\\smallskip\ni) Discrete case (i.e. $\\tau: \\mathbb Z^n \\times \\Omega \\to \\Omega$);\n$$\n \\Medint_{\\mathbb R^n} f(x,\\widetilde{\\omega}) \\ dx= \n \\mathbb{E} \\left[\\int_{[0,1)^n} f(y,\\cdot)\\, dy\\right].\n$$\n\nii) Continuous case (i.e. $\\tau: \\mathbb R^n \\times \\Omega \\to \\Omega$);\n$$\n \\Medint_{\\mathbb R^n} f(x,\\widetilde{\\omega}) \\ dx= \\mathbb{E}\\left[ f(0,\\cdot) \\right].\n$$\n\\end{theorem}\n\n\\medskip\nThe following lemma shows that, the Birkhoff Ergodic Theorem holds if a stationary function is composed \nwith a stochastic deformation. 
\n\\begin{lemma}\\label{phi2}\nLet $\\Phi$ be a stochastic deformation and $f \\in L^{\\infty}_\\loc(\\mathbb R^n; L^1(\\Omega))$ be a stationary random variable in the \nsense~\\eqref{Stationary}. Then, for almost every $\\widetilde{\\omega} \\in \\Omega$, the function \n$f\\left(\\Phi^{-1}(\\cdot,\\widetilde{\\omega}),\\widetilde{\\omega}\\right)$ possesses a mean value \nin the sense of~\\eqref{MeanValue} and satisfies: \n\n\\smallskip\ni) Discrete case;\n$$\n\\text{$\\Medint_{\\mathbb R^n}f\\left(\\Phi^{-1}(z,\\widetilde{\\omega}),\\widetilde{\\omega}\\right)\\,dz\n= \\frac{\\mathbb{E}\\left[\\int_{\\Phi([0,1)^n, \\cdot)} f {\\left( \\Phi^{-1}\\left( z, \\cdot \\right), \\cdot \\right)} \\, dz \\right]}\n{\\det\\left(\\mathbb{E}\\left[\\int_{[0,1)^n} \\nabla_{\\!\\! y} \\Phi(y,\\cdot) \\, dy \\right]\\right)}$\n\\quad for a.a. $\\widetilde{\\omega} \\in \\Omega$}.\n$$\n\nii) Continuous case; \n$$\n\\text{$\\Medint_{\\mathbb R^n}f\\left(\\Phi^{-1}(z,\\widetilde{\\omega}),\\widetilde{\\omega}\\right)\\,dz\n= \\frac{\\mathbb{E}\\left[f(0,\\cdot)\\det\\left(\\nabla\\Phi(0,\\cdot)\\right)\\right]}\n{\\det\\left(\\mathbb{E}\\left[\\nabla \\Phi(0,\\cdot)\\right]\\right)}$\n\\qquad for a.a. $\\widetilde{\\omega} \\in \\Omega$}.\n$$\n\\end{lemma}\n\n\\begin{proof}\nSee Blanc, Le Bris, Lions \\cite{BlancLeBrisLions1}; see also\nAndrade, Neves, Silva \\cite{AndradeNevesSilva}.\n\\end{proof}\n\n\\subsubsection{Analysis of stationary functions}\n\nIn the rest of this paper, unless otherwise explicitly stated, we\nassume discrete dynamical systems; therefore, stationary \nfunctions are considered in this discrete sense.\n\n\\medskip\nWe begin the analysis of stationary functions with the concept of realization.\n\\begin{definition}\nLet $f: \\mathbb{R}^n \\! \\times \\! \\Omega \\to \\mathbb{R}$ be a stationary function. \nFor $\\omega \\in \\Omega$ fixed, the function $f(\\cdot, \\omega)$ is called a realization\nof $f$. 
\n\\end{definition}\nDue to Theorem \\ref{Birkhoff}, almost every realization \n$f(\\cdot,\\omega)$ possesses a mean value in the sense of~\\eqref{MeanValue}.\nOn the other hand, if $f$ is a stationary function, then the mapping \n$$\n y \\in \\mathbb{R}^n \\mapsto \\int_\\Omega f(y, \\omega) \\, d\\mathbb{P}(\\omega) \n$$\nis a periodic function. \n\n\\medskip\nIn fact, it is enough to consider the realizations to study some properties \nof stationary functions. For instance, the following theorem will be used \nmore than once throughout this paper. \n\\begin{theorem}\n\\label{987987789879879879}\nFor $p> 1$, let $u,v \\in L^1_{\\rm loc}(\\mathbb{R}^n; L^p(\\Omega))$\nbe stationary functions. Then, for any $i \\in \\{1,\\ldots,n \\}$ fixed, the following sentences are equivalent: \n\\begin{equation}\n\\label{837648726963874}\n(A) \\quad \\int_{[0,1)^n} \\int_\\Omega u(y,\\omega) \\frac{\\partial {\\zeta}}{\\partial y_i} (y, \\omega) \\, d\\mathbb{P}(\\omega) \\, dy \n = - \\int_{[0,1)^n} \\int_\\Omega v(y,\\omega) \\, {\\zeta}(y,\\omega) \\, d\\mathbb{P}(\\omega) \\, dy, \\hspace{20pt}\n\\end{equation}\nfor each stationary function $\\zeta \\in C^1( \\mathbb{R}^n; L^q(\\Omega))$,\nwith $1\/p + 1\/q = 1$. \n\\begin{equation}\n\\label{987978978956743}\n(B) \\quad \\int_{\\mathbb{R}^n} u(y,\\omega) \\frac{\\partial {\\varphi}}{\\partial y_i} (y) \\, dy = - \\int_{\\mathbb{R}^n} v(y,\\omega) \\, {\\varphi}(y) \\, dy,\n\\hspace{87pt}\n\\end{equation}\nfor any $\\varphi \\in C^1_{\\rm c}(\\mathbb{R}^n)$ and almost every $\\omega \\in \\Omega$.\n\\end{theorem}\n\n\\begin{proof}\n1. First, let us show that $(A)$ implies $(B)$. 
To begin, \ngiven $\\gamma \\in \\mathbb{R}^n$, there exists an\n$\\mathscr{F}$-measurable set $N_\\gamma$ such that $\\mathbb{P}(N_\\gamma)=0$ \nand \n$$\n \\int_{\\mathbb{R}^n} u(y,\\omega) \\, \\frac{\\partial {\\varphi}}{\\partial y_i} (y) \\, dy \n = - \\int_{\\mathbb{R}^n} v(y,\\omega) \\, {\\varphi}(y) \\, dy, \n$$\nfor each $\\varphi \\in C^1_{\\rm c}((0,1)^n + \\gamma)$ and $\\omega \\in \\Omega \\setminus N_\\gamma$.\nIndeed, for $\\varphi \\in C^1_{\\rm c}((0,1)^n + \\gamma)$ and ${ \\rho \\in L^q(\\Omega) }$, \nlet us define $\\zeta_\\gamma : \\mathbb{R}^n \\! \\times \\! \\Omega \\to \\mathbb{R}$, by\n$$ \n \\zeta_\\gamma (y,\\omega) := \\varphi(y - \\left\\lfloor y-\\gamma \\right\\rfloor) \\rho(\\tau(\\left\\lfloor y-\\gamma \\right\\rfloor)\\omega),\n$$\nwhere $\\tau: \\mathbb Z^n \\times \\Omega \\to \\Omega$ is a (discrete) dynamical system. Then, \n$\\zeta_\\gamma(\\cdot,\\omega)$ is a continuous function, $\\zeta_\\gamma(y,\\cdot)$ is an\n$\\mathscr{F}$-measurable function, and for each $k\\in\\mathbb{Z}^n$, it follows that\n$$\n \\zeta_\\gamma(y+k,\\omega)=\\zeta_\\gamma(y,\\tau(k)\\omega).\n$$ \nConsequently, $\\zeta_\\gamma \\in C^1(\\mathbb{R}^n; L^q(\\Omega))$ is a stationary Carath\\'eodory function. \nMoreover, since $\\left\\lfloor y-\\gamma \\right\\rfloor= 0$ for each $y \\in (0,1)^n + \\gamma$, we have \n\\begin{equation}\n\\label{098760987}\n\t\\zeta_\\gamma (y,\\omega) = \\varphi(y) \\rho(\\omega),\n\\end{equation}\nfor each $(y,\\omega) \\in ((0,1)^n + \\gamma) \\times \\Omega$. 
Therefore, taking $\\zeta_\\gamma$ as a test function in \n\\eqref{837648726963874}, we obtain\n\\begin{equation}\n\\label{65345ere3}\n \\int_{[0,1)^n} \\int_\\Omega u(y,\\omega) \\, \\frac{\\partial {\\zeta_\\gamma}}{\\partial y_i} (y, \\omega) \\, d\\mathbb{P}(\\omega) \\, dy \n = - \\int_{[0,1)^n} \\int_\\Omega v(y,\\omega) \\, {\\zeta_\\gamma}(y,\\omega) \\, d\\mathbb{P}(\\omega) \\, dy.\n\\end{equation}\nSince the space of stationary functions forms an algebra, the functions \n$$\n y \\mapsto \\int_\\Omega u(y,\\omega) \\, \\frac{\\partial {\\zeta_\\gamma}}{\\partial y_i} (y, \\omega) \\, d\\mathbb{P}(\\omega) \n \\quad \\text{and} \\quad y \\mapsto \\int_\\Omega v(y,\\omega) \\, {\\zeta_\\gamma}(y,\\omega) \\, d\\mathbb{P}(\\omega) \n$$\nare periodic, and hence translation invariant. Then, we have from \\eqref{65345ere3}\n$$\n \\int_{(0,1)^n + \\gamma} \\int_\\Omega u \\, \\frac{\\partial {\\varphi}}{\\partial y_i} (y) \\, {\\rho}(\\omega) \\, d\\mathbb{P}(\\omega) \\, dy \n = - \\int_{(0,1)^n + \\gamma} \\int_\\Omega v \\, {\\varphi}(y){\\rho}(\\omega) \\, d\\mathbb{P}(\\omega) \\, dy,\n$$\t\nwhere we have used \\eqref{098760987}. Applying Fubini's Theorem, it follows that \n$$\t\t\t\t\n\\int_\\Omega {\\left( \\int_{(0,1)^n + \\gamma} u(y,\\omega) \\, \\frac{\\partial {\\varphi}}{\\partial y_i} (y) \\, dy \n+ \\int_{(0,1)^n + \\gamma} v(y,\\omega) \\, {\\varphi}(y) \\, dy \\right)} {\\rho}(\\omega) \\, d\\mathbb{P}(\\omega)= 0\n$$\nfor each $\\rho \\in L^q(\\Omega)$. 
\nTherefore, for each $\\varphi \\in C^1_{\\rm c}((0,1)^n + \\gamma)$ there exists a\nset $N_\\gamma \\in \\mathscr{F}$ with $\\mathbb{P}(N_\\gamma)= 0$ \n(which may depend on $\\varphi$), such that, \nfor each $\\omega \\in \\Omega \\setminus N_\\gamma$ we have \n$$\t\t\t\t\n\\int_{(0,1)^n + \\gamma} u(y,\\omega) \\, \\frac{\\partial {\\varphi}}{\\partial y_i} (y) \\, dy \n= - \\int_{(0,1)^n + \\gamma} v(y,\\omega) \\, {\\varphi}(y) \\, dy.\n$$\nBy a standard density argument, using a countable dense subset of $C^1_{\\rm c}((0,1)^n + \\gamma)$, we may remove the dependence of \n$N_\\gamma$ on the test function $\\varphi$. \n\n\\medskip\n2. Finally, to pass from $\\varphi \\in C^1_c((0,1)^n + \\gamma)$ to the case where $\\varphi \\in C^1_{\\rm c}(\\mathbb{R}^n)$,\nwe use a standard partition of unity argument, which we carry out here to make the argument clear in \nour setting. Given $\\varphi \\in C^1_{\\rm c}(\\mathbb{R}^n)$, since ${\\rm supp} \\, \\varphi$ is a compact set, there exists \na finite set $\\left\\{ \\gamma_j \\right\\}_{j = 1}^m \\subset \\mathbb{R}^n$, such that \n$$\n {\\rm supp} \\, \\varphi \\subset \\bigcup_{j = 1}^m \\left((0,1)^n + \\gamma_j \\right).\n$$\nThen, we consider a partition of unity $\\{\\theta_j\\}_{j=0}^m$ subordinated to this open covering, \nthat is to say\n\\begin{itemize}\n\\item[i)] $\\theta_j \\in C^1_c(\\mathbb R^n)$, \\quad $0\\leqslant \\theta_j \\leqslant 1$, \\quad $j=0, \\ldots, m$,\n\\item[ii)] ${ \\sum_{j=0}^m \\theta_j(y) = 1 }$, \\quad for all $y \\in \\mathbb{R}^n$, \n\\item[iii)] ${\\rm supp} \\, \\theta_j \\subset (0,1)^n + \\gamma_j$, \\quad $j = 1,\\ldots,m$, \n\\quad and \\quad ${\\rm supp} \\, \\theta_0 \\subset \\mathbb R^n \\setminus {\\rm supp}\\, \\varphi$. \n\\end{itemize}\nSince $\\varphi= 0$ on the support of $\\theta_0$, it follows that, for each $y \\in \\mathbb R^n$\n\\begin{equation}\n\\label{PARTUNIT}\n \\varphi(y)= \\varphi(y) \\sum_{j=1}^m \\theta_j(y)= \\sum_{j=1}^m (\\varphi \\theta_j)(y). 
\n\\end{equation}\nMoreover, from item 1, there exist sets \n$N_{\\gamma_1}, \\ldots, N_{\\gamma_m} \\in \\mathscr{F}$ with $\\mathbb{P}(N_{\\gamma_j})= 0$, \nfor any $j \\in \\{1,\\ldots,m\\}$, such that\n$$\n \\int_{\\mathbb{R}^n} u(y,\\omega) \\, \\frac{\\partial ({\\varphi \\theta_j}) }{\\partial y_i} \\, dy \n = - \\int_{\\mathbb{R}^n} v(y,\\omega) \\, ({\\varphi\\theta_j}) \\, dy,\n$$\nfor each $\\omega \\in \\Omega\\setminus N_{\\gamma_j}$. Next, we define \n$N:= \\bigcup_{j=1}^m N_{\\gamma_j}$ (which may depend on $\\varphi$); then $\\mathbb{P}(N)= 0$ and,\nsumming from $j= 1$ to $m$ in the above equation, we obtain \n$$\n \\sum_{j= 1}^m \\int_{\\mathbb{R}^n} u(y,\\omega) \\, \\frac{\\partial ({\\varphi \\theta_j})(y) }{\\partial y_i} \\, dy\n = - \\sum_{j= 1}^m \\int_{\\mathbb{R}^n} v(y,\\omega) \\, ({\\varphi\\theta_j})(y) \\, dy, \n$$\nfor each $\\omega \\in \\Omega\\setminus N$. Therefore, since the above sum is finite and using \\eqref{PARTUNIT},\nwe obtain \t\t\t\t\n$$\n \\int_{\\mathbb{R}^n} u(y,\\omega) \\, \\frac{\\partial {\\varphi(y)}}{\\partial y_i} \\, dy \n = - \\int_{\\mathbb{R}^n} v(y,\\omega) \\, \\varphi(y) \\, dy.\n$$\nAgain, by a standard argument, we may remove the dependence of \n$N$ on the test function $\\varphi$. Consequently, we have obtained \\eqref{987978978956743}, more precisely \nsentence $(B)$. \n\t\n\\medskip\n3. Now, let us show sentence $(A)$ from $(B)$. 
For each $\\ell\\in \\mathbb{N}$, \nwe define the set ${Q}_\\ell:= (-\\ell,\\ell)^n$ and the function\n$\\chi_\\ell \\in C^1_c(\\mathbb{R}^n)$, such that \n$$\n\\text{$\\chi_\\ell \\equiv 1$ in ${Q}_\\ell$, \n$\\chi_\\ell \\equiv 0$ in $\\mathbb{R}^n \\setminus {Q}_{\\ell+1}$, and \n${ \\Vert \\nabla \\chi_\\ell \\Vert_{\\infty} \\leqslant 2 }$.}\n$$ \nThen, given $\\zeta \\in C^1(\\mathbb{R}^n; L^q(\\Omega))$ and $i \\in \\{1,\\ldots,n\\}$, \nwe consider $\\zeta(\\cdotp, \\omega) \\chi_\\ell$, (for $\\ell \\in \\mathbb{N}$ and\n$\\omega \\in \\Omega$ fixed), as test function in \n\\eqref{987978978956743}, that is \n$$\n \\int_{\\mathbb{R}^n} u(y, \\omega) \\, \\frac{\\partial}{\\partial y_i} {\\left( {\\zeta}(y, \\omega)\\chi_\\ell(y) \\right)} \\, dy \n = - \\int_{\\mathbb{R}^n} v(y, \\omega) \\, { {\\zeta}(y, \\omega) \\chi_\\ell(y)} \\, dy.\n$$\nFrom the definition of $\\chi_\\ell$, and applying the product rule we obtain \n$$\n\\begin{aligned}\n \\int_{Q_{\\ell + 1}} u(y, \\omega) \\frac{\\partial {\\zeta(y, \\omega)}}{\\partial y_i} \\, \\chi_\\ell(y) \\, dy \n &+ \\int_{Q_{\\ell + 1} \\setminus Q_\\ell} u(y, \\omega) {\\zeta}(y, \\omega) \\, \\frac{\\partial \\chi_\\ell(y)}{\\partial y_i} \\, dy\n\\\\[5pt]\n&= - \\int_{Q_{\\ell + 1}} v(y, \\omega) \\, { {\\zeta}(y, \\omega) \\chi_\\ell(y) } \\, dy,\n\\end{aligned}\n$$\nor conveniently using that $Q_{\\ell + 1} = Q_{\\ell} \\cup (Q_{\\ell + 1} \\setminus Q_{\\ell})$, we have \n\\begin{equation}\n\\label{y67676766798765}\n\\begin{aligned}\n \\int_{Q_\\ell} u(y, \\omega) \\, \\frac{\\partial {\\zeta}}{\\partial y_i}(y, \\omega) \\, dy \n &+ \\int_{Q_\\ell} v(y, \\omega) \\, {\\zeta}(y, \\omega) \\, dy\n \\\\[5pt]\n &= - \\int_{Q_{\\ell + 1} \\setminus Q_\\ell} u(y, \\omega) \\, \\frac{\\partial {\\zeta}}{\\partial y_i}(y, \\omega) \\, \\chi_\\ell(y) \\, dy \n \\\\[5pt]\n &\\quad -\\int_{Q_{\\ell + 1} \\setminus Q_\\ell} u(y, \\omega) \\, {\\zeta}(y, \\omega) \\, \\frac{\\partial \\chi_\\ell(y)}{\\partial y_i} 
\\, dy \n \\\\[5pt]\n &\\quad -\\int_{Q_{\\ell + 1} \\setminus Q_\\ell} v(y, \\omega) \\, {\\zeta}(y, \\omega) \\, \\chi_\\ell(y) \\, dy\n \\\\[5pt]\n &= I_1(\\omega) + I_2(\\omega) +I_3(\\omega),\n\\end{aligned}\n\\end{equation}\nwith obvious notation. \n\n\\smallskip\n\\underline {Claim:} For $j= 1,2,3$, \n$$\n\\lim_{\\ell \\to \\infty} \\int_\\Omega \\frac{|I_j(\\omega)|}{\\vert Q_\\ell \\vert} d\\mathbb{P}(\\omega)= 0.\n$$\n\nProof of Claim: Let us show it for $j= 2$, that is \n$$\n \\lim_{\\ell \\to \\infty} \\int_\\Omega \\frac{1}{\\vert Q_\\ell \\vert} {\\Big| \\int_{Q_{\\ell + 1} \\setminus Q_\\ell} \n u(y, \\omega) \\, {\\zeta}(y, \\omega) \\, \\frac{\\partial \\chi_\\ell(y)}{\\partial y_i} \\, dy \\Big|} d\\mathbb{P}(\\omega) = 0,\n$$\nthe other terms being similar. Then, applying Fubini's Theorem \n$$\n\\begin{aligned}\n \\int_\\Omega \\frac{1}{\\vert Q_\\ell \\vert} \\Big| \\int_{Q_{\\ell + 1} \\setminus Q_\\ell} & u(y, \\omega) \\, {\\zeta}(y, \\omega) \\, \\frac{\\partial \\chi_\\ell(y)}{\\partial y_i} \\, dy \\Big| d\\mathbb{P}(\\omega) \n \\\\[5pt] \n &\\leq \\, \\frac{1}{\\vert Q_\\ell \\vert} \\int_{Q_{\\ell + 1} \\setminus Q_\\ell} \\int_\\Omega {\\vert u(y, \\omega) \\, {\\zeta}(y, \\omega) \\vert} \\, {\\Vert \\nabla \\chi_\\ell \\Vert}_\\infty \\, d\\mathbb{P} \\, dy \n \\\\[5pt]\n &\\leq \\, \\frac{2}{\\vert Q_\\ell \\vert} \\int_{Q_{\\ell + 1} \\setminus Q_\\ell} \\int_\\Omega {\\vert u(y, \\omega) \\, {\\zeta}(y, \\omega) \\vert} \\, d\\mathbb{P}(\\omega) \\, dy \n\\\\[5pt]\n &= \\frac{2 \\, {( (2(\\ell + 1))^n - (2\\ell)^n )}}{(2 \\ell)^n} \\!\\! \\int_{[0,1)^n} \\int_\\Omega {\\vert u(y, \\omega) \\, {\\zeta}(y, \\omega) \\vert} \\, d\\mathbb{P}(\\omega) \\, dy \n\\\\[5pt]\n &= \\, {2 \\, { {(( 1 + \\ell^{-1})^n - 1)} }} \\int_{[0,1)^n} \\int_\\Omega {\\vert u(y, \\omega) \\, {\\zeta}(y, \\omega) \\vert} \\, d\\mathbb{P}(\\omega) \\, dy,\n\\end{aligned}\n$$\nfrom which, passing to the limit as $\\ell \\to \\infty$, the claim follows. 
\n\n\\medskip\nThen, dividing equation \\eqref{y67676766798765} by $\\vert Q_\\ell \\vert$ and integrating over $\\Omega$, we obtain \n$$\n \\lim_{\\ell \\to \\infty} \\int_\\Omega {\\Big| \\frac{1}{\\vert Q_\\ell \\vert} \\int_{Q_\\ell} (u \\, \\frac{\\partial {\\zeta}}{\\partial y_i})(y, \\omega) dy \n + \\frac{1}{\\vert Q_\\ell \\vert} \\int_{Q_\\ell} (v \\, {\\zeta})(y, \\omega) dy \\Big|} d\\mathbb{P}(\\omega)= 0,\n$$ \nand applying Fatou's Lemma\n$$\n\\int_\\Omega \\liminf_{\\ell \\to \\infty} {\\Big| \\frac{1}{\\vert Q_\\ell \\vert} \\int_{Q_\\ell} (u \\, \\frac{\\partial {\\zeta}}{\\partial y_i})(y, \\omega) dy \n+ \\frac{1}{\\vert Q_\\ell \\vert} \\int_{Q_\\ell} (v \\, {\\zeta})(y, \\omega) \\, dy \\Big|} d\\mathbb{P}(\\omega)= 0.\n$$\nTherefore, there exists an $\\mathscr{F}$-measurable set $\\widetilde{\\Omega} \\subset \\Omega$ of full measure, such that,\nfor each $\\omega \\in \\widetilde{\\Omega}$, we have\n$$\n \\liminf_{\\ell \\to \\infty} {\\Big|\\frac{1}{\\vert Q_\\ell \\vert} \\int_{Q_\\ell} u(y, \\omega) \\, \\frac{\\partial {\\zeta}}{\\partial y_i}(y, \\omega) \\, dy \n + \\frac{1}{\\vert Q_\\ell \\vert} \\int_{Q_\\ell} v(y, \\omega) \\, {\\zeta}(y, \\omega) \\, dy \\Big|}= 0.\n$$\nSince, by Theorem \\ref{Birkhoff} together with \\eqref{MeanValue}, both averages above converge as $\\ell \\to \\infty$, it follows that\n$$\n \\int_{[0,1)^n} \\int_\\Omega u(y, \\omega) \\, \\frac{\\partial {\\zeta}}{\\partial y_i}(y, \\omega) \\, d\\mathbb{P}(\\omega) \\, dy \n = -\\int_{[0,1)^n} \\int_\\Omega v(y, \\omega) \\, {\\zeta}(y, \\omega) \\, d\\mathbb{P}(\\omega) \\, dy,\n$$\nwhich finishes the proof of the theorem.\n\\end{proof}\t\t\n\nSimilarly to the above theorem, we have the characterization of weak derivatives of stationary functions \ncomposed with stochastic deformations, given by the following \n\n\\begin{theorem}\n\\label{648235azwsxqdgfd}\nLet $u,v \\in L^1_{\\rm loc}(\\mathbb{R}^n; L^p(\\Omega))$\nbe stationary functions, $(p> 1)$. 
Then, for any $k \\in \\{1,\\ldots,n \\}$ fixed, the following sentences are equivalent: \n\\begin{multline*}\n(A) \\quad \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} u {\\left( \\Phi^{-1}\\left( z, \\omega \\right), \\omega \\right)} \\, { \\frac{ \\partial {\\left( \\zeta{\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\right)}}{\\partial z_k} } \\, dz \\, d\\mathbb{P}(\\omega) \n\\\\[5pt]\n= - \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} v {\\left( \\Phi^{-1}\\left( z, \\omega \\right), \\omega \\right)} \\, {\\zeta{\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)}} \\, dz \\, d\\mathbb{P}(\\omega),\n\\end{multline*}\nfor each stationary function $\\zeta \\in C^1( \\mathbb{R}^n; L^q(\\Omega))$,\nwith $1\/p + 1\/q = 1$. \n\\begin{equation*}\n(B) \\quad \\int_{\\mathbb{R}^n} u {\\left( \\Phi^{-1}\\left( z, \\omega \\right), \\omega \\right)} \\, { \\frac{\\partial \\varphi}{\\partial z_k}(z) } \\, dz \n= - \\int_{\\mathbb{R}^n} v {\\left( \\Phi^{-1}\\left( z, \\omega \\right), \\omega \\right)} \\, {\\varphi(z)} \\, dz, \\hspace{12pt}\n\\end{equation*}\nfor any $\\varphi \\in C^1_{\\rm c}(\\mathbb{R}^n)$ and almost every $\\omega \\in \\Omega$.\n\\end{theorem}\n\\begin{proof}\nThe proof follows the same lines as the proof of Theorem \\ref{987987789879879879}, after the change of variables\n$y= \\Phi^{-1}(z,\\omega)$.\n\\end{proof}\n\n\n\n\\subsection{$\\Phi_\\omega-$Two-scale Convergence}\n\\label{pud63656bg254v2v5}\n\nIn this subsection, we consider the two-scale convergence in a stochastic setting that goes beyond the classical stationary \nergodic one. The classical concept of two-scale convergence was introduced by Nguetseng~\\cite{Nguetseng} and further developed by Allaire~\\cite{Allaire} \nto deal with periodic problems. 
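We recall the classical definition: a bounded sequence $\\{u_\\varepsilon\\}$ in $L^2(\\mathbb{R}^n)$ two-scale converges to $u_0 \\in L^2(\\mathbb{R}^n \\times [0,1)^n)$ when, for every test function $\\psi \\in C^{\\infty}_{\\rm c}{\\big(\\mathbb{R}^n; C_{\\rm per}([0,1)^n)\\big)}$, \n$$\n \\lim_{\\varepsilon \\to 0} \\int_{\\mathbb{R}^n} u_\\varepsilon(x) \\, \\psi{\\Big( x, \\frac{x}{\\varepsilon} \\Big)} \\, dx \n = \\int_{\\mathbb{R}^n} \\int_{[0,1)^n} u_0(x,y) \\, \\psi(x,y) \\, dy \\, dx.\n$$\n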
\n\n\\medskip\nThe notion of two-scale convergence has been successfully extended to non-periodic settings in several papers: see~\\cite{FridSilvaVersieux,DiazGayte} in the ergodic algebra \nsetting and~\\cite{BourgeatMikelicWright} in the stochastic setting. The main difference with the earlier studies is \nthat the test functions used here are random perturbations, accomplished by stochastic diffeomorphisms, of stationary \nfunctions. This kind of test function brings two main difficulties: the lack of the stationarity property (see \\cite{AndradeNevesSilva} for a thorough discussion), which prevents us from \nusing the results described in~\\cite{BourgeatMikelicWright}, and the lack of a topology compatible with the probability space considered. This is overcome by using a \ncompactification argument that preserves the ergodic nature of the setting involved. For this, we will make use of the following lemma, whose simple proof can be found in~\\cite{AF}.\n\n\\begin{lemma}\\label{TopologicalLemma}\nLet $X_1,X_2$ be compact spaces, $R_1$ a dense subset of $X_1$ and $W:R_1\\to X_2$. Suppose that for all $g\\in C(X_2)$ the function $g\\circ W$ is the restriction \nto $R_1$ of some (unique) $g_1\\in C(X_1)$. Then $W$ can be uniquely extended to a continuous mapping $\\underline{W}:X_1\\to X_2$. Further, suppose in addition that \n$R_2$ is a dense subset of $X_2$, $W$ is a bijection from $R_1$ onto $R_2$ and for all $f\\in C(X_1)$, $f\\circ W^{-1}$ is the restriction to $R_2$ of some (unique) \n$f_2\\in C(X_2)$. Then, $W$ can be uniquely extended to a homeomorphism $\\underline{W}:X_1\\to X_2$. \n\\end{lemma}\n\nNow, we can prove the following result.\n\n\\begin{theorem}\n\\label{Compacification}\nLet $\\mathbb {S}\\subset L^{\\infty}(\\mathbb R^n\\times \\Omega)$ be a countable set of stationary functions. 
Then there exists a compact (separable) topological space \n$\\widetilde{\\Omega}$ and a one-to-one function $\\delta:\\Omega\\to \\widetilde{\\Omega}$ with dense image satisfying the following properties: \n\\begin{enumerate}\n\\item[(i)] The probability space $\\Big(\\Omega,\\mathscr{F},\\mathbb{P}\\Big)$ and the ergodic dynamical system $\\tau:\\mathbb{Z}^n\\times \\Omega\\to\\Omega$ acting on it \nextend respectively to a Radon probability space $\\Big(\\widetilde{\\Omega},\\mathscr{B},\\widetilde{\\mathbb{P}}\\Big)$ and to an ergodic dynamical system \n$\\widetilde{\\tau}:\\mathbb{Z}^n\\times \\widetilde{\\Omega}\\to\\widetilde{\\Omega}$.\n\\item[(ii)] The stochastic deformation $\\Phi:\\mathbb R^n\\times\\Omega\\to\\mathbb R^n$ extends to a stochastic deformation \n$\\tilde{\\Phi}:\\mathbb R^n\\times\\widetilde{\\Omega}\\to\\mathbb R^n$ satisfying \n$$\n\\Phi(x,\\omega)=\\tilde{\\Phi}(x,\\delta(\\omega)),\n$$\nfor a.e. $\\omega\\in\\Omega$.\n\\item[(iii)] Any function $f\\in\\mathbb{S}$ extends to a $\\tilde{\\tau}-$stationary function $\\tilde{f}\\in L^{\\infty}(\\mathbb R^n\\times \\widetilde{\\Omega})$ satisfying \n$$\n\\Medint_{\\mathbb R^n}f\\left(\\Phi^{-1}(z,\\omega),\\omega\\right)\\,dz=\\Medint_{\\mathbb R^n}\\tilde{f}\\left(\\tilde{\\Phi}^{-1}(z,\\delta(\\omega)),\\delta(\\omega)\\right)\\,dz,\n$$\nfor a.e. $\\omega\\in\\Omega$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n1. Let $\\mathbb{S}$ be the countable set in the statement of the theorem. Given $f\\in \\mathbb{S}$, define \n$$\nf_j(y,\\omega):=\\int_{\\mathbb R^n}f(y+x,\\omega)\\,\\rho_j(x)\\,dx,\n$$\nwhere $(\\rho_j)_{j\\ge 1}$ is a classical approximation of the identity in $\\mathbb R^n$. Note that for a.e. $y\\in\\mathbb R^n$, we have that $f_j(y,\\cdot)\\to f(y,\\cdot)$ in $L^1(\\Omega)$ as $j\\to\\infty$. 
\nDefine $\\mathcal{A}$ as the closed algebra with unity generated by the set \n$$\n\\Big\\{ f_j(y,\\cdot);\\,j\\ge 1,y\\in\\mathbb{Q}^n,f\\in\\mathbb{S}\\Big\\}\\cup \\Big\\{\\partial_j \\Phi_i(y,\\cdot);\\, 1\\le j,i\\le n, y\\in\\mathbb{Q}^n\\Big\\}.\n$$\nSince $[-1,1]$ is a compact set, by the well-known Tychonoff Theorem, the set \n$$\n[-1,1]^{\\mathcal{A}}:=\\Big\\{\\text{the functions $\\gamma:\\mathcal{A}\\to[-1,1]$}\\Big\\}\n$$\nis a compact set in the product topology. Define $\\delta:\\Omega\\to[-1,1]^{\\mathcal{A}}$ by \n$$\n\\delta(\\omega)(g):=\\left\\{\\begin{array}{rc}\n\\frac{g(\\omega)}{\\|g{\\|}_{\\infty}},&\\mbox{if}\\quad g\\neq 0,\\\\\n0,&\\mbox{if}\\quad g=0.\n\\end{array}\\right.\n$$\nWe may assume that the algebra $\\mathcal{A}$ distinguishes between points of $\\Omega$, that is, given any two distinct points $\\omega_1,\\omega_2\\in\\Omega$, there exists \n$g\\in\\mathcal{A}$ such that $g(\\omega_1)\\neq g(\\omega_2)$. If this is not the case, we may replace $\\Omega$ by its quotient under the natural equivalence relation, in a standard \nway, proceeding correspondingly with the $\\sigma-$algebra $\\mathscr{F}$ and with the probability measure $\\mathbb{P}$. Thus, the function $\\delta$ is one-to-one. Define \n$$\n\\widetilde{\\Omega}:=\\overline{\\delta(\\Omega)}.\n$$\nNow, we can see that the set $\\Omega$ inherits all topological features of the compact space $\\widetilde{\\Omega}$ in a natural way, which allows us to identify it homeomorphically with \nthe image $\\delta(\\Omega)$.\n\n2. Define the mapping $i:\\mathcal{A}\\to C(\\delta(\\Omega))$ by \n$$\ni(g)(\\delta(\\omega)):=g(\\omega).\n$$\nWe claim that there exists a continuous function $\\tilde{g}:\\widetilde{\\Omega}\\to \\mathbb R$ such that \n$$\ni(g)=\\tilde{g}\\,\\text{on $\\delta(\\Omega)$}.\n$$\nIn fact, take $g\\in\\mathcal{A}$ and $Y:=\\overline{g(\\Omega)}$. 
Define the function $f^{*}:C(Y)\\to\\mathcal{A}$ by \n$$\nf^{*}(h):=h\\circ g\\quad\\text{(here the algebra structure of $\\mathcal{A}$ is used).}\n$$\nHence, we can define $f^{**}:[-1,1]^{\\mathcal{A}}\\to[-1,1]^{C(Y)}$ by \n$$\nf^{**}(h):=h\\circ f^{*}.\n$$\nNote that the function $f^{**}$ is continuous. To see this, recall that a function $H$ from a topological space \nto a product space $\\otimes_{\\alpha\\in \\mathcal{I}}X_{\\alpha}$ is continuous if and only if each component $H_{\\alpha}:=\\pi_{\\alpha}\\circ H$ is \ncontinuous. Hence, if $\\alpha\\in C(Y)$ then the projection function $f^{**}_{\\alpha}$ must satisfy \n\\begin{eqnarray*}\n&&f^{**}_{\\alpha}(h):=\\left(\\pi_{\\alpha}\\circ f^{**}\\right)(h)=\\pi_{\\alpha}\\circ\\left(f^{**}(h)\\right)=\\pi_{\\alpha}\\left(h\\circ f^{*}\\right)\\\\\n&&\\qquad=\\left(h\\circ f^{*}\\right)(\\alpha)=h\\left(f^{*}(\\alpha)\\right)=h\\left(\\alpha\\circ g\\right)=\\pi_{\\alpha\\circ g}(h).\n\\end{eqnarray*}\nNow, consider the function $\\tilde{\\delta}:Y\\to [-1,1]^{C(Y)}$ given by \n$$\n\\tilde{\\delta}(y)(h):=\\left\\{\\begin{array}{rc}\n\\frac{h(y)}{\\|h{\\|}_{\\infty}},&\\mbox{if}\\quad h\\neq 0,\\\\\n0,&\\mbox{if}\\quad h=0.\n\\end{array}\\right.\n$$\nSince the algebra $C(Y)$ has the following property: if $F\\subset Y$ is a closed set and $y\\notin F$ then \n$f(y)\\notin \\overline{f(F)}$ for some $f\\in C(Y)$, we can conclude that the function $\\tilde{\\delta}$ is a homeomorphism onto its image. Furthermore, \ngiven $\\omega\\in\\Omega$ we have that $\\left(f^{**}\\circ \\delta\\right)(\\omega)=f^{**}\\left(\\delta(\\omega)\\right)=\\delta(\\omega)\\circ f^{*}$. 
Hence, if $0\\neq h\\in C(Y)$ \nit follows that \n\\begin{eqnarray*}\n&&\\left(f^{**}\\circ\\delta\\right)\\big(\\omega\\big)(h)=\\left(\\delta(\\omega)\\circ f^{*}\\right)(h)=\\delta(\\omega)\\left(f^{*}(h)\\right)\\\\\n&&\\quad=\\delta(\\omega)\\left(h\\circ g\\right)=\\frac{\\left(h\\circ g\\right) (\\omega)}{\\|h\\circ g {\\|}_{\\infty}}=\\tilde{\\delta}\\left(g(\\omega)\\right)(h).\n\\end{eqnarray*}\nThus, we see that ${\\tilde{\\delta}}^{-1}\\circ f^{**}=i(g)$. Defining $\\tilde{g}:={\\tilde{\\delta}}^{-1}\\circ f^{**}$ we clearly have that \n$i:\\mathcal{A}\\to C(\\widetilde{\\Omega})$ is a one-to-one isometry satisfying $i(g)=\\tilde{g}$, and our claim is proved. Moreover, it is easy to see that $i(\\mathcal{A})$ is \nan algebra of functions over $C(\\widetilde{\\Omega})$ containing the unity. As before, if $i(\\mathcal{A})$ does not separate points of $\\widetilde{\\Omega}$, then we may replace \n$\\widetilde{\\Omega}$ by its quotient $\\widetilde{\\Omega}\/\\sim$, where \n$$\n\\tilde{\\omega}_1\\sim\\tilde{\\omega}_2\\,\\Leftrightarrow\\, \\tilde{g}(\\tilde{\\omega}_1)=\\tilde{g}(\\tilde{\\omega}_2)\\quad\\text{for all }g\\in\\mathcal{A}.\n$$\nTherefore, we can assume that $i(\\mathcal{A})$ separates the points of $\\widetilde{\\Omega}$. Hence, by Stone's Theorem (see \\cite{Ru2}, p. 162, Theorem 7.3.2) \nwe must have $i(\\mathcal{A})=C(\\widetilde{\\Omega})$.\n\n\\medskip\n3. Define $\\widetilde{\\tau}:\\mathbb{Z}^n\\times\\delta(\\Omega)\\to \\delta(\\Omega)$ by\n$$\n\\widetilde{\\tau}_k\\left(\\delta(\\omega)\\right):=\\delta(\\tau_k\\omega).\n$$\nIt is easy to see that \n$$\n\\widetilde{\\tau}_{k_1+k_2}(\\delta(\\omega))=\\widetilde{\\tau}_{k_1}\\Big(\\widetilde{\\tau}_{k_2}(\\delta(\\omega))\\Big),\n$$\nfor all $k_1,k_2\\in\\mathbb{Z}^n$ and $\\omega\\in\\Omega$. 
Since $\\tilde{g}\\circ{\\widetilde{\\tau}_k}=\\widetilde{g\\circ{\\tau}_k}$ for all $g\\in\\mathcal{A}$ and \n$k\\in\\mathbb{Z}^n$, Lemma~\\ref{TopologicalLemma} allows us to extend the mapping $\\widetilde{\\tau}_k$ from $\\delta(\\Omega)$ to $\\widetilde{\\Omega}$ satisfying the \ngroup property $\\widetilde{\\tau}_{k_1+k_2}=\\widetilde{\\tau}_{k_1}\\circ{\\widetilde{\\tau}}_{k_2}$. Given a Borel set $\\tilde{A}\\subset{\\widetilde{\\Omega}}$ and defining \n$\\widetilde{\\mathbb{P}}(\\tilde{A}):=\\mathbb{P}(\\delta^{-1}(\\tilde{A}\\cap \\delta(\\Omega)))$, we can deduce that \n$\\widetilde{\\mathbb{P}}\\circ {\\widetilde{\\tau}_k}=\\widetilde{\\mathbb{P}}$. Thus, the mapping $\\widetilde{\\tau}_k$ is an ergodic dynamical system over the Radon probability \nspace $\\Big(\\widetilde{\\Omega},\\mathscr{B},\\widetilde{\\mathbb{P}}\\Big)$. This concludes the proof of item (i). \n\n4. Now, note that for each $\\omega\\in\\widetilde{\\Omega}$ and each integer $j\\ge 1$, the function $f_j(\\cdot,\\omega)$ is uniformly continuous over $\\mathbb{Q}^n$. Hence, \nit can be extended uniquely to a function $\\widetilde{f_j}(\\cdot,\\omega)$ defined in $\\mathbb R^n$ that satisfies \n$$\n\\limsup_{j,l\\to\\infty}\\int_{[0,1)^n\\times \\widetilde{\\Omega}}|\\widetilde{f_j}(y,\\omega)-\\widetilde{f_l}(y,\\omega)|\\,d\\widetilde{\\mathbb{P}}(\\omega)\\,dy=0.\n$$\nTherefore, there exists a $\\widetilde{\\tau}$-stationary function $\\widetilde{f}\\in L^1_{\\loc}\\left(\\mathbb R^n\\times \\widetilde{\\Omega}\\right)$, such that \n$\\widetilde{f_j}\\to \\widetilde{f}$ as $j\\to\\infty$ in $L^1_{\\loc}(\\mathbb R^n\\times \\widetilde{\\Omega})$. Since $\\| \\widetilde{f_j}{\\|}_{\\infty}\\le \\| {f}{\\|}_{\\infty}$, for all $j\\ge 1$ we have \nthat $\\widetilde{f}\\in L^{\\infty}(\\mathbb R^n\\times \\widetilde{\\Omega})$. 
In the same way, the stochastic deformation $\\Phi:\\mathbb R^n\\times \\Omega\\to \\mathbb R^n$ extends to a stochastic \ndeformation $\\tilde{\\Phi}:\\mathbb R^n\\times \\widetilde{\\Omega}\\to \\mathbb R^n$ satisfying $\\Phi(y,\\omega)=\\tilde{\\Phi}(y,\\delta(\\omega))$ for all $\\omega\\in\\Omega$ and \n$$\n\\Medint_{\\mathbb R^n}f\\left(\\Phi^{-1}(z,\\omega),\\omega\\right)\\,dz=\\Medint_{\\mathbb R^n}\\tilde{f}\\left(\\tilde{\\Phi}^{-1}(z,\\delta(\\omega)),\\delta(\\omega)\\right)\\,dz,\n$$\nfor a.e. $\\omega\\in\\Omega$. This completes the proof of Theorem \\ref{Compacification}.\n\\end{proof}\n\nIn practice, in our context, the set $\\mathbb{S}$ shall be a countable set generated by the coefficients of equation \\eqref{jhjkhkjhkj765675233} and by the \neigenfunctions of the spectral equation associated to it. Thus, Theorem \\ref{Compacification} allows us to assume, without loss of generality, that our probability \nspace $\\Big(\\Omega,\\mathscr{F},\\mathbb{P}\\Big)$ is a separable compact space. Using the Ergodic Theorem, given a stationary function $f\\in L^{\\infty}(\\mathbb R^n\\times\\Omega)$ \nthere exists a set of full measure $\\Omega_f\\subset \\Omega$ such that \n\\begin{equation}\\label{Compacification1}\n\\Medint_{\\mathbb R^n}f\\left(\\Phi^{-1}(z,\\tilde{\\omega}),\\tilde{\\omega}\\right)\\,dz= c_{\\Phi}^{-1}\\int_{\\Omega}\\int_{\\Phi([0,1)^n,\\omega)}f\\left(\\Phi^{-1}(z,\\omega),\\omega\\right)\\,dz\\,d\\mathbb{P}(\\omega),\n\\end{equation}\nfor almost all $\\tilde{\\omega}\\in{\\Omega}_f$. Due to the separability of the compact probability space $\\Big(\\Omega,\\mathscr{F},\\mathbb{P}\\Big)$, we can find a set \n$\\mathbb{D}\\subset C_b(\\mathbb R^n\\times\\Omega)$ such that:\n\\begin{itemize}\n\\item Each $f\\in\\mathbb{D}$ is a stationary function. 
\n\\item $\\mathbb{D}$ is a countable and dense set in $C_0\\big([0,1)^n\\times\\Omega\\big)$.\n\\end{itemize}\nIn this case, there exists a set $\\Omega_0\\subset\\Omega$ of full measure such that the equality \\eqref{Compacification1} holds for any $\\tilde{\\omega}\\in\\Omega_0$ and \n$f\\in\\mathbb{D}$.\n\n\\medskip\nNow, we proceed with the definition of two-scale convergence in this setting of stochastic deformations. In what follows, the set $O\\subset\\mathbb R^n$ is an open set. \n\\begin{definition}\n\\label{two-scale}\nLet $1< p <\\infty$ and $v_{\\varepsilon}:O\\times\\Omega\\to \\mathbb{C}$ be a sequence such that $v_{\\varepsilon}(\\cdot,\\tilde{\\omega})\\in L^p(O)$. \nThe sequence \n$\\{v_{\\varepsilon}(\\cdot,\\tilde{\\omega}){\\}}_{\\varepsilon>0}$ is said to $\\Phi_\\omega-$two-scale converge to a stationary function $V_{\\tilde{\\omega}}\\in L^p\\left(O\\times [0,1)^n\\times\\Omega\\right)$\nif, for a.e. $\\tilde{\\omega}\\in\\Omega$, the following holds:\n$$\n\\begin{aligned}\n&\\lim_{\\varepsilon\\to 0}\\int_{O}v_{\\varepsilon}(x,\\tilde{\\omega})\\,\\varphi(x)\\,\\Theta\\left(\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\tilde{\\omega}\\right),\\tilde{\\omega}\\right)\\,dx\n\\\\\n&= c_{\\Phi}^{-1}\\int_{O\\times\\Omega}\\int_{\\Phi([0,1)^n,\\omega)} \\!\\!\\! V_{\\tilde{\\omega}}\\left(x,\\Phi^{-1}\\left(z,\\omega\\right),\\omega\\right)\\,\\varphi(x)\\,\n\\Theta(\\Phi^{-1}(z,\\omega),\\omega)\\,dz\\,d{\\mathbb{P}(\\omega)}\\,dx,\n\\end{aligned}\n$$\nfor all $\\varphi\\in C_c^{\\infty}(O)$ and $\\Theta\\in L^{q}_{\\loc}(\\mathbb R^n\\times\\Omega)$ stationary. 
Here, $p^{-1}+q^{-1}=1$ and \n$c_{\\Phi}:=\\det\\Big(\\int_{[0,1)^n\\times \\Omega}\\nabla \\Phi(y,\\omega)\\,d{\\mathbb{P}(\\omega)}\\,dy\\Big)$.\n\\end{definition}\n\\begin{remark}\nFrom now on, we shall use the notation \n\\begin{equation*}\n\t\t\tv_{\\varepsilon}(x,\\widetilde{\\omega}) \\; \\xrightharpoonup[\\varepsilon \\to 0]{2-{\\rm s}}\\; V_{\\widetilde{\\omega}} {\\left(x,\\Phi^{-1}(z,\\omega),\\omega \\right)},\n\t\t\\end{equation*}\nto indicate that $v_{\\varepsilon}(\\cdot,\\tilde{\\omega})$ $\\Phi_\\omega-$two-scale converges to $V_{\\tilde{\\omega}}$.\n\\end{remark}\nThe most important result about two-scale convergence needed in this paper is the following compactness theorem, which \ngeneralizes the corresponding results for the \ndeterministic case (see Theorem 4.8 in~\\cite{DiazGayte})\nand for the stochastic case (see Theorem 3.4 in~\\cite{BourgeatMikelicWright}). \n\\begin{theorem}\n\\label{TwoScale}\nLet $1< p <\\infty$ and $v_{\\varepsilon}:O\\times\\Omega\\to \\mathbb{C}$ be a sequence such that\n$$\n\\sup_{\\varepsilon>0}\\int_{O}|v_{\\varepsilon}(x,\\tilde{\\omega})|^p\\,dx<\\infty,\n$$\nfor almost all $\\tilde{\\omega}\\in\\Omega$. \nThen, for almost all $\\tilde{\\omega}\\in\\Omega_0$, there exists a subsequence $\\{v_{\\varepsilon'}(\\cdot,\\tilde{\\omega}){\\}}_{\\varepsilon'>0}$, which may depend on $\\tilde{\\omega}$, and a \nstationary function $V_{\\tilde{\\omega}}\\in L^p(O\\times [0,1)^n\\times\\Omega)$, such that \n$$\nv_{\\varepsilon'}(x,\\widetilde{\\omega}) \\; \\xrightharpoonup[\\varepsilon' \\to 0]{2-{\\rm s}}\\; V_{\\widetilde{\\omega}} {\\left(x,\\Phi^{-1}(z,\\omega),\\omega \\right)}.\n$$\n\\end{theorem}\n\\begin{proof}\n1. We begin by fixing $\\tilde{\\omega}\\in\\Omega_0$. 
Due to our assumption, there exists ${ c(\\widetilde{\\omega})>0 }$, such that \nfor all $\\varepsilon >0$\n\t\t\\begin{equation*}\n\t\t\t{\\left\\Vert v_\\varepsilon(\\cdot,\\widetilde{\\omega}) \\right\\Vert}_{L^p(O)} \\leqslant c(\\widetilde{\\omega}).\n\t\t\\end{equation*}\nNow, taking $\\phi\\in \\Xi\\times \\mathbb{D}$ with $\\Xi\\subset C_c^{\\infty}(O)$ dense in $L^q(O)$, after applying the H\\\"older inequality and the Ergodic Theorem we obtain \n\\begin{multline}\n\\label{6742938tyuer}\n\t\t\t\\underset{\\varepsilon \\to 0}{\\limsup} {\\vert \\int_{O} v_\\varepsilon(x,\\widetilde{\\omega}) \\, { \\phi{\\left( x,\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon}, \\widetilde{\\omega} \\right)}, \\widetilde{\\omega} \\right)} } dx \\vert} \n\t\t\t\\\\\n\t\t \\leq c(\\widetilde{\\omega}) {\\left[ \\underset{\\varepsilon \\to 0}{\\limsup}\\int_{O} {\\left\\vert \\phi{\\left( x,\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon}, \\widetilde{\\omega} \\right)}, \\widetilde{\\omega} \\right)} \\right\\vert}^q dx \\right]}^{1\/q} \\hspace{100pt} \n\t\t \\\\\n\t\t\t= c(\\widetilde{\\omega}) {\\left[ c_\\Phi^{-1} \\int_{O} \\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} {\\left\\vert \\phi{\\left( x,\\Phi^{-1}(z,\\omega),\\omega \\right)} \\right\\vert}^q dz \\, d\\mathbb{P}(\\omega) \\, dx \\right]}^{1\/q}.\n\t\t\\end{multline}\nThus, the countability of the set $\\Xi\\times \\mathbb{D}$ combined with a diagonal argument yields a subsequence $\\{\\varepsilon'\\}$ (possibly depending on $\\tilde{\\omega}$) \nsuch that the functional $\\mu:\\Xi\\times \\mathbb{D}\\to \\mathbb{C}$ given by \n\\begin{equation}\\label{Two-scale1}\n\t\t\\langle\\mu,\\phi\\rangle:=\t\\lim_{\\varepsilon^{\\prime} \\to 0}\\int_{O} v_{\\varepsilon^{\\prime}}(x,\\widetilde{\\omega}) \\, { \\phi{\\left( x, \\Phi^{-1}{\\left( \\frac{x}{\\varepsilon^{\\prime}}, \\widetilde{\\omega} \\right)}, \\widetilde{\\omega} \\right)} } dx\n\t\t\\end{equation}\nis well-defined and bounded with respect to the norm 
$\\|\\cdot{\\|}_q$ defined as \n$$\n\\|\\phi{\\|}_q:= \\big[ c_\\Phi^{-1} \\int_{O} \\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} {\\left\\vert \\phi{\\left( x,\\Phi^{-1}(z,\\omega),\\omega \\right)} \\right\\vert}^q dz \\, d\\mathbb{P}(\\omega) \\, dx \\big]^{1\/q}\n$$\nby \\eqref{6742938tyuer}. Since the set $\\Xi\\times \\mathbb{D}$ is dense in $L^q\\left(O\\times[0,1)^n\\times\\Omega\\right)$, we can extend the functional $\\mu$ to a bounded \nfunctional $\\tilde{\\mu}$ over $L^q\\left(O\\times[0,1)^n\\times\\Omega\\right)$. Hence, we find $V_{\\tilde{\\omega}}\\in L^p(O\\times [0,1)^n\\times\\Omega)$ which can be extended \nto $O\\times\\mathbb R^n\\times\\Omega$ in a stationary way by setting \n$$\nV_{\\tilde{\\omega}}(x,y,\\omega)=V_{\\tilde{\\omega}}\\left(x,y-\\left\\lfloor y \\right\\rfloor, \\tau_{\\left\\lfloor y \\right\\rfloor}\\omega\\right),\n$$\nand satisfying for all $\\phi\\in L^q\\left(O\\times[0,1)^n\\times\\Omega\\right)$, \n$$\n\\langle \\tilde{\\mu},\\phi\\rangle\\!\\!= c_{\\Phi}^{-1}\\int_{O\\times\\Omega}\\int_{\\Phi([0,1)^n,\\omega)}\n\\!\\!\\! \\!V_{\\tilde{\\omega}}\\left(x,\\Phi^{-1}\\left(z,\\omega\\right),\\omega\\right)\n\\phi\\left(x,\\Phi^{-1}(z,\\omega),\\omega\\right)dz d\\mathbb{P}(\\omega) dx.\n$$\n\n2. Now, take $\\varphi\\in C^{\\infty}_c(O)$ and $\\Theta\\in L^{q}_{\\loc}(\\mathbb R^n\\times\\Omega)$ a $\\tau$-stationary function. 
Since the set \n$\\Xi\\times \\mathbb{D}$ is dense in $L^q\\left(O\\times[0,1)^n\\times\\Omega\\right)$, we can pick a sequence \n$\\{(\\varphi_j,\\Theta_j){\\}}_{j\\ge1}\\subset \\Xi\\times \\mathbb{D}$ such that \n$$\n\\lim_{j\\to\\infty}(\\varphi_j,\\Theta_j)=(\\varphi,\\Theta)\\quad\\text{in $L^q\\Big(O\\times[0,1)^n\\times\\Omega\\Big)$}.\n$$\nThen, writing $\\varphi\\,\\Theta-\\varphi_j\\,\\Theta_j=(\\varphi-\\varphi_j)\\,\\Theta+\\varphi_j\\,(\\Theta-\\Theta_j)$ and applying the H\\\"older inequality and the Ergodic Theorem, we observe that \n\\begin{eqnarray*}\n&&\\limsup_{\\varepsilon'\\to 0}\\Big| \\int_{O}v_{\\varepsilon'}(x,\\tilde{\\omega})\\varphi(x)\\Theta\\left(\\Phi^{-1}\\left(\\frac{x}{\\varepsilon'},\\tilde{\\omega}\\right),\\tilde{\\omega}\\right)\\,dx\n\\\\\n&&-\\int_{O}v_{\\varepsilon'}(x,\\tilde{\\omega})\\varphi_j(x)\\Theta_j\\left(\\Phi^{-1}\\left(\\frac{x}{\\varepsilon'},\\tilde{\\omega}\\right),\\tilde{\\omega}\\right)\\,dx\\Big|\n\\\\\n&& \\le c\\, \\|\\varphi-\\varphi_j{\\|}_{L^q(O)}\n[ c_\\Phi^{-1} \\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} {\\left\\vert \\Theta{\\left(\\Phi^{-1}(z,\\omega),\\omega \\right)} \\right\\vert}^q dz \\, d\\mathbb{P}(\\omega)]^{1\/q}\n\\\\\n&&+\\, c\\, \\|\\varphi_j{\\|}_{L^q(O)}\n[ c_\\Phi^{-1} \\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} {\\left\\vert (\\Theta-\\Theta_j){\\left(\\Phi^{-1}(z,\\omega),\\omega \\right)} \\right\\vert}^q dz \\, d\\mathbb{P}(\\omega)]^{1\/q},\n\\end{eqnarray*}\nwhere $c= c(\\tilde{\\omega})$ is a positive constant. \nLetting $j\\to\\infty$ and combining the previous estimate with \\eqref{Two-scale1}, we conclude the proof of the theorem.\n\\end{proof}\n\nLet us recall the following space (see Remark \\ref{REMFPHI}) \n$$\n\\mathcal{H}:=\\Big\\{w\\in H^1_{\\loc}(\\mathbb R^n;L^2(\\Omega));\\,\\text{$w$ is a stationary function}\\Big\\},\n$$\nwhich is a Hilbert space with respect to the following inner product \n$$\n\\begin{aligned}\n\\langle w,v{\\rangle}_{\\mathcal{H}}:=\\int_{[0,1)^n\\times\\Omega} \\!\\! & \\nabla_{\\!y} w(y,\\omega)\\cdot \\nabla_y v(y,\\omega)\\,d{\\mathbb{P}}(\\omega)\\,dy\n\\\\\n& +\\int_{[0,1)^n\\times\\Omega} \\!\\! \\!\\! w(y,\\omega) v(y,\\omega)\\,d{\\mathbb{P}}(\\omega)\\,dy.\n\\end{aligned} \n$$\nThe next lemma will be important in the homogenization process. 
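Before proceeding, we record a simple consistency check for Definition \\ref{two-scale}, not used in the sequel: assuming the trivial deformation $\\Phi(y,\\omega)=y$, one has $\\Phi^{-1}(z,\\omega)=z$ and $c_{\\Phi}=1$, so the defining identity reduces to the classical stochastic two-scale convergence of~\\cite{BourgeatMikelicWright}:

```latex
% Trivial deformation Phi(y,omega) = y: here Phi^{-1}(z,omega) = z and c_Phi = 1,
% and the two-scale identity of Definition \ref{two-scale} becomes
\lim_{\varepsilon\to 0}\int_{O} v_{\varepsilon}(x,\tilde{\omega})\,\varphi(x)\,
  \Theta\Big(\frac{x}{\varepsilon},\tilde{\omega}\Big)\,dx
  =\int_{O\times\Omega}\int_{[0,1)^n}
  V_{\tilde{\omega}}(x,z,\omega)\,\varphi(x)\,\Theta(z,\omega)\,dz\,d\mathbb{P}(\omega)\,dx.
```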
\n\\begin{lemma}\\label{SYM1-5}\nLet $O\\subset\\mathbb R^n$ be an open set and assume that $\\{u_{\\varepsilon}(\\cdot,\\tilde{\\omega}){\\}}_{\\varepsilon>0}$ and $\\{\\varepsilon \\nabla u_{\\varepsilon}(\\cdot,\\tilde{\\omega}){\\}}_{\\varepsilon>0}$ are \nbounded sequences in $L^2(O)$ and in $L^2(O;\\mathbb R^n)$, respectively, for a.e. $\\tilde{\\omega}\\in\\Omega$. Then, for a.e. $\\tilde{\\omega}\\in\\Omega$, there exists a \nsubsequence $\\{\\varepsilon'\\}$ (possibly depending on $\\tilde{\\omega}$) and $u_{\\tilde{\\omega}}\\in L^2(O;\\mathcal{H})$, such that \n$$\nu_{\\varepsilon'}(\\cdot,\\tilde{\\omega})\\; \\xrightharpoonup[\\varepsilon' \\to 0]{2-{\\rm s}}\\; u_{\\tilde{\\omega}},\n$$\nand \n$$\n\\varepsilon'\\nabla u_{\\varepsilon'}(\\cdot,\\tilde{\\omega})\\; \\xrightharpoonup[\\varepsilon' \\to 0]{2-{\\rm s}}\\;[\\nabla_y\\Phi]^{-1}\\nabla_y u_{\\tilde{\\omega}}.\n$$\n\\end{lemma}\n\\begin{proof}\n Applying Theorem \\ref{TwoScale} to the sequences \n $${ \\{u_\\varepsilon(\\cdot,\\widetilde{\\omega})\\}_{\\varepsilon > 0} }, \\quad\n{ \\{\\varepsilon \\nabla u_\\varepsilon(\\cdot,\\widetilde{\\omega})\\}_{\\varepsilon > 0} }$$ \nfor a.e. ${ \\widetilde{\\omega} \\in \\Omega }$, \nwe can find a subsequence $\\{ \\varepsilon^\\prime \\}$, and functions \n$${ u_{\\widetilde{\\omega}} \\in L^2({O} \\! \\times \\! [0,1)^n \\! \\times \\! \\Omega) }, \\quad\n{ V_{\\widetilde{\\omega}} \\in L^2({O} \\! \\times [0,1)^n \\! \\times \\! 
\\Omega;\\mathbb R^n)}$$ \nwith ${ V_{\\widetilde{\\omega}} = (v^{(1)}_{\\widetilde{\\omega}}, \\ldots, v^{(n)}_{\\widetilde{\\omega}}) }$\nsatisfying, for ${ k \\in \\{1,2, \\ldots, n\\} }$, \n\\begin{equation}\\label{9867986876410}\n\t\t\tu_{\\varepsilon^\\prime}(\\cdot,\\widetilde{\\omega}) \\; \\xrightharpoonup[\\varepsilon^\\prime \\to 0]{2-{\\rm s}} \\; u_{\\widetilde{\\omega}},\n\t\t\\end{equation}\n\t\tand\n\t\t\\begin{equation}\\label{7869876874}\n\t\t\t\\varepsilon^\\prime \\frac{\\partial u_{\\varepsilon^\\prime}}{\\partial x_k} \\; \\xrightharpoonup[\\varepsilon^\\prime \\to 0]{2-{\\rm s}} \\; v^{(k)}_{\\widetilde{\\omega}}.\n\t\t\\end{equation}\t\t\nHence, for each ${ k\\in \\{1,\\ldots,n\\} }$, performing an integration by parts we obtain \n\t\t\\begin{multline*}\n\t\t\t\\int_{{O}} \\varepsilon^\\prime \\frac{\\partial u_{\\varepsilon^\\prime}}{\\partial x_k} (x,\\widetilde{\\omega}) \\, {\\varphi(x) \\, \\Theta{\\left( \\Phi^{-1}\\left( \\frac{x}{\\varepsilon^\\prime}, \\widetilde{\\omega} \\right), \\widetilde{\\omega} \\right)}} dx\n\t\t\t\\\\\n\t\t\t\\hspace{-4cm}= -\\varepsilon^\\prime\\int_{{O}} u_{\\varepsilon^\\prime} (x,\\widetilde{\\omega}) \\, {\\frac{\\partial \\varphi}{\\partial x_k}(x) \\, \\Theta {\\left( \\Phi^{-1}\\left( \\frac{x}{\\varepsilon^\\prime}, \\widetilde{\\omega} \\right), \\widetilde{\\omega} \\right)}} dx \n\t\t\t\\\\\n\t\t\t\\quad \\quad -\\int_{{O}} u_{\\varepsilon^\\prime} (x,\\widetilde{\\omega}) \\, {\\varphi(x) \\, {[\\nabla_{\\!\\! y}\\Phi]}^{-1} {\\left( \\Phi^{-1}{\\left( \\frac{x}{\\varepsilon^\\prime}, \\widetilde{\\omega} \\right)}, \\widetilde{\\omega} \\right)} \\, \\nabla_{\\!\\! y} \\Theta{\\left( \\Phi^{-1}\\left( \\frac{x}{\\varepsilon^\\prime}, \\widetilde{\\omega} \\right), \\widetilde{\\omega} \\right)} \\cdotp e_k} \\, dx,\n\t\t\\end{multline*}\nfor every $\\varphi\\in C^{\\infty}_c(O)$ and $\\Theta \\in C_c^{\\infty}\\big([0,1)^n; L^{\\infty}(\\Omega)\\big)$ extended in a stationary way to $\\mathbb R^n$. Then, using the relations \\eqref{9867986876410}-\\eqref{7869876874} and a density argument in the space of the test functions, letting $\\varepsilon'\\to 0$ we arrive at\n\\begin{multline*}\n\t\t\t\\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} v^{(k)}_{\\widetilde{\\omega}} {\\left( x, \\Phi^{-1}\\left( z,\\omega \\right),\\omega \\right)} \\, { \\Theta{\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} } \\, dz \\, d\\mathbb{P}(\\omega)\n\t\t\t\\\\\n\t\t\t= - \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} u_{\\widetilde{\\omega}} \\left( x, \\Phi^{-1}\\left( z,\\omega \\right),\\omega \\right) { \\frac{\\partial }{\\partial z_k} {\\left( \\Theta{\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\right)} } \\, dz \\, d\\mathbb{P}(\\omega) ,\n\t\t\\end{multline*}\nfor a.e. $x \\in {O} $ and for any $\\Theta \\in C_c^{\\infty}\\big([0,1)^n; L^{\\infty}(\\Omega)\\big)$.\n\nHence, applying Theorem \\ref{648235azwsxqdgfd}, we obtain \n\\begin{equation*}\n\t\\int_{\\mathbb{R}^n} v^{(k)}_{\\widetilde{\\omega}} {\\left( x, \\Phi^{-1}\\left( z,\\omega \\right),\\omega \\right)} \n\t\\, { \\varphi(z) } \\, dz \\, = \\, -\\int_{\\mathbb{R}^n} u_{\\widetilde{\\omega}} \n\t{\\left( x, \\Phi^{-1}\\left( z,\\omega \\right),\\omega \\right)} \\, { \\frac{\\partial \\varphi}{\\partial z_k}(z) } \\, dz ,\n\\end{equation*}\nfor all ${ \\varphi \\in C_{\\rm c}^\\infty(\\mathbb{R}^n) }$ and a.e. ${ \\omega \\in \\Omega }$. This completes the proof of our lemma. 
\n\\end{proof}\n\n\\subsection{Perturbations of bounded operators} \n\\label{0239786gfhgdf}\n\nThe aim of this section is to study the point spectrum, that is, the set of eigenvalues, for perturbations of a \ngiven bounded operator. More precisely, given a complex Hilbert space $H$, and a sequence \nof operators $\\{A_\\alpha\\}$, with \n$A_\\alpha \\in \\mathcal{B}(H)$ for each $\\alpha \\in \\mathbb{N}^n$, we analyse the point spectrum \nof the power series of $n-$complex variables $\\boldsymbol{z}= (z_1, \\ldots, z_n)$, which is\n\\begin{equation}\n\\label{POWERSERIES}\n\t\\sum_{\\alpha \\in \\mathbb{N}^n} \\boldsymbol{z}^\\alpha A_\\alpha \\equiv \n\t\\sum_{\\alpha_1,\\ldots,\\alpha_n = 0}^\\infty z_1^{\\alpha_1} \\ldots z_n^{\\alpha_n} A_{\\alpha_1,\\ldots,\\alpha_n},\n\\end{equation}\nfrom the properties of the spectrum $\\sigma(A_{0,\\ldots,0})$. \nThis subject was studied, for instance, by T. Kato \\cite{Kato} and F. Rellich \\cite{Rellich}.\n\n\t\n\\medskip\t\nIn what follows, we define $|\\alpha|:= \\alpha_1 + \\ldots + \\alpha_n$, $(\\alpha \\in \\mathbb{N}^n)$,\n\\begin{equation}\n\\label{ConvergRadius}\n\tr := \\Big(\\underset{k \\in \\mathbb{N}}{\\rm inf} \\big\\{ \\underset{ {\\Vert \\alpha \\Vert}_\\infty > k}{\\rm sup} \\sqrt[ {|\\alpha|}]{ {\\Vert A_\\alpha \\Vert} } \\big\\}\\Big)^{-1},\n\\end{equation} \nand for $R> 0$\n$$\n\t\\Delta_R := \\prod_{\\nu=1}^n B(0,R).\n$$\nThen, we have the following \n\\begin{lemma}\nLet $ {\\left\\{ A_\\alpha \\right\\}}$ be a sequence of operators, such that $A_\\alpha \\in \\mathcal{B}(H)$\nfor each $\\alpha \\in \\mathbb{N}^n$. 
Then, the series \\eqref{POWERSERIES} is convergent for each $z \\in \\Delta_r$, \nwith $r> 0$ given by \\eqref{ConvergRadius}.\n\\end{lemma}\n\\begin{proof}\nGiven $\\boldsymbol{z} \\in \\Delta_r$, there exists $\\varepsilon> 0$ such that\n\\begin{equation}\n\\label{87687638}\n\t{\\big( \\frac{1}{r} + \\varepsilon \\big)} {\\vert z_\\nu \\vert} < 1, \\quad \\text{for any $\\nu \\in \\{ 1,\\ldots,n \\}$}.\n\\end{equation}\nOn the other hand, \t\t\nfrom \\eqref{ConvergRadius} there exists $k_0 \\in \\mathbb{N}$, such that,\nfor each $k\\geqslant k_0$\n\t\t\\begin{equation*}\n\t\t\t\\underset{ {\\Vert \\alpha \\Vert}_\\infty > k}{\\rm sup} \\sqrt[{\\vert \\alpha \\vert}]{ {\\Vert A_\\alpha \\Vert} } < \\frac{1}{r} + \\varepsilon.\n\t\t\\end{equation*}\nThen, for $\\|\\alpha\\|_\\infty > k_0$\n\t\t\\begin{equation*}\n\t\t\t{\\Vert A_\\alpha \\Vert} < {\\big( \\frac{1}{r} + \\varepsilon \\big)}^{ {\\vert \\alpha \\vert} },\n\t\t\\end{equation*}\nand hence we have\n$$\n {\\vert z_1 \\vert}^{\\alpha_1} \\ldots {\\vert z_n \\vert}^{\\alpha_n} {\\Vert A_\\alpha \\Vert} \n < {\\Big(\\big(\\frac{1}{r} + \\varepsilon \\big) {\\vert z_1 \\vert} \\Big)}^{\\alpha_1} \\ldots \n {\\Big(\\big(\\frac{1}{r} + \\varepsilon \\big) {\\vert z_n \\vert} \\Big)}^{\\alpha_n}.\n$$\nTherefore, we obtain\t\t\n\t\t\\begin{eqnarray*}\n\t\t\t\\sum_{ {\\Vert \\alpha \\Vert}_\\infty > k_0 } {\\vert \\boldsymbol{z} \\vert}^\\alpha {\\Vert A_\\alpha \\Vert} & \\leqslant & \n\t\t\t\\sum_{\\alpha_1,\\ldots,\\alpha_n = 0}^\\infty {\\left\\{ {\\left[ {\\left( \\frac{1}{r} + \\varepsilon \\right)} {\\vert z_1 \\vert} \\right]}^{\\alpha_1}\n\t\t\t \\ldots {\\left[ {\\left( \\frac{1}{r} + \\varepsilon \\right)} {\\vert z_n \\vert} \\right]}^{\\alpha_n} \\right\\}} \n\\\\[5pt]\n\t\t\t&= & {\\left\\{ \\sum_{\\alpha_1=0}^\\infty {\\left[ {\\left( \\frac{1}{r} + \\varepsilon \\right)} {\\vert z_1 \\vert} \\right]}^{\\alpha_1} \\right\\}} \n\t\t\t\\ldots {\\left\\{ \\sum_{\\alpha_n=0}^\\infty {\\left[ {\\left( 
\\frac{1}{r} + \\varepsilon \\right)} {\\vert z_n \\vert} \\right]}^{\\alpha_n} \\right\\}},\n\t\t\\end{eqnarray*}\nand due to \\eqref{87687638} the power series \\eqref{POWERSERIES} \nis absolutely convergent for each $\\boldsymbol{z} \\in \\Delta_r$. \n\\end{proof}\n\nOne remarks that, for each $r_0< r$, the series \\eqref{POWERSERIES} converges uniformly in $\\Delta_{r_0}$.\nMoreover, it follows from definition \\eqref{ConvergRadius} that there exists $c> 0$ such that, for any \n$\\alpha \\in \\mathbb{N}^n$, ${\\Vert A_\\alpha \\Vert} \\leqslant {c}^{ {\\vert \\alpha \\vert} + 1}$.\n\n\\medskip\nNow, let us recall the definition of operator-valued maps of several complex variables, and after that consider some important results. \nLet $\\mathcal{O} \\subset \\mathbb{C}^n$ be an open set. A map \n$f: \\mathcal{O} \\to \\mathcal{B}(H)$ is called holomorphic in \n$\\boldsymbol{w} \\in \\mathcal{O}$, when there exists an open set $U \\subset \\mathcal{O}$, $\\boldsymbol{w} \\in U$,\nsuch that $f$ is equal to the (absolutely convergent) power series in \n$\\boldsymbol{z}-\\boldsymbol{w}$, with coefficients $A_\\alpha \\in \\mathcal{B}(H)$, that is \n$$\n\\begin{aligned}\n\tf(\\boldsymbol{z}) \\equiv f(z_1,\\ldots,z_n) &= \\sum_{\\alpha\\in \\mathbb{N}^n} (\\boldsymbol{z}-\\boldsymbol{w})^\\alpha A_\\alpha \n\\\\[5pt]\n\t&\\equiv \\sum_{\\alpha_1,\\ldots,\\alpha_n = 0}^\\infty (z_1 - w_1)^{\\alpha_1} \\ldots (z_n - w_n)^{\\alpha_n} A_{\\alpha_1,\\ldots,\\alpha_n}\n\\end{aligned}\n$$\t\t \nfor each $\\boldsymbol{z} \\in U$. Moreover, the function $f$ is called holomorphic in $\\mathcal{O}$, \nif it is holomorphic at any $\\boldsymbol{w}\\in \\mathcal{O}$.\n\n\\medskip\nMoreover, assume that $A \\in \\mathcal{B}(H)$ is a symmetric operator and \n$\\lambda \\in \\mathbb{R}$ is an eigenvalue of $A$ with finite multiplicity $h$. 
\nTherefore, the operator $A-\\lambda I$ is not invertible and there exists a symmetric operator\n$R \\in \\mathcal{B}(H)$, uniquely defined, such that \n\\begin{equation}\n\\label{DEFR}\n\\begin{aligned}\n R (A-\\lambda I ) f &= f - \\sum_{k=1}^h {\\langle f,\\psi_k \\rangle} \\psi_k, \\quad \\text{for each $f \\in H$, and} \n \\\\\n R \\psi_k &= 0, \\quad \\text{for all $k \\in \\{1,\\ldots,h\\}$},\n\\end{aligned}\n\\end{equation}\nwhere $\\{ \\psi_1, \\ldots, \\psi_h\\}$ is an orthonormal basis of ${\\rm Ker}(A-\\lambda I)$. \nThe operator $R$ is called a pseudo-inverse of $A-\\lambda I$, and \none observes that $AR= RA$.\n\n\\medskip\nIt is also important to consider the following results on complex-valued functions.\n\\begin{lemma}[Osgood's Lemma]\n\\label{9734987389rhd7gf6ty}\nLet $\\mathcal{O} \\subset \\mathbb{C}^n$ be an open set, and $f : \\mathcal{O} \\to \\mathbb{C}$\na continuous function that is holomorphic in each variable separately. Then,\nthe function $f$ is holomorphic.\n\\end{lemma}\n\nThen, in order to state the Weierstrass Preparation Theorem, let us recall the concept of a Weierstrass polynomial. \nA complex function $W(\\varrho,\\boldsymbol{z})$, which is holomorphic in a\nneighborhood of $(0,\\boldsymbol{0})\\in \\mathbb{C} \\! \\times \\! \\mathbb{C}^n$, is called a Weierstrass polynomial of\ndegree $m$, when \n$$\n W(\\varrho,\\boldsymbol{z}) = \\varrho^m + a_1(\\boldsymbol{z}) \\varrho^{m-1} + \\ldots + a_{m-1}(\\boldsymbol{z}) \\varrho + a_m(\\boldsymbol{z}),\n$$\nwhere any $a_i(\\boldsymbol{z})$, \n$(i= 1,\\ldots,m)$, is a holomorphic function in a neighborhood of $\\boldsymbol{0} \\in \\mathbb{C}^n$ that vanishes \nat $\\boldsymbol{z}= \\boldsymbol{0}$. Then, we have the following \n\n\\begin{theorem}[Weierstrass Preparation Theorem]\n\\label{8747285tdg4f}\nLet $m$ be a positive integer, and $F(\\varrho,\\boldsymbol{z})$\nholomorphic in a neighborhood of $(0,\\boldsymbol{0}) \\in \\mathbb{C} \\! \\times \\! 
\\mathbb{C}^n$ such that \nthe mapping $\\varrho \\mapsto F(\\varrho,\\boldsymbol{0})\/\\varrho^m$ is holomorphic in a neighborhood of \n$0 \\in \\mathbb{C}$\n and is non-zero at $0$. Then, there exist a Weierstrass polynomial $W(\\varrho,\\boldsymbol{z})$ of degree $m$, \n and a holomorphic function $E(\\varrho,\\boldsymbol{z})$ which does not vanish in a neighborhood \n$U$ of $(0,\\boldsymbol{0})$, such that, for all \n$(\\varrho,\\boldsymbol{z}) \\in U$\n$$\n F(\\varrho,\\boldsymbol{z}) = W(\\varrho,\\boldsymbol{z}) E(\\varrho,\\boldsymbol{z}).\n$$ \n\\end{theorem}\n\\begin{proof}\nSee S. G. Krantz, H. R. Parks \\cite[p. 96]{KrantzParks}.\n\\end{proof}\n\t\t\n\\smallskip\t\nAt this point, we are in a position to establish the main result of this section, \nthat is to say, the perturbation theory for bounded operators with \nisolated eigenvalues of finite multiplicity.\nThe theorem considered here is a convenient and direct version for our purposes\nin this paper. \n\t\n\\begin{theorem}\n\\label{768746hughjg576}\nLet $H$ be a Hilbert space, and let $\\{A_\\alpha\\}$ be a sequence \nof operators with \n$A_\\alpha \\in \\mathcal{B}(H)$ for each $\\alpha \\in \\mathbb{N}^n$.\nConsider the power series of $n-$complex variables $\\boldsymbol{z}= (z_1, \\ldots, z_n)$\nwith coefficients $A_\\alpha$, which is absolutely convergent in a neighborhood $ \\mathcal{O}$ of \n$\\boldsymbol{z}=\\boldsymbol{0}$. Define the holomorphic map $A: \\mathcal{O} \\to \\mathcal{B}(H)$, \n$$ \n A(\\boldsymbol{z}):= \\sum_{\\alpha\\in \\mathbb{N}^n} \\boldsymbol{z}^\\alpha A_\\alpha\n$$\nand assume that it is symmetric. 
If $\\lambda$ is an eigenvalue \nof $A_0 \\equiv A(\\boldsymbol{0})$ with finite multiplicity $h$ (and respective eigenvectors $\\psi_i$, $i= 1,\\ldots,h$), \nthen there exist a neighborhood $U \\subset \\mathcal{O}$\nof ${ \\boldsymbol{0} }$, and holomorphic functions\n$$\n\\begin{aligned}\n \\boldsymbol{z} \\in U &\\, \\mapsto \\, \\lambda_1(\\boldsymbol{z}), \\lambda_2(\\boldsymbol{z}), \\ldots, \\lambda_h(\\boldsymbol{z}) \\in \\mathbb{R},\n \\\\[5pt]\n \\boldsymbol{z} \\in U &\\, \\mapsto \\, \\psi_1(\\boldsymbol{z}), \\psi_2(\\boldsymbol{z}), \\ldots, \\psi_h(\\boldsymbol{z}) \\in H\\setminus \\{0\\},\n\\end{aligned}\n$$\nsatisfying for each $\\boldsymbol{z} \\in U$ and $i \\in \\{1,\\ldots,h\\}:$\n\t\t\\begin{itemize}\n\t\t\t\\item[$(i)$] $A(\\boldsymbol{z}) \\psi_i(\\boldsymbol{z}) = \\lambda_i(\\boldsymbol{z}) \\psi_i(\\boldsymbol{z})$, \n\t\t\t\\item[$(ii)$] ${ \\lambda_i(\\boldsymbol{z} = \\boldsymbol{0}) = \\lambda }$, \n\t\t\t\\item[$(iii)$] ${ {\\rm dim} {\\{w \\in H \\; ; \\; A(\\boldsymbol{z}) w = \\lambda_i(\\boldsymbol{z}) w \\}} \\leqslant h }$.\n\t\t\\end{itemize}\nMoreover, if there exists $d> 0$ such that \n$$\n \\sigma(A_0)\\cap (\\lambda-d, \\lambda+d) = {\\left\\{ \\lambda \\right\\}},\n$$\nthen for each $d^\\prime\\in(0,d)$ there exists a neighborhood $W \\subset U$ of $\\boldsymbol{0}$, \nsuch that\n\\begin{equation}\n\\label{FINALPERT}\n \\sigma(A(\\boldsymbol{z})) \\cap (\\lambda - d^\\prime, \\lambda + d^\\prime) = {\\left\\{ \\lambda_1(\\boldsymbol{z}), \\ldots, \\lambda_h(\\boldsymbol{z}) \\right\\}}\n\\end{equation}\nfor all $\\boldsymbol{z}\\in W$. \n\\end{theorem}\n\t\n\\begin{proof} \n1. 
First, we conveniently define \n\\begin{equation}\n\\label{DEFB}\n\tB(\\boldsymbol{z}) := A(\\boldsymbol{z}) - A_0 = \\sum_{{\\vert \\alpha \\vert} \\not= 0} \\boldsymbol{z}^\\alpha A_{\\alpha}.\n\\end{equation}\nThen, there exists a neighborhood of $(\\varrho,\\boldsymbol{z})= (0,\\boldsymbol{0})$\nsuch that, the function \n$$\n (\\varrho, \\boldsymbol{z}) \\, \\mapsto \\, \\sum_{l=0}^\\infty {\\left[ R {\\left( \\varrho - B(\\boldsymbol{z}) \\right)} \\right]}^l \\in \\mathcal{B}(H)\n$$\nis well defined (see equation \\eqref{DEFR}), and holomorphic on it. \nIndeed, first we recall that there exists $c> 0$ such that, for any \n$\\alpha \\in \\mathbb{N}^n$, ${\\Vert A_\\alpha \\Vert} \\leqslant {c}^{ {\\vert \\alpha \\vert} + 1}$.\nThen, it follows from \\eqref{DEFB} that \n$$\n\\begin{aligned}\n\t{\\Vert B(\\boldsymbol{z}) \\Vert} & \\leqslant \\sum_{{\\vert \\alpha \\vert}\\not=0} {\\vert z_1 \\vert}^{\\alpha_1} \\ldots {\\vert z_n \\vert}^{\\alpha_n} {\\Vert A_{\\alpha} \\Vert} \n \\leqslant \\sum_{{\\vert \\alpha \\vert} \\not= 0} {\\vert \\boldsymbol{z} \\vert}^{{\\vert \\alpha \\vert}} c^{{\\vert \\alpha \\vert}+1} \n\\\\\n & = \\sum_{k=1}^\\infty {\\sum_{{\\vert \\alpha \\vert} = k} {\\vert \\boldsymbol{z} \\vert}^{{\\vert \\alpha \\vert}} c^{{\\vert \\alpha \\vert}+1} } \n = \\sum_{k=1}^\\infty { \\sum_{{\\vert \\alpha \\vert} = k} {\\vert \\boldsymbol{z} \\vert}^{k} c^{k+1} } \n\\\\\n & = \\sum_{k=1}^\\infty {\\left( \\# {\\left\\{ \\alpha\\in\\mathbb{N}^n \\; ; \\; {\\vert \\alpha \\vert} = k \\right\\}} {\\vert \\boldsymbol{z} \\vert}^{k} c^{k+1} \\right)} \n\\\\\n & \\leqslant \\sum_{k=1}^\\infty (k+1)^n {\\vert \\boldsymbol{z} \\vert}^{k} c^{k+1} \n = {\\vert \\boldsymbol{z} \\vert}c^2 \\sum_{k=1}^\\infty (k+1)^n {\\vert \\boldsymbol{z} \\vert}^{k-1} c^{k-1} \n\\\\\n & \\leqslant {\\vert \\boldsymbol{z} \\vert} c^2 \\sum_{k=0}^\\infty (k+2)^n {\\vert \\boldsymbol{z} \\vert}^{k} c^{k}.\n\\end{aligned}\n$$\nTherefore, it follows that $\\sum_{k=0}^\\infty 
(k+2)^n {\\vert \\boldsymbol{z} \\vert}^{k} c^{k}$ is absolutely convergent \nfor each $\\boldsymbol{z} \\in B{\\left( \\boldsymbol{0},\\frac{1}{4^{n}c} \\right)}$. Moreover, there exists $\\tilde{c}> 0$, such that\t\t\n$$\n \\big| \\sum_{k=0}^\\infty (k+2)^n {\\vert \\boldsymbol{z} \\vert}^{k} c^{k} \\big| \\leqslant \\tilde{c}\n \\quad \\text{for each $\\boldsymbol{z} \\in B(\\boldsymbol{0},\\frac{1}{4^{n+1}c})$}.\n$$\t\t\nHence we have from \\eqref{DEFB} that\n\\begin{equation}\n\\label{768746876784}\n\\begin{aligned}\n\t{\\left\\Vert R(\\varrho - B(\\boldsymbol{z})) \\right\\Vert} &\\leq {\\Vert R \\Vert}({\\vert \\varrho \\vert} + {\\vert \\boldsymbol{z} \\vert}c^2\\tilde{c}) \n\t\\\\[5pt]\n\t&\\leq {\\Vert R \\Vert} \\ {\\rm max} {\\left\\{ 1,c^2\\tilde{c} \\right\\}}({\\vert \\varrho \\vert} + {\\vert \\boldsymbol{z} \\vert}),\n\\end{aligned} \n\\end{equation}\t \nfor ${ \\varrho\\in \\mathbb{C} }$ and ${ \\boldsymbol{z}\\in B{\\left( \\boldsymbol{0},\\frac{1}{4^{n+1}c} \\right)} }$.\t\nNext, we define\n\\begin{equation}\n r:= \\min \\Big\\{\\frac{1}{8 {\\Vert R \\Vert} {\\rm max} {\\left\\{ 1,c^2\\tilde{c} \\right\\}}}, \\frac{1}{4^{n+1} c} \\Big\\},\n \\quad \n\t\\Delta_r:=B(0,r) \\! \\times \\! B(\\boldsymbol{0},r) \\subset \\mathbb{C} \\! \\times \\! \\mathbb{C}^n.\n\\end{equation}\nThen, for any $m, m^\\prime \\in \\mathbb{N}$ with $m> m^\\prime$, and all $(\\varrho, \\boldsymbol{z}) \\in \\Delta_r$, we have \t\t\n$$\n\\begin{aligned}\n {\\Vert \\sum_{l=0}^m {\\left[ R(\\varrho - B(\\boldsymbol{z})) \\right]}^l - \\sum_{l=0}^{m^\\prime} {\\left[ R(\\varrho - B(\\boldsymbol{z})) \\right]}^l \\Vert} \n &\\leq \\sum_{l=m^\\prime+1}^m {\\Vert R(\\varrho - B(\\boldsymbol{z})) \\Vert}^l\n \\\\[5pt]\n &\\leq \\sum_{l=m^\\prime+1}^m {\\left( \\frac{1}{4} \\right)}^l.\n\\end{aligned} \n$$ \nConsequently, for any $(\\varrho, \\boldsymbol{z}) \\in \\Delta_r$, $\\{ \\sum_{l=0}^m {\\left[ R(\\varrho - B(\\boldsymbol{z})) \\right]}^l \\}_{m\\in \\mathbb{N}}$\nis a Cauchy sequence in $\\mathcal{B}(H)$. 
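Let us also note, in passing, that the crude multi-index count used in the above estimate follows from the elementary bound\n$$\n \\# {\\left\\{ \\alpha\\in\\mathbb{N}^n \\; ; \\; {\\vert \\alpha \\vert} = k \\right\\}} = \\binom{n+k-1}{n-1} \\leqslant (k+1)^{n-1} \\leqslant (k+1)^n,\n$$\nsince each of the first $n-1$ entries of such an $\\alpha$ belongs to $\\{0,\\ldots,k\\}$, and these entries determine the last one. 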
Therefore, the mapping \n\\begin{equation}\n\t(\\varrho, \\boldsymbol{z})\\in \\Delta_r \\; \\mapsto \\; \\sum_{l=0}^\\infty {\\left[ R(\\varrho - B(\\boldsymbol{z})) \\right]}^l\n\\end{equation}\nis holomorphic, since it is the uniform limit of holomorphic functions. \n\n\\bigskip\n2. Now, for $i,j = 1,\\ldots,h$ and $(\\varrho,\\boldsymbol{z}) \\in \\Delta_r$, let us consider\n$$\n f_{ij}(\\varrho,\\boldsymbol{z}) \n = \\Big\\langle \\sum_{l=0}^\\infty (\\varrho - B(\\boldsymbol{z})) {\\left[ R(\\varrho - B(\\boldsymbol{z})) \\right]}^l \\psi_i, \\psi_j \\Big\\rangle.\n$$\nTherefore, the function $F:\\Delta_r \\rightarrow \\mathbb{C} $, defined by \n$F(\\varrho, \\boldsymbol{z}) := {\\rm det} {\\left[ {\\left( f_{ij}(\\varrho, \\boldsymbol{z}) \\right)} \\right]}$\nis holomorphic. In fact, $F(\\varrho, \\boldsymbol{z})$ is a real value function, when $\\varrho \\in \\mathbb{R}$. \n\\begin{comment}\nIndeed, let us observe that, for $i,j\\in {\\{ 1,\\ldots,h\\}}$ \n$$\n\\begin{aligned}\n\tf_{ij}(\\varrho,\\boldsymbol{z})= & \\sum_{l=0}^\\infty \\left\\langle {\\left[ R(\\varrho - B(\\boldsymbol{z})) \\right]}^l \\psi_i, (\\overline{\\varrho} - B(\\boldsymbol{z})) \\psi_j \\right\\rangle \n\t\\\\\n\t= & \\sum_{l=0}^\\infty \\left\\langle \\psi_i, {\\left[ (\\overline{\\varrho} - B(\\boldsymbol{z}))R \\right]}^l (\\overline{\\varrho} - B(\\boldsymbol{z})) \\psi_j \\right\\rangle \n\t\\\\\n\t= & \\sum_{l=0}^\\infty \\left\\langle \\psi_i, {\\left[ (\\overline{\\varrho} - B(\\boldsymbol{z}))R \\right]}^{l-1} {\\left[ (\\overline{\\varrho} \n\t- B(\\boldsymbol{z}))R \\right]} (\\overline{\\varrho} - B(\\boldsymbol{z})) \\psi_j \\right\\rangle \n\t\\\\\n\t= & \\sum_{l=0}^\\infty \\left\\langle \\psi_i, (\\overline{\\varrho} - B(\\boldsymbol{z})) {\\left[ R(\\overline{\\varrho} - B(\\boldsymbol{z})) \\right]}^l \\psi_j \\right\\rangle\n\t \\\\\n\t= & \\Big\\langle \\psi_i, \\sum_{l=0}^\\infty (\\overline{\\varrho} - B(\\boldsymbol{z})) {\\left[ R(\\overline{\\varrho} - 
B(\\boldsymbol{z})) \\right]}^l \\psi_j \\Big\\rangle \n\t= \\overline{f_{ji}(\\overline{\\varrho}, \\boldsymbol{z})}.\n\\end{aligned}\n$$\n\\end{comment}\nMoreover, $F(\\varrho,\\boldsymbol{0}) = {\\rm det} {\\left[ \\varrho \\, (\\delta_{ij}) \\right]} = \\varrho^h$\nfor each $\\varrho\\in B (0,r)$, where $\\delta_{ij}$ is the Kronecker delta. Indeed, we have \n\\begin{eqnarray*}\n\t\t\tf_{ij}(\\varrho,\\boldsymbol{0}) & = & \\Big\\langle \\sum_{l=0}^\\infty (\\varrho - B(\\boldsymbol{0})) {\\left[ R(\\varrho - B(\\boldsymbol{0})) \\right]}^l \\psi_i, \\psi_j \\Big\\rangle \n\t\t\t\\\\\n\t\t\t& = & \\left\\langle \\sum_{l=0}^\\infty \\varrho^{l+1} R^l \\psi_i, \\psi_j \\right\\rangle \\\\\n\t\t\t& = & \\left\\langle \\varrho \\, \\psi_i, \\psi_j \\right\\rangle + \\sum_{l=1}^\\infty \\left\\langle \\varrho^{l+1} R^l \\psi_i, \\psi_j \\right\\rangle\n\t\t\t= \\varrho \\, \\delta_{ij},\n\\end{eqnarray*}\t\nwhere we have used that $R \\psi_i= 0$ (see \\eqref{DEFR}), and the result follows.\n\t\t\n\\medskip\n3. At this point, we show that there exist $h$ holomorphic functions \n$\\varrho_k(\\boldsymbol{z})$, $(k=1,\\ldots,h)$,\ndefined in a neighborhood of $\\boldsymbol{z}= \\boldsymbol{0}$, such that\n\\begin{equation*}\n\t\\lim_{\\boldsymbol{z} \\to \\boldsymbol{0}} \\varrho_k(\\boldsymbol{z}) = 0,\n\t\\quad \\text{for $k \\in \\{ 1,\\ldots, h \\}$}.\n\\end{equation*}\nIndeed, applying Theorem \\ref{8747285tdg4f} (Weierstrass Preparation Theorem) \nthere exists a Weierstrass polynomial of degree $h$\n$$\n W(\\varrho,\\boldsymbol{z}) \n = \\varrho^h + a_1(\\boldsymbol{z}) \\varrho^{h-1} + \\ldots + a_{h-1}(\\boldsymbol{z}) \\varrho + a_h(\\boldsymbol{z}),\n$$\nand also a holomorphic function \n$E(\\varrho,\\boldsymbol{z})$, which does not vanish in a neighborhood \n$U \\times V$ of $(0,\\boldsymbol{0})$, \nwith $U \\subset B(0,r) \\subset \\mathbb{C}$ and $V\\subset B(\\boldsymbol{0},r) \\subset \\mathbb{C}^n$, such that \nfor each $(\\varrho, \\boldsymbol{z}) \\in U \\! \\times \\! 
V$\n$$\n\tF(\\varrho, \\boldsymbol{z})= E(\\varrho, \\boldsymbol{z}) {\\left( \\varrho^h + a_1(\\boldsymbol{z}) \\varrho^{h-1} \n\t+ \\ldots + a_{h-1}(\\boldsymbol{z}) \\varrho + a_h(\\boldsymbol{z}) \\right)}.\n$$\nSince the coefficients of the Weierstrass polynomial are holomorphic functions which vanish at\n$\\boldsymbol{z} = \\boldsymbol{0}$, there exist holomorphic functions $\\varrho_k(\\boldsymbol{z})$, \nsuch that \n\\begin{equation}\n\\label{76847678644}\n\\begin{aligned}\n F(\\varrho, \\boldsymbol{z})&= E(\\varrho, \\boldsymbol{z}) \\, \\prod_{k= 1}^h {\\left( \\varrho - \\varrho_k(\\boldsymbol{z}) \\right)},\n\\\\[5pt]\n\t\\lim_{ \\boldsymbol{z} \\to \\boldsymbol{0}} \\varrho_k(\\boldsymbol{z})&= 0, \\quad (k=1,\\ldots,h).\n\\end{aligned}\n\\end{equation}\n\n\\bigskip\n4. At this point, let us show that, for $k \\in \\{1, \\ldots, h\\}$ there exists a map\n$\\psi_k(\\boldsymbol{z}) \\in H\\setminus\\{0\\}$ such that\n$A(\\boldsymbol{z})\\psi_k(\\boldsymbol{z}) = (\\lambda + \\varrho_k(\\boldsymbol{z}))\\psi_k(\\boldsymbol{z})$, \nfor each $\\boldsymbol{z}$ in a neighborhood of $\\boldsymbol{0}$. Indeed, let \n$k \\in \\{1, \\ldots,h\\}$ be fixed. From item 3, there exists a set $V \\subset \\mathbb{C}^n$, \nwhich is a neighborhood of $\\boldsymbol{z}= \\boldsymbol{0}$, such that\n$$\n {\\rm det} {\\left[ {\\left( f_{ij}(\\varrho_k(\\boldsymbol{z}), \\boldsymbol{z}) \\right)} \\right]} = 0\n$$\nfor each $\\boldsymbol{z}\\in V$. Therefore, for each $\\boldsymbol{z} \\in V$\nthe linear system \n$$\n \\left( f_{ji}(\\varrho_k(\\boldsymbol{z}), \\boldsymbol{z}) \\right) (c_1,\\ldots,c_h)^T = 0\n$$\nhas a non-trivial solution. 
Consequently, there exist $h$ holomorphic functions \n$\\boldsymbol{z}\\in V \\mapsto c_1^k(\\boldsymbol{z}), \\ldots, c_h^k(\\boldsymbol{z})$, such that for all $j=1,\\ldots,h$\n$$\n \\sum_{i=1}^h f_{ij}(\\varrho_k(\\boldsymbol{z}), \\boldsymbol{z}) \\, c_i^k(\\boldsymbol{z})= 0,\n$$\nand without loss of generality we may assume \n\\begin{equation}\n\\label{4234343}\n \\sum_{i=1}^h {\\vert c_i^k(\\boldsymbol{z}) \\vert}^2 =1.\n\\end{equation}\nFrom equation \\eqref{76847678644} it is possible to find a neighborhood \n$\\tilde{V}$ of $\\boldsymbol{0}$, which is compactly embedded in $V$, such that\n$$\n \\underset{\\boldsymbol{z} \\in \\tilde{V}}{\\rm sup} {\\vert \\varrho_k(\\boldsymbol{z}) \\vert} < \\frac{1}{8 {\\Vert R \\Vert} {\\rm max}(1, c^2\\tilde{c})},\n$$\nfor all $k \\in \\{1,\\ldots,h\\}$. Hence we obtain for each $\\boldsymbol{z}\\in \\tilde{V}$\n\\begin{equation}\n\\label{6654123123654}\n {\\Vert R(\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\Vert} \n \\leq {\\Vert R \\Vert} {\\rm max}(1, c^2\\tilde{c}) {\\left( {\\vert \\varrho_k(\\boldsymbol{z}) \\vert} + {\\vert \\boldsymbol{z} \\vert} \\right)} \n \\leq \\frac{1}{4},\n\\end{equation}\t\t\nand then\n\\begin{equation}\n\\label{6486746874}\n \\sum_{l=0}^\\infty {\\Vert R(\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\Vert}^l \\leqslant \\frac{4}{3}.\n\\end{equation}\nNow, we define for any $\\boldsymbol{z} \\in \\tilde{V}$\n$$\n \\phi_k(\\boldsymbol{z}):= \\sum_{i=1}^h c_i^k(\\boldsymbol{z}) \\psi_i, \n \\quad \\text{and} \\quad \n \\psi_k(\\boldsymbol{z}):= \\sum_{l=0}^\\infty {\\left[ R(\\varrho_k (\\boldsymbol{z}) - B(\\boldsymbol{z})) \\right]}^l \\phi_k(\\boldsymbol{z}).\n$$\nTherefore, we have \n\t\t\\begin{eqnarray*}\n\t\t\t\\psi_k(\\boldsymbol{z}) & = & \\phi_k(\\boldsymbol{z}) + \\sum_{l=1}^\\infty {\\left[ R(\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\right]}^l \\phi_k(\\boldsymbol{z}) \n\t\t\t\\\\\n\t\t\t& = & 
\\phi_k(\\boldsymbol{z}) + {\\left[ R(\\varrho_k(\\boldsymbol{z}) \n\t\t\t- B(\\boldsymbol{z})) \\right]} \\sum_{l=1}^\\infty {\\left[ R(\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\right]}^{l-1} \\phi_k(\\boldsymbol{z}) \n\t\t\t\\\\\n\t\t\t& = & \\phi_k(\\boldsymbol{z}) + {\\left[ R(\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\right]} \\psi_k(\\boldsymbol{z}),\n\t\t\\end{eqnarray*}\nand it follows that\n\\begin{equation}\n\\label{AAA}\n\\begin{aligned}\n\t\t\t(A_0-\\lambda)\\psi_k(\\boldsymbol{z}) & = (A_0-\\lambda)\\phi_k(\\boldsymbol{z}) + (A_0-\\lambda) {\\left[ R(\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\right]} \\psi_k(\\boldsymbol{z}) \n\t\t\t\\\\\n\t\t\t& = \\sum_{i=1}^h c_i^k(\\boldsymbol{z})(A_0-\\lambda) \\psi_i + (A_0-\\lambda) R {\\left[ (\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\psi_k(\\boldsymbol{z}) \\right]}\n\t\t\t\\\\\n\t\t\t& = R (A_0-\\lambda) {\\left[ (\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\psi_k(\\boldsymbol{z}) \\right]} \n\t\t\t\\\\\n\t\t\t& = (\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\psi_k(\\boldsymbol{z}) - \\sum_{j=1}^h \\left\\langle (\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\psi_k(\\boldsymbol{z}), \\psi_j \\right\\rangle \\psi_j\n \\\\\n\t\t\t& = (\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\psi_k(\\boldsymbol{z}) \n\\end{aligned}\n\\end{equation}\nsince \n\t\t\\begin{align*}\n\t\t\t\\left\\langle (\\varrho_k(\\boldsymbol{z}) \\right. & - \\left. 
B(\\boldsymbol{z})) \\psi_k(\\boldsymbol{z}), \\psi_j \\right\\rangle \\\\ \n\t\t\t& = \\sum_{i=1}^h \\Big\\langle \\sum_{l=0}^\\infty (\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) {\\left[ R(\\varrho_k (\\boldsymbol{z}) - B(\\boldsymbol{z})) \\right]}^l \\psi_i, \\psi_j \\Big\\rangle c_i^k(\\boldsymbol{z}) \n\t\t\t\\\\\n\t\t\t& = \\sum_{i=1}^h f_{ij}(\\varrho_k(\\boldsymbol{z}), \\boldsymbol{z}) \\, c_i^k(\\boldsymbol{z}) = 0.\n\t\t\\end{align*}\nThus, for each $\\boldsymbol{z}\\in \\tilde{V}$, $A(\\boldsymbol{z}) \\psi_k(\\boldsymbol{z}) = (\\lambda + \\varrho_k(\\boldsymbol{z}))\\psi_k(\\boldsymbol{z})$.\nOn the other hand, \n\t\t\\begin{equation*}\n \t\t\t\\psi_k(\\boldsymbol{z}) = \\phi_k(\\boldsymbol{z}) + {\\left[ R(\\varrho_k(\\boldsymbol{z}) \n\t\t\t- B(\\boldsymbol{z})) \\right]} \\sum_{l=1}^\\infty {\\left[ R(\\varrho_k(\\boldsymbol{z}) \n\t\t\t- B(\\boldsymbol{z})) \\right]}^{l-1} \\phi_k(\\boldsymbol{z}),\n\t\t\\end{equation*}\nhence from \\eqref{4234343}, \\eqref{6654123123654}, and \\eqref{6486746874}, we have for each $\\boldsymbol{z}\\in \\tilde{V}$\n\t\t\\begin{eqnarray*}\n\t\t\t{\\Vert \\psi_k(\\boldsymbol{z}) - \\phi_k(\\boldsymbol{z}) \\Vert} \n\t\t\t& \\leq & {\\Vert R(\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\Vert} {\\Vert \\sum_{l=0}^\\infty {\\left[ R(\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\right]}^{l} \\Vert} {\\Vert \\phi_k(\\boldsymbol{z}) \\Vert} \n\t\t\t\\\\\n\t\t\t& \\leq & \\frac{4}{3} {\\Vert R(\\varrho_k(\\boldsymbol{z}) - B(\\boldsymbol{z})) \\Vert} \\leq \\frac{1}{3}.\n\t\t\\end{eqnarray*}\nConsequently, for each $\\boldsymbol{z}\\in \\tilde{V}$ we have $\\psi_k(\\boldsymbol{z})\\not= 0$, since \n\t\t\\begin{equation*}\n\t\t\t1={\\Vert \\phi_k(\\boldsymbol{z}) \\Vert} \\leq {\\Vert \\phi_k(\\boldsymbol{z}) - \\psi_k(\\boldsymbol{z}) \\Vert} \n\t\t\t+ {\\Vert \\psi_k(\\boldsymbol{z}) \\Vert} \\leq \\frac{1}{3} + {\\Vert \\psi_k(\\boldsymbol{z}) \\Vert}.\n\t\t\\end{equation*}\n\n\\bigskip\n5. 
Now, let us show item $(iii)$ of the statement, that is, \n$${ {\\rm dim} {\\{w \\in H \\; ; \\; A(\\boldsymbol{z}) w = \\lambda_i(\\boldsymbol{z}) w \\}} \\leqslant h }.$$\nFrom the previous item, for each \n$k\\in\\{1,\\ldots,h\\}$, \n$\\lambda_k(\\boldsymbol{z})= \\lambda + \\varrho_k(\\boldsymbol{z})$ \nis an eigenvalue of the operator $A(\\boldsymbol{z})$, \nfor $\\boldsymbol{z}$ in a neighborhood of $\\boldsymbol{z}= \\boldsymbol{0}$. \nWe set $\\lambda(\\boldsymbol{z})= \\lambda_k(\\boldsymbol{z})$, for any \n$k \\in \\{1,\\ldots,h\\}$ fixed, and let $\\psi(\\boldsymbol{z})$ be any nonzero function satisfying \n$$\n A(\\boldsymbol{z}) \\psi(\\boldsymbol{z})= \\lambda(\\boldsymbol{z}) \\psi(\\boldsymbol{z}),\n$$\nwhich is not necessarily the eigenfunction $\\psi_k(\\boldsymbol{z})$. \nThen, we are going to show that there exist a neighborhood of $\\boldsymbol{z}= \\boldsymbol{0}$\nand, for each \n$\\boldsymbol{z}$ in this neighborhood, an invertible holomorphic operator $S(\\boldsymbol{z}) \\in \\mathcal{B}(H)$, such that \n\\begin{equation}\n\\label{SPAM}\n\t\t\t\\psi(\\boldsymbol{z}) \\in {\\rm span} {\\big\\{ S(\\boldsymbol{z})\\psi_1, S(\\boldsymbol{z}) \\psi_2, \\ldots, S(\\boldsymbol{z}) \\psi_h \\big\\}}.\n\\end{equation}\nIndeed, to show \\eqref{SPAM} let us define \n$\\varrho(\\boldsymbol{z}):= \\lambda(\\boldsymbol{z})-\\lambda$, \nthen we have\n$$\n {\\big( \\varrho(\\boldsymbol{z})I - B(\\boldsymbol{z}) \\big)} \\psi(\\boldsymbol{z}) = {\\big( A_0-\\lambda \\big)} \\psi(\\boldsymbol{z}).\n$$\nHence from the first equation in \\eqref{DEFR}, it follows that \n$$\n R {\\big( \\varrho(\\boldsymbol{z})I - B(\\boldsymbol{z}) \\big)} \\psi(\\boldsymbol{z}) = \\psi(\\boldsymbol{z}) - \\sum_{i=1}^h {\\langle \\psi(\\boldsymbol{z}), \\psi_i \\rangle} \\psi_i,\n$$\nor equivalently \n\t\t\\begin{equation*}\n\t\t\t{\\big[ I - R {\\big( \\varrho(\\boldsymbol{z})I - B(\\boldsymbol{z}) \\big)} \\big]} \\psi(\\boldsymbol{z}) = \\sum_{i=1}^h {\\langle \\psi(\\boldsymbol{z}), 
\\psi_i \\rangle} \\psi_i.\n\t\t\\end{equation*}\nOn the other hand, from \\eqref{768746876784} it is possible to find a neighborhood $V$ of \n$\\boldsymbol{z}=\\boldsymbol{0}$, such that \n$$\n \\|R \\big( \\varrho(\\boldsymbol{z})I - B(\\boldsymbol{z}) \\big) \\| < 1.\n$$ \nTherefore, there exists an invertible operator \n$$\n S(\\boldsymbol{z})= {\\big[ I - R {\\big( \\varrho(\\boldsymbol{z})I - B(\\boldsymbol{z}) \\big)} \\big]}^{-1} = \\sum_{\\nu=0}^\\infty {\\big[ R {\\big( \\varrho(\\boldsymbol{z})I - B(\\boldsymbol{z}) \\big)} \\big]}^\\nu,\n$$\nand hence \n$$\n \\psi(\\boldsymbol{z})= S(\\boldsymbol{z}) {\\left( \\sum_{i=1}^h {\\langle \\psi(\\boldsymbol{z}), \\psi_i \\rangle} \\psi_i \\right)} \n = \\sum_{i=1}^h {\\langle \\psi(\\boldsymbol{z}), \\psi_i \\rangle} {\\big[ S(\\boldsymbol{z}) \\psi_i \\big]}.\n$$\n\n\\bigskip\n6. Finally, we show that the perturbed eigenvalues are isolated. \nTo this end, we consider \n\t\t\\begin{equation*}\n\t\t\tN(\\boldsymbol{z}) := {\\rm span} {\\left\\{ \\psi_1(\\boldsymbol{z}), \\psi_2(\\boldsymbol{z}), \\ldots, \\psi_h(\\boldsymbol{z}) \\right\\}},\n\t\t\\end{equation*}\nthe operator $P(\\boldsymbol{z}): H \\to H$, which is a projection on $N(\\boldsymbol{z})$, given by\n\\begin{equation*}\n P(\\boldsymbol{z}) u = \\sum_{i=1}^h \\left\\langle u, \\psi_i(\\boldsymbol{z}) \\right\\rangle \\psi_i(\\boldsymbol{z}),\n\\end{equation*}\nand for $d> 0$ the operator $D(\\boldsymbol{z}): H \\to H$, defined by \n\\begin{equation*}\n D(\\boldsymbol{z}):= A(\\boldsymbol{z}) - 2 d P(\\boldsymbol{z}).\n\\end{equation*}\nOne observes that \t\t\n\\begin{equation}\n\\label{87678687678}\n D(\\boldsymbol{z})u = \\sum_{i=1}^h (\\lambda_i(\\boldsymbol{z}) - 2d) \\left\\langle u_1, \\psi_i(\\boldsymbol{z}) \\right\\rangle \\psi_i(\\boldsymbol{z}) + A(\\boldsymbol{z})u_2,\n\\end{equation}\nwhere we have used the direct sum $u= u_1 + u_2$, $u_1\\in N(\\boldsymbol{z})$ and $u_2\\in 
N(\\boldsymbol{z})^\\perp$.\n\n\\medskip\t\t\t\n\\underline{{Claim 1}}. \n\\begin{itemize}\n\\item[a)] For $\\xi \\in \\mathbb{R} \\setminus {\\left\\{ \\lambda_1(\\boldsymbol{z}), \\lambda_2(\\boldsymbol{z}), \\ldots, \\lambda_h(\\boldsymbol{z}) \\right\\}}$,\n\\begin{equation*}\n\tD(\\boldsymbol{z}) - \\xi \\;\\, \\text{is bijective} \\; \\Rightarrow \\; A(\\boldsymbol{z}) - \\xi \\;\\, \\text{is bijective}.\n\\end{equation*}\n\t\\item[b)] For $\\xi \\in \\mathbb{R} \\setminus {\\left\\{ \\lambda_1(\\boldsymbol{z})-2d, \\lambda_2(\\boldsymbol{z})-2d, \\ldots, \\lambda_h(\\boldsymbol{z})-2d\\right\\}}$, \n\\begin{equation*}\n\tA(\\boldsymbol{z}) - \\xi \\;\\, \\text{is bijective} \\; \\Rightarrow \\; D(\\boldsymbol{z}) - \\xi \\;\\, \\text{is bijective}.\n\t\\end{equation*}\n\\end{itemize}\n\nProof of Claim 1.\nFirst, we show item (a). Let $\\xi \\in \\mathbb{R} \\setminus {\\left\\{ \\lambda_1(\\boldsymbol{z}), \\ldots, \\lambda_h(\\boldsymbol{z}) \\right\\}}$ be such that\n$D(\\boldsymbol{z}) - \\xi$ is bijective. Then, we must show that $A(\\boldsymbol{z}) - \\xi$ is injective and surjective:\n\n\\underline{Injective}. Let $u\\in {\\rm Ker}(A(\\boldsymbol{z})-\\xi)$. Since $\\xi \\not= \\lambda_i(\\boldsymbol{z})$, for $i \\in \\{1,\\ldots,h\\}$, we have \n$\\left\\langle u,\\psi_i(\\boldsymbol{z}) \\right\\rangle= 0$ for all $i \\in \\{1,\\ldots,h\\}$. Therefore, $u \\in N(\\boldsymbol{z})^\\perp$ and from \\eqref{87678687678}, \n\\begin{equation*}\n\t(D(\\boldsymbol{z}) - \\xi)u = (A(\\boldsymbol{z}) - \\xi)u = 0.\n\\end{equation*}\nConsequently, we obtain $u=0$.\n\n\\medskip\n\\underline{Surjective}. 
By the surjectivity of $D(\\boldsymbol{z})-\\xi$, \nfor each $v \\in H$ there exists $u \\in H$, such that \n\\begin{equation}\\label{976876876874}\n\t(D(\\boldsymbol{z})-\\xi)u = v.\n\\end{equation}\nOn the other hand, we write $u= u_1 + u_2$, with $u_1\\in N(\\boldsymbol{z})$ and\n $u_2 \\in N(\\boldsymbol{z})^\\perp$, hence from equations \\eqref{87678687678} and \\eqref{976876876874}, we obtain\n\\begin{equation}\n\\label{6687687644423}\n\tv = \\sum_{i=1}^h (\\lambda_i(\\boldsymbol{z}) - 2d -\\xi) \\left\\langle u_1, \\psi_i(\\boldsymbol{z}) \\right\\rangle \\psi_i(\\boldsymbol{z}) + (A(\\boldsymbol{z}) - \\xi)u_2.\n\\end{equation}\nMoreover, since $\\xi \\not= \\lambda_i(\\boldsymbol{z})$, for $i \\in \\{1,\\ldots,h\\}$, it follows that \n\\begin{equation*}\n\t(A(\\boldsymbol{z}) - \\xi) {\\left[ \\frac{ \\psi_i(\\boldsymbol{z}) }{ \\lambda_i(\\boldsymbol{z}) - \\xi} \\right]} = \\psi_i(\\boldsymbol{z}),\n\\end{equation*}\nand hence applying it in \\eqref{6687687644423}, we have\n\\begin{eqnarray*}\n\tv & = & \\sum_{i=1}^h (A(\\boldsymbol{z}) - \\xi) {\\left[ {\\left( \\frac{\\lambda_i(\\boldsymbol{z}) - 2d \n\t- \\xi}{\\lambda_i(\\boldsymbol{z}) - \\xi} \\right)} \\left\\langle u_1, \\psi_i(\\boldsymbol{z}) \\right\\rangle \\psi_i(\\boldsymbol{z}) \\right]} \n\t+ (A(\\boldsymbol{z}) - \\xi)u_2 \n\t\\\\\n\t& = & (A(\\boldsymbol{z}) - \\xi) {\\left[ \\sum_{i=1}^h {\\left( \\frac{\\lambda_i(\\boldsymbol{z}) - 2d \n\t- \\xi}{\\lambda_i(\\boldsymbol{z}) - \\xi} \\right)} \\left\\langle u_1, \\psi_i(\\boldsymbol{z}) \\right\\rangle \\psi_i(\\boldsymbol{z}) + u_2 \\right]}.\n\\end{eqnarray*}\nThus, the operator $A(\\boldsymbol{z}) - \\xi$ is surjective.\n\n\\medskip\nNow, let us show item (b). Let $\\xi \\in \\mathbb{R} \\setminus {\\left\\{ \\lambda_1(\\boldsymbol{z})-2d, \\ldots, \\lambda_h(\\boldsymbol{z})-2d\\right\\}}$\nbe such that, $A(\\boldsymbol{z}) - \\xi$ is bijective. 
Similarly, we must show that $D(\\boldsymbol{z}) - \\xi$ is injective and surjective: \n\n\\underline{Injective}. Let $u \\in H$ be such that $(D(\\boldsymbol{z})-\\xi)u= 0$. \nThen writing $u=u_1 + u_2$, with $u_1 \\in N(\\boldsymbol{z})$ and $u_2\\in N(\\boldsymbol{z})^\\perp$, \nit follows from equation \\eqref{87678687678} that \n\\begin{equation}\n\\label{7978776476444}\n\t0 = \\sum_{i=1}^h (\\lambda_i(\\boldsymbol{z}) - 2d - \\xi) \\left\\langle u_1, \\psi_i(\\boldsymbol{z}) \\right\\rangle \\psi_i(\\boldsymbol{z}) + (A(\\boldsymbol{z}) - \\xi)u_2,\n\\end{equation}\nthus $(A(\\boldsymbol{z})-\\xi)u_2 \\in N(\\boldsymbol{z})$. Consequently, we have \n\\begin{eqnarray*}\n\t(A(\\boldsymbol{z}) - \\xi)u_2 & = & P(\\boldsymbol{z}) {\\left[ (A(\\boldsymbol{z}) - \\xi)u_2 \\right]} \n\t= P(\\boldsymbol{z}) A(\\boldsymbol{z})u_2 - \\xi P(\\boldsymbol{z})u_2 \n\t\\\\\n\t& = & \\sum_{i=1}^h \\left\\langle A(\\boldsymbol{z})u_2, \\psi_i(\\boldsymbol{z}) \\right\\rangle \\psi_i(\\boldsymbol{z}) - \\xi P(\\boldsymbol{z}) u_2\n\t\\\\\n\t& = & \\sum_{i=1}^h \\left\\langle u_2, A(\\boldsymbol{z})\\psi_i(\\boldsymbol{z}) \\right\\rangle \\psi_i(\\boldsymbol{z}) - \\xi P(\\boldsymbol{z})u_2\n\t \\\\\n\t& = & \\sum_{i=1}^h \\lambda_i(\\boldsymbol{z}) \\left\\langle u_2, \\psi_i(\\boldsymbol{z}) \\right\\rangle \\psi_i(\\boldsymbol{z}) - \\xi P(\\boldsymbol{z})u_2= 0 \n\\end{eqnarray*}\nsince $u_2 \\in N(\\boldsymbol{z})^\\perp$. By hypothesis $A(\\boldsymbol{z}) - \\xi$ is injective, thus $u_2= 0$. 
\nThen, from equation \\eqref{7978776476444} we obtain \n\\begin{equation*}\n\t\\sum_{i=1}^h (\\lambda_i(\\boldsymbol{z}) - 2d - \\xi) \\left\\langle u_1, \\psi_i(\\boldsymbol{z}) \\right\\rangle \\psi_i(\\boldsymbol{z}) = 0,\n\\end{equation*}\nand since $\\{\\psi_i(\\boldsymbol{z})\\}_{i=1}^h$ is a linearly independent set of vectors and, by hypothesis, \n$\\lambda_i(\\boldsymbol{z}) - 2d - \\xi \\not= 0$ for all $i \\in \\{1,\\ldots,h\\}$, we have \n$\\left\\langle u_1, \\psi_i(\\boldsymbol{z}) \\right\\rangle= 0$ for each $i \\in \\{1, \\ldots,h\\}$. Therefore, we obtain $u_1=0$.\n\t\t\n\\medskip\n\\underline{Surjective}. Again, by the surjectivity of $A(\\boldsymbol{z}) - \\xi$, \nfor each $v \\in H$ there exists $u \\in H$, such that \n\\begin{equation}\\label{9878977987r34}\n\t(A(\\boldsymbol{z}) - \\xi)u = v.\n\\end{equation}\nThen, writing $u= u_1+u_2$, with $u_1 \\in N(\\boldsymbol{z})$ and \n$u_2 \\in N(\\boldsymbol{z})^\\perp$, from equations \n\\eqref{87678687678} and \\eqref{9878977987r34}, we have\n\\begin{equation}\n\\label{6876876780987}\n v= \\sum_{i=1}^h (\\lambda_i(\\boldsymbol{z}) - \\xi) { \\left\\langle u_1, \\psi_i(\\boldsymbol{z}) \\right\\rangle } \\psi_i(\\boldsymbol{z}) + (D(\\boldsymbol{z}) - \\xi)u_2.\n\\end{equation}\nMoreover, since $\\xi \\not= \\lambda_i(\\boldsymbol{z})-2d$, for $i \\in \\{1,\\ldots,h\\}$, \n\\begin{equation*}\n (D(\\boldsymbol{z}) - \\xi) {\\left[ \\frac{ \\psi_i(\\boldsymbol{z}) }{ \\lambda_i(\\boldsymbol{z}) - 2d - \\xi} \\right]} = \\psi_i(\\boldsymbol{z})\n\\end{equation*}\nand then from \\eqref{6876876780987}, it follows that\n\\begin{eqnarray*}\n v & = & \\sum_{i=1}^h (D(\\boldsymbol{z}) - \\xi) {\\left[ {\\left( \\frac{\\lambda_i(\\boldsymbol{z}) - \\xi}{\\lambda_i(\\boldsymbol{z}) -2d - \\xi} \\right)} \n { \\left\\langle u_1, \\psi_i(\\boldsymbol{z}) \\right\\rangle } \\psi_i(\\boldsymbol{z}) \\right]} + (D(\\boldsymbol{z}) 
- \\xi)u_2 \n \\\\\n & = & (D(\\boldsymbol{z}) - \\xi) {\\left[ \\sum_{i=1}^h {\\left( \\frac{\\lambda_i(\\boldsymbol{z}) - \\xi}{\\lambda_i(\\boldsymbol{z}) -2d - \\xi} \\right)} \n { \\left\\langle u_1, \\psi_i(\\boldsymbol{z}) \\right\\rangle } \\psi_i(\\boldsymbol{z}) + u_2 \\right]}.\n\t\t\\end{eqnarray*}\nTherefore, the operator $D(\\boldsymbol{z})-\\xi$ is surjective.\t\n\t\t\n\\bigskip\n\\underline{{Claim 2}}. The spectrum of the operator $D(\\boldsymbol{0})$ does not contain\nelements of the interval $(\\lambda - d, \\lambda + d)$, i.e.\n\t\t\t\\begin{equation*}\n\t\t\t\t\\sigma(D(\\boldsymbol{0})) \\cap (\\lambda - d, \\lambda + d) = \\emptyset.\n\t\t\t\\end{equation*}\n\t\t\t\nProof of Claim 2. From item (b) of Claim 1, we have $\\sigma(D(\\boldsymbol{0})) \\subset \\sigma(A_0) \\cup \\{\\lambda - 2d\\}$, and since \n$\\lambda - 2d \\notin (\\lambda - d, \\lambda + d)$, \t\t\t\n$$\n \\sigma(D(\\boldsymbol{0})) \\cap (\\lambda - d, \\lambda + d) \\subset \\sigma(A_0) \\cap (\\lambda - d, \\lambda + d) = \\{ \\lambda \\}.\n$$\nSuppose that $\\lambda \\in \\sigma(D(\\boldsymbol{0})) \\cap (\\lambda - d, \\lambda + d)$; then $\\lambda$ is an isolated element of the \nspectrum of $D(\\boldsymbol{0})$. Therefore, $\\lambda$ is an eigenvalue of $D(\\boldsymbol{0})$, \nbut this is not possible since $D(\\boldsymbol{0}) - \\lambda$ is an injective operator (see the proof of item (b) of Claim 1). \nConsequently, we have \t\t\t\n$$\n \\sigma(D(\\boldsymbol{0})) \\cap (\\lambda - d, \\lambda + d) = \\emptyset.\n$$\n\t\t\n\\medskip\nIt remains to show \\eqref{FINALPERT}. First, by definition $P(\\boldsymbol{z}) u$ \nis holomorphic for each $\\boldsymbol{z}$ in a neighborhood of $\\boldsymbol{0}$. Therefore, \nthe mapping $\\boldsymbol{z} \\mapsto P(\\boldsymbol{z})$ is holomorphic in this neighborhood.\nThen, the mapping $\\boldsymbol{z} \\ \\mapsto D(\\boldsymbol{z}) \\in \\mathcal{B}(H)$ \nis continuous. 
Moreover, since the subset of invertible operators in \n$\\mathcal{B}(H)$ is an open set, there exists \na (small) neighborhood of $\\boldsymbol{0}$, such that the function \n$\\boldsymbol{z} \\mapsto (D(\\boldsymbol{z}) - \\lambda)^{-1} \\in \\mathcal{B}(H)$ is well defined and continuous. \n\nOn the other hand, for any given $d^\\prime \\in (0,d)$, we have \n$$\n {\\left\\Vert (D(\\boldsymbol{0}) - \\lambda)^{-1} \\right\\Vert} \\leqslant \\frac{1}{ {\\rm dist}(\\lambda, \\sigma(D(\\boldsymbol{0}))) } \\leqslant \\frac{1}{d} < \\frac{1}{d^\\prime},\n$$\t\nsee Reed, Simon \\cite[Chapter VIII]{ReedSimon}. Therefore, by the continuity of the map\n$\\boldsymbol{z} \\mapsto (D(\\boldsymbol{z}) - \\lambda)^{-1} \\in \\mathcal{B}(H)$,\nthere exists a neighborhood of $\\boldsymbol{0}$, namely $W$, such that for all $\\boldsymbol{z} \\in W$\n\\begin{equation*}\n {\\left\\Vert (D(\\boldsymbol{z}) - \\lambda)^{-1} \\right\\Vert} < \\frac{1}{d^\\prime}.\n\\end{equation*}\nThus for any $u \\in H$ and $\\boldsymbol{z} \\in W$, it follows that \n$$\n\\begin{aligned}\n {\\Vert u \\Vert}&= {\\left\\Vert (D(\\boldsymbol{z}) - \\lambda)^{-1} {\\left[ {\\left( D(\\boldsymbol{z}) - \\lambda \\right)}u \\right]} \\right\\Vert} \n \\\\\n &\\leq {\\left\\Vert (D(\\boldsymbol{z}) - \\lambda)^{-1} \\right\\Vert} {\\left\\Vert (D(\\boldsymbol{z}) - \\lambda)u \\right\\Vert} \n < \\frac{1}{d^\\prime} \\, {\\left\\Vert (D(\\boldsymbol{z}) - \\lambda)u \\right\\Vert}.\n\\end{aligned}\n$$\t\t\t\nHence for $d'' \\in (0,d')$ and $\\xi \\in (\\lambda - d'',\\lambda + d'')$, we have \n\\begin{eqnarray*}\n {\\left\\Vert (D(\\boldsymbol{z}) - \\xi)u \\right\\Vert} & \\geq & {\\left\\Vert (D(\\boldsymbol{z}) - \\lambda)u \\right\\Vert} - {\\left\\vert \\lambda - \\xi \\right\\vert} {\\lVert u \\rVert} \\\\\n & \\geq & (d^\\prime - d'') {\\Vert u \\Vert}.\n\\end{eqnarray*}\nConsequently, for all \n$\\xi \\in 
(\\lambda - d'', \\lambda + d'')$, \n$\\xi$ is an element of the resolvent set of $D(\\boldsymbol{z})$, that is, \n$\\xi \\in \\rho(D(\\boldsymbol{z}))$. Since $d'' \\in (0,d')$ is arbitrary, for each $\\boldsymbol{z} \\in W$ we have \n$$(\\lambda-d',\\lambda+d') \\subset \\rho(D(\\boldsymbol{z})).$$ Finally, since for each \n$\\boldsymbol{z} \\in W$\n\\begin{equation*}\n\t\\sigma(A(\\boldsymbol{z})) \\setminus \\{\\lambda_1(\\boldsymbol{z}),\\ldots,\\lambda_h(\\boldsymbol{z})\\} \\subset \\sigma(D(\\boldsymbol{z})) \n\t\\setminus \\{\\lambda_1(\\boldsymbol{z}),\\ldots,\\lambda_h(\\boldsymbol{z})\\}, \n\\end{equation*}\nwe obtain from item (a) of Claim 1, that \n\\begin{equation*}\n\t{\\left( \\sigma(A(\\boldsymbol{z})) \\setminus \\{\\lambda_1(\\boldsymbol{z}),\\ldots,\\lambda_h(\\boldsymbol{z})\\} \\right)} \\cap (\\lambda-d^\\prime, \\lambda+d^\\prime)=\\emptyset,\n\\end{equation*}\nwhich finishes the proof. \n\\end{proof}\n\n\\section{Bloch Waves Analysis}\n\\label{877853467yd56rtfe5rtfgeds76ytged}\n\nBloch wave analysis plays an important role in solid-state physics. \nMore precisely, the displacement of an electron in a crystal \n(periodic setting) is often described by Bloch waves,\nand this description is supported by Bloch's Theorem, which states \nthat the energy eigenstates for an electron in a crystal can be written as \nBloch waves.\n\n\\medskip\nThe aim of this section is to extend the Bloch wave theory, which is known only for periodic functions,\nto the stochastic setting considered here,\nthat is, stationary functions composed with stochastic deformations, which are used to describe non-crystalline matter. \nTherefore, we would like to show that the electron waves in non-crystalline matter can have a basis \nconsisting entirely of Bloch wave energy eigenstates (now solutions to a stochastic Bloch spectral cell equation). \nConsequently, we are extending the concept of electronic band structures to non-crystalline matter. 
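\n\n\\medskip\nFor the reader's convenience, we recall the classical form of a Bloch wave in the periodic setting: for each quasi-momentum $\\theta \\in \\mathbb{R}^n$, the corresponding energy eigenstate can be written as \n$$\n \\psi_\\theta(x)= e^{2\\pi i \\, \\theta \\cdot x} \\, \\phi_\\theta(x), \n$$\nwhere the profile $\\phi_\\theta$ is a periodic function. In the stochastic setting considered here, the periodic profile $\\phi_\\theta$ is replaced by a stationary function composed with a stochastic deformation. 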
\n\n \t\t\n\\subsection{The WKB method}\n\\label{683926ruesszs}\n\nHere we formally obtain the Bloch spectral cell equation\n(see Definition \\ref{92347828454trfhfd4rfghjls}), applying the \nasymptotic Wentzel-Kramers-Brillouin (WKB for short) expansion method, that is, \nwe assume that the solution of equation \\eqref{jhjkhkjhkj765675233} is given by a\nplane wave. More precisely, for each $\\varepsilon> 0$ let us assume that the solution $u_\\varepsilon(t,x,\\omega)$\nof the equation \\eqref{jhjkhkjhkj765675233} has the \nfollowing asymptotic expansion \n\\begin{equation}\n\\label{ansatz}\n u_{\\varepsilon}(t,x,\\omega)= e^{2\\pi i S_{\\varepsilon}(t,x)} \\sum_{k=0}^{\\infty}\\varepsilon^k \n u_k\\Big(t,x,\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\Big),\n\\end{equation}\nwhere the functions $u_k(t,x,y,\\omega)$ are conveniently stationary in $y$, and $S_{\\varepsilon}$ \nis a real-valued function to be established a posteriori\n(not necessarily a polynomial in $\\varepsilon$), which enters the modulated plane wave \\eqref{ansatz} \nthrough the factor $e^{2\\pi i S_{\\varepsilon}(t,x)}$. 
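\n\nBefore proceeding, we record the differentiation rule used repeatedly below: differentiating the identity $\\Phi(\\Phi^{-1}(z,\\omega),\\omega)= z$ with respect to $z$ yields \n$$\n \\nabla (\\Phi^{-1})(z,\\omega)= {\\left[ (\\nabla\\Phi) {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\right]}^{-1}. \n$$\nHence, by the chain rule, each differentiation of the composition $y= \\Phi^{-1}(x\/\\varepsilon,\\omega)$ with respect to $x$ produces the factor \n$\\frac{1}{\\varepsilon} (\\nabla\\Phi)^{-1} {\\left( \\Phi^{-1} {\\left( \\frac{x}{\\varepsilon},\\omega \\right)},\\omega \\right)}$, \nwhich appears in the computations that follow. 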
\n\nThe spatial derivative of the above ansatz \\eqref{ansatz} is \n$$\n\\begin{aligned}\n\\nabla u_\\varepsilon&(t,x,\\omega)= e^{2i\\pi S_\\varepsilon(t,x)} \\big(2i\\pi \\nabla S_\\varepsilon(t,x)\\sum_{k=0}^\\infty \\varepsilon^k\\,\nu_k \\big(t,x,\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\big)\n\\\\[5pt]\n&\\qquad + \\sum_{k=0}^\\infty \\varepsilon^k\\Big\\{\\left(\\nabla_x u_k\\right) \\Big(t,x,\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\Big)\n\\\\[5pt]\n&\\qquad + \\frac{1}{\\varepsilon}(\\nabla\\Phi)^{-1}\\left(\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\right)\n\\left(\\nabla_y u_k\\right)\\Big(t,x,\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\Big)\\Big\\}\\Big)\n\\\\[5pt]\n&=e^{2i\\pi S_\\varepsilon(t,x)} \\Big(\\sum_{k=0}^\\infty \\varepsilon^k\\left(\\frac{{\\nabla}_z}{\\varepsilon} + 2i\\pi \\nabla S_\\varepsilon(t,x)\\right)\nu_k\\Big(t,x,\\Phi^{-1}(\\frac{x}{\\varepsilon},\\omega),\\omega\\Big)\n\\\\[5pt]\n&\\qquad +\\sum_{k=0}^\\infty \\varepsilon^k\\left(\\nabla_x u_k \\right)\n\\Big(t,x,\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\Big)\\Big).\n\\end{aligned}\n$$\n\nNow, computing the second-order derivatives of the expansion~\\eqref{ansatz} and ordering the result in powers of $\\varepsilon$, we have \n\\begin{equation}\n\\label{ansatz2}\n\\begin{aligned}\ne^{-2i\\pi S_\\varepsilon(t,x)} & {\\rm div}{\\big( A {( \\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\omega \\right)},\\omega)} \n\\nabla u_\\varepsilon(t,x,\\omega) \\big)}\n\\\\\n&=\\frac{1}{\\varepsilon^2}\\Big( {\\rm div}_{\\! z} + 2i\\pi \\varepsilon \\nabla S_\\varepsilon(t,x) \\Big)\n\\Big( A{\\left( \\Phi^{-1}(\\cdot,\\omega),\\omega \\right)} \\Big( \\nabla_{\\!\\! 
z} + 2i\\pi \\varepsilon \\nabla S_\\varepsilon(t,x) \\Big)\n\\\\\n&\\qquad\\qquad\\qquad\\qquad u_0 ( t,x,\\Phi^{-1}(\\cdot,\\omega),\\omega)\\Big){\\Bigg\\rvert}_{z=x\/\\varepsilon}\n\\\\\n&+ \\frac{1}{\\varepsilon}\\Big( {\\rm div}_{\\! z} + 2i\\pi \\varepsilon \\nabla S_\\varepsilon(t,x) \\Big)\n\\Big( A{\\left( \\Phi^{-1}(\\cdot,\\omega),\\omega \\right)} \\Big( \\nabla_{\\!\\! z} + 2i\\pi \\varepsilon \\nabla S_\\varepsilon(t,x) \\Big)\n\\\\\n&\\qquad\\qquad\\qquad\\qquad u_1 ( t,x,\\Phi^{-1}(\\cdot,\\omega),\\omega)\\Big){\\Big\\rvert}_{z=x\/\\varepsilon}\n+ I_\\varepsilon, \n\\end{aligned}\n\\end{equation}\nwhere \n\\begin{eqnarray}\n&& I_\\varepsilon= \\sum_{k=0}^\\infty \\varepsilon^k\\Big( {\\rm div}_{\\! z} + 2i\\pi \\varepsilon \\nabla S_\\varepsilon(t,x) \\Big)\n\\Big( A{\\left( \\Phi^{-1}(\\cdot,\\omega),\\omega \\right)} \\Big( \\nabla_{\\!\\! z} + 2i\\pi \\varepsilon \\nabla S_\\varepsilon(t,x) \\Big)\\nonumber\n\\\\\n&&\\qquad\\qquad u_{k+2} ( t,x,\\Phi^{-1}(\\cdot,\\omega),\\omega)\\Big){\\Big\\rvert}_{z=x\/\\varepsilon}\\nonumber\n\\\\\n&&+\\frac{1}{\\varepsilon}\\Big( {\\rm div}_{\\! z} + 2i\\pi \\varepsilon \\nabla S_\\varepsilon(t,x) \\Big)\n\\Big( A{\\left( \\Phi^{-1}(\\cdot,\\omega),\\omega \\right)}\\nabla_x u_0 ( t,x,\\Phi^{-1}(\\cdot,\\omega),\\omega)\\Big){\\Big\\rvert}_{z=x\/\\varepsilon}\\nonumber\n\\\\\n&&+\\sum_{k=0}^\\infty \\varepsilon^k\\Big( {\\rm div}_{\\! z} + 2i\\pi \\varepsilon \\nabla S_\\varepsilon(t,x) \\Big)\n\\Big( A{\\left( \\Phi^{-1}(\\cdot,\\omega),\\omega \\right)}\\nabla_x u_{k+1} ( t,x,\\Phi^{-1}(\\cdot,\\omega),\\omega)\\Big){\\Big\\rvert}_{z=x\/\\varepsilon}\\nonumber\n\\\\\n&&+ \\frac{1}{\\varepsilon}{\\rm div}_{\\! x}\\Big(A{\\left( \\Phi^{-1}(\\cdot,\\omega),\\omega \\right)}\\Big( \\nabla_{\\!\\! 
z} + 2i\\pi \\varepsilon \\nabla S_\\varepsilon(t,x) \\Big)\nu_0 ( t,x,\\Phi^{-1}(\\cdot,\\omega),\\omega)\\Big){\\Big\\rvert}_{z=x\/\\varepsilon}\\nonumber\n\\\\\n&&+ \\sum_{k=0}^\\infty \\varepsilon^k {\\rm div}_{\\! x}\\Big(A{\\left( \\Phi^{-1}(\\cdot,\\omega),\\omega \\right)}\\Big( \\nabla_{\\!\\! z} + 2i\\pi \\varepsilon \\nabla S_\\varepsilon(t,x) \\Big)\nu_{k+1} ( t,x,\\Phi^{-1}(\\cdot,\\omega),\\omega)\\Big){\\Big\\rvert}_{z=x\/\\varepsilon}\\nonumber\n\\\\\n&&+\\sum_{k=0}^\\infty \\varepsilon^k{\\rm div}_{\\! x}\\Big(A{\\left( \\Phi^{-1}(\\cdot,\\omega),\\omega \\right)}\\nabla_{\\!\\! x} \nu_k{\\Big( t,x,\\Phi^{-1}(\\cdot,\\omega),\\omega\\Big)}\\Big){\\Big\\rvert}_{z=x\/\\varepsilon}.\n\\end{eqnarray}\n\nProceeding in the same way with respect to the temporal derivative, we have\n\\begin{eqnarray}\\label{ansatz3}\n&&e^{-2i\\pi S_\\varepsilon(t,x)}\\,{\\partial}_t u_\\varepsilon\\nonumber\n\\\\\n&&\\qquad\\qquad=\n\\frac{1}{\\varepsilon^2} \\Big(2i\\pi \\varepsilon^2 {\\partial}_{t} S_\\varepsilon(t,x) \\Big) u_0\\Big( t,x,\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\Big)\\nonumber\n\\\\\n&&\\qquad\\qquad+\\frac{1}{\\varepsilon} \\Big(2i\\pi \\varepsilon^2 {\\partial}_{t} S_\\varepsilon(t,x) \\Big) \nu_1\\Big( t,x,\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\Big)\\nonumber\n\\\\\n&&\\qquad\\qquad+\\Big(2i\\pi \\varepsilon^2 {\\partial}_{t} S_\\varepsilon(t,x) \\Big) \\sum_{k=0}^\\infty \\varepsilon^k\nu_{k+2}\\Big( t,x,\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\Big)\\nonumber\n\\\\\n&&\\qquad\\qquad\\qquad\\qquad+\\sum_{k=0}^\\infty \\varepsilon^k {\\partial}_tu_k\\Big( t,x,\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\Big).\n\\end{eqnarray}\nThus, if we insert the equations \\eqref{ansatz2} and \\eqref{ansatz3} in \\eqref{jhjkhkjhkj765675233} \nand compute the $\\varepsilon^{-2}$ order term, we \narrive at \n$$\n L^\\Phi(\\varepsilon \\nabla S_\\varepsilon(t,x)) u_0 
{\\big( t,x,\\Phi^{-1}(\\cdot,\\omega),\\omega\\big)}\n = 2 \\pi \\big( \\varepsilon^2 \\partial_t S_\\varepsilon(t,x)\\big)u_0 {\\big( t,x,\\Phi^{-1}(\\cdot,\\omega),\\omega\\big)},\n$$\nwhere for each $\\theta \\in \\mathbb R^n$, the linear operator $L^\\Phi(\\theta)$ is defined by\n\\begin{equation}\n\\label{EqEsp}\n\\begin{aligned}\nL^\\Phi(\\theta)[\\cdot]:=& -\\big( {\\rm div}_{\\! z} + 2i\\pi \\theta \\big)\n\\big(A{( \\Phi^{-1}(z,\\omega),\\omega)} {\\big( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\big)} [\\cdot] \\big)\n\\\\ \n&+V {\\big(\\Phi^{-1}\\left(z,\\omega\\right),\\omega\\big)} [\\cdot].\n\\end{aligned}\n\\end{equation}\nTherefore, $2 \\pi \\Big( \\varepsilon^2 \\partial_t S_\\varepsilon(t,x)\\Big)$ is an eigenvalue of $ L^\\Phi(\\varepsilon \\nabla S_\\varepsilon(t,x))$.\nConsequently, if $\\lambda(\\theta)$ is any eigenvalue of $L^\\Phi(\\theta)$ (which is sufficiently regular with respect to $\\theta$), then\nthe following (eikonal) Hamilton-Jacobi equation must be satisfied\n$$\n 2 \\pi \\varepsilon^2 \\partial_t S_\\varepsilon(t,x) - \\lambda(\\varepsilon \\nabla S_\\varepsilon(t,x))= 0. 
\n$$\nThus, if we suppose for $t=0$ (companion to \\eqref{ansatz}) the modulated plane wave initial data \n\\begin{equation}\n\\label{ansatzID}\n u_{\\varepsilon}(0,x,\\omega)= e^{2i\\pi \\frac{\\theta \\cdot x}{\\varepsilon}} \\sum_{k=0}^{\\infty}\\varepsilon^k \n u_k\\Big(0,x,\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\Big),\n\\end{equation}\nthen the unique solution of the above Hamilton-Jacobi equation is, for each parameter $\\theta \\in \\mathbb R^n$, \n\\begin{equation}\n\\label{SEP}\n S_\\varepsilon(t,x)= \\frac{\\lambda(\\theta) \\ t}{2 \\pi \\varepsilon^2} + \\frac{\\theta \\cdot x}{\\varepsilon}.\n\\end{equation}\n\nTo sum up, the above expansion, that is,\nthe solution $u_{\\varepsilon}$ of the equation \\eqref{jhjkhkjhkj765675233} \nin the form \\eqref{ansatz} with initial data \\eqref{ansatzID}, \nsuggests the following \n\n\\begin{definition}[Bloch or\nshifted spectral cell equation] \n\\label{DEFBLOCHCELL} Let $\\Phi$ be a stochastic deformation. \nFor any $\\theta \\in \\mathbb R^n$ fixed, the following time-independent equation \n\\begin{equation}\n\\label{92347828454trfhfd4rfghjls}\n\\left\\{\n\\begin{array}{l}\nL^\\Phi(\\theta) [\\Psi(z,\\omega)]= \\lambda \\ \\Psi(z,\\omega), \\hspace{40pt} \\text{in $\\mathbb R^n \\times \\Omega$}, \n\\\\[5pt]\n\\hspace{32pt} \\Psi(z, \\omega) = \\psi {\\left( \\Phi^{-1} (z, \\omega), \\omega \\right)}, \\quad \\text{$\\psi$ is a stationary function},\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\t\\end{equation}\nis called Bloch's spectral cell equation companion to the Schr\\\"odinger equation in \\eqref{jhjkhkjhkj765675233},\nwhere $L^\\Phi(\\theta)$ is given by \\eqref{EqEsp}.\nMoreover, each $\\theta \\in \\mathbb R^n$ is called a Bloch frequency, $\\lambda(\\theta)$ is called a Bloch energy and the corresponding \n$\\Psi(\\theta)$ is called a Bloch wave. Finally, if $\\Phi$ is well understood from the context, we write $L \\equiv L^\\Phi$. 
\n\\end{definition}\n\n\\medskip\nThe unknown $(\\lambda,\\Psi)$ in \\eqref{92347828454trfhfd4rfghjls}, which is an eigenvalue-eigenfunction pair, is obtained \nby means of the associated variational formulation, that is \n\\begin{equation}\n\\label{FORMVARIAC}\n\\begin{aligned}\n&\\langle L(\\theta)[F], G\\rangle\n\\\\\n&= \\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} \\!\\!\\!\\!\\!\\!\\!\\!\\! A( \\Phi^{-1}(z, \\omega), \\omega) (\\nabla_{\\!\\! z} + 2i\\pi\\theta) F(z,\\omega) \\cdot\n \\overline{{( \\nabla_{\\!\\! z} + 2i\\pi\\theta)} G(z,\\omega)} \\, dz \\, d\\mathbb{P}(\\omega) \n \\\\[5pt]\n &+ \\int_\\Omega \\int_{\\Phi([0,1)^n,\\omega)} V{( \\Phi^{-1}(z, \\omega), \\omega)} \\ F(z,\\omega) \\, \n \\overline{G(z,\\omega)} \\, dz \\, d\\mathbb{P}(\\omega). \n\\end{aligned}\n\\end{equation}\n\n\\begin{remark}\n\\label{GROUPNECE}\nOne remarks that $\\lambda= \\lambda(\\theta) \\in \\mathbb R$, that is to say, \n$\\lambda$ depends on the parameter $\\theta$. \nHowever, $\\lambda$ must not depend on $\\omega$, since \nthe homogenized effective matrix is obtained from the \nHessian of $\\lambda$ at some point $\\theta^*$, and hence\nshould be constant. \nTherefore, the probability variable $\\omega$ cannot be considered as a fixed parameter\nin \\eqref{92347828454trfhfd4rfghjls}.\n\\end{remark}\n\n\n\\subsection{Sobolev spaces on groups}\n\\label{9634783yuhdj6ty}\n\nThe main motivation for studying Sobolev spaces on groups, besides\ntheir being an elegant and modern mathematical theory, is related to the\neigenvalue problem: $$ \\text{Find $\\lambda(\\theta) \\in \\mathbb{R}$ and $\\Psi(\\theta) \\in \\mathcal{H}_\\Phi \\setminus \\{0\\}$\nsatisfying \\eqref{92347828454trfhfd4rfghjls}.}$$\nIndeed, we may use a compactness argument, \nthat is, the space $\\mathcal{H}_\\Phi$ is compactly embedded in $\\mathcal{L}_\\Phi$,\nin order to solve the associated variational formulation \\eqref{FORMVARIAC}. 
However,\nas observed in Remark \\ref{GROUPNECE}, $\\omega \\in \\Omega$ cannot be fixed, hence \nwe are going to establish an equivalence between the space $\\mathcal{H}_\\Phi$ and\nthe Sobolev space on groups, and then consider a related Rellich-Kondrachov Theorem. \nThis is the main issue of this section. \nLet us recall that the subject of Sobolev spaces on Abelian locally compact groups,\nto the best of our knowledge,\nwas introduced by P. G\\'orka, E. G. Reyes \\cite{GorkaReyes}. \n\n\\medskip\nTo begin, we sum up some definitions and properties of topological groups, \nwhich will be used throughout this section. Most of the material can be found in \nE. Hewitt, A. Ross \\cite{HewittRoss} and G. B. Folland \\cite{Folland2}\n(with more details). \n\n\\medskip\nA nonempty set $G$ endowed with an application, $\\ast : G \\! \\times \\! G \\to G$,\nis called a group, when for each $x, y, z \\in G$:\n\\begin{itemize}\n\\item[1.] ${ (x\\ast y) \\ast z = x \\ast (y \\ast z) }$;\n\\item[2.] There exists ${e \\in G }$, such that ${ x \\ast e = e \\ast x = x }$;\n\\item[3.] For all ${ y \\in G }$, there exists ${y^{-1}\\in G }$, such that ${ y \\ast y^{-1} = y^{-1} \\ast y = e }$.\n\t\\end{itemize}\nMoreover, if $x \\ast y = y \\ast x$ for all $x, y \\in G$, then $G$ is called an Abelian group. \nFrom now on, we write for simplicity $x \\, z$ instead of $x \\ast z$. \nA topological group is a group $G$ together with a topology, such that\nboth the group's binary operation $(x,y) \\mapsto x \\, y$,\nand the function mapping group elements to their respective inverses \n$x \\mapsto x^{-1}$\nare continuous functions with respect to the topology.\nUnless the contrary is explicitly stated, any group mentioned here is \na locally compact Abelian (LCA for short) group, and \nwe may assume without loss of generality that \nthe associated topology is Hausdorff \n(see G. B. 
Folland \\cite{Folland2}, Corollary 2.3).\n\n\\medskip\nA complex-valued function\n$\\xi : G \\to \\mathbb{S}^1$ is called a character of $G$, when \n$$\n \\xi(x \\, y) = \\xi(x) \\xi(y),\n\\quad \\quad \\text{(for each $x, y \\in G$)}.\n $$\n We recall that the set of characters of $G$ is an Abelian group\n with the usual product of functions, identity element $e= 1$, and\n inverse element $\\xi^{-1} = \\overline{\\xi}$.\n The character group of the topological group $G$, called\nthe dual group of $G$ and denoted by $G^\\wedge$, \nis the set of all continuous characters, that is to say \n$$\n G^\\wedge:= \\{ \\xi : G \\to \\mathbb{S}^1 \\; ; \\; \\text{$\\xi$ is a continuous homomorphism}\\}.\n$$\nMoreover, we may endow $G^\\wedge$ with a topology with respect to which\n$G^\\wedge$ itself is an LCA group. \n\n\\medskip\nWe denote by $\\mu$, $\\nu$ the unique (up to a positive multiplicative constant) Haar measures in $G$ and $G^\\wedge$ respectively. \nThe $L^p$ spaces over $G$ and its dual are defined as usual, with their respective measures. \nLet us recall two important properties when $G$ is compact:\n\\begin{equation}\n\\label{CARACGCOMP}\n\\begin{aligned}\n&i) \\quad \\text{If $\\mu(G)= 1$, then $G^\\wedge$ is an orthonormal set in $L^2(G;\\mu)$}.\n\\\\[5pt]\n&ii) \\quad \\text{The dual group $G^\\wedge$ is discrete, and $\\nu$ is the counting measure}. 
\n\\end{aligned}\n\\end{equation}\n\n\\medskip\nOne remarks that the study of Sobolev \nspaces on LCA groups relies essentially on the concept of the Fourier Transform; then we have the following \n\\begin{definition}\nGiven a complex-valued function $f \\in L^1(G;\\mu)$, the function $\\widehat{f}: G^\\wedge \\to \\mathbb{C}$, defined by\n\\begin{equation}\n\t\\widehat{f}(\\xi):= \\int_G f(x) \\, \\overline{\\xi(x)} \\, d\\mu(x)\n\\end{equation}\t\nis called the Fourier transform of $f$ on $G$.\n\\end{definition}\nUsually, the Fourier Transform of $f$ is denoted by $\\clg{F}f$ to emphasize that it is an operator, \nbut we prefer to adopt the usual notation $\\widehat{f}$. \nMoreover, we recall that the Fourier transform is a homomorphism from $L^1(G;\\mu)$ to $C_0(G^\\wedge)$ \n(or $C(G^\\wedge)$ when $G$ is compact), see Proposition 4.13 in \\cite{Folland2}. We also address the reader to \n \\cite{Folland2}, Chapter 4, for the Plancherel Theorem\nand the Inverse Fourier Transform. \n\n\\medskip\nBefore we establish the definition of (energy) Sobolev spaces on LCA groups, let us\nconsider the following set\n$$\n\\begin{aligned}\n {\\rm P}= \\{p: G^\\wedge \\times & G^\\wedge \\to [0,\\infty) \\; ; \n \\\\\n \\; & \\text{$p$ is a continuous invariant pseudo-metric on $G^\\wedge$} \\}.\n\\end{aligned} \n$$\nThe Birkhoff-Kakutani Theorem (see \\cite{HewittRoss} p.68) \nensures that the set ${\\rm P}$ is not empty. \nAny pseudo-metric $p \\in {\\rm P}$ is defined at every pair $(x,y) \\in G^\\wedge \\times G^\\wedge$; in particular, we may define\n\\begin{equation}\n\\label{Gamma}\n\\gamma(x):= p(x,e) \\equiv p(x,1).\n\\end{equation} \nMoreover, one observes that $\\gamma(1)= 0$.\nThen, we have the following \n\\begin{definition}[Energy Sobolev Spaces on LCA Groups]\n\\label{SOBOLEVESPACES}\nLet $s$ be a non-negative real number and $\\gamma(x)$ be given by \\eqref{Gamma}\nfor some fixed $p \\in {\\rm P}$. 
The energy Sobolev space \n$H^s_\\gamma(G)$ is the set of functions $f \\in L^2(G;\\mu)$, such that\n\\begin{equation}\n \\int_{G^\\wedge} (1+\\gamma(\\xi)^2)^s \\, |\\widehat{f}(\\xi)|^2 d\\nu(\\xi)< \\infty.\n\\end{equation}\nMoreover, given a function $f \\in H^s_\\gamma(G)$ its norm is defined as \n\\begin{equation}\n \\Vert f \\Vert_{H^s_\\gamma(G)} := \\left( \\int_{G^\\wedge} \\left(1+\\gamma(\\xi)^2 \\right)^s \\vert \\widehat{f}(\\xi) \\vert^2 d\\nu(\\xi) \\right)^{1\/2}.\n\\end{equation}\n\\end{definition}\nBelow, taking specific functions $\\gamma$, the usual Sobolev spaces on $\\mathbb R^n$ and \nother examples are considered. In particular, \nthe Plancherel Theorem implies that $H^0_\\gamma(G)= L^2(G;\\mu)$.\n\n\n\\begin{example}\n\\label{EXAMPLERN}\nLet $G= (\\mathbb R^n, +)$, which is an LCA group, and consider \nits dual group $(\\mathbb{R}^n)^\\wedge = \\{ \\xi_y \\; ; \\; y\\in\\mathbb{R}^n \\}$,\nwhere for each $x \\in \\mathbb R^n$\n\\begin{equation}\n\\label{caracterunitario}\n \\xi_y(x) = e^{2 \\pi i \\, y \\cdot x},\n\\end{equation}\nhence $|\\xi_y(x)|= 1$ and $\\xi_0(x)= 1$. \nOne remarks that here we denote (without invocation of vector space structure)\n$$\n a \\cdot b= a_1 b_1 + a_2 b_2 + \\ldots + a_n b_n, \\quad \\text{(for all $a,b \\in G$)}.\n$$\nFor any $x, y \\in \\mathbb R^n$ let us consider \n$$\n p(\\xi_x,\\xi_y)= 2\\pi \\|x - y\\|, \n$$\nwhere $\\| \\cdot \\|$ is the Euclidean norm in $\\mathbb R^n$. Hence $\\gamma(\\xi_x)= p(\\xi_x,1)= 2 \\pi \\|x\\|$. \nSince $(\\mathbb{R}^n)^\\wedge \\cong \\mathbb{R}^n$, the Sobolev space $H^s_\\gamma(G)$ coincides \nwith the usual Sobolev space on $\\mathbb R^n$. \n\\end{example}\n\n\\begin{example}\n\\label{6576dgtftdefd}\nLet us recall that the set $[0,1)^n$ endowed with the binary\noperation \n$$\n (x,y) \\in [0,1)^n \\! \\times \\! 
[0,1)^n \\;\\; \\mapsto \\;\\; x+y - \\left\\lfloor x+y \\right\\rfloor \\in [0,1)^n\n$$ \nis an Abelian group, and the function \n$\\Lambda: \\mathbb{R}^n \\to [0,1)^n$, $\\Lambda(x):= x - \\left\\lfloor x \\right\\rfloor$\nis a homomorphism of groups. Moreover, under the \ntopology induced by $\\Lambda$, that is to say \n$$\n \\{U \\subset [0,1)^n \\; ; \\; \\Lambda^{-1}(U) \\; \\text{is an open set of} \\;\\, \\mathbb{R}^n \\}, \n$$\n $[0,1)^n$ is a compact Abelian group, which is called the $n-$dimensional Torus and denoted by \n$\\mathbb{T}^n$. Its dual group is characterized by the integers $\\mathbb Z^n$, that is \n$$\n\\text{\n$(\\mathbb{T}^n)^\\wedge = \\{ \\xi_m \\; ; \\; m \\in \\mathbb{Z}^n \\}$, where $\\xi_m(x)$ is given by \n\\eqref{caracterunitario} for all $x \\in \\mathbb{R}^n$}. \n$$\nFor each $m,k \\in \\mathbb Z^n$, we consider \n$$\n p(\\xi_m,\\xi_k)= 2\\pi \\sum_{j=1}^n{\\vert m_j - k_j \\vert},\n \\quad \\text{and thus $\\gamma(\\xi_m)= 2 \\pi \\sum_{j=1}^n{\\vert m_j \\vert}$}.\n$$\nThen, the Sobolev space $H^s_\\gamma(\\mathbb{T}^n)$ coincides \nwith the usual Sobolev space on $\\mathbb{T}^n$.\n\t\t\n\\smallskip\nNow, following the above discussion let us consider the infinite Torus \n$\\mathbb{T}^I$, where $I$ is an index set. Since an arbitrary product of compact spaces is compact in the \nproduct topology (Tychonoff Theorem), $\\mathbb{T}^I$ is a compact Abelian group. 
Here, \nthe binary operation on $ \\mathbb{T}^I \\times \\mathbb{T}^I$ is defined coordinate by coordinate, that is, for each \n$\\ell \\in I$ \n$$\n (g + h)_\\ell:= g_\\ell + h_\\ell - \\left\\lfloor g_\\ell + h_\\ell \\right\\rfloor.\n$$\nMoreover, the set $\\mathbb{Z}^I_{\\rm c} := \\{ m \\in \\mathbb{Z}^I; \\text{{\\rm supp} $m$ is compact} \\}$\ncharacterizes the elements of the dual group $(\\mathbb{T}^I)^\\wedge$.\n Indeed, applying Theorem 23.21 in \n\\cite{HewittRoss}, similarly we have \n$$\n (\\mathbb{T}^I)^\\wedge = {\\left\\{ \\xi_m \\; ; \\; m\\in \\mathbb{Z}^I_{\\rm c} \\right\\}},\n$$\nwhere, for each $m \\in \\mathbb{Z}_{\\rm c}^I$, the character $\\xi_m$ is given by \n\\eqref{caracterunitario} (the sum defining $m \\cdot x$ being finite), and we take the pseudo-metric\n$$\n p(\\xi_m,\\xi_k)= 2\\pi \\sum_{\\ell \\in I}{\\vert m_\\ell - k_\\ell \\vert},\n \\quad \\text{and $\\gamma(\\xi_m)= 2 \\pi \\sum_{\\ell \\in I}{\\vert m_\\ell \\vert}$}.\n$$\nConsequently, we have established the Sobolev spaces $H^s_{\\gamma}(\\mathbb{T}^I)$.\n\\end{example}\n\n\\subsubsection{Groups and Dynamical systems}\n\\label{kjh876}\n\nIn this section, we are interested in bringing together the discussion \nof dynamical systems studied in Section \\ref{628739yhf}\nand the theory developed in the last section \nfor LCA groups. To this end, we consider \nstationary functions in the continuous sense (continuous dynamical systems). \nMoreover, we recall that all the groups in this paper are \nassumed to be Hausdorff. \n\n\\medskip\nTo begin, let $G$ be a locally compact group with Haar measure $\\mu$;\nwe know that $\\mu(G)< \\infty$ if, and only if, $G$ is compact. 
\nTherefore, we consider from now on that $G$ is a compact Abelian group, \nhence $\\mu$ is a finite measure and, up to a normalization, $(G,\\mu)$ is a probability space.\nWe are going to consider the dynamical systems, $\\tau: \\mathbb R^n \\times G \\to G$, defined by \n\\begin{equation}\n\\label{TAUFI}\n \\tau(x) \\omega:= \\varphi(x) \\, \\omega,\n\\end{equation}\nwhere $\\varphi: \\mathbb R^n \\to G$ is a given (continuous) homomorphism. \nIndeed, $\\tau$ is a dynamical system: first $\\tau(0) \\omega= \\omega$, and \n$\\tau(x+y) \\omega= \\varphi(x) \\varphi(y) \\omega= \\tau(x)(\\tau(y)\\omega)$. \nMoreover, since $\\mu$ is a translation invariant Haar measure, the \nmapping $\\tau(x,\\cdot): G \\to G$ is $\\mu-$measure preserving. \nRecall from Remark \\ref{REMERG} that we have assumed the \ndynamical systems we are interested in here to be\nergodic. Then, it is important to characterize the conditions\non the mapping $\\varphi$ under which the dynamical system defined by \n\\eqref{TAUFI} is ergodic. To this end, first let us consider the following \n\n\\begin{lemma}\n\\label{DIST}\nLet $H$ be a topological group, $F \\subset H$ closed, $F \\neq H$ and $x \\notin F$.\nThen, there exists a neighborhood $V$ of the identity $e$, such that\n$$\n F V \\cap x V= \\emptyset. \n$$\n\\end{lemma}\n\n\\begin{proof}\nFirst, we observe that:\n\ni) Since $F \\subset H$ is closed and $x \\notin F$, there\nexists a neighborhood $U$ of the identity $e$,\nsuch that $F \\cap x U= \\emptyset$. \n\nii) There exists a symmetric neighborhood $V$ of the identity $e$,\nsuch that $VV \\subset U$. \n\nNow, suppose that $F V \\cap x V \\neq \\emptyset$. \nTherefore, there exist $v_1, v_2 \\in V$ and $k_0 \\in F$ such that $k_0 v_1= x v_2$. \nConsequently, $k_0= x v_2 v_1^{-1}$ and from $(ii)$ it follows that $k_0 \\in x U$. \nThen, we have a contradiction from $(i)$. 
\n\\end{proof}\n\n \\underline {\\bf Claim 1:} The dynamical system defined \nby \\eqref{TAUFI} is ergodic if, and only if, \n$\\varphi(\\mathbb R^n)$ is dense in $G$. \n\n\\smallskip\nProof of Claim 1: 1. Let us show first the necessity. Therefore, we suppose that \n$\\varphi(\\mathbb R^n)$ is not dense in $G$, that is $K:= \\overline{\\varphi(\\mathbb R^n)} \\neq G$. \nThen, applying Lemma \\ref{DIST}\nthere exists a neighborhood $V$ of $e$, such that $K V \\cap x V= \\emptyset$,\nfor some $x \\notin K$. Recall that the Haar measure of a nonempty open set is positive; \nmoreover\n$$\n K V= \\bigcup_{k \\in K} k V,\n$$\nwhich is an open set, thus we have \n$$\n 0< \\mu(K V) + \\mu(x V) \\leq 1. \n$$\nConsequently, it follows that $0< \\mu(\\varphi(\\mathbb R^n) V)< 1$. For convenience, let us denote \n$E= \\varphi(\\mathbb R^n) V$, hence $\\tau(x) E= E$ for each $x \\in \\mathbb R^n$. \nThen, the dynamical system $\\tau$ is not ergodic, since $E \\subset G$ is a $\\tau$-invariant set \nwith $0< \\mu(E)< 1$. \n\n\\medskip\n2. It remains to show the sufficiency. \nLet $E \\subset G$ be a $\\mu-$measurable $\\tau$-invariant set,\nhence $\\omega E= E$ for each $\\omega \\in \\varphi(\\mathbb R^n)$. Assume \nby contradiction that $0< \\mu(E)< 1$, thus $\\mu(G \\setminus E)> 0$.\nDenote by $\\mathcal{B}_G$ the Borel $\\sigma-$algebra on $G$, and define \n$\\lambda:= \\mu_{\\lfloor E}$, that is $\\lambda(A)= \\mu(E \\cap A)$ for all \n$A \\in \\mathcal{B}_G$. Recall that $G$ is not necessarily metric, therefore, it is not\nclear if each Borel set is $\\mu-$measurable. Then, it follows that: \n\n$(i)$ For any $A \\in \\mathcal{B}_G$ fixed, the mapping \n$\\omega \\in G \\mapsto \\lambda(\\omega A)$ is continuous. 
\nIndeed, for $\\omega \\in G$ and $A \\in \\mathcal{B}_G$, we have\n$$\n\\begin{aligned}\n \\lambda(\\omega A)&= \\int_G 1_E(\\varpi) 1_{\\omega A}(\\varpi) d\\mu(\\varpi)\n \\\\[5pt]\n &= \\int_G 1_E(\\varpi) 1_{A}(\\omega^{-1} \\varpi) d\\mu(\\varpi) \n= \\int_G 1_E(\\omega \\varpi) 1_{A}(\\varpi) d\\mu(\\varpi).\n\\end{aligned}\n$$\nTherefore, for $\\omega, \\omega_0 \\in G$\n$$\n\\begin{aligned}\n|\\lambda(\\omega A) - \\lambda(\\omega_0 A)|&= \\big| \\int_G \\big(1_E(\\omega \\varpi) - 1_E(\\omega_0 \\varpi)\\big) 1_A(\\varpi) d\\mu(\\varpi) \\big|\n\\\\\n &\\leq \\big(\\mu(A)\\big)^{1\/2}\n \\big( \\int_G |1_E(\\omega \\varpi) - 1_E(\\omega_0 \\varpi)|^2 d\\mu(\\varpi) \\big)^{1\/2}\n \\tobo{\\omega \\to \\omega_0} 0. \n\\end{aligned}\n$$\n\n$(ii)$ $\\lambda$ is invariant, i.e. for all $\\omega \\in G$, and $A \\in \\mathcal{B}_G$, $\\lambda(\\omega A)= \\lambda(A)$. \nIndeed, for each $\\omega \\in \\varphi(\\mathbb R^n)$, and $A \\in \\mathcal{B}_G$, we have \n$$\n (\\omega A) \\cap E= (\\omega A) \\cap (\\omega E)= \\omega (A \\cap E). \n$$\nThus since $\\mu$ is invariant, $\\mu_{\\lfloor E}(\\omega A)= \\mu_{\\lfloor E}(A)$. Consequently,\ndue to item $(i)$ and $\\overline{\\varphi(\\mathbb R^n)}= G$, it follows that $\\lambda$ is invariant. \n\nFrom item $(ii)$ the Radon measure $\\lambda$ is a Haar measure on $G$. By the uniqueness \nof the Haar measure on $G$, there exists $\\alpha> 0$, such that for all $A \\in \\mathcal{B}_G$,\n$\\alpha \\lambda(A)= \\mu(A)$. In particular, $\\alpha \\lambda(G \\setminus E)= \\mu(G \\setminus E)$.\nBut $\\lambda(G \\setminus E)= 0$ by definition and $\\mu(G \\setminus E)> 0$, which is a contradiction\nand hence $\\tau$ is ergodic. \n\n\\begin{remark}\n1. One remarks that, in order to show that $\\tau$ given by \\eqref{TAUFI} is ergodic, it was not used\n that $\\varphi$ is continuous, nor that $G$ is metric. Compare with the statement in \\cite{JikovKozlovOleinik} \n p.225 (after Theorem 7.2). \n\n2. 
From now on, we assume that $\\varphi(\\mathbb R^n)$ is dense in $G$. \n\\end{remark}\n\n\\medskip\nNow, for the dynamical system established before, the main issue is to show how the Sobolev space \n$H^1_{\\gamma}(G)$ is related to the space $\\mathcal{H}_\\Phi$ given by \\eqref{SPACEHPHI} for $\\Phi= Id$, \nthat is \n$$\n \\mathcal{H}= {\\big\\{f(y, \\omega); \\; f \\in H^1_{\\rm loc}(\\mathbb{R}^n; L^2(G)) \\;\\; \\text{stationary} \\big\\}},\n$$\nwhich is a Hilbert space endowed with the following inner product \n$$\n{\\langle f,g \\rangle}_{\\mathcal{H}}= \\int_G f(0, \\omega) \\, \\overline{g(0, \\omega) } \\, d\\mu(\\omega)\n+ \\int_G \\nabla_{\\!\\! y} f(0, \\omega) \\cdot \\overline{ \\nabla_{\\!\\! y} g(0, \\omega) }\\, d\\mu(\\omega).\n$$\nLet $\\chi$ be a character on $G$, i.e. $\\chi \\in G^\\wedge$. Since $\\varphi: \\mathbb R^n \\to G$ is a continuous homomorphism, the \nfunction $(\\chi \\circ \\varphi): \\mathbb R^n \\to \\mathbb{S}^1$\nis a continuous character on $\\mathbb R^n$. More precisely, given any fixed $\\chi \\in G^\\wedge$ we may find \n$y \\in \\mathbb R^n$, $(y \\equiv y(\\chi))$, such that, for each $x \\in \\mathbb R^n$\n$$\n \\big(\\chi \\circ \\varphi \\big)(x) =:\\xi_{y(\\chi)}(x)= e^{2\\pi i \\, y(\\chi) \\cdot x}.\n$$\nFollowing Example \\ref{EXAMPLERN} we define the pseudo-metric \n$p_\\varphi: G^\\wedge \\times G^\\wedge \\to [0,\\infty)$ by \n\\begin{equation}\n\\label{PSEDO}\n p_\\varphi(\\chi_1, \\chi_2):= p(\\xi_{y_1(\\chi_1)}, \\xi_{y_2(\\chi_2)})= 2 \\pi \\|y_1(\\chi_1) - y_2(\\chi_2)\\|. \n\\end{equation}\nThen, we have \n$$\n \\gamma(\\chi)= p_\\varphi(\\chi,1)= 2 \\pi \\|y(\\chi)\\|. \n$$\n\n\\medskip\nLet us observe that the above construction of $\\gamma$ used the continuity of the homomorphism $\\varphi: \\mathbb R^n \\to G$\nin an essential way. In fact, the function $\\gamma$ was given by the pseudo-metric $p_\\varphi$, which is \nnot necessarily a metric. 
Nevertheless, we have the following \n\n\\medskip\n \\underline {\\bf Claim 2:} The pseudo-metric $p_\\varphi: G^\\wedge \\times G^\\wedge \\to [0,\\infty)$ given by \\eqref{PSEDO} \nis a metric if, and only if, $\\varphi(\\mathbb R^n)$ is dense in $G$. \n\n\\smallskip\nProof of Claim 2: 1. First, let us assume that $\\overline{\\varphi(\\mathbb R^n)} \\neq G$, and then show that $p_\\varphi$ is not a metric; \nthis proves the necessity by contraposition. Since \n$\\overline{\\varphi(\\mathbb R^n)}$ is a closed proper subgroup of $G$, from Corollary 24.12 in \\cite{HewittRoss} there exists $\\xi \\in G^\\wedge \\setminus \\{1\\}$,\nsuch that $\\xi(\\overline{\\varphi(\\mathbb R^n)})= \\{1\\}$. In particular, \n$\\xi(\\varphi(x))= 1$, for each $x \\in \\mathbb R^n$, i.e. $y(\\xi)= 0$. Therefore, we have \n$p_\\varphi(\\xi, 1)= 0$, \nwhich implies that $p_\\varphi$ is not a metric. \n\n\\medskip\n2. Now, let us assume that $\\overline{\\varphi(\\mathbb R^n)}= G$, and it is enough to show that\nif $p_\\varphi(\\xi, 1)= 0$, then $\\xi= 1$. Indeed, if $0= p_\\varphi(\\xi,1)= 2 \\pi \\|y(\\xi)\\|$, then $y(\\xi)= 0$. \nTherefore, $\\xi(\\varphi(x))= 1$ for each $x \\in \\mathbb R^n$; since $\\xi$ is continuous and $\\overline{\\varphi(\\mathbb R^n)}= G$,\nit follows that, for each $\\omega \\in G$, $\\xi(\\omega)= 1$, which finishes the proof of the claim. \n\n\\begin{remark}\nSince we have already assumed that $\\varphi(\\mathbb R^n)$ is dense in $G$, it follows that \n$p_\\varphi$ is indeed a metric, which does not necessarily imply that $G$ itself is metric. 
\n\\end{remark}\n\nUnder the assumptions considered above, we have the following \n\\begin{lemma} If $f \\in \\mathcal{H}$, then for $j \\in \\{1,\\ldots,n\\}$ and all $\\xi \\in G^\\wedge$\n\\begin{equation}\n\\label{DERIVGROUPFOURIER}\n \\widehat{\\partial_j f(0,\\xi)}= 2 \\pi i \\; y_j(\\xi) \\widehat{f(0,\\xi)}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nFirst, for each $x \\in \\mathbb R^n$ and $\\omega \\in G$, define \n$$\n\\begin{aligned}\n \\xi_\\tau(x,\\omega)&:= \\xi(\\tau(x,\\omega))= \\xi(\\varphi(x) \\omega)= \\xi(\\varphi(x)) \\; \\xi(\\omega)\n \\\\[5pt]\n &= e^{2 \\pi i x \\cdot y(\\xi)} \\; \\xi(\\omega). \n\\end{aligned}\n$$ \nTherefore $\\xi_\\tau \\in C^\\infty(\\mathbb R^n; L^2(G))$, and we have for $j \\in \\{1,\\ldots,n\\}$\n\\begin{equation}\n\\label{AUXIL}\n\\partial_j \\xi_\\tau(0,\\omega)= 2 \\pi i \\; y_j(\\xi) \\; \\xi(\\omega). \n\\end{equation}\nFinally, applying Theorem \\ref{987987789879879879} we obtain\n$$\n\\begin{aligned}\n \\int_G \\partial_j f(0,\\omega) \\; \\overline{\\xi_\\tau}(0,\\omega) d\\mu(\\omega)&= - \\int_G f(0,\\omega) \\; \\partial_j \\overline{\\xi_\\tau}(0,\\omega) d\\mu(\\omega)\n \\\\[5pt]\n &= 2 \\pi i \\; y_j(\\xi) \\int_G f(0,\\omega) \\; \\overline{\\xi}(\\omega) d\\mu(\\omega),\n\\end{aligned} \n$$\nwhere we have used \\eqref{AUXIL}. From the above equation and the definition of the \nFourier transform on groups we obtain \\eqref{DERIVGROUPFOURIER}, and the lemma is proved. \n\\end{proof}\n\n\\medskip\nNow we are able to state the equivalence between the spaces $\\mathcal{H}$ and $H^1_\\gamma(G)$,\nwhich is to say, we have the following \n\\begin{theorem}\n\\label{THMEQNOM}\nA function $f \\in \\mathcal{H}$ if, and only if, $f(0,\\cdot) \\in H^1_\\gamma(G)$,\nand \n$$\n \\Vert f \\Vert_\\mathcal{H} = \\Vert f(0,\\cdot) \\Vert_{H_\\gamma^1(G)}.\n$$\n\\end{theorem}\n\n\\begin{proof}\n1. Let us first show that, if $f \\in \\mathcal{H}$ then $f(0,\\cdot) \\in H^1_\\gamma(G)$. 
\nTo this end, we observe that \n$$\n\\begin{aligned}\n \\int_{G^\\wedge} (1 + \\gamma(\\xi)^2) |\\widehat{f(0,\\xi)}|^2 \\; d\\nu(\\xi)&= \n \\int_{G^\\wedge} |\\widehat{f(0,\\xi)}|^2 \\; d\\nu(\\xi)\n \\\\[5pt]\n &+ \\int_{G^\\wedge} | 2 \\pi i \\; y(\\xi) \\widehat{f(0,\\xi)}|^2 \\; d\\nu(\\xi)\n \\\\[5pt]\n &= \\int_{G^\\wedge} |\\widehat{f(0,\\xi)}|^2 \\; d\\nu(\\xi)\n + \\int_{G^\\wedge} |\\widehat{\\nabla_{\\!\\!y} f(0,\\xi)}|^2 \\; d\\nu(\\xi),\n\\end{aligned}\n$$\nwhere we have used \\eqref{DERIVGROUPFOURIER}. Therefore, applying the \nPlancherel theorem \n$$\n \\int_{G^\\wedge}\\! (1 + \\gamma(\\xi)^2) |\\widehat{f(0,\\xi)}|^2 \\; d\\nu(\\xi)= \\!\\!\n \\int_{G}\\! |{f(0,\\omega)}|^2 \\; d\\mu(\\omega)\n + \\! \\int_{G} |\\nabla_{\\!\\!y} {f(0,\\omega)}|^2 \\; d\\mu(\\omega)\\!< \\! \\infty,\n$$\nand thus $f(0,\\cdot) \\in H^1_\\gamma(G)$. \n\n\\medskip\n2. Now, let $f(x,\\omega)$ be a stationary function such that $f(0,\\cdot) \\in H^1_\\gamma(G)$; we show that \n$f \\in \\mathcal{H}$. Given a stationary function $\\zeta \\in C^1(\\mathbb R^n; L^2(G))$, applying the Plancherel theorem and polarization identity\n$$\n \\int_G \\partial_j \\zeta(0,\\omega) \\; \\overline{f(0,\\omega)} d\\mu(\\omega)\n = \\int_{G^\\wedge} \\widehat{\\partial_j \\zeta(0,\\xi)} \\; \\overline{\\widehat{f(0,\\xi)}} d\\nu(\\xi)\n$$\nfor $j \\in \\{1,\\ldots,n\\}$. Due to \\eqref{DERIVGROUPFOURIER}, we may write\n\\begin{equation}\n\\label{HH1}\n\\begin{aligned}\n \\int_G \\partial_j \\zeta(0,\\omega) \\; \\overline{f(0,\\omega)} d\\mu(\\omega)\n &= \\int_{G^\\wedge} 2 \\pi\n i \\; y_j(\\xi)\\widehat{\\zeta(0,\\xi)} \\; \\overline{\\widehat{f(0,\\xi)}} d\\nu(\\xi)\n\\\\[5pt]\n&= - \\int_{G^\\wedge} \\widehat{\\zeta(0,\\xi)} \\; \\overline{2 \\pi i \\; y_j(\\xi) \\widehat{f(0,\\xi)}} d\\nu(\\xi).\n\\end{aligned}\n\\end{equation}\nFor $j \\in \\{1,\\ldots,n\\}$ we define $g_j(\\omega):= \\big(2 \\pi i \\; y_j(\\xi) \\widehat{f(0,\\xi)}\\big)^\\vee$,\nthen $g_j \\in L^2(G)$. 
Indeed, we have \n$$\n \\int_G |g_j(\\omega)|^2 d\\mu(\\omega)= \\int_{G^\\wedge} |\\widehat{g_j(\\xi)}|^2 d\\nu(\\xi) \n \\leq \\int_{G^\\wedge} (1 + \\gamma(\\xi)^2) |\\widehat{f(0,\\xi)}|^2 d\\nu(\\xi)< \\infty.\n$$\nTherefore, we obtain from \\eqref{HH1}\n$$\n \\int_G \\partial_j \\zeta(0,\\omega) \\; \\overline{f(0,\\omega)} d\\mu(\\omega)\n = - \\int_G \\zeta(0,\\omega) \\; \\overline{g_j(\\omega)} d\\mu(\\omega)\n$$\nfor any stationary function $\\zeta \\in C^1(\\mathbb R^d; L^2(G))$, and $j \\in \\{1,\\ldots,d\\}$. \nThen $f \\in \\mathcal{H}$ due to Theorem \\ref{987987789879879879}. \n\\end{proof} \n\t\n\\begin{corollary}\nLet $f \\in L^2_{\\loc}(\\mathbb R^d; L^2(G))$ be a stationary function\nand $\\Phi$ a stochastic deformation. \nThen, $f \\circ \\Phi^{-1} \\in \\clg{H}_\\Phi$ if, and only if, $f(0,\\cdot) \\in H^1_\\gamma(G)$,\nand there exist constants $C_1, C_2> 0$, such that \n$$\n C_1 \\Vert f \\circ \\Phi^{-1} \\Vert_{\\mathcal{H}_\\Phi}\\leq \\Vert f(0,\\cdot) \\Vert_{H_\\gamma^1(G)}\n \\leq C_2 \\Vert f \\circ \\Phi^{-1} \\Vert_{\\mathcal{H}_\\Phi}.\n$$\n\\end{corollary}\n\t\n\\begin{proof}\nFollows from Theorem \\ref{THMEQNOM} and Remark \\ref{REMFPHI}. \n\\end{proof}\t\n\t\n\\subsubsection{Rellich--Kondrachov type Theorem}\n\\label{927394r6fy7euh73f}\n\nThe aim of this section is to characterize when \nthe Sobolev space $H^1_\\gamma(G)$ is compactly embedded in $L^2(G)$,\nwritten $H^1_\\gamma(G) \\subset \\subset L^2(G)$, where $G$ is considered a compact Abelian group \nand $\\gamma: G^{\\wedge} \\to [0,\\infty)$ is given by \\eqref{Gamma}. \nWe observe that, $H^1_\\gamma(G) \\subset \\subset L^2(G)$ is exactly the \nRellich--Kondrachov Theorem on compact Abelian groups, which was established\nunder some conditions on $\\gamma$\nin \\cite{GorkaReyes}. \nNevertheless, as a byproduct of the characterization established here, we provide\nthe proof of this theorem in a more \nprecise context. 
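Before turning to the compactness question, the multiplier identity \eqref{DERIVGROUPFOURIER} admits a quick numerical sanity check in the simplest compact Abelian group, the classical torus $G= \mathbb{T}^1$, where the characters are $\xi_k(x)= e^{2\pi i k x}$ and $y(\xi_k)= k$. The sketch below (plain Python, not part of the formal development; the Fourier coefficients in `COEFFS` are hypothetical, chosen only for illustration) verifies that the Fourier coefficient of $\partial f$ equals $2\pi i k$ times that of $f$:

```python
import cmath

# Torus G = T^1: characters xi_k(x) = exp(2*pi*i*k*x), so y(xi_k) = k.
# For a trigonometric polynomial f we check the d = 1 case of the
# multiplier identity: (f')^(k) = 2*pi*i*k * f^(k).

COEFFS = {-2: 0.5 - 1j, 0: 2.0, 3: 1.5j}   # hypothetical coefficients c_k

def f(x):
    return sum(c * cmath.exp(2j * cmath.pi * k * x) for k, c in COEFFS.items())

def df(x):
    # exact derivative of the trigonometric polynomial f
    return sum(c * 2j * cmath.pi * k * cmath.exp(2j * cmath.pi * k * x)
               for k, c in COEFFS.items())

def fourier_coeff(g, k, N=64):
    # Discrete approximation of int_0^1 g(x) exp(-2*pi*i*k*x) dx;
    # exact (up to rounding) for trigonometric polynomials of degree < N/2.
    return sum(g(j / N) * cmath.exp(-2j * cmath.pi * k * j / N)
               for j in range(N)) / N

for k in range(-4, 5):
    lhs = fourier_coeff(df, k)
    rhs = 2j * cmath.pi * k * fourier_coeff(f, k)
    assert abs(lhs - rhs) < 1e-9
```

Since `fourier_coeff` is exact for trigonometric polynomials of degree below $N/2$, the identity holds here up to floating-point rounding only.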
\n\n\\medskip\nTo start the investigation, let $(G,\\mu)$ be a probability space and consider \nthe operator\n$T: L^2(G^\\wedge) \\to L^2(G^\\wedge)$,\ndefined by\n\\begin{equation}\n\\label{TCOMP}\n\t[T(f)](\\xi) := \\frac{f(\\xi)}{\\sqrt{(1 + \\gamma(\\xi)^2)}}.\n\\end{equation}\nWe remark that $T$ as defined above is a bounded linear ($\\Vert T \\Vert \\leqslant 1$), self-adjoint, injective operator, which satisfies, for each $f \\in L^2(G^\\wedge)$,\n\\begin{equation}\n\\label{76354433}\n\t\\int_{G^\\wedge} \\left(1 + \\gamma(\\xi)^2 \\right) \\, {\\vert [T(f)](\\xi) \\vert}^2 d\\nu(\\xi) \n\t= \\int_{G^\\wedge} \\vert f(\\xi) \\vert^2 d\\nu(\\xi).\n\\end{equation}\nMoreover, a function $f \\in H^1_\\gamma(G)$ if, and only if, $\\widehat{f} \\in T(L^2(G^\\wedge))$, \nthat is to say \n\\begin{equation}\n\\label{87648764}\n f \\in H^1_\\gamma(G) \\Leftrightarrow \\widehat{f} \\in T(L^2(G^\\wedge)). \n\\end{equation}\nIndeed, if $ f \\in H^1_\\gamma(G)$, then $f \\in L^2(G)$ and \n$$\n \\int_{G^\\wedge} \\left( 1+\\gamma(\\xi)^2 \\right) \\vert \\widehat{f}(\\xi) \\vert^2 d\\nu(\\xi) \n = \\int_{G^\\wedge} \\vert \\sqrt{\\left( 1+\\gamma(\\xi)^2 \\right)} \\, \\widehat{f}(\\xi) \\vert^2 d\\nu(\\xi)< \\infty.\n$$\nTherefore, defining $g(\\xi):= \\sqrt{\\left( 1+\\gamma(\\xi)^2 \\right)} \\, \\widehat{f}(\\xi)$, we have $g \\in L^2(G^\\wedge)$ and\n$\\widehat{f}= T(g) \\in T(L^2(G^\\wedge))$.\n\n\\medskip\nNow, if $\\widehat{f} \\in T(L^2(G^\\wedge))$ let us show that $f \\in H^1_\\gamma(G)$. First, there exists \n$g \\in L^2(G^\\wedge)$ such that $\\widehat{f} = T(g)$. 
\nThus from equation \\eqref{76354433}, we obtain \n$$\n \\int_{G^\\wedge} (1 + \\gamma(\\xi)^2) \\, |\\widehat{f}(\\xi)|^2 d\\nu(\\xi) \n = \\int_{G^\\wedge} |g(\\xi)|^2 d\\nu(\\xi)< \\infty,\n$$\nthat is, by definition $f \\in H^1_\\gamma(G)$.\n\n\\medskip\nThen we have the following Equivalence Theorem:\n\\begin{theorem}\n\\label{876876872}\nThe Sobolev space $H^1_\\gamma(G)$ is compactly embedded in $L^2(G)$ \nif, and only if, the operator $T$ defined by \\eqref{TCOMP} is compact. \n\\end{theorem}\n\n\\begin{proof}\n1. First, let us assume that $H^1_\\gamma(G) \\subset \\subset L^2(G)$, \nand take a bounded sequence $\\{f_m\\}$, $f_m \\in L^2(G^\\wedge)$ \nfor each $m \\in \\mathbb N$. Thus $T(f_m) \\in L^2(G^\\wedge)$, and defining \n$g_m:= T(f_m)^\\vee$, we obtain by Plancherel Theorem that $g_m \\in L^2(G)$ \nfor each $m \\in \\mathbb N$. Moreover, from equation \\eqref{76354433}, we have for any \n$m \\in \\mathbb{N}$\n$$\n\\begin{aligned}\n \\infty >\\int_{G^\\wedge} |f_m(\\xi)|^2 d\\nu(\\xi)&= \\int_{G^\\wedge} (1 + \\gamma(\\xi)^2) \\, |[T(f_m)](\\xi)|^2 d\\nu(\\xi)\n \\\\[5pt]\n &= \\int_{G^\\wedge} (1 + \\gamma(\\xi)^2) \\, |\\widehat{g_m(\\xi)}|^2 d\\nu(\\xi). \n\\end{aligned} \n$$\nTherefore, the sequence $\\{g_m\\}$ is uniformly bounded in $H^1_\\gamma(G)$, with respect to $m \\in \\mathbb{N}$. \nBy hypothesis there exists a subsequence of $\\{g_m\\}$, say $\\{g_{m_j}\\}$,\nand a function $g \\in L^2(G)$ such that, $g_{m_j}$ converges strongly to $g$ in $L^2(G)$ as $j \\to \\infty$. \nConsequently, we have \n$$T(f_{m_j})= \\widehat{g_{m_j}} \\to \\widehat{g} \\quad \n\\text{in $L^2(G^\\wedge)$ as $j \\to \\infty$},$$ that is, the operator $T$ is compact. \n\n\\medskip\n2. Now, let us assume that the operator $T$ is compact and then show that $H^1_\\gamma(G) \\subset \\subset L^2(G)$. \nTo this end, we take a sequence $\\{f_m\\}_{m\\in \\mathbb{N}}$ uniformly bounded in $H^1_\\gamma(G)$. 
\nThen, due to the equivalence \\eqref{87648764}, there exists for each $m\\in \\mathbb{N}$ a function \n$g_m \\in L^2(G^\\wedge)$, such that $\\widehat{f_m} = T(g_m)$. Thus for any $ m\\in \\mathbb{N}$, \nwe have from equation \\eqref{76354433} that\n$$\n\\begin{aligned}\n \\int_{G^\\wedge} |g_m(\\xi)|^2 d\\nu(\\xi)&= \\int_{G^\\wedge} (1 + \\gamma(\\xi)^2) \\, |[T(g_m)](\\xi)|^2 d\\nu(\\xi) \n \\\\[5pt]\n & = \\int_{G^\\wedge} (1 + \\gamma(\\xi)^2) \\, |\\widehat{f_m(\\xi)}|^2 d\\nu(\\xi)< \\infty. \n\\end{aligned}\n$$\nThen, the sequence $\\{g_m\\}$ is uniformly bounded in $L^2(G^\\wedge)$. Since the operator $T$ \nis compact, there exist $\\{m_j\\}_{j \\in \\mathbb{N}}$ and $g \\in L^2(G^\\wedge)$, such that \n$$\n \\widehat{f_{m_j}}= T(g_{m_j}) \\xrightarrow[j \\to \\infty]{} g \\quad \\text{in $L^2(G^\\wedge)$}.\n$$\nConsequently, the subsequence $\\{f_{m_j}\\}$ converges to $g^\\vee$ strongly in $L^2(G)$,\nand thus $H^1_\\gamma(G)$ is compactly embedded in $L^2(G)$.\n\\end{proof}\n\n\\begin{remark}\nDue to Theorem \\ref{876876872}, the compactness characterization, that is\n$H^1_\\gamma(G) \\subset \\subset L^2(G)$, follows once\nwe establish conditions under which the operator $T$ is compact. \nThe study of the dual group $G^\\wedge$ and of $\\gamma$ will be essential for this characterization. \n\\end{remark}\n\n\\medskip\nRecall from \\eqref{CARACGCOMP} item $(ii)$ that $G^\\wedge$ is discrete since $G$ is compact. \nThen, $\\nu$ is a counting measure, and $\\nu(\\{\\chi\\})= 1$ for each singleton $\\{\\chi\\}$, $\\chi \\in G^\\wedge$. \nNow, for any $\\chi \\in G^\\wedge$ fixed, we \ndefine the point mass function at $\\chi$ by \n$$\n \\delta_{\\chi}(\\xi):= 1_{\\{\\chi\\}}(\\xi),\n\\quad \n\\text{for each $\\xi \\in G^\\wedge$}. \n$$\nHence the set $\\{\\delta_\\xi \\; ; \\; \\xi \\in G^\\wedge \\}$\nis an orthonormal basis for $L^2(G^\\wedge)$. Indeed, we first show the orthonormality. 
\nFor each $\\chi, \\pi \\in G^\\wedge$, we have \n\\begin{equation}\n\\label{87987948744}\n \\langle \\delta_\\chi, \\delta_\\pi \\rangle_{L^2(G^\\wedge)}\n = \\int_{G^\\wedge} \\delta_\\chi(\\xi) \\; \\delta_\\pi(\\xi) \\, d\\nu(\\xi)= \\left\\{\n\t\\begin{array}{ccl}\n\t\t1, & \\text{if} & \\chi = \\pi, \n\t\t\\\\\n\t\t0, & \\text{if} & \\chi \\not= \\pi.\n\t\\end{array}\t\t \n\\right.\n\\end{equation}\nNow, let us show the density, that is $\\overline{{\\rm span}\\{\\delta_\\xi \\; ; \\; \\xi \\in G^\\wedge \\}}= L^2(G^\\wedge)$, or equivalently $\\{\\delta_\\xi \\; ; \\; \\xi \\in G^\\wedge \\}^\\perp= \\{0\\}$. \nFor any $w \\in \\{\\delta_\\xi \\; ; \\; \\xi \\in G^\\wedge \\}^\\perp$, we obtain \n$$\n 0 =\\langle \\delta_\\xi, w \\rangle_{L^2(G^\\wedge)} \n = \\int_{G^\\wedge} \\delta_\\xi(\\chi) \\overline{w(\\chi)} \\, d\\nu(\\chi) \n = \\int_{ \\{ \\xi \\}} \\overline{w(\\chi)} \\, d\\nu(\\chi) = \\overline{w(\\xi)}\n$$\nfor any $\\xi \\in G^\\wedge$, hence $w= 0$, which proves the density. \n\n\\medskip\nFrom the above discussion, it is important to study the operator $T$\non elements of the set $\\{\\delta_\\xi \\; ; \\; \\xi \\in G^\\wedge \\}$.\nThen, we have the following \n\\begin{theorem}\n\\label{876876876GG}\nIf the operator $T$ defined by \\eqref{TCOMP} is compact, then $G^\\wedge$ is an enumerable set. \n\\end{theorem}\n\\begin{proof} 1. First, let $\\{\\delta_\\xi \\; ; \\; \\xi \\in G^\\wedge \\}$ be the orthonormal basis for $L^2(G^\\wedge)$,\nand $T$ the operator defined by \\eqref{TCOMP}. Then, the function \n$\\delta_\\xi \\in L^2(G^\\wedge)$ is an eigenfunction of $T$ \ncorresponding to the eigenvalue $(1+\\gamma(\\xi)^2)^{-1\/2}$, that is $\\delta_\\xi \\neq 0$, and\n\\begin{equation}\n\\label{87486tydg}\n T(\\delta_\\xi)= \\frac{\\delta_\\xi}{\\sqrt{1+\\gamma(\\xi)^2}}.\n\\end{equation}\n\n\\medskip\t\t\n2. Now, since $T$ is compact, for each $n \\in \\mathbb{N}$ the orthonormal set of eigenfunctions $\\{\\delta_\\xi \\; ; \\; (1+\\gamma(\\xi)^2)^{-1\/2} \\geq 1\/n \\}$ must be finite. Therefore, from \\eqref{87486tydg}, the basis $\\{\\delta_\\xi \\; ; \\; \\xi \\in G^\\wedge \\}$ is the union over $n \\in \\mathbb{N}$ of these finite sets, and hence it is enumerable. 
\nOn the other hand, the function $\\xi \\in G^\\wedge \\mapsto \\delta_\\xi \\in L^2(G^\\wedge)$ is injective, hence $G^\\wedge$ is enumerable. \n\\end{proof}\n\n\\begin{corollary}\nIf the operator $T$ defined by \\eqref{TCOMP} is compact, then\n$L^2(G)$ is separable. \n\\end{corollary}\n\t\n\\begin{proof} First, the Hilbert space $L^2(G^\\wedge)$ is separable, since $\\{\\delta_\\xi \\; ; \\; \\xi\\in G^\\wedge\\}$\nis an enumerable orthonormal basis of it. Then, the proof follows applying the Plancherel Theorem. \n\\end{proof}\n\n\\begin{corollary}\nLet $G_B$ be the Bohr compactification of $\\mathbb{R}^n$ (see A. Pankov \\cite{Pankov}). \nThen $H^1_\\gamma(G_B)$ is not compactly embedded in $L^2(G_B)$.\n\\end{corollary}\n\t\t\n\\begin{proof} Indeed, $G_B^\\wedge$ is not enumerable.\n\\end{proof}\nConsequently, the enumerability of $G^\\wedge$ is a necessary condition for the operator $T$ to be compact, but it is not \nsufficient, as shown by Example \\ref{NOSUFF} below. Indeed, compactness might depend on the chosen $\\gamma$, see also \nExample \\ref{NOSUFF10}. \n\n\\medskip\nTo follow, we first recall the \n\\begin{definition}\nLet $G$ be a group (not necessarily a topological one) and $S$ a subset of it. \nThe smallest subgroup of $G$ containing every element of $S$, denoted $\\langle S \\rangle$, is called the subgroup \ngenerated by $S$. Equivalently, see Dummit, Foote \\cite{DummitFoote} p.63, \n$$\n \\langle S \\rangle= \\big\\{ g^{\\varepsilon_1}_1 g^{\\varepsilon_2}_2 \\ldots g^{\\varepsilon_k}_k \/ \n \\text{$k \\in \\mathbb{N}$ and for each $j$, $g_j \\in S, \\varepsilon_j= \\pm 1$} \\big\\}.\n$$\nMoreover, \nif a group $G= \\langle S \\rangle$, then $S$ is called a generator of $G$, and\nin this case, when $S$ is finite, $G$ is called finitely generated. 
\n\\end{definition} \t\n\t\n\\begin{theorem}\n\\label{876876876} \nIf the operator $T$ defined by \\eqref{TCOMP} is compact and\nthere exists a generator of $G^\\wedge$ such that $\\gamma$ is bounded on it, \nthen $G^\\wedge$ is finitely generated. \n\\end{theorem}\n\t\t\n\\begin{proof} \nLet $S_0$ be a generator of $G^\\wedge$, such that $\\gamma$ is bounded on it. \nTherefore, there exists $d_0 \\geq 0$ such that, \n$$\n \\text{for each $\\xi \\in S_0$, $\\; \\gamma(\\xi) \\leq d_0$}. \n$$\nNow, since $T$ is compact and $\\Vert T \\Vert \\leq 1$, for each $0 < c \\leq 1$ \nthe set of eigenvectors \n$$\n \\Big\\{ \\delta_\\xi \\; ; \\; \\xi \\in G^\\wedge \\;\\; \\text{and} \\;\\; \\frac{1}{\\sqrt{1 + \\gamma(\\xi)^2}} \\geq c \\Big\\}\n \\equiv \n \\Big\\{ \\delta_\\xi \\; ; \\; \\xi \\in G^\\wedge \\;\\; \\text{and} \\;\\; \\gamma(\\xi) \\leq\\sqrt{\\frac{1}{c^2} - 1} \\Big\\}\n$$\nis finite, where we have used the Spectral Theorem for compact self-adjoint operators. Therefore, choosing $c= (1+d_0^2)^{-1\/2}$, since \n$$\t\t\t\t\n \\left\\{ \\delta_\\xi \\; ; \\; \\xi \\in S_0 \\right\\} \\subset \n \\left\\{ \\delta_\\xi \\; ; \\; \\xi \\in G^\\wedge \\;\\; \\text{and} \\;\\; \\gamma(\\xi) \\leq d_0 \\right\\}\n$$\nit follows that $S_0$ is a finite set, and thus $G^\\wedge$ is finitely generated. \n\\end{proof}\n\n\\begin{example}[Infinite enumerable Torus] \n\\label{NOSUFF}\nLet us recall the Sobolev space $H^1_\\gamma(\\mathbb{T}^\\mathbb N)$, where $\\mathbb{T}^\\mathbb N$ is the infinite enumerable Torus. \nWe claim that: $H^1_\\gamma(\\mathbb{T}^\\mathbb N)$ is not compactly embedded in $ L^2(\\mathbb{T}^\\mathbb N)$, \nfor $\\gamma$ defined in Example \\ref{6576dgtftdefd}. \nIndeed, given $k \\in \\mathbb N$ we define $1_k \\in \\mathbb{Z}^\\mathbb N$, such that it is zero for any coordinate \n$\\ell \\neq k$, and one in the $k$-th coordinate. 
Therefore, the set \n$$\n S_0 := \\{ \\xi_{1_k} \\; ; \\; k \\in \\mathbb N \\}\n$$\nis an infinite generator of the dual group $(\\mathbb{T}^\\mathbb N)^\\wedge$. \nSince $\\gamma(\\xi_{1_k}) = 1$ for each $k \\in \\mathbb N$, i.e. $\\gamma$ is bounded on $S_0$, and $(\\mathbb{T}^\\mathbb N)^\\wedge$ is not finitely generated, it follows from Theorem \\ref{876876876}\nthat $T$ is not compact, and hence $H^1_\\gamma(\\mathbb{T}^\\mathbb N)$ is not compactly embedded in $ L^2(\\mathbb{T}^\\mathbb N)$. \n\\end{example}\n\n\\begin{remark} The discussion in Example \\ref{NOSUFF} applies as well to the Sobolev space $H^1_\\gamma(\\mathbb{T}^I)$, where\n$I$ is an index set (enumerable or not). Clearly, the Sobolev space $H^1_\\gamma(\\mathbb{T}^I)$ is not compactly embedded in $ L^2(\\mathbb{T}^I)$ \nwhen $I$ is a non-enumerable index set. Indeed, the set $(\\mathbb{T}^I)^\\wedge$ is not enumerable. \n\\end{remark}\n\nNow, we characterize the condition on $\\gamma: G^\\wedge \\to [0,\\infty)$\nin order for $T$ to be compact. More precisely, let us consider the following property:\n\\begin{equation}\n\\label{ConditionC}\n {\\bf C}. \\quad \\text{For each $d> 0$, the set \n$ \\left\\{ \\xi \\in G^\\wedge \\; ; \\; \\gamma(\\xi) \\leq d \\right\\}$\nis finite}. \n\\end{equation}\n\t\t\t\t\t\t\t\n\\begin{theorem}\n\\label{7864876874}\nIf $\\gamma: G^\\wedge \\to [0,\\infty)$ satisfies ${\\bf C}$, then the operator $T$ defined by \\eqref{TCOMP} is compact. \n\\end{theorem}\n\t\t\n\\begin{proof}\nBy hypothesis, for each $d> 0$ the set $\\{ \\xi \\in G^\\wedge \\; ; \\; \\gamma(\\xi) \\leq d \\}$ is finite, and we have\n$$\n G^\\wedge= \\bigcup_{k \\in \\mathbb{N}} \\left\\{ \\xi \\in G^\\wedge \\; ; \\; \\gamma(\\xi) \\leq k \\right\\}. \n$$\nConsequently, the set $G^\\wedge$ is enumerable and we may write $G^\\wedge= \\{ \\xi_i \\}_{i \\in \\mathbb{N}}$. 
\n\n\\medskip\nAgain, due to condition ${\\bf C}$, for each $c \\in (0,1)$ the set \n\\begin{equation}\n\\label{868768767864120}\n\t\\Big\\{ \\xi \\in G^\\wedge \\; ; \\; \\frac{1}{\\sqrt{1 + \\gamma(\\xi)^2}} \\geq c \\Big\\}\n\\end{equation}\nis finite. Since the function $\\xi \\in G^\\wedge \\mapsto \\delta_\\xi \\in L^2(G^\\wedge)$ is injective,\nthe set $\\{ \\delta_{\\xi_i} \\; ; \\; i\\in \\mathbb{N} \\}$ \nis an enumerable orthonormal basis of eigenvectors for $T$, whose corresponding eigenvalues satisfy \n$$\n \\lim_{i \\to \\infty} \\frac{1}{\\sqrt{1 + \\gamma(\\xi_i)^2}}= 0,\n$$\t\nwhere we have used \\eqref{868768767864120}. Consequently, $T$ is a compact operator. \n\\end{proof}\n\n\\begin{example}[Bis: Infinite enumerable Torus]\n\\label{NOSUFF10}\nThere exists a function $\\gamma_0$ such that \n$H^1_{\\gamma_0}(\\mathbb{T}^\\mathbb N)$ is compactly embedded in $ L^2(\\mathbb{T}^\\mathbb N)$. \nIndeed, we are going to show that $\\gamma_0$ satisfies ${\\bf C}$. \nLet $\\alpha \\equiv (\\alpha_\\ell)_{\\ell \\in \\mathbb{N}}$\nbe a sequence in $\\mathbb R^\\mathbb N$, such that for each $\\ell \\in \\mathbb N$, $\\alpha_\\ell \\geq 0$ and \n\\begin{equation}\n\\label{weight}\n\\lim_{\\ell \\to \\infty} \\alpha_\\ell = +\\infty.\n\\end{equation}\nThen, we define a pseudo-metric in the dual group $(\\mathbb{T}^\\mathbb{N})^\\wedge$ by\n$$\n p_0(\\xi_m, \\xi_n):= 2\\pi \\sum_{\\ell = 1}^\\infty \\alpha_\\ell \\;{\\vert m_\\ell - n_\\ell \\vert}, \n \\quad (m,n \\in \\mathbb{Z}^\\mathbb{N}_{\\rm c}),\n$$\nand consider $\\gamma_0(\\xi_m)= p_0(\\xi_m,1)$. \nThus for each $d> 0$, the set \n$$\n \\{ m \\in \\mathbb{Z}^\\mathbb{N}_{\\rm c} \\; ; \\; \\gamma_0(\\xi_m) \\leq d \\} \\quad \\text{is finite.}\n$$ \nIndeed, from \\eqref{weight} \nthere exists $\\ell_0 \\in \\mathbb N$, such that $\\alpha_\\ell> d$, for each $\\ell \\geq \\ell_0$. 
\nTherefore, if $m \\in \\mathbb{Z}^\\mathbb{N}_{\\rm c}$ and the support of $m$ is not contained \nin $\\{ 1, \\ldots, \\ell_0-1\\}$, that is to say, there exists $\\tilde{\\ell} \\geq \\ell_0$ \nsuch that $m_{\\tilde{\\ell}} \\neq 0$, then \n$$\n\\gamma_0(\\xi_m)= 2\\pi \\sum_{\\ell = 1}^\\infty \\alpha_\\ell \\;{\\vert m_\\ell \\vert}\n\\geq \\alpha_{\\tilde{\\ell}} > d. \n$$\nConsequently, we have \n$$\n \\{ m \\in \\mathbb{Z}^\\mathbb{N}_{\\rm c} \\; ; \\; \\gamma_0(\\xi_m) \\leq d \\} \n \\subset \n \\{ m \\in \\mathbb{Z}^\\mathbb{N}_{\\rm c} \\; ; \\; {\\rm supp} \\ m \\subset \\{ 1, \\ldots, \\ell_0-1\\} \\},\n$$ \nwhich is a finite set. Finally, applying Theorem \\ref{7864876874} we obtain that \nthe Sobolev space \n$H^1_{\\gamma_0}(\\mathbb{T}^\\mathbb N)$ is compactly embedded in $ L^2(\\mathbb{T}^\\mathbb N)$. \n\\end{example}\n\n\\subsubsection{On a class of Quasi-periodic functions}\n\\label{4563tgf5fd3}\n\nIn this section we consider an important class \nof quasi-periodic functions, which \nincludes, for instance, the \nperiodic functions.\n \n \\smallskip\nLet $\\lambda_1,\\lambda_2,\\ldots,\\lambda_m \\in \\mathbb{R}^n$ be vectors which are linearly independent \nover $\\mathbb{Z}$, and consider the following matrix\n\\begin{equation*}\n\\Lambda := {\\left(\n\\begin{array}{c}\n\t\\lambda_1\n\\\\\n\t\\lambda_2\n\\\\\n\t\\vdots \n\\\\\n\t\\lambda_m\n\\end{array}\n\\right)}_{m\\times n}\n\\end{equation*}\nsuch that, for each $d> 0$ the set \n\\begin{equation}\n\\label{7863948tyfedf}\n \\{ k \\in \\mathbb{Z}^m \\; ; \\; {\\vert \\Lambda^T k \\vert} \\leqslant d\\} \\quad \\text{is finite.}\n\\end{equation}\nTherefore, we are considering the class of quasi-periodic functions satisfying \ncondition \\eqref{7863948tyfedf}. This class is not empty: for instance, defining \nthe matrix $B:= \\Lambda \\Lambda^T$, the quasi-periodic functions for which $\\det B> 0$, called here\npositive quasi-periodic functions, belong to it. 
It is not difficult to see that positive quasi-periodic functions \nsatisfy \\eqref{7863948tyfedf}. \nIndeed, it is sufficient to observe that, for each $k \\in \\mathbb{Z}^m$, we have \n$$\n |k|= | B^{-1} B k |= | B^{-1} \\Lambda \\, \\Lambda^T k | \\leq \\|B^{-1}\\| \\|\\Lambda\\| |\\Lambda^T k|. \n$$\nMoreover, since $\\lambda_1,\\lambda_2,\\ldots,\\lambda_m \\in \\mathbb{R}^n$ are linearly independent \nover $\\mathbb{Z}$ (this property does not imply $\\det B> 0$), \nthe dynamical system $\\tau: \\mathbb{R}^n \\times \\mathbb{T}^m \\to \\mathbb{T}^m$, given by \n\\begin{equation}\n\\label{6973846tyd4f54e54}\n \\tau(x)\\omega := \\omega + \\Lambda x - \\left\\lfloor \\omega + \\Lambda x \\right\\rfloor\n\\end{equation}\nis ergodic. \n\n\\medskip\nNow we remark that the application \n${ \\varphi : \\mathbb{R}^n \\to \\mathbb{T}^m }$, $ \\varphi(x) := \\Lambda x - \\left\\lfloor \\Lambda x \\right\\rfloor$,\nis a continuous homomorphism of groups. Then, we have\n$$\n \\tau(x)\\omega = \\varphi(x) \\omega \\equiv \\omega + \\Lambda x - \\left\\lfloor \\omega + \\Lambda x \\right\\rfloor. \n$$\nConsequently, under the conditions of the previous sections, we obtain for each \n$ k\\in \\mathbb{Z}^m$\n\\begin{equation*}\n\t\\gamma(\\xi_k)= 2\\pi {\\vert \\Lambda^T k \\vert},\n\\end{equation*}\nand applying Theorem \\ref{7864876874} (recall \\eqref{7863948tyfedf}),\nit follows that \n\\begin{equation*}\n\tH^1_\\gamma {\\left( \\mathbb{T}^m \\right)} \\subset \\! \\subset L^2{\\left( \\mathbb{T}^m \\right)}.\n\\end{equation*}\nTherefore, given a stochastic deformation $\\Phi$, we have $\\mathcal{H}_\\Phi \\subset \\! \\subset \\mathcal{L}_\\Phi$\nfor the class of quasi-periodic functions satisfying \\eqref{7863948tyfedf}, \nand the existence of a solution to Bloch's spectral cell equation follows. 
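The finiteness condition \eqref{7863948tyfedf} can also be illustrated numerically. The sketch below (plain Python, purely illustrative; the frequency matrix with rows $\lambda_1=(1,0)$ and $\lambda_2=(\sqrt{2},1)$ is a hypothetical positive quasi-periodic example, since $\det B= \det \Lambda\Lambda^T= 1> 0$) enumerates the set $\{k \in \mathbb{Z}^2 \, ; \, |\Lambda^T k| \leq d\}$ over growing boxes and observes that the count stabilizes, consistent with the finiteness guaranteed by the bound $|k| \leq \|B^{-1}\|\|\Lambda\||\Lambda^T k|$:

```python
import math
import itertools

# Hypothetical frequency matrix Lambda with rows lambda_1 = (1, 0) and
# lambda_2 = (sqrt(2), 1); here det(Lambda Lambda^T) = 1 > 0, so the set
# {k in Z^2 : |Lambda^T k| <= d} should be finite for every d > 0.
L1 = (1.0, 0.0)
L2 = (math.sqrt(2.0), 1.0)

def norm_LT_k(k1, k2):
    # |Lambda^T k|, where Lambda^T k = k1*lambda_1 + k2*lambda_2 in R^2
    v = (k1 * L1[0] + k2 * L2[0], k1 * L1[1] + k2 * L2[1])
    return math.hypot(v[0], v[1])

def count(d, box):
    # number of lattice points k in [-box, box]^2 with |Lambda^T k| <= d
    return sum(1 for k1, k2 in itertools.product(range(-box, box + 1), repeat=2)
               if norm_LT_k(k1, k2) <= d)

# For d = 5 every admissible k satisfies |k1| <= 8 and |k2| <= 5, so the
# count no longer changes once the box is large enough.
counts = [count(5.0, box) for box in (10, 20, 40)]
assert counts[0] == counts[1] == counts[2]
```

Note that for $m= 2$, $n= 1$, $\Lambda^T k= k_1 + \sqrt{2}\,k_2$ takes values arbitrarily close to zero on nonzero $k$, so the analogous count would keep growing: positivity of $\det B$ is what rules this out.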
\n\n\\subsection{Auxiliary cellular equations}\n\\label{ACE}\n\n\nThe proposition below, which is an immediate consequence of Theorem~\\ref{768746hughjg576},\ngives us the necessary ingredients to deduce from \nthe cell equation~\\eqref{92347828454trfhfd4rfghjls} other equations (called here auxiliary cellular equations)\nthat will be essential in our homogenization analysis.\n\t\\begin{proposition}\n\t\\label{2783546tydfh}\n\t\tGiven ${ \\theta \\in \\mathbb{R}^n }$, let $\\big(\\lambda(\\theta),\\Psi(\\theta)\\big)$ be a spectral point of the cell equation~\\eqref{92347828454trfhfd4rfghjls}. Suppose that \n\t\tfor some $\\theta_0\\in\\mathbb R^n$ the corresponding eigenvalue ${ \\lambda(\\theta_0) }$ has finite multiplicity. Then, there exists a neighborhood \n\t\t${ {\\mathcal{U}} \\subset \\mathbb{R}^n }$ of ${ \\theta_0 }$, such that the following functions \t\t\n\t\t\\begin{equation*}\n\t\t\t\\theta \\in {\\mathcal{U}} \\mapsto \\Psi(\\theta) \\in \\mathcal{H}_\\Phi \\;\\;\\; \\text{and} \\;\\;\\; \\theta \\in {\\mathcal{U}} \\mapsto \\lambda(\\theta) \\in \\mathbb{R}-\\{0\\},\n\t\t\\end{equation*}\n\t\tare analytic.\n\t\\end{proposition}\n\t\n\nNow, introducing the operator ${ \\mathbb{A}(\\theta) }$ (${ \\theta \\in \\mathbb{R}^n }$), defined on $\\mathcal{H}_\\Phi$ by \n\n\t\\begin{eqnarray*}\n&&\\mathbb{A}(\\theta)[F] = -({\\rm div}_{\\! z} + 2i\\pi \\theta) {\\Big( A {\\left( \\Phi^{-1}(z, \\omega), \\omega\\right)} {(\\nabla_{\\!\\! z} + 2i\\pi\\theta)} F \\Big)} \\\\\n&&\\qquad\\qquad\\qquad\\qquad+ V {\\left( \\Phi^{-1}(z, \\omega), \\omega \\right)} F - \\lambda(\\theta) F,\n\t\\end{eqnarray*}\nand writing $\\theta=(\\theta_1,\\cdots, \\theta_n)$, we obtain for $k= 1, \\ldots, n$, \t\n\\begin{eqnarray}\n\\label{8654873526rtgdrfdrfdrfrd4}\n&& \\mathbb{A}(\\theta) {\\left[ \\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_k} \\right]}\n=({\\rm div}_{\\! 
z} + 2i\\pi \\theta) {\\Big( A {\\left( \\Phi^{-1}(z, \\omega), \\omega \\right)} {( 2i\\pi e_k \\Psi(\\theta) )} \\Big)}\\nonumber\\\\\n&&\\qquad\\qquad+{( 2i\\pi e_k )} {\\Big( A {\\left( \\Phi^{-1}(z, \\omega), \\omega \\right)} {( \\nabla_{\\!\\! z} + 2i\\pi \\theta )}\\Psi(\\theta) \\Big)}\\nonumber\\\\\n&&\\hspace{6.0cm}+\\frac{\\partial \\lambda}{\\partial \\theta_k}(\\theta) \\Psi(\\theta),\n\\end{eqnarray}\nwhere $\\{e_k{\\}}_{1\\le k\\le n}$ is the canonical basis of $\\mathbb R^n$. \nThe equation~\\eqref{8654873526rtgdrfdrfdrfrd4} is called the {\\it first auxiliary cellular equation} (or f.a.c. equation, in short). In the same way, we have \nfor $k, \\ell= 1, \\ldots, n$, \n\\begin{eqnarray}\\label{876786uydsytfgbnvbvbbv}\n&&\\mathbb{A}(\\theta) {\\left[ \\frac{\\partial^2 \\Psi(\\theta)}{\\partial \\theta_\\ell \\, \\partial \\theta_k} \\right]} = \n({\\rm div}_{\\! z} + 2i\\pi \\theta) {\\Big( A {\\left( \\Phi^{-1}(z, \\omega), \\omega \\right)} { 2i\\pi e_\\ell \\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_k} } \\Big)}\n\\nonumber\\\\\n&&\\qquad\\qquad+({\\rm div}_{\\! z} + 2i\\pi \\theta) {\\Big(A {\\left( \\Phi^{-1}(z, \\omega), \\omega \\right)} {2i\\pi e_k \\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_\\ell}} \\Big)}\n\\nonumber\\\\\n&&\\qquad\\qquad+ {(2i\\pi e_k)} {\\Big( A {\\left( \\Phi^{-1}(z, \\omega), \\omega \\right)} {( \\nabla_{\\!\\! z} + 2i\\pi \\theta )} \\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_\\ell} \\Big)}\n\\nonumber\\\\\n&&\\qquad\\qquad+ {(2i\\pi e_\\ell)} {\\Big( A {\\left( \\Phi^{-1}(z, \\omega), \\omega \\right)} {( \\nabla_{\\!\\! 
z} + 2i\\pi \\theta )} \\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_k} \\Big)}\n\\nonumber\\\\\n&&\\qquad\\qquad\\qquad+ {(2i\\pi e_k)} {\\Big(A {\\left( \\Phi^{-1}(z, \\omega), \\omega \\right)} {\\left( 2i\\pi e_\\ell \\Psi(\\theta) \\right)} \\Big)}\\\\\n&&\\qquad\\qquad\\qquad\\qquad+ {(2i\\pi e_\\ell)} {\\Big(A {\\left( \\Phi^{-1}(z, \\omega), \\omega \\right)} {\\left( 2i\\pi e_k \\Psi(\\theta) \\right)} \\Big)}\\nonumber\\\\\n&&\\qquad\\qquad\\qquad\\qquad+\\frac{\\partial \\lambda(\\theta)}{\\partial \\theta_k} \\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_\\ell} \n+ \\, \\frac{\\partial \\lambda(\\theta)}{\\partial \\theta_\\ell} \\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_k} + \\frac{\\partial^2 \\lambda(\\theta)}{\\partial \\theta_\\ell \\, \\partial \\theta_k} \\Psi(\\theta)\\nonumber,\n\\end{eqnarray}\nwhich we call the {\\it second auxiliary cellular equation} (or s.a.c. equation, in short). \n\n\\medskip\nIn order to make clear in which sense the auxiliary cellular equations are understood, we note that if ${ G\\in\\mathcal{H}_\\Phi }$ then the variational formulation of the f.a.c. \nequation~\\eqref{8654873526rtgdrfdrfdrfrd4} is given by \n\\begin{eqnarray}\\label{hjhkjhjkhggd0989874}\n&&\\int_\\Omega \\int_{\\Phi ([0,1)^n, \\omega)}\\Big\\{\nA {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} {\\left( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\right)} \\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_k} \\cdot \\overline{ {\\left( \\nabla_{\\!\\! 
z} + 2i\\pi \\theta \\right)} G } \\nonumber\n\\\\[5pt]\n&&\\qquad\\qquad+V {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} \\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_k} \\, \\overline{ G }-\\lambda(\\theta)\n\\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_k} \\, \\overline{ G }\\Big\\} \\, dz \\, d\\mathbb{P}(\\omega)\\nonumber\n\\\\[5pt]\n&&\\qquad\\qquad\\qquad=:\\Big{\\langle}\\mathbb{A}(\\theta) {\\left[ \\frac{\\partial \\Psi(\\theta)}{\\partial \\theta_k} \\right]}, G\\Big{\\rangle}\\\\\n&&\\qquad=-\\int_\\Omega \\int_{\\Phi ([0,1)^n, \\omega)}\\Big\\{ A {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} {\\left( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\right)} \\Psi(\\theta) \\cdot \\overline{ {(2i\\pi e_k G)} } \\nonumber\n\\\\[5pt]\n&&\\qquad\\qquad\\qquad\\qquad-A {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} {(2i\\pi e_k \\Psi(\\theta))} \\cdot \\overline{ {\\left( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\right)} G }\\nonumber\n\\\\[5pt]\n&&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad+\\frac{\\partial \\lambda(\\theta)}{\\partial \\theta_k}\\,\\Psi(\\theta) \\, \\overline{ G }\\Big\\} \\, dz \\, d\\mathbb{P}(\\omega).\\nonumber\n\\end{eqnarray}\nSimilar reasoning can be made with the s.a.c. equation~\\eqref{876786uydsytfgbnvbvbbv}. \n\n\\medskip\nIn the following, we highlight an important fact that is fundamental to determine the hessian nature of the effective tensor in our homogenization analysis concerning the Schr\\\"odinger equation~\\eqref{jhjkhkjhkj765675233}. This fact is brought out by choosing \n${ \\theta \\in {\\mathcal{U}} }$, ${ k\\in \\{1, \\ldots,n \\} }$ and defining $\\Lambda_k (z,\\omega, \\theta) := \\frac{1}{2i\\pi} \\frac{\\partial \\Psi}{\\partial \\theta_k} (z,\\omega, \\theta)$. Hence, \ntaking ${ \\Psi(\\theta) }$ as a test function in the s.a.c. 
equation~\\eqref{876786uydsytfgbnvbvbbv}, we get \n\\begin{eqnarray}\\label{hkjlhjklhljkhuytyiufsd4}\n&&\\frac{1}{4\\pi^2} \\frac{\\partial^2 \\lambda(\\theta)}{\\partial \\theta_\\ell \\, \\partial \\theta_k} \\int_\\Omega \\int_{\\Phi ([0,1)^n, \\omega)} {\\vert \\Psi(\\theta) \\vert}^2 \\, dz \\, d\\mathbb{P}(\\omega)\n\\nonumber\n\\\\\n&&\\qquad= \\int_\\Omega \\int_{\\Phi ([0,1)^n, \\omega)} \\Big\\{-A {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} {(e_\\ell \\Lambda_k(\\theta))} \\cdot \\overline{ {\\left( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\right)} \\Psi(\\theta) }\\nonumber\\\\\n&&\\qquad\\qquad\\qquad\\,-A {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} {(e_k \\Lambda_\\ell(\\theta))} \\cdot \\overline{ {\\left( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\right)} \\Psi(\\theta) }\\nonumber\n\\\\\n&&\\qquad\\qquad\\qquad\\,+A {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} {\\left( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\right)} \\Lambda_k(\\theta) \\cdot \\overline{ {(e_\\ell \\, \\Psi(\\theta))} }\n\\nonumber\\\\\n&&\\qquad\\qquad\\qquad\\,+A {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} {\\left( \\nabla_{\\!\\! 
z} + 2i\\pi \\theta \\right)} \\Lambda_\\ell(\\theta) \\cdot \\overline{ {(e_k \\, \\Psi(\\theta))} }\n\\nonumber\\\\\n &&\\qquad\\qquad\\qquad\\qquad\\,+A {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} {(e_k \\Psi(\\theta))} \\cdot \\overline{ {(e_\\ell \\, \\Psi(\\theta))} } \\\\\n &&\\qquad\\qquad\\qquad\\qquad\\,+A {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} {(e_\\ell \\Psi(\\theta))} \\cdot \\overline{ {(e_k \\, \\Psi(\\theta))} }\\nonumber\\\\\n&&\\qquad\\qquad\\qquad\\qquad +\\frac{1}{2i\\pi}\\Big(\\frac{\\partial \\lambda(\\theta)}{\\partial \\theta_\\ell}\\Lambda_k(\\theta)+\\frac{\\partial \\lambda(\\theta)}{\\partial \\theta_k}\\Lambda_\\ell(\\theta)\n\\Big)\\, \\overline{\\Psi(\\theta)}\\Big\\} \\, dz \\, d\\mathbb{P}(\\omega).\\nonumber\n\\end{eqnarray}\n\nOn the other hand, using $\\Lambda_k (z,\\omega, \\theta)$ as a test function in the f.a.c. equation~\\eqref{8654873526rtgdrfdrfdrfrd4} and due to Theorem~\\ref{648235azwsxqdgfd}, \nwe arrive at \n\\begin{equation}\n\\label{4576gdcrfvjc46we}\n\t\t\\begin{array}{l}\n\t\t\t\\displaystyle \\int_{\\mathbb{R}^n} A {\\left( \\Phi^{-1} {\\left( \\frac{x}{\\varepsilon}, \\omega \\right)},\\omega \\right)} {\\left( \\nabla + 2i\\pi \\frac{\\theta}{\\varepsilon} \\right)} \\Lambda_{k,\\varepsilon} (\\theta) \\cdot \\overline{ {\\left( \\nabla + 2i\\pi \\frac{\\theta}{\\varepsilon} \\right)} \\varphi } \\, dx \\\\ [12pt]\n\t\t\t\\displaystyle + \\frac{1}{\\varepsilon^2}\\int_{\\mathbb{R}^n} V {\\left( \\Phi^{-1} {\\left( \\frac{x}{\\varepsilon}, \\omega \\right)},\\omega \\right)} \\, \\Lambda_{k,\\varepsilon} (\\theta) \\, \\overline{ \\varphi } \\, dx - \\frac{\\lambda(\\theta)}{\\varepsilon^2}\\int_{\\mathbb{R}^n} \\Lambda_{k,\\varepsilon} (\\theta) \\, \\overline{ \\varphi } \\, dx \\\\ [12pt]\n\t\t\t\\hspace{2cm} \\displaystyle = - \\frac{1}{\\varepsilon}\\int_{\\mathbb{R}^n} A {\\left( \\Phi^{-1} {\\left( \\frac{x}{\\varepsilon}, \\omega \\right)},\\omega \\right)} {\\left( \\nabla + 2i\\pi 
\\frac{\\theta}{\\varepsilon} \\right)} \\Psi_\\varepsilon (\\theta) \\cdot \\overline{ {(e_k \\varphi)} } \\, dx \\\\ [12pt]\n\t\t\t\\hspace{2.5cm} \\displaystyle - \\frac{1}{\\varepsilon}\\int_{\\mathbb{R}^n} A {\\left( \\Phi^{-1} {\\left( \\frac{x}{\\varepsilon}, \\omega \\right)},\\omega \\right)} {(e_k \\Psi_\\varepsilon (\\theta) )} \\cdot \\overline{ {\\left( \\nabla + 2i\\pi \\frac{\\theta}{\\varepsilon} \\right)}\\varphi } \\, dx \\\\ [12pt]\n\t\t\t\\hspace{2.75cm} \\displaystyle + \\frac{1}{\\varepsilon^2} \\frac{1}{2i\\pi}\\frac{\\partial \\lambda}{\\partial \\theta_k}(\\theta) \\int_{\\mathbb{R}^n} \\Psi_\\varepsilon (\\theta) \\, \\overline{ \\varphi } \\, dx,\n\t\t\\end{array}\n\t\t\\end{equation}\nfor any ${ \\varphi \\in C^\\infty_{\\rm c}(\\mathbb{R}^n) }$ and a.e ${ \\omega \\in \\Omega }$. Here, \n$\\Lambda_{k,\\varepsilon} (x,\\omega,\\theta) := \\Lambda_k {\\left( \\frac{x}{\\varepsilon}, \\omega, \\theta \\right)}$.\n\nProceeding in a similar way with the cell equation~\\eqref{92347828454trfhfd4rfghjls}, we can find \n\\begin{multline}\\label{nbvnbvxzchgfs54}\n \\int_{\\mathbb{R}^n} A {\\left( \\Phi^{-1} {\\left( \\frac{x}{\\varepsilon}, \\omega \\right)},\\omega \\right)} {\\left( \\nabla + 2i\\pi \\frac{\\theta}{\\varepsilon} \\right)} \\Psi_\\varepsilon (\\theta) \\cdot \\overline{ {\\left( \\nabla + 2i\\pi \\frac{\\theta}{\\varepsilon} \\right)} \\varphi } \\, dx \n \\\\\n \\hspace{-3cm} + \\frac{1}{\\varepsilon^2}\\int_{\\mathbb{R}^n} V {\\left( \\Phi^{-1} {\\left( \\frac{x}{\\varepsilon}, \\omega \\right)},\\omega \\right)} \\, \\Psi_\\varepsilon (\\theta) \\, \\overline{ \\varphi } \\, dx \n - \\frac{\\lambda(\\theta)}{\\varepsilon^2}\\int_{\\mathbb{R}^n} \\Psi_\\varepsilon (\\theta) \\, \\overline{ \\varphi } \\, dx = 0,\n\\end{multline}\nfor any ${ \\varphi \\in C^\\infty_{\\rm c}(\\mathbb{R}^n) }$ and a.e. 
${ \\omega \\in \\Omega }$.\n\n\\addcontentsline{toc}{section}{Part II: Asymptotic Equations}\n\\section*{Part II: Asymptotic Equations}\n\\section{\\!\\! On Schr\\\"odinger Equations Homogenization}\n\\label{HomoSchEqu}\n\nIn this section, we describe the asymptotic behaviour of the family of solutions \n$\\{ u_{\\varepsilon}{\\}_{\\varepsilon>0}}$ of the equation~\\eqref{jhjkhkjhkj765675233}; this is the content of Theorem~\\ref{876427463tggfdhgdfgkkjjlmk} below. It generalizes the result of Allaire and Piatnitski \\cite{AllairePiatnitski}, who consider the \nanalogous problem in the periodic setting. Our scenario is quite different from theirs: \nhere, the coefficients of equation~\\eqref{jhjkhkjhkj765675233} are random perturbations, accomplished by \nstochastic diffeomorphisms, of stationary functions. \nSince the two-scale convergence technique is well suited to the \nasymptotic analysis of linear operators, we make use of it in a way analogous to~\\cite{AllairePiatnitski}. \nHowever, the presence of the stochastic deformation in the coefficients\nbrings out several complications, which we were able to overcome. \n\n\\medskip\nTo begin, we need some basic a priori estimates for the solution of the Schr\\\"odinger \nequation \\eqref{jhjkhkjhkj765675233}. Then, we have the following \n\n\\begin{lemma}[Energy Estimates]\n\\label{63457rf2wertgh}\n\tAssume that the conditions \\eqref{ASSUM1}, \\eqref{ASSUM2} hold and let \n${ u_\\varepsilon }\\in C\\big([0,T);H^1(\\mathbb R^n)\\big)$ be the solution of the equation \\eqref{jhjkhkjhkj765675233} with initial data \n\t$u_{\\varepsilon}^0$. Then, for all ${ t\\in [0,T) }$ and a.e. 
${ \\omega \\in \\Omega }$, the following a priori estimates hold:\n\t\\begin{itemize}\n\t\t\\item[(i)] $($Energy Conservation$.)$ ${ \\displaystyle\\int_{\\mathbb{R}^n} {\\vert u_\\varepsilon(t,x,\\omega) \\vert}^2 dx = \\int_{\\mathbb{R}^n} {\\vert u_{\\varepsilon}^0(x,\\omega) \\vert}^2 dx }$.\n\t\t\\item[(ii)] $( \\varepsilon \\nabla-$ Estimate$.)$\n\t\t\\begin{eqnarray*}\n\t\t \\int_{\\mathbb{R}^n} |\\varepsilon\\nabla u_{\\varepsilon}(t,x,\\omega)|^2\\, dx\n\t\t\\le C\\int_{\\mathbb R^n}\\Big\\{|\\varepsilon\\nabla u_{\\varepsilon}^0(x,\\omega)|^2+|u_{\\varepsilon}^0(x,\\omega)|^2\\Big\\} \\,dx,\n\t \t\\end{eqnarray*}\nwhere ${ C:=C\\big(\\Lambda,{\\Vert A \\Vert}_\\infty,{\\Vert V \\Vert}_\\infty,{\\Vert U \\Vert}_\\infty} \\big) $ is a positive constant which does not depend on $\\varepsilon > 0$.\n\t \t\\end{itemize}\n\\end{lemma}\n\\begin{proof}\n1. If we multiply the Eq. \\eqref{jhjkhkjhkj765675233} by $\\overline{u_{\\varepsilon}}$ and take the imaginary part, then we obtain \n$$\n\\frac{d}{dt}\\int_{\\mathbb R^n}|u_{\\varepsilon}(t,x,\\omega)|^2\\,dx=0,\n$$\nwhich gives the proof of the item $(i)$.\n\n2. Now, multiplying the Eq. 
\\eqref{jhjkhkjhkj765675233} by $\\overline{\\partial_t u_{\\varepsilon}}$ and taking the real part, we get \n\\begin{eqnarray*}\n&&\\frac{1}{2}\\frac{d}{dt}\\int_{\\mathbb R^n}\\Big\\{\\varepsilon^2A\\left(\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\right)\\nabla u_{\\varepsilon}\\cdot \\nabla \\overline{u_{\\varepsilon}}\\\\\n&&\\qquad\\qquad+\\Big( V\\left(\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\right)+\\varepsilon^2 U\\left(\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right),\\omega\\right)\\Big)|u_{\\varepsilon}|^2 \\Big\\}\\,dx=0,\n\\end{eqnarray*}\nwhich proves item $(ii)$.\n\\end{proof}\n\nIt is important to recall the following facts, which will be needed in this section:\n\\begin{itemize}\n\\item The initial data of the equation \\eqref{jhjkhkjhkj765675233} is assumed to be well-prepared, that is, \nfor $(x,\\omega) \\in \\mathbb{R}^n \\! \\times \\! \\Omega$, and \n${ \\theta^\\ast \\in \\mathbb{R}^n }$\n\\begin{equation}\n\\label{WellPreparedness}\n\tu_\\varepsilon^0(x,\\omega) = e^{2i\\pi \\frac{\\theta^\\ast \\cdot x}{\\varepsilon}}\\psi \\left( \\Phi^{-1}(x\/\\varepsilon,\\omega),\\omega,\\theta^\\ast \\right) \nv^0(x),\n\\end{equation} \nwhere {${ v^0 \\in C_{\\rm c}^\\infty(\\mathbb{R}^n) }$}, and ${ \\psi(\\theta^\\ast) }$ is an eigenfunction of the cell problem \\eqref{92347828454trfhfd4rfghjls}. \n\\item Using the Ergodic Theorem, it is easily seen that the sequences \n$${ \\{u_{\\varepsilon}^0(\\cdot,\\omega)\\}_{\\varepsilon >0} } \\quad \\text{and} \\quad \n{ \\{\\varepsilon \\nabla u_{\\varepsilon}^0(\\cdot,\\omega) \\}_{\\varepsilon > 0} }$$ \nare bounded in ${ L^2(\\mathbb{R}^n) }$ and ${ [L^2(\\mathbb{R}^n)]^n }$, respectively. 
\n\\end{itemize}\n\nOne observes that the main importance of the well-preparedness of the initial data is the following: trivially, the sequence of solutions \n$\\{ u_{\\varepsilon}{\\}_{\\varepsilon>0}}$ of the equation~\\eqref{jhjkhkjhkj765675233} two-scale converges to zero. However, if the initial data is well-prepared, we are able \nto correct the oscillations present in $u_{\\varepsilon}$ in such a way that, after this correction, the weak convergence can be strengthened to convergence to the solution of a nontrivial homogenized \nSchr\\\"odinger equation. For instance, we refer the reader to Allaire, Piatnitski \\cite{AllairePiatnitski}, Bensoussan, Lions, Papanicolaou \\cite[Chapter 4]{BensoussanLionsPapanicolaou} \nand Poupaud, Ringhofer \\cite{PoupaudRinghofer}. \n\n\\subsection{The Abstract Theorem.}\n\\label{ATH}\n\nNext, we establish an abstract homogenization theorem for Schr\\\"odinger equations.\n\n\\begin{theorem}\n\\label{876427463tggfdhgdfgkkjjlmk}\nLet $\\Phi(y,\\omega)$ be a stochastic deformation, and $\\tau:\\mathbb Z^n\\times \\Omega\\to \\Omega$ an ergodic $n-$dimensional dynamical \nsystem. \nAssume that the conditions \\eqref{ASSUM1}, \\eqref{ASSUM2} hold, and that \nthere exists a Bloch frequency ${ \\theta^\\ast \\! \\in \\mathbb{R}^n }$ which is a critical point of $\\lambda(\\cdot)$,\nthat is, ${ \\nabla_{\\!\\! \\theta} \\, \\lambda (\\theta^\\ast) = 0 }$,\nwhere ${ \\lambda (\\theta^\\ast) }$ is a simple eigenvalue of the spectral cell equation~\\eqref{92347828454trfhfd4rfghjls} associated to the eigenfunction \n$\\Psi(z,\\omega,\\theta^\\ast)\\equiv \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)}$. Assume also that the initial data is well-prepared in the sense of \n\\eqref{WellPreparedness}. 
If ${ u_\\varepsilon }\\in C\\big([0,T);H^1(\\mathbb R^n)\\big)$ is the solution of~\\eqref{jhjkhkjhkj765675233} for each $\\varepsilon> 0$ fixed, then the sequence \n${ v_\\varepsilon }$ defined by \n\t\t\\begin{equation*}\n\t\t\tv_\\varepsilon(t,x,\\omega) := e^{ -{\\left( i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} + 2i\\pi \\frac{\\theta^\\ast \\! \\cdot x}{\\varepsilon} \\right)} } u_\\varepsilon(t,x,\\omega), \\;\\, (t,x) \\in \\mathbb{R}^{n+1}_T, \\; \\omega \\in \\Omega, \n\t\t\\end{equation*}\n$\\Phi_{\\omega}-$two-scale converges to ${ v(t,x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega, \\theta^\\ast \\right)} }$, and satisfies for a.e. ${ \\omega \\in \\Omega }$\n\t\t\\begin{equation*}\n\t\t\t\\lim_{\\varepsilon \\to 0} \\iint_{\\mathbb{R}^{n+1}_T} \\! {\\left\\vert v_\\varepsilon (t,x,\\omega) - v(t,x) \\, \\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\omega \\right)}, \\omega, \\theta^\\ast \\right)} \\right\\vert}^2 dx \\, dt \\, = \\, 0,\n\t\t\\end{equation*}\nwhere the function ${ v \\in C\\big([0,T); L^2(\\mathbb{R}^n)\\big) }$ is the unique solution of the homogenized Schr\\\"odinger equation \n\\begin{equation}\n\\label{HomSchEqu}\n\\left\\{\n\\begin{aligned}\n & i \\displaystyle\\frac{\\partial v}{\\partial t} - {\\rm div} {\\left( A^\\ast \\nabla v \\right)} + U^\\ast v= 0 \\, , \\;\\, \\text{in} \\;\\, \\mathbb{R}^{n+1}_T, \n \\\\[5pt]\n &\tv(0,x) = v^0(x) \\, , \\;\\, x\\in \\mathbb{R}^n,\n\\end{aligned}\n\\right.\n\\end{equation}\nwith effective (constant) coefficients: matrix ${ A^\\ast = D_\\theta^2 \\lambda(\\theta^\\ast) }$, and potential \n\t\t\\begin{equation*}\n\t\t\tU^\\ast = c^{-1}_\\psi \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} U{\\left( \\Phi^{-1} (z,\\omega),\\omega \\right)} {\\left\\vert \\psi {\\left( \\Phi^{-1} (z,\\omega), \\omega, \\theta^\\ast \\right)} \\right\\vert}^2 dz \\, d\\mathbb{P}(\\omega),\n\t\t\\end{equation*}\nwhere\n\t\t\\begin{equation*}\n\t\t\tc_\\psi = \\int_\\Omega 
\\int_{\\Phi([0,1)^n, \\omega)} {\\left\\vert \\psi {\\left( \\Phi^{-1} (z,\\omega), \\omega, \\theta^\\ast \\right)} \\right\\vert}^2 dz \\, d\\mathbb{P}(\\omega).\n\t\t\\end{equation*}\n\\end{theorem}\n\n\\begin{proof}\nIn order to better understand the main difficulties brought by the presence of the stochastic deformation $\\Phi$, we split our proof into five steps. \n\n\\medskip\n1.({\\it\\bf A priori estimates and $\\Phi_{\\omega}-$two-scale convergence.}) \nFirst, we define \n\\begin{equation}\n\\label{jghkd65454ads3e}\n\t \tv_\\varepsilon (t,x,\\widetilde{\\omega}) := e^{-{\\left( i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} + 2i\\pi \\frac{\\theta^\\ast \\! \\cdot x}{\\varepsilon} \\right)} } \n\t\tu_\\varepsilon(t,x,\\widetilde{\\omega}), \\;\\, (t,x,\\widetilde{\\omega}) \\in \\mathbb{R}^{n+1}_T \\! \\times \\! \\Omega.\n\\end{equation}\nThen, computing the first derivatives with respect to the variable $x$, we get\n\t \\begin{equation}\n\t \\label{974967uhghjzpzas}\n\t \t\\varepsilon \\nabla u_\\varepsilon (t,x,\\widetilde{\\omega})\\, e^{-{\\left( i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} \n\t\t+ 2i\\pi \\frac{\\theta^\\ast \\! 
\\cdot x}{\\varepsilon} \\right)} } = (\\varepsilon \\nabla + 2i\\pi \\theta^\\ast) v_\\varepsilon (t,x,\\widetilde{\\omega}).\n\t \\end{equation}\n\nApplying Lemma~\\ref{63457rf2wertgh} yields:\n\t\t\\begin{itemize}\n\t\t\t\\item ${ \\displaystyle\\int_{\\mathbb{R}^n} {\\vert v_\\varepsilon(t,x,\\widetilde{\\omega}) \\vert}^2 dx = \\int_{\\mathbb{R}^n} \n\t\t\t{\\vert u_\\varepsilon^0(x,\\widetilde{\\omega}) \\vert}^2 dx },$\n\t\t\t\\item ${ \\displaystyle\\int_{\\mathbb{R}^n} {\\vert \\varepsilon \\nabla v_\\varepsilon(t,x,\\widetilde{\\omega}) \\vert}^2 dx \\leq \\widetilde{C} {\\displaystyle\\int_{\\mathbb{R}^n} \n\\Big( {\\vert \\varepsilon \\nabla u_\\varepsilon^0(x,\\widetilde{\\omega}) \\vert}^2 +{\\vert u_\\varepsilon^0(x,\\widetilde{\\omega}) \n\\vert}^2 \\Big) dx} }$\n\t\t\\end{itemize}\nfor all ${ t\\in[0,T) }$ and a.e. $\\widetilde{\\omega} \\in\\Omega$, where the constant ${ \\widetilde{C} }$ depends on\n$\\| A{\\|}_{\\infty}$, $\\|V{\\|}_{\\infty}$, $\\|U{\\|}_{\\infty}$ and $\\theta^\\ast$. \nThen, from the uniform boundedness of the sequences \n${ \\{u_{\\varepsilon}^0(\\cdot,\\widetilde{\\omega})\\}_{\\varepsilon >0} }$ and ${ \\{\\varepsilon \\nabla u_{\\varepsilon}^0(\\cdot,\\widetilde{\\omega}) \\}_{\\varepsilon > 0} }$, we deduce that the sequences \n$${ {\\{ v_{\\varepsilon}(\\cdot,\\cdot\\cdot,\\widetilde{\\omega}) \\}}_{\\varepsilon > 0} } \\quad \\text{and} \\quad { {\\{ \\varepsilon\\nabla v_\\varepsilon(\\cdot, \\cdot\\cdot, \\widetilde{\\omega}) \\}}_{\\varepsilon > 0} }$$\nare bounded, respectively, in ${ L^2(\\mathbb{R}^{n+1}_T) }$ and ${ {[ L^2(\\mathbb{R}^{n+1}_T)]}^n }$ for a.e. ${ \\widetilde{\\omega} \\in \\Omega }$. Therefore, applying Lemma \n\\ref{SYM1-5}, there exists a subsequence ${ \\{\\varepsilon^\\prime\\} }$ (which may depend on $\\widetilde{\\omega}$), and a stationary function \n${ v^\\ast_{\\widetilde{\\omega}} \\in L^2(\\mathbb{R}^{n+1}_T, \\mathcal{H}) }$, for a.e. 
${ \\widetilde{\\omega} \\in \\Omega }$, such that \n\t\t\\begin{equation*}\n\t\t\tv_{\\varepsilon^\\prime}(t,x,\\widetilde{\\omega}) \\; \\xrightharpoonup[\\varepsilon^\\prime \\to 0]{2-{\\rm s}}\\; v^\\ast_{\\widetilde{\\omega}} {\\left( t,x,\\Phi^{-1}(z,\\omega),\\omega \\right)},\n\t\t\\end{equation*}\n\t\tand\n\t\t\\begin{equation*}\n\t\t\t\\varepsilon^\\prime \\frac{\\partial v_{\\varepsilon^\\prime}}{\\partial x_k}(t,x,\\widetilde{\\omega}) \\; \\xrightharpoonup[\\varepsilon^\\prime \\to 0]{2-{\\rm s}} \\; \n\t\t\t\\frac{\\partial }{\\partial z_k} {\\big( v^\\ast_{\\widetilde{\\omega}} {\\left(t,x,\\Phi^{-1}{(z,\\omega)},\\omega \\right)} \\big)}, \n\t\t\\end{equation*}\nwhich means that, for ${ k\\in \\{1,\\ldots,n\\} }$, we have \n\\begin{equation}\n\\label{jhjkhjfdasdfghyui}\n\\begin{aligned}\n \\lim_{\\varepsilon^\\prime \\to 0}\\iint_{\\mathbb{R}^{n+1}_T} & v_{\\varepsilon^\\prime} \\left(t, x,\\widetilde{\\omega} \\right) \\, \n\t\t\t\\overline{ \\varphi(t,x) \\,\\Theta\\left(\\Phi^{-1}{\\left(\\frac{x}{\\varepsilon^\\prime},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right) } \\, dx \\, dt\n\\\\[5pt]\n& = c_{\\Phi}^{-1} \\!\\! \\displaystyle\\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \\!\\!\\!\\!\\! 
v^\\ast_{\\widetilde{\\omega}} \n\t\t\t{\\left(t,x,\\Phi^{-1}{(z,\\omega)},\\omega \\right)} \\,\n\\\\[5pt]\t\t\t\n\t\t\t & \\hspace{90pt} \\times \\; \\overline{ \\varphi(t,x) \\,\\Theta\\left(\\Phi^{-1}(z,\\omega),\\omega \\right)} \\, dz \\, d\\mathbb{P} \\, dx \\, dt\n\\end{aligned}\n\\end{equation}\nand \n\\begin{equation}\n\\label{uygkgcxzcxzsw}\n\\begin{aligned}\n \\lim_{\\varepsilon^\\prime \\to 0} \\iint_{\\mathbb{R}^{n+1}_T} & \\varepsilon^\\prime \\frac{\\partial v_{\\varepsilon^\\prime}}{\\partial x_k} \\left(t, x, \\widetilde{\\omega} \\right) \\, \n \\overline{ \\varphi (t,x)\\,\\Theta\\left(\\Phi^{-1}{\\left(\\frac{x}{\\varepsilon^\\prime},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right) } \\, dx \\, dt\n\\\\[5pt]\n&= c_{\\Phi}^{-1} \\!\\! \\displaystyle\\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \\frac{\\partial }{\\partial z_k} {\\left( v^\\ast_{\\widetilde{\\omega}} {\\left(t,x,\\Phi^{-1}{(z,\\omega)},\\omega \\right)} \\right)} \\, \n\\\\[5pt]\n& \\hspace{90pt} \\times \\, \\overline{ \\varphi (t,x)\\,\\Theta\\left(\\Phi^{-1}(z,\\omega),\\omega \\right)} \\, dz \\, d\\mathbb{P} \\, dx \\, dt,\n\\end{aligned}\n\\end{equation}\nfor all functions $\\varphi \\in C^\\infty_{\\rm c}((-\\infty, T) \\times \\mathbb{R}^n)$ and $\\Theta \\in L^{2}_{\\loc}\\left(\\mathbb R^n\\times\\Omega\\right)$ stationary. Moreover, the sequence \n${ {\\{ v_\\varepsilon^0(\\cdot, \\widetilde{\\omega}) \\}}_{\\varepsilon > 0} }$ defined by, \n\t\t\\begin{equation}\\label{5t345rte54ew3e2wswqqq1qdecv}\n\t\t\tv_\\varepsilon^0(x,\\omega):=\\psi{\\left( \\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\omega \\right)},\\omega,\\theta^\\ast \\right)} v^0(x), \\;\\; (x,\\omega) \\in \\mathbb{R}^n \\! \\times \\! 
\\Omega,\n\t\t\\end{equation}\nsatisfies\n\t\t\\begin{equation}\\label{766765t6y4rf5tzxcvbsgfhry}\n\t\t\tv_{\\varepsilon}^0(\\cdot,\\widetilde{\\omega}) \\; \\xrightharpoonup[\\varepsilon \\to 0]{2-{\\rm s}}\\; v^0(x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)},\n\t\t\\end{equation}\nfor each stationary function ${ \\psi(\\theta^\\ast) }$.\n\t\t\n\\bigskip\n2.({\\it\\bf The Split Process.}) We consider the following \n\n\\medskip\n \\underline {Claim:} \nThere exists ${ v_{\\widetilde{\\omega}} \\in L^2(\\mathbb{R}^{n+1}_T) }$, such that \n$$\n\\begin{aligned}\n v^\\ast_{\\widetilde{\\omega}} {\\left( t,x,\\Phi^{-1}(z,\\omega),\\omega \\right)}&= v_{\\widetilde{\\omega}}(t,x) \\, \\psi {\\left(\\Phi^{-1}(z,\\omega),\\omega, \\theta^\\ast \\right)} \n\\\\[5pt]\n &\\equiv v_{\\widetilde{\\omega}} (t,x) \\, \\Psi(z,\\omega, \\theta^\\ast).\n\\end{aligned} \n$$\n \nProof of Claim: First, for any $\\widetilde{\\omega} \\in \\Omega$ fixed, we take the function \n\\begin{equation}\n\\label{7676567543409hj}\n\t\t\tZ_\\varepsilon (t,x,\\widetilde{\\omega}) = \\varepsilon^2 e^{i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} + 2i\\pi \\frac{\\theta^\\ast \\! \\cdot x}{\\varepsilon}} \\varphi (t, x) \\, \n\t\t\t\\Theta {\\big( \\Phi^{-1}\\big( \\frac{x}{\\varepsilon}, \\widetilde{\\omega} \\big), \\widetilde{\\omega} \\big)}\n\\end{equation}\nas a test function in the associated variational formulation of the equation \\eqref{jhjkhkjhkj765675233}, where ${ \\varphi \\in C^\\infty_{\\rm c}((-\\infty, T) \\! \\times \\! \\mathbb{R}^n) }$ \nand $ \\Theta\\in L^{\\infty}\\left(\\mathbb R^n\\times\\Omega\\right)$ stationary, with $\\Theta(\\cdot,\\omega)$ smooth. 
Therefore, we obtain \n$$\n\\begin{aligned} \n\t\t\t&- i \\iint_{\\mathbb{R}^{n+1}_T} u_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\frac{\\partial \\overline{Z_\\varepsilon}}{\\partial t} (t,x,\\widetilde{\\omega}) \\, dx \\, dt\n\t\t\t+ i \\int_{\\mathbb{R}^n} u_\\varepsilon^0(x,\\widetilde{\\omega}) \\, \\overline{Z_\\varepsilon}(0,x,\\widetilde{\\omega}) \\, dx\n\\\\[15pt]\n\t\t\t&+ \\iint_{\\mathbb{R}^{n+1}_T} A {\\left(\\Phi^{-1}\\left( \\frac{x}{\\varepsilon}, \\widetilde{\\omega} \\right),\\widetilde{\\omega} \\right)} \\nabla u_\\varepsilon(t,x,\\widetilde{\\omega}) \\cdot \\nabla \\overline{Z_\\varepsilon}(t,x,\\widetilde{\\omega}) \\, dx \\, dt\n\\\\[15pt]\n\t\t\t&+ \\frac{1}{\\varepsilon^2} \\iint_{\\mathbb{R}^{n+1}_T} V {\\left(\\Phi^{-1}\\left( \\frac{x}{\\varepsilon}, \\widetilde{\\omega} \\right),\\widetilde{\\omega} \\right)} u_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{Z_\\varepsilon}(t,x,\\widetilde{\\omega}) \\, dx \\, dt\n\\\\[15pt]\n\t\t\t&+ \\iint_{\\mathbb{R}^{n+1}_T} U {\\left( \\Phi^{-1}\\left( \\frac{x}{\\varepsilon}, \\widetilde{\\omega} \\right),\\widetilde{\\omega} \\right)} u_\\varepsilon(t,x,\\widetilde{\\omega})\n\t\t\t\\, \\overline{Z_\\varepsilon}(t,x,\\widetilde{\\omega}) \\, dx \\, dt= 0,\n\\end{aligned}\n$$\nand since \t\t\n$$\n\\begin{aligned}\n\t\t\t\\frac{\\partial Z_\\varepsilon }{\\partial t} (t,x,\\widetilde{\\omega})&= i \\lambda(\\theta^\\ast) \\, e^{i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} \n\t\t\t+ 2i\\pi \\frac{\\theta^\\ast \\! \\cdot x}{\\varepsilon}} \\, \\varphi (t, x) \\, \n\t\t\t\\Theta {( \\Phi^{-1}( \\frac{x}{\\varepsilon}, \\widetilde{\\omega}), \\widetilde{\\omega})} + \\mathrm{O}(\\varepsilon^2), \n\\\\[5pt]\n\t\t\t\\nabla Z_\\varepsilon (t,x,\\widetilde{\\omega})&= \\varepsilon \\, e^{i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} + 2i\\pi \\frac{\\theta^\\ast \\! 
\\cdot x}{\\varepsilon}} \\, (\\varepsilon \\nabla + 2i\\pi \\theta^\\ast) \n\t\t\t\\big( \\varphi(t,x) \\, \\Theta{( \\Phi^{-1}{(\\frac{x}{\\varepsilon},\\widetilde{\\omega})},\\widetilde{\\omega})}\\big), \n\\end{aligned}\n$$\nit follows that\n$$\n\\begin{aligned}\n &- \\lambda(\\theta^\\ast) \\displaystyle\\iint_{\\mathbb{R}^{n+1}_T} v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \n \\overline{ \\varphi (t, x) \\, \\Theta {( \\Phi^{-1}\\big( \\frac{x}{\\varepsilon}, \\widetilde{\\omega} \\big), \\widetilde{\\omega})} } \\, dx \\, dt \n\\\\[5pt]\n & + \\iint_{\\mathbb{R}^{n+1}_T} A {( \\Phi^{-1}{\\big(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega})} \\, {(\\varepsilon \\nabla + 2i\\pi \\theta^\\ast)} v_\\varepsilon(t,x,\\widetilde{\\omega})\n\\\\[5pt]\n\t\t& \\hspace{60pt} \\cdot \\overline{ {(\\varepsilon \\nabla + 2i\\pi \\theta^\\ast)} {\\left( \\varphi (t,x) \\, \\Theta {\\left(\\Phi^{-1}{\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right)} \\right)} } \\, dx \\, dt \n\\\\[5pt]\n &+ \\iint_{\\mathbb{R}^{n+1}_T} V {\\left( \\Phi^{-1}{\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right)} \\, v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{ \\varphi (t,x) \\, \\Theta {\\left(\\Phi^{-1}{\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right)} } \\, dx \\, dt= \\mathrm{O}(\\varepsilon^2),\n\\end{aligned}\n$$\nwhere we have used \\eqref{jghkd65454ads3e}, \\eqref{974967uhghjzpzas}, \\eqref{5t345rte54ew3e2wswqqq1qdecv}, and \\eqref{7676567543409hj}.\nAlthough, it is more convenient to rewrite as \n\\begin{equation}\n\\label{56tugryfgrffdd}\n\\begin{aligned}\n &- \\lambda(\\theta^\\ast) \\displaystyle\\iint_{\\mathbb{R}^{n+1}_T} v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{ \\varphi (t,x) \\, \\Theta {( \\Phi^{-1}\\big( \\frac{x}{\\varepsilon}, \\widetilde{\\omega}\\big), \\widetilde{\\omega})} } \\, dx \\, dt\n 
\\\\[5pt]\n&+ \\iint_{\\mathbb{R}^{n+1}_T} {(\\varepsilon \\nabla + 2i\\pi \\theta^\\ast)} v_\\varepsilon(t,x,\\widetilde{\\omega}) \n\\\\[5pt]\n& \\hspace{20pt} \\cdot \\overline{ A {( \\Phi^{-1}{\\big(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega})} \\, \n{(\\varepsilon \\nabla + 2i\\pi \\theta^\\ast)} {\\big( \\varphi(t,x) \\, \\Theta{( \\Phi^{-1}{\\big(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega})} \\big)} } \\, dx \\, dt\n\\\\[5pt]\n&+ \\iint_{\\mathbb{R}^{n+1}_T} v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{ \\varphi (t,x) \\, V {( \\Phi^{-1}{\\big(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega})}\n \\, \\Theta{( \\Phi^{-1}{\\big(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega})} } \\, dx \\, dt= \\mathrm{O}(\\varepsilon^2).\n\\end{aligned}\n\\end{equation}\nNow, making $\\varepsilon={ \\varepsilon^\\prime}$, letting $\\varepsilon'\\to 0$ and using Definition \\ref{two-scale}, we have for a.e. ${ \\widetilde{\\omega} \\in \\Omega }$,\nfor all ${ \\varphi \\in C^\\infty_{\\rm c}((-\\infty, T) \\! \\times \\! \\mathbb{R}^n) }$, \n$\\Theta \\in L^{\\infty}\\left(\\mathbb R^n\\times\\Omega\\right)$ stationary and $\\Theta(\\cdot,\\omega)$ smooth, \n\\begin{equation*}\n\\begin{aligned}\n&- \\lambda(\\theta^\\ast) \\, c_\\Phi^{-1} \\!\\!\\! \\displaystyle\\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} v^\\ast_{\\widetilde{\\omega}} {\\left( t,x,\\Phi^{-1}(z,\\omega),\\omega \\right)} \n\\\\[7pt]\n& \\hspace{90pt} \\times \\overline{ \\varphi (t,x) \\, \\Theta {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} } \\, dz \\, d\\mathbb{P}(\\omega) \\, dx \\, dt\n\\\\[7pt]\n&+ c_\\Phi^{-1} \\!\\!\\! \\displaystyle\\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} {(\\nabla_{\\!\\! 
z} + 2i\\pi \\theta^\\ast)} {\\left( v^\\ast_{\\widetilde{\\omega}} {\\left( t,x,\\Phi^{-1}(z,\\omega),\\omega \\right)} \\right)}\n\\\\[7pt]\n& \\cdot \\overline{ \\varphi (t,x) \\, A {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} {\\left[ {(\\nabla_{\\!\\! z} + 2i\\pi \\theta^\\ast)} {\\left( \\Theta {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\right)} \\right]} } \\, dz \\, d\\mathbb{P}(\\omega) \\, dx \\, dt \n\\\\[7pt]\n&+ c_\\Phi^{-1} \\!\\!\\! \\displaystyle\\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} v^\\ast_{\\widetilde{\\omega}} {\\left( t,x,\\Phi^{-1}(z,\\omega),\\omega \\right)} \\, \n\\\\[7pt]\n&\\hspace{60pt} \\times \\overline{ \\varphi (t,x) \\, V {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, \\Theta {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} } \\, dz \\, d\\mathbb{P}(\\omega) \\, dx \\, dt = 0.\n\\end{aligned}\n\\end{equation*}\nTherefore, by a density argument in the test functions \n(thanks to the topological structure of $\\Omega$), \nwe can conclude that \n\\begin{equation*}\n\\begin{aligned}\n&- \\lambda(\\theta^\\ast) \\, \\displaystyle\\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} v^\\ast_{\\widetilde{\\omega}} {\\left( t,x,\\Phi^{-1}(z,\\omega),\\omega \\right)} \\, \\overline{ \\Theta {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} } \\, dz \\, d\\mathbb{P}(\\omega) \n\\\\[5pt]\n&+ \\displaystyle\\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} A {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} {(\\nabla_{\\!\\! z} + 2i\\pi \\theta^\\ast)} {\\left( v^\\ast_{\\widetilde{\\omega}} {\\left( t,x,\\Phi^{-1}(z,\\omega),\\omega \\right)} \\right)}\n\\\\[5pt]\n&\\hspace{60pt} \\cdot \\overline{ {(\\nabla_{\\!\\! z} + 2i\\pi \\theta^\\ast)} {\\left( \\Theta {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\right)} } \\, dz \\, d\\mathbb{P}(\\omega)\n\\\\[5pt]\n&+ \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \\!\\!\\! 
V {( \\Phi^{-1}(z,\\omega),\\omega)} \\, v^\\ast_{\\widetilde{\\omega}} \n{( t,x,\\Phi^{-1}(z,\\omega),\\omega)} \\, \n\\\\[5pt]\n&\\hspace{120pt} \\times \\overline{ \\Theta {( \\Phi^{-1}(z,\\omega),\\omega)} } \\, dz \\, d\\mathbb{P}(\\omega)= 0,\n\\end{aligned}\n\\end{equation*}\nfor a.e. ${ (t,x) \\in \\mathbb{R}^{n+1}_T }$ and for all ${ \\Theta}$ as above. Thus, the simplicity of the eigenvalue $\\lambda(\\theta^\\ast)$ \nassures us that, for a.e. $(t,x) \\in \\mathbb{R}^{n+1}_T$, the function \n$${ (z,\\omega) \\mapsto v^\\ast_{\\widetilde{\\omega}}{\\left( t,x,\\Phi^{-1}(z,\\omega),\\omega \\right)} }$$ (which belongs to the space ${ \\mathcal{H} }$) lies in the one-dimensional eigenspace spanned by the \nfunction ${ \\Psi(\\theta^\\ast) }$, i.e., we can find ${ v_{\\widetilde{\\omega}}(t,x) \\in \\mathbb{C} }$ such that \n\t\t\\begin{eqnarray*}\nv^\\ast_{\\widetilde{\\omega}} {\\left( t,x,\\Phi^{-1}(z,\\omega),\\omega \\right)} &=& v_{\\widetilde{\\omega}}(t,x) \\, \\Psi(z,\\omega, \\theta^\\ast)\\\\\n& \\equiv & v_{\\widetilde{\\omega}}(t,x) \\, \\psi {\\left(\\Phi^{-1}(z,\\omega),\\omega, \\theta^\\ast \\right)}.\n\t\t\\end{eqnarray*}\n\t\t\n\\medskip\nFinally, since ${ v^\\ast_{\\widetilde{\\omega}} \\in L^2 (\\mathbb{R}^{n+1}_T; \\mathcal{H}) }$, we conclude that ${ v_{\\widetilde{\\omega}}\\in L^2(\\mathbb{R}^{n+1}_T) }$, which \ncompletes the proof of our claim.\n\n\\medskip\n3.({\\it\\bf Homogenization Process.}) Let ${ \\Lambda_k (\\theta^\\ast)}$, for any $k\\in \\{1,\\ldots,n\\}$, be the function defined by \n\t\t\\begin{equation*}\n\t\t\t\\Lambda_k(z,\\omega,\\theta^\\ast)=\\frac{1}{2i\\pi}\\frac{\\partial \\Psi}{\\partial \\theta_k}(z,\\omega,\\theta^\\ast)=\\frac{1}{2i\\pi}\\frac{\\partial \\psi}{\\partial \\theta_k}\n\t\t\t{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)}, \\; (z,\\omega) \\in \\mathbb{R}^n \\! \\times \\! 
\\Omega,\n\t\t\\end{equation*}\nwhere the function $\\Psi(z,\\omega,\\theta^\\ast)=\\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)}$ \nis the eigenfunction of the spectral cell problem~\\eqref{92347828454trfhfd4rfghjls}. \nThen, we consider the following test function \n\\begin{equation*}\n\t\t\tZ_\\varepsilon(t,x,\\widetilde{\\omega}) = e^{i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} + 2i\\pi \\frac{\\theta^\\ast \\! \\cdot x}{\\varepsilon}} {\\big( \\varphi(t,x) \\, \n\t\t\t\\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) +\\varepsilon \\sum_{k=1}^n \n\t\t\t\\frac{\\partial \\varphi}{\\partial x_k}(t,x) \\, \\Lambda_{k,\\varepsilon} (x,\\widetilde{\\omega},\\theta^\\ast) \\big)},\n\\end{equation*}\nwhere ${ \\varphi \\in C^\\infty_{\\rm c}((-\\infty, T) \\! \\times \\! \\mathbb{R}^n) }$ and \n\t\t\\begin{equation*}\n\t\t\t\\Psi_\\varepsilon(x,\\widetilde{\\omega}, \\theta^\\ast) = \\Psi{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega},\\theta^\\ast\\right)} ,\\quad \\;\\; \\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega}, \\theta^\\ast) = \\Lambda_k{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega},\\theta^\\ast\\right)}.\n\t\t\\end{equation*} \nUsing the function ${ Z_\\varepsilon }$ as a test function in the variational formulation of equation \\eqref{jhjkhkjhkj765675233}, \nwe obtain\n\\begin{equation}\n\\label{676745459023v}\n\\begin{aligned}\n& \\big[ i \\displaystyle\\int_{\\mathbb{R}^n} u_\\varepsilon^0(x,\\widetilde{\\omega}) \\, \\overline{Z_\\varepsilon}(0,x,\\widetilde{\\omega}) \\, dx \n - i \\displaystyle\\iint_{\\mathbb{R}^{n+1}_T} u_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\frac{\\partial \\overline{Z_\\varepsilon}}{\\partial t} (t,x,\\widetilde{\\omega}) \\, dx \\, dt \\big]\n\\\\[7pt]\n&+ \\big[ \\displaystyle\\iint_{\\mathbb{R}^{n+1}_T} A {\\left(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right)} \\, \n\t\t\t\\nabla u_\\varepsilon 
(t,x,\\widetilde{\\omega}) \\cdot \\nabla \\overline{Z_\\varepsilon}(t,x,\\widetilde{\\omega}) \\, dx \\, dt \\big]\n\\\\[7pt]\n&\t\t\t+ \\big[ \\displaystyle\\frac{1}{\\varepsilon^2} \\iint_{\\mathbb{R}^{n+1}_T} V {\\left(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right)} \\, \n\t\t\tu_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{Z_\\varepsilon}(t,x,\\widetilde{\\omega}) \\, dx \\, dt \n\\\\[7pt]\n& \\hspace{30pt}\t\t\t+ \\iint_{\\mathbb{R}^{n+1}_T} U {\\left(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right)} \\, \n\t\t\tu_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{Z_\\varepsilon}(t,x,\\widetilde{\\omega}) \\, dx \\, dt \\big]= 0. \n\\end{aligned}\n\\end{equation}\nIn order to simplify the manipulation of the above equation, we shall denote by $I_{k}^{\\varepsilon}$ $(k=1,2,3)$ the respective term in the $k^{\\text{th}}$ brackets, so that we can rewrite \nequation~\\eqref{676745459023v} as $I_{1}^{\\varepsilon}+I_{2}^{\\varepsilon}+I_{3}^{\\varepsilon}=0$.\n\n\\medskip\t\t\nThe analysis of the $I_{1}^{\\varepsilon}$ term begins with the following computation \n$$\n\\begin{aligned}\n\t\\frac{\\partial Z_\\varepsilon}{\\partial t}(t,x,\\widetilde{\\omega}) &= e^{i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} + 2i\\pi \\frac{\\theta^\\ast \\! 
\\cdot x}{\\varepsilon}} \n\t{\\Big[ i\\frac{\\lambda(\\theta^\\ast)}{\\varepsilon^2} { \\big( \\varphi(t,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) } }\n\\\\[5pt]\n\t\t\t& + \\, { { \\varepsilon\\sum_{k=1}^n \\frac{\\partial \\varphi}{\\partial x_k}(t,x) \\, \\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\big)} \n\t\t\t+ \\frac{\\partial \\varphi}{\\partial t}(t,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) } \n\\\\[5pt]\n\t\t\t& { + \\, \\varepsilon \\sum_{k=1}^n \\frac{\\partial^2 \\varphi}{\\partial t \\, \\partial x_k}(t,x) \\, \\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\Big]},\n\\end{aligned}\n$$\ntherefore we have\n$$\n\\begin{aligned}\nI_{1}^{\\varepsilon}&= i \\int_{\\mathbb{R}^n} v_\\varepsilon^0 \\, \\overline{ {\\big( \\varphi (0,x)\\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) \n+\\varepsilon \\sum_{k=1}^n \\frac{\\partial \\varphi}{\\partial x_k}(0,x) \\, \\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\big)} } dx \n\\\\[5pt]\n& - \\frac{\\lambda(\\theta^\\ast)}{\\varepsilon^2} \\iint_{\\mathbb{R}^{n+1}_T} v_\\varepsilon \\, \n\\overline{{\\big( \\varphi(t,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) +\\varepsilon \\sum_{k=1}^n \\frac{\\partial \\varphi}{\\partial x_k}(t,x) \\, \n\\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\big)}} dx \\, dt \n\\\\[5pt] \n&- i \\iint_{\\mathbb{R}^{n+1}_T} v_\\varepsilon \\, \\overline{ {\\big( \\frac{\\partial \\varphi}{\\partial t}(t,x) \\,\n\\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) + \\varepsilon \\sum_{k=1}^n \\frac{\\partial^2 \\varphi}{\\partial t \\, \\partial x_k}(t,x) \n\\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\big)} } \\, dx \\, dt.\n\\end{aligned}\n$$\t\t\nFor the analysis of the term $I_{2}^{\\varepsilon}$, we need to perform the following computations \n$$\n\\begin{aligned}\n\\nabla Z_\\varepsilon(t,x,\\widetilde{\\omega}) &= e^{i 
\\frac{\\lambda(\\theta^\\ast) t} {\\varepsilon^2} + 2i\\pi \\frac{\\theta^\\ast \\! \\cdot x}{\\varepsilon}} {\\big[ \\nabla \\varphi(t,x) \\, \n\\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) + \\varphi(t,x) \\, \\nabla \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) }\n\\\\[5pt]\n\t&+ \\varepsilon \\sum_{k=1}^n \\nabla {\\big( \\frac{\\partial \\varphi}{\\partial x_k}(t,x) \\big)} \\, \\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \n\t+ \\varepsilon \\sum_{k=1}^n \\frac{\\partial \\varphi}{\\partial x_k}(t,x) \\, \\nabla \\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \n\\\\[5pt]\n\t&+ \\, 2i\\pi \\frac{\\theta^\\ast}{\\varepsilon} { {\\big( \\varphi(t,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) + \\varepsilon \n\t\\sum_{k=1}^n \\frac{\\partial \\varphi}{\\partial x_k}(t,x) \\, \\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\big)} \\big]} \n\\\\[5pt] \n\t& = e^{i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} + 2i\\pi \\frac{\\theta^\\ast \\! 
\\cdot x}{\\varepsilon}} {\\big[ \\varphi(t,x) \\, {\\big( \\nabla \n\t+ 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} \\big)} \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) } \n\\\\[5pt]\n\t&+ { \\, \\varepsilon \\sum_{k=1}^n \\frac{\\partial \\varphi}{\\partial x_k}(t,x) \\, {\\big( \\nabla + 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} \\big)} \n\t\\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) + \\nabla \\varphi(t,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) } \n\\\\[5pt]\n\t& + \\, { \\varepsilon \\sum_{k=1}^n \\nabla {\\big( \\frac{\\partial \\varphi}{\\partial x_k}(t,x) \\big)} \\, \\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\big]},\n\\end{aligned}\n$$\t\nand from this, we have \n$$\n\\begin{aligned}\n&A {\\left(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right)} \\nabla u_\\varepsilon(t,x,\\widetilde{\\omega}) \\cdot \\overline{\\nabla Z_\\varepsilon}(t,x,\\widetilde{\\omega})\n\\\\[5pt]\n&= A {\\left(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right)} {\\big[ \\nabla u_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \ne^{ -\\left( {i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} + 2i\\pi \\frac{\\theta^\\ast \\! \\cdot x}{\\varepsilon}} \\right)} \\big]} \n\\\\[5pt]\n&\\cdot \\big[ \\overline{\\varphi}(t,x) \\, \n( \\nabla - 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} ) \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \n+ \\varepsilon \\! \\sum_{k=1}^n \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x) \\, ( \\nabla \\!\\! 
- 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon})\n\\overline{\\Lambda_{k,\\varepsilon}}(x,\\widetilde{\\omega},\\theta^\\ast) \n\\\\[5pt]\n&+ \\nabla \\overline{\\varphi}(t,x) \\, \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \n+ \\varepsilon \\sum_{k=1}^n \\nabla {\\big( \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x) \\big)} \\, \n\\overline{\\Lambda_{k,\\varepsilon}}(x,\\widetilde{\\omega},\\theta^\\ast) \\big].\n\\end{aligned}\n$$\t\t\nThen, from equation \\eqref{974967uhghjzpzas} and using the terms\n$$\n{\\big( \\nabla + 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} \\big)} (v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{\\varphi}(t,x) ),\n\\quad \n\\big( \\nabla + 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} \\big) (v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x)), \n$$ \nit follows from the above equation that \n$$\n A {\\left(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right)} \\nabla u_\\varepsilon \\cdot \\overline{\\nabla Z_\\varepsilon}\n =\\sum_{k=1}^n\\left( I_{2,1}^{\\varepsilon,k}+I_{2,2}^{\\varepsilon,k}+I_{2,3}^{\\varepsilon,k}\\right)(t,x,\\widetilde{\\omega}),\n$$\nwhere \n$$\n\\begin{aligned}\n & I_{2,1}^{\\varepsilon,k}(t,x,\\widetilde{\\omega}):= \\varepsilon \\,A {(\\Phi^{-1}{\\big( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega} )} \n\\big[ {\\big( \\nabla + 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} \\big)} {( v_\\varepsilon(t,x,\\widetilde{\\omega}) 
\\, \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x) )} \\big]\n \\\\[5pt]\n&\\quad \\cdot \\big[ {\\big( \\nabla - 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} \\big)} \\overline{\\Lambda_{k,\\varepsilon}}(x,\\widetilde{\\omega},\\theta^\\ast) \\big]\n\\\\[5pt]\n& - A(\\Phi^{-1}{\\big( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega} )\n\\big[ v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x) \\, e_k \\big] \\!\n \\cdot \\! \\big[ {\\big( \\nabla - 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} \\big)} \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast)\t\\big]\n\\\\[5pt]\n& + A (\\Phi^{-1}{\\big( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega} )\n{\\big[ {\\big( \\nabla + 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} \\big)} \n( v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x))} \\big] \\!\n\\cdot \\! \\big[ {e_k} \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast)\\big],\n\\end{aligned}\n$$\n$$\n\\begin{aligned}\n& I_{2,2}^{\\varepsilon,k}(t,x,\\widetilde{\\omega}):=\n \\frac{1}{n}A {(\\Phi^{-1}{\\big( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega})} \n{\\big[ {\\big( \\nabla + 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} \\big)} (v_\\varepsilon(t,x,\\widetilde{\\omega}) \\overline{\\varphi}(t,x)) \\big]}\n\\\\[5pt]\n& \\quad\\quad \\cdot \\big[ {\\big( \\nabla - 2i\\pi\\frac{\\theta^\\ast}{\\varepsilon} \\big)} \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\big]\n\\\\[5pt]\n& \\quad \\quad - A (\\Phi^{-1}{\\big( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega} )\n{\\big[ v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\nabla \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x) \\big]} \n\\cdot {\\big[ e_k \\, \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\big]}, 
\n\\end{aligned}\n$$\nand \n$$\n\\begin{aligned}\n&I_{2,3}^{\\varepsilon,k}(t,x,\\widetilde{\\omega}):= A(\\Phi^{-1}{\\big( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega} )\n\\big[ {\\left( \\varepsilon \\nabla + 2i\\pi\\theta^\\ast \\right)} v_\\varepsilon(t,x,\\widetilde{\\omega}) \\big]\n \\\\[5pt]\n &\\quad \\cdot \\big[ \\nabla {\\big( \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x) \\big)} \\, \\overline{\\Lambda_{k,\\varepsilon}}(x,\\widetilde{\\omega},\\theta^\\ast) \\big]\n \\\\[5pt]\n&- A(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega})\n\\big[ v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\nabla \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x) \\big] \\! \\cdot \\!\n \\big[ {\\left( \\varepsilon \\nabla - 2i\\pi\\theta^\\ast \\right)} \\overline{\\Lambda_{k,\\varepsilon}}(x,\\widetilde{\\omega},\\theta^\\ast) \\big]. \n\\end{aligned}\n$$\nThus, integrating over $\\mathbb{R}^{n+1}_T$, we recover the $I_2^{\\varepsilon}$ term, that is \n\\begin{eqnarray}\\label{HomProc1}\n&&I_2^{\\varepsilon}=\\iint_{\\mathbb{R}^{n+1}_T} A {\\left(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} 
\\right)},\\widetilde{\\omega} \\right)} \\nabla u_\\varepsilon(t,x,\\widetilde{\\omega}) \\cdot \\overline{\\nabla Z_\\varepsilon}(t,x,\\widetilde{\\omega}) \\, dx \\, dt\\nonumber\\\\\n&&\\qquad\\qquad=\\sum_{k=1}^n\\iint_{\\mathbb{R}^{n+1}_T}\\left( I_{2,1}^{\\varepsilon,k}+I_{2,2}^{\\varepsilon,k}+I_{2,3}^{\\varepsilon,k}\\right)(t,x,\\widetilde{\\omega})\\,dx\\,dt. \n\\end{eqnarray}\n\nNow, with the help of the first auxiliary cell equation \\eqref{8654873526rtgdrfdrfdrfrd4}, \nwe intend to simplify the expression of $I_2^{\\varepsilon}$ to a more convenient form. To this end, we take \n${ v_\\varepsilon(t,\\cdot,\\widetilde{\\omega}) \\, \\overline{\\varphi}(t,\\cdot) }$, $t \\in (0,T)$, as a test function in equation~\\eqref{nbvnbvxzchgfs54}.\nThen, we obtain\n$$\n\\begin{aligned}\n& \\int_{\\mathbb{R}^n} \\!\\! A(\\Phi^{-1}{\\big( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\big)},\\widetilde{\\omega})\n [ \\big( \\nabla \\!+ 2i\\pi \\frac{\\theta^\\ast}{\\varepsilon} \\big) {(v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{\\varphi})}] \\! \\cdot \\! \n[ \\big( \\nabla \\! 
- 2i\\pi \\frac{\\theta^\\ast}{\\varepsilon} \\big) \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast)] dx\n\\\\[5pt] \n& \\quad = \\frac{\\lambda(\\theta^\\ast)}{\\varepsilon^2}\\int_{\\mathbb{R}^n} {(v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \n\\overline{\\varphi}(t,x))} \\, \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\, dx \n\\\\[5pt]\n& \\quad - \\frac{1}{\\varepsilon^2}\\int_{\\mathbb{R}^n} V(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega}) \\, \n{(v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{\\varphi}(t,x))} \\, \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\, dx. \n\\end{aligned}\n$$\nTherefore, comparing with the expression for $I_{2,2}^{\\varepsilon,k}(t,x,\\widetilde{\\omega})$ obtained before, we have \n\\begin{equation}\n\\label{HomProc2}\n\\begin{aligned}\n&\\iint_{\\mathbb{R}^{n+1}_T}I_{2,2}^{\\varepsilon,k}(t,x,\\widetilde{\\omega})\\,dx\\,dt\n\\\\\n& = \\frac{\\lambda(\\theta^\\ast)}{n\\varepsilon^2}\\iint_{\\mathbb{R}^{n+1}_T} {(v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{\\varphi}(t,x))} \\, \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\, dx\\,dt \n\\\\\n& - \\frac{1}{n\\varepsilon^2}\\iint_{\\mathbb{R}^{n+1}_T} V {(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega})} \\, \n(v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{\\varphi}(t,x)) \\, \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast) \\, dx\\,dt\n\\\\\n& - \\iint_{\\mathbb{R}^{n+1}_T} \\!\\!\\! A(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega}) {[ v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \n\\nabla \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x) ]} \\cdot {[ e_k \\, \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast)]}\\,dx\\,dt. 
\n\\end{aligned}\n\\end{equation}\nAnalogously, taking ${ v_\\varepsilon(t,\\cdot,\\widetilde{\\omega}) \\, \\displaystyle\\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,\\cdot) }$, $t \\in (0,T)$, \nwith ${ k\\in\\{1,\\ldots,n\\} }$ as a test function in the equation~\\eqref{4576gdcrfvjc46we}, taking into account that \n$\\nabla_{\\! \\theta} \\lambda(\\theta^\\ast)= 0$ and comparing this\nexpression with $I_{2,1}^{\\varepsilon,k}(t,x,\\widetilde{\\omega})$, we deduce that\n\\begin{eqnarray}\n\\label{HomProc3}\n&& \\iint_{\\mathbb{R}^{n+1}_T}I_{2,1}^{\\varepsilon,k}(t,x,\\widetilde{\\omega})\\,dx\\,dt\\nonumber\n\\\\\n&&\\quad= \\frac{\\lambda(\\theta^\\ast)}{\\varepsilon}\\iint_{\\mathbb{R}^{n+1}_T} \n{( v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x))} \\, \\overline{\\Lambda_{k,\\varepsilon}}(x,\\widetilde{\\omega},\\theta^\\ast) \\, dx\\,dt\n\\\\\n&&\\quad - \\frac{1}{\\varepsilon}\\iint_{\\mathbb{R}^{n+1}_T} V {(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega})} \\, \n{( v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x))} \\, \n\\overline{\\Lambda_{k,\\varepsilon}}(x,\\widetilde{\\omega},\\theta^\\ast) \\, dx\\,dt.\\nonumber\n\\end{eqnarray}\nTherefore, summing equations \\eqref{HomProc2}, \\eqref{HomProc3}, we arrive at \n\\begin{eqnarray}\n\\label{HomProc4}\n&&\\sum_{k=1}^n\\iint_{\\mathbb{R}^{n+1}_T}\\Big(I_{2,1}^{\\varepsilon,k}+I_{2,2}^{\\varepsilon,k}\\Big)(t,x,\\widetilde{\\omega})\\,dxdt\\nonumber\n\\\\\n&&\\quad \\!\\!={\\frac{\\lambda(\\theta^\\ast)}{\\varepsilon^2} \\iint_{\\mathbb{R}^{n+1}_T} \\!\\!\\! v_\\varepsilon(t,x,\\widetilde{\\omega}) \\,\n \\overline{\\varphi(t,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) + \\varepsilon \\! 
\\sum_{k=1}^n\\frac{\\partial \\varphi}{\\partial x_k}(t,x) \\, \n \\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast)} \\, dx dt }\\nonumber\n \\\\[4pt]\n&& \\quad- \\frac{1}{\\varepsilon^2} \\iint_{\\mathbb{R}^{n+1}_T} \\! V {\\left(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega} \\right)} \\, \nv_\\varepsilon(t,x,\\widetilde{\\omega})\n\\\\\n& & \\hspace{3cm} \\times \\, \\overline{\\varphi(t,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) \n+ \\varepsilon \\sum_{k=1}^n\\frac{\\partial \\varphi}{\\partial x_k}(t,x) \\, \\Lambda_{k,\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast)} \\, dx dt \\nonumber\n\\\\[4pt]\n& & \\quad - \\sum_{k=1}^n \\iint_{\\mathbb{R}^{n+1}_T} \\! A {(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega})} \n{[ v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\nabla \\frac{\\partial \\overline{\\varphi}}{\\partial x_k}(t,x)]} \\! \\cdot \\! {[ e_k \\, \\overline{\\Psi_\\varepsilon}(x,\\widetilde{\\omega},\\theta^\\ast)]} \\, dx dt.\n\\nonumber\n\\end{eqnarray}\nMoreover, expressing the $I_3^{\\varepsilon}$ term as \n\\begin{eqnarray*}\n&& I_3^{\\varepsilon} =\\iint_{\\mathbb{R}^{n+1}_T}\\frac{1}{\\varepsilon^2}\\, V {(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega})}\\,\nv_{\\varepsilon}(t,x,\\widetilde{\\omega})\n\\\\\n&&\\qquad\\qquad\\qquad\\qquad \\times \\, \\overline{\\varphi(t,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast)+\\varepsilon \\sum_{k=1}^n \\frac{\\partial \\varphi}{\\partial x_k}(t,x)\n\\Lambda_{k,\\varepsilon} (x,\\widetilde{\\omega},\\theta^\\ast)}\\,dx\\,dt\\\\\n&&\\qquad\\quad+\\iint_{\\mathbb{R}^{n+1}_T}U {( \\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega})}\\, \nv_\\varepsilon(t,x,\\widetilde{\\omega})\\\\\n&&\\qquad\\qquad\\qquad\\qquad \\times \\, \n\\overline{ \\varphi(t,x) \\, 
\\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast)+\\varepsilon \\sum_{k=1}^n \\frac{\\partial \\varphi}{\\partial x_k}(t,x)\n\\Lambda_{k,\\varepsilon} (x,\\widetilde{\\omega},\\theta^\\ast)}\\,dx\\,dt,\n\\end{eqnarray*}\nand adding it to \\eqref{HomProc4} and $I_1^{\\varepsilon}$, we obtain\n\\begin{equation*}\n\\begin{aligned}\n&I_1^{\\varepsilon}+\\sum_{k=1}^n\\iint_{\\mathbb{R}^{n+1}_T}\\Big(I_{2,1}^{\\varepsilon,k}+I_{2,2}^{\\varepsilon,k}\\Big)(t,x,\\widetilde{\\omega})\\,dx\\,dt+I_3^{\\varepsilon}\n\\\\[5pt]\n&={ i \\int_{\\mathbb{R}^n} \\!\\!\\! v_\\varepsilon^0(x,\\widetilde{\\omega}) \\, \\overline{ \\varphi (0,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) } dx \n- i \\! \\iint_{\\mathbb{R}^{n+1}_T}\\!\\! \\!\\! v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{ \\frac{\\partial \\varphi}{\\partial t}(t,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast) } }\n\\\\[5pt]\n&- \\sum_{k,\\ell=1}^n\\iint_{\\mathbb{R}^{n+1}_T} \\!\\!\\!\n{ v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, e_\\ell \\, \\frac{\\partial^2 \\overline{\\varphi}}{\\partial x_\\ell \\, \\partial x_k}(t,x) }\n\\cdot \\overline{ A {(\\Phi^{-1}{( \\frac{x}{\\varepsilon},\\widetilde{\\omega} )},\\widetilde{\\omega})} { \\; e_k \\Psi_\\varepsilon (x,\\widetilde{\\omega},\\theta^\\ast)} } \n\\,dx dt\n\\\\[5pt]\n&+\\iint_{\\mathbb{R}^{n+1}_T}U {(\\Phi^{-1}{\\left( \\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)},\\widetilde{\\omega})}\\, \nv_\\varepsilon(t,x,\\widetilde{\\omega})\\,\\overline{ \\varphi(t,x) \\, \\Psi_\\varepsilon(x,\\widetilde{\\omega},\\theta^\\ast)}\\,dx dt +\\, \\mathrm{O}(\\varepsilon).\n\\end{aligned}\n\\end{equation*}\nThus, for $\\varepsilon=\\varepsilon'(\\widetilde{\\omega})$, and due to Step 2, that is, \n\\begin{equation*}\nv_{\\varepsilon^\\prime}(t,x,\\widetilde{\\omega}) \\; \\xrightharpoonup[\\varepsilon^\\prime \\to 0]{2-{\\rm s}}\\; v_{\\widetilde{\\omega}}(t,x) \\, 
\\Psi(z,\\omega, \\theta^\\ast),\n\\end{equation*}\nletting $\\varepsilon'\\to 0$ in the previous equation, we obtain\n\\begin{equation}\n\\label{HomProc5}\n\\begin{aligned}\n&\\lim_{\\varepsilon'\\to 0}\\Big(I_1^{\\varepsilon'}+\\sum_{k=1}^n\\iint_{\\mathbb{R}^{n+1}_T}\\Big(I_{2,1}^{\\varepsilon',k}\n+I_{2,2}^{\\varepsilon',k}\\Big)(t,x,\\widetilde{\\omega})\\,dx\\,dt+I_3^{\\varepsilon'}\\Big)\n\\\\\n&= i \\int_{\\mathbb{R}^n} {\\left( \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} {\\left\\vert \\Psi(z,\\omega,\\theta^\\ast) \\right\\vert}^2 dz \\, d\\mathbb{P} \\right)} v^0(x) \\, \n\\overline{\\varphi}(0,x) \\, dx\n\\\\\n& - i \\iint_{\\mathbb{R}^{n+1}_T} \\!\\! {( \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \n{\\left\\vert \\Psi(z,\\omega,\\theta^\\ast) \\right\\vert}^2 dz \\, d\\mathbb{P})} \\, v_{\\widetilde{\\omega}}(t,x) \\, \\frac{\\partial \\overline{\\varphi}}{\\partial t} (t,x) \\, dx \\, dt \n\\\\\n& - \\sum_{k,\\ell=1}^n \\iint_{\\mathbb{R}^{n+1}_T} \\!\\! {( \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \\!\\! \nA{\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, {( e_\\ell \\, \\Psi)} \\cdot {( e_k \\, \\overline{\\Psi})} \\, dz \\, d\\mathbb{P})}\n\\\\\n& \\quad \\times \\, v_{\\widetilde{\\omega}}(t,x) \\, \\frac{\\partial^2 \\overline{\\varphi}}{\\partial x_\\ell \\, \\partial x_k}(t,x) \\, dx \\, dt \n\\\\\n& +\\iint_{\\mathbb{R}^{n+1}_T} \\!\\! {( \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \\!\\!\\!\\! 
U {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \n{\\vert \\Psi \\vert}^2 dz \\, d\\mathbb{P})} \\, v_{\\widetilde{\\omega}}(t,x) \\, \\overline{\\varphi}(t,x) \\, dx \\, dt.\n\\end{aligned}\n\\end{equation}\nProceeding in the same way with respect to the term $I_{2,3}^{\\varepsilon,k}(t,x,\\widetilde{\\omega})$, we obtain\n\\begin{eqnarray}\n\\label{HomProc6}\n&&\\lim_{\\varepsilon'\\to 0} \\sum_{k=1}^n\\iint_{\\mathbb{R}^{n+1}_T}I_{2,3}^{\\varepsilon',k}(t,x,\\widetilde{\\omega})\\,dx\\,dt\\nonumber\\\\\n&&\\quad =\\sum_{k,\\ell=1}^n \\iint_{\\mathbb{R}^{n+1}_T} \\!\\! \\Big( \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \\!\\! A {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, \n{\\left( {\\left( \\nabla_{\\!\\! z} + 2i\\pi\\theta^\\ast \\right)} \\Psi(z,\\omega,\\theta^\\ast) \\right)} \\nonumber\n\\\\\n&&\\hspace{2cm} \\cdot {\\left( e_\\ell \\, \\overline{\\Lambda_k}(z,\\omega,\\theta^\\ast) \\right)} \\, dz \\, d\\mathbb{P}\\Big)\nv_{\\widetilde{\\omega}}(t,x) \\, \\frac{\\partial^2 \\overline{\\varphi}}{\\partial x_\\ell \\, \\partial x_k}(t,x) \\, dx \\, dt\\nonumber\n\\\\\n&&\\quad-\\sum_{k,\\ell=1}^n \\iint_{\\mathbb{R}^{n+1}_T} \\!\\! \\Big( \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \\!\\! A {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)}\n\\, {\\left( e_\\ell \\, \\Psi(z,\\omega,\\theta^\\ast) \\right)}\\\\\n&&\\hspace{2cm} \\cdot {\\left( \\nabla_{\\!\\! z} - 2i\\pi\\theta^\\ast \\right)} \\overline{\\Lambda_k}(z,\\omega,\\theta^\\ast) \\, dz \\, d\\mathbb{P}\\Big)\nv_{\\widetilde{\\omega}}(t,x) \\, \\frac{\\partial^2 \\overline{\\varphi}}{\\partial x_\\ell \\, \\partial x_k}(t,x) \\, dx \\, dt. 
\\nonumber\n\\end{eqnarray}\nTherefore, since $I_1^{\\varepsilon'}+I_2^{\\varepsilon'}+I_3^{\\varepsilon'}=0$ (see \\eqref{676745459023v}), \ncombining the last two equations \nwe conclude that the function $v_{\\widetilde{\\omega}}$ is a distributional solution of the following homogenized Schr\\\"odinger equation\n\\begin{equation}\n\\label{567yt65trftdfxxzxzzxcvbn}\n\\left\\{\n\\begin{array}{c}\n\ti\\displaystyle\\frac{\\partial v_{\\widetilde{\\omega}}}{\\partial t}(t,x) - {\\rm div} {\\big( B^\\ast \\nabla v_{\\widetilde{\\omega}}(t,x) \\big)} \n\t+ U^\\ast v_{\\widetilde{\\omega}}(t,x)= 0, \\;\\, (t,x) \\in \\mathbb{R}^{n+1}_T, \n\t\\\\ [5pt]\n\tv_{\\widetilde{\\omega}}(0,x)=v^0(x), \\;\\, x\\in\\mathbb{R}^n,\n\\end{array}\n\\right.\n\\end{equation}\nwhere the effective tensor \n\\begin{eqnarray}\n\\label{7358586tygfjdshfbvvcc}\n&&B_{k,\\ell}^{\\ast}= \\frac{1}{c_\\psi} { \\int_\\Omega \\int_{\\Phi\\left([0,1)^n, \\omega\\right)}\\!\\!\\! \\big\\{A {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, {\\left( e_\\ell \\, \\Psi(z,\\omega,\\theta^\\ast) \\right)}}\n\\cdot {\\left( e_k \\, \\overline{\\Psi}(z,\\omega,\\theta^\\ast) \\right)}\\nonumber\\\\\n&&\\qquad\\qquad+A {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, {\\left( e_\\ell \\, \\Psi(z,\\omega,\\theta^\\ast) \\right)}\n\\cdot {\\left( (\\nabla_{\\!\\! z} - 2i\\pi\\theta^\\ast) \\overline{\\Lambda_k}(z,\\omega,\\theta^\\ast) \\right)}\\nonumber\\\\\n&&\\qquad\\qquad\\qquad-A {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, {\\Big( ( \\nabla_{\\!\\! 
z} + 2i\\pi\\theta^\\ast) \\Psi(z,\\omega,\\theta^\\ast) \\Big)}\\nonumber\\\\\n&&\\hspace{6cm}\\cdot {\\left( e_\\ell \\, \\overline{\\Lambda_k}(z,\\omega,\\theta^\\ast) \\right)}\\big\\} \\, dz \\, d\\mathbb{P}(\\omega),\n\\end{eqnarray}\nfor ${ k, \\ell \\in \\{1,\\ldots,n\\} }$,\nand the effective potential \n\\begin{equation*}\nU^\\ast = c_\\psi^{-1} \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} U {\\left( \\Phi^{-1}(z,\\omega), \\omega \\right)} {\\vert \\Psi (z,\\omega,\\theta^\\ast)\\vert}^2 dz \\, d\\mathbb{P}(\\omega)\n\\end{equation*}\nwith \n$$\n\\begin{aligned}\n\tc_\\psi= \\!\\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} &\\!\\!\\! {\\vert \\Psi(z,\\omega, \\theta^\\ast) \\vert}^2 dz \\, d\\mathbb{P}(\\omega)\n\t\\\\[5pt]\n\t&\\equiv \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \\!\\!\\! {\\vert \\psi{( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast)} \\vert}^2 dz \\, d\\mathbb{P}(\\omega).\n\\end{aligned}\n$$\nMoreover, we are allowed to replace the tensor $B^\\ast$ in equation \\eqref{567yt65trftdfxxzxzzxcvbn} by its \nsymmetric part, that is, \n\\begin{equation*}\n\t\t\tA^\\ast = \\big(B^\\ast + (B^\\ast)^t\\big)\/ 2.\n\t\t\\end{equation*}\n\n\\medskip\n4.({\\it\\bf The Form of the Matrix $A^{\\ast}$.}) Now, we show that the homogenized tensor $A^{\\ast}$ is a real-valued matrix, which coincides, up to a constant factor, with the Hessian matrix of \nthe function ${ \\theta \\mapsto \\lambda(\\theta) }$ at the point $\\theta^{\\ast}$. In fact, using that ${ \\nabla_{\\!\\! 
\\theta} \\lambda (\\theta^\\ast) = 0 }$ the \nequation~\\eqref{hkjlhjklhljkhuytyiufsd4} can be written as \n\\begin{eqnarray}\\label{968r6f7tyudstfyusgdjsdxxxzxzxzx}\n&&\\quad\\frac{1}{4\\pi^2} \\frac{\\partial^2 \\lambda(\\theta^\\ast)}{\\partial \\theta_\\ell \\, \\partial \\theta_k}\\,c_\\psi\\nonumber\n\\\\\n&&\\qquad\\qquad = \\int_\\Omega \\int_{\\Phi ([0,1)^n, \\omega)}\\big\\{A {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)}{\\left( e_\\ell \\, \\Psi(z,\\omega,\\theta^\\ast) \\right)} \\cdot\n{\\left( e_k \\, \\overline{\\Psi}(z,\\omega,\\theta^\\ast) \\right)}\\nonumber\\\\\n&&\\qquad\\qquad\\qquad+A {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, {\\left( e_\\ell \\, \\Psi(z,\\omega,\\theta^\\ast) \\right)} \\cdot {\\left( (\\nabla_{\\!\\! z} - 2i\\pi\\theta^\\ast) \\overline{\\Lambda_k}(z,\\omega,\\theta^\\ast) \\right)}\\nonumber\\\\\n&&\\qquad\\qquad\\qquad-A {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, {\\left[ ( \\nabla_{\\!\\! z} + 2i\\pi\\theta^\\ast) \\Psi(z,\\omega,\\theta^\\ast) \\right]} \\cdot {\\left( e_\\ell \\, \\overline{\\Lambda_k}(z,\\omega,\\theta^\\ast) \\right)}\\nonumber\\\\\n&&\\qquad\\qquad\\qquad+ \nA {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, {\\left( e_k \\, \\Psi(z,\\omega,\\theta^\\ast) \\right)} \\cdot {\\left( e_\\ell \\, \\overline{\\Psi}(z,\\omega,\\theta^\\ast) \\right)}\\nonumber\\\\\n&&\\qquad\\qquad\\qquad+\nA {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, {\\left( e_k \\, \\Psi(z,\\omega,\\theta^\\ast) \\right)} \\cdot {\\left( (\\nabla_{\\!\\! z} - 2i\\pi\\theta^\\ast) \\overline{\\Lambda_\\ell}(z,\\omega,\\theta^\\ast) \\right)}\\nonumber\\\\\n&&\\qquad\\qquad\\qquad-\nA {\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, {\\left( (\\nabla_{\\!\\! 
z} + 2i\\pi\\theta^\\ast) \\Psi(z,\\omega,\\theta^\\ast) \\right)}\\\\\n&&\\hspace{8.0cm} \\cdot {\\left( e_k \\, \\overline{\\Lambda_\\ell}(z,\\omega,\\theta^\\ast) \\right)}\\nonumber\\big\\}\\, dz \\, d\\mathbb{P}(\\omega),\n\\end{eqnarray}\nfrom which we obtain \n\\begin{equation*}\n\tA^\\ast = \\frac{1}{8\\pi^2} \\, D^2_{\\! \\theta} \\lambda(\\theta^\\ast).\n\\end{equation*}\nTherefore, from Remark \\ref{REMCOSTCOEFF} we deduce the \nwell-posedness of the homogenized Schr\\\"odinger equation~\\eqref{567yt65trftdfxxzxzzxcvbn}. Hence the function ${ v_{\\widetilde{\\omega}} \\in L^2(\\mathbb{R}^{n+1}_T) }$ \ndoes not depend on ${ \\widetilde{\\omega} \\in {\\Omega} }$. Moreover, denoting by $v$ the unique solution of the problem~\\eqref{567yt65trftdfxxzxzzxcvbn}, \nwe have that the sequence ${ \\{v_\\varepsilon(t,x,\\widetilde{\\omega})\\}_{\\varepsilon > 0} \\subset L^2(\\mathbb{R}^{n+1}_T) }$ $\\Phi_{\\omega}-$two-scale converges to the function \n$$\n v(t,x) \\, \\Psi(z,\\omega,\\theta^\\ast) \\equiv v(t,x) \\, \\psi {\\left(\\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)}.\n$$\n\n\\medskip\n5.({\\it\\bf A Corrector-type Result.}) \nFinally, we show the following corrector-type result, that is, for a.e. 
${ \\widetilde{\\omega} \\in \\Omega }$\n$$\n\\lim_{\\varepsilon \\to 0} \\iint_{\\mathbb{R}^{n+1}_T} \\big|v_\\varepsilon (t,x,\\widetilde{\\omega}) - v(t,x) \\, \n\\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} \\big|^2 dx \\, dt= 0.\n$$\nWe begin with the simple observation \n\\begin{equation}\n\\label{86576567tjhghjgnbmnb}\n\\begin{aligned}\n&\\iint_{\\mathbb{R}^{n+1}_T} | v_\\varepsilon (t,x,\\widetilde{\\omega}) - v(t,x) \\, \n\\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} |^2 dx dt \n\\\\\n& \\quad = \\iint_{\\mathbb{R}^{n+1}_T} {\\left\\vert v_\\varepsilon (t,x,\\widetilde{\\omega}) \\right\\vert}^2 dx \\, dt \n\\\\\n& \\quad - \\iint_{\\mathbb{R}^{n+1}_T} \nv_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{ v(t,x) \\, \\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} } \\, dx dt \n\\\\\n& \\quad - \\iint_{\\mathbb{R}^{n+1}_T} \\overline{v_\\varepsilon(t,x,\\widetilde{\\omega})} \\, v(t,x) \\, \\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} \\, dx \\, dt \n\\\\\n& \\quad + \\iint_{\\mathbb{R}^{n+1}_T} {\\left\\vert v(t,x) \\, \\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} \\right\\vert}^2 dx \\, dt.\n\\end{aligned}\n\\end{equation}\nFrom Lemma~\\ref{63457rf2wertgh} we see that the first integral on the right-hand side of the above equation satisfies,\nfor all ${ t\\in [0,T] }$ and a.e. 
${ \\widetilde{\\omega} \\in \\Omega }$\n\t\t\\begin{eqnarray*}\n\t\t\t\\int_{\\mathbb{R}^n} {\\left\\vert v_\\varepsilon (t,x,\\widetilde{\\omega}) \\right\\vert}^2 dx & = & \\int_{\\mathbb{R}^n} {\\left\\vert u_\\varepsilon (t,x,\\widetilde{\\omega}) \\right\\vert}^2 dx \\\\\n\t\t\t& = & \\int_{\\mathbb{R}^n} {\\left\\vert u_\\varepsilon^0 (x,\\widetilde{\\omega}) \\right\\vert}^2 dx \\;\\, = \\;\\, \\int_{\\mathbb{R}^n} {\\left\\vert v_\\varepsilon^0 (x,\\widetilde{\\omega}) \\right\\vert}^2 dx \\\\\n\t\t\t& = & \\int_{\\mathbb{R}^n} {\\left\\vert v^0(x) \\, \\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} \\right\\vert}^2 dx.\n\t\t\\end{eqnarray*}\nUsing the elliptic regularity theory (see E. De Giorgi \\cite{Giorgi}, G. Stampacchia \\cite{Stampacchia}), \nit follows that $\\psi(\\theta) \\in L^\\infty(\\mathbb{R}^n; L^2(\\Omega))$ and we can apply the Ergodic Theorem to obtain\n$$\n\\begin{aligned}\n\\lim_{\\varepsilon \\to 0} & \\iint_{\\mathbb{R}^{n+1}_T} {\\left\\vert v_\\varepsilon (t,x,\\widetilde{\\omega}) \\right\\vert}^2 dx \\, dt \n\\\\\n& = \\lim_{\\varepsilon \\to 0} \\iint_{\\mathbb{R}^{n+1}_T} {\\left\\vert v^0(x) \\right\\vert}^2 {\\left\\vert \\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} \\right\\vert}^2 dx dt \n\\\\\n& = c_\\Phi^{-1} \\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} {\\left\\vert v^0(x) \\,\n\\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)} \\right\\vert}^2 dz \\, d\\mathbb{P} \\, dx dt. 
\n\\end{aligned}\n$$\nSimilarly, we have \n\\begin{multline*}\n \\lim_{\\varepsilon \\to 0} \\iint_{\\mathbb{R}^{n+1}_T} {\\left\\vert v(t,x) \\, \\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} \\right\\vert}^2 dx dt\n\\\\\n = c_\\Phi^{-1} \\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} {\\left\\vert v(t,x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)} \\right\\vert}^2 dz \\, d\\mathbb{P} \\, dx dt.\n\\end{multline*}\nMoreover, seeing that for a.e. ${ \\widetilde{\\omega} \\in \\Omega }$\n$$\n\\begin{aligned}\n\\lim_{\\varepsilon \\to 0} & \\iint_{\\mathbb{R}^{n+1}_T} v_\\varepsilon(t,x,\\widetilde{\\omega}) \\, \\overline{ v(t,x) \\, \\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} } \\, dx dt\n\\\\\n& = c_\\Phi^{-1} \\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \n\t\\!\\!\\! v(t,x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)} \\, \n\\\\\n& \\qquad \\qquad \\qquad \\qquad \\times \\overline{v(t,x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)}} \\, dz \\, d\\mathbb{P} \\, dx dt,\n\\end{aligned}\n$$\nwe can let ${ \\varepsilon \\to 0 }$ in equation~\\eqref{86576567tjhghjgnbmnb} to find \n\\begin{comment}\n\\begin{multline*}\n\\lim_{\\varepsilon \\to 0} \\iint_{\\mathbb{R}^{n+1}_T} {\\left\\vert v_\\varepsilon (t,x,\\widetilde{\\omega}) - v(t,x) \\, \\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} \\right\\vert}^2 dx \\, dt \n\\\\\n\\hspace{-4.5cm}\\qquad\\qquad\\qquad\\qquad\\qquad\n = c_\\Phi^{-1} \\iint_{\\mathbb{R}^{n+1}_T} \\! 
\\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} {\\left\\vert v^0(x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)} \\right\\vert}^2 dz \\, d\\mathbb{P} \\, dx dt \n \\\\\n\\hspace{1cm} - c_\\Phi^{-1} \\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} v(t,x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)} \\, \\overline{v(t,x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)}} \\, dz \\, d\\mathbb{P} \\, dx dt \n\\\\\n\\hspace{1cm} - c_\\Phi^{-1} \\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} \\overline{ v(t,x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)} } \\, v(t,x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)} \\, dz \\, d\\mathbb{P} \\, dx dt \n\\\\\n\t\t\t+ c_\\Phi^{-1} \\iint_{\\mathbb{R}^{n+1}_T} \\! \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} {\\left\\vert v(t,x) \\, \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)} \\right\\vert}^2 dz \\, d\\mathbb{P} \\, dx dt,\n\\end{multline*}\nwhich is equivalent to \n\\end{comment}\n\\begin{eqnarray*}\n&&\\lim_{\\varepsilon \\to 0} \\iint_{\\mathbb{R}^{n+1}_T} {\\left\\vert v_\\varepsilon (t,x,\\widetilde{\\omega}) - v(t,x) \\, \\psi{\\left( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast \\right)} \\right\\vert}^2 dx dt \n\\\\\n&&\\qquad=c_\\Phi^{-1}{\\big( \\int_\\Omega \\int_{\\Phi([0,1)^n, \\omega)} {\\left\\vert \\psi{\\left( \\Phi^{-1}(z,\\omega),\\omega,\\theta^\\ast \\right)} \\right\\vert}^2 dz \\, d\\mathbb{P}(\\omega) \\big)}\n\\\\\n&&\\qquad\\qquad\\qquad\\qquad \\times \\big({\\iint_{\\mathbb{R}^{n+1}_T} {\\left\\vert v^0(x) \\right\\vert}^2 dx \\, dt}-{\\iint_{\\mathbb{R}^{n+1}_T} {\\left\\vert v(t,x) \\right\\vert}^2 dx \\, dt}\\big),\n\\end{eqnarray*}\nfor a.e. ${ \\widetilde{\\omega} \\in \\Omega }$. 
Therefore, using the energy conservation of \nthe homogenized Schr\\\"odinger equation~\\eqref{HomSchEqu}, that is, for all ${ t\\in [0,T] }$\n\t\t\\begin{equation*}\n\t\t\t\\int_{\\mathbb{R}^n} {\\left\\vert v(t,x) \\right\\vert}^2 dx = \\int_{\\mathbb{R}^n} {\\left\\vert v^0(x) \\right\\vert}^2 dx,\n\t\t\\end{equation*}\nwe obtain that, for a.e. ${ \\widetilde{\\omega} \\in \\Omega }$\n\\begin{equation*}\n\t\t\t\\lim_{\\varepsilon \\to 0} \\iint_{\\mathbb{R}^{n+1}_T} |v_\\varepsilon (t,x,\\widetilde{\\omega}) - v(t,x) \\, \n\t\t\t\\psi{( \\Phi^{-1} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega}, \\theta^\\ast)} |^2 dx dt= 0,\n\t\t\\end{equation*}\ncompleting the proof of the theorem.\n\\end{proof}\n\n\\subsection{Random Perturbations of the Quasi-Periodic Case}\n\nIn this section, we shall give a nice application of the framework introduced in \nthis paper, which can be used to homogenize a model beyond \nthe periodic settings considered by Allaire and Piatnitski in~\\cite{AllairePiatnitski}. \nTo reach this aim, we shall make use of \nsome results discussed in Section \\ref{9634783yuhdj6ty} (Sobolev spaces on groups),\nin particular, Section \\ref{4563tgf5fd3}. \nAnother interesting application will be given in the last section. \n\n\\medskip\nLet $n,m\\ge 1$ be integers and $\\lambda_1,\\cdots,\\lambda_m$ be vectors in \n$\\mathbb R^n$, linearly independent over $\\mathbb{Z}$, satisfying \nthe condition that \n$$\n\\big\\{k\\in\\mathbb{Z}^m;\\,|k_1\\lambda_1+\\cdots+k_m\\lambda_m|<r\\big\\}\n$$\nis finite for every $r>0$. Let $\\left(\\Omega_0,\\mathcal{F}_0,\\mathbb{P}_0\\right)$ be a probability space and \n$\\tau_0:\\mathbb{Z}^m\\times \\Omega_0\\to\\Omega_0$ \nbe a discrete ergodic dynamical system and $\\mathbb R^m\/{\\mathbb{Z}^m}$ be the $m-$dimensional torus, which can be identified with the cube $[0,1)^m$. 
\nFor $\\Omega:=\\Omega_0\\times [0,1)^m$, consider the following \ncontinuous dynamical system $T:\\mathbb R^n\\times \\Omega\\to \\Omega$, defined by \n$$\nT(x)(\\omega_0,s):=\\Big(\\tau_{\\left\\lfloor s+Mx \\right\\rfloor}\\omega_0,s+Mx-\\left\\lfloor s+Mx\\right\\rfloor\\Big),\n$$\nwhere $M$ is the matrix $M=\\Big(\\lambda_i\\cdot e_j{\\Big)}_{i=1,j=1}^{m,n}$ and \n$\\left\\lfloor y\\right\\rfloor$ denotes the unique element in $\\mathbb{Z}^m$ such that $y-\\left\\lfloor y\\right\\rfloor\\in [0,1)^m$. Now, we consider $[0,1)^m-$periodic functions \n$A_{\\rm per}:\\mathbb R^m\\to\\mathbb R^{n^2},\\,V_{\\rm per}:\\mathbb R^m\\to\\mathbb R$ and $U_{\\rm per}:\\mathbb R^m\\to\\mathbb R$ such that \n\\begin{itemize}\n\\item There exist $a_0,a_1>0$ such that for all $\\xi\\in\\mathbb R^n$ and for a.e. $y\\in\\mathbb R^m$ we have \n$$\na_0|\\xi|^2\\le A_{\\rm per}(y)\\xi\\cdot \\xi\\le a_1|\\xi|^2.\n$$\n\\item $V_{\\rm per},\\,U_{\\rm per}\\in L^{\\infty}(\\mathbb R^m)$.\n\\end{itemize}\nLet $B_{\\rm per}:\\mathbb R^m\\to\\mathbb R^{n^2}$ be a $[0,1)^m-$periodic matrix and $\\Upsilon:\\mathbb R^n\\times [0,1)^m\\to\\mathbb R^n$ be any stochastic diffeomorphism\nsatisfying\n$$\n\\nabla \\Upsilon (x,s)=B_{\\rm per}\\Big(T(x)(\\omega_0,s)\\Big).\n$$\nThus, we define the following stochastic deformation $\\Phi:\\mathbb R^n\\times \\Omega\\to \\mathbb R^n$ by \n$$\n\\Phi(x,\\omega)=\\Upsilon(x,s)+{\\bf X}(\\omega_0),\n$$\nwhere we have used the notation $\\omega$ for the pair $(\\omega_0,s)\\in\\Omega$ and ${\\bf X}:\\Omega_0\\to\\mathbb R^n$ is a random vector. 
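Let us note, in passing, a formal check (under the abuse of notation above, in which $B_{\\rm per}$ is evaluated at the torus component of $T(x)(\\omega_0,s)$) that $\\Phi$ is indeed a stochastic deformation: its gradient satisfies \n$$\n\\nabla_{\\! x} \\Phi(x,\\omega)=\\nabla_{\\! x} \\Upsilon(x,s)=B_{\\rm per}\\Big(T(x)(\\omega_0,s)\\Big), \n$$\nand, since $T$ is a dynamical system, for every $k\\in\\mathbb{Z}^n$ \n$$\nB_{\\rm per}\\Big(T(x+k)\\,\\omega\\Big)=B_{\\rm per}\\Big(T(x)\\,T(k)\\,\\omega\\Big), \n$$\nwhich is the stationarity property of the gradient required in Section \\ref{628739yhf}.\n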
Now, taking \n$$\n A(x,\\omega):=A_{\\rm per}\\left(T(x)\\omega\\right),\\,V(x,\\omega):=V_{\\rm per}\\left(T(x)\\omega\\right), \n \\; U(x,\\omega):= U_{\\rm per}\\left(T(x)\\omega\\right)\n$$ \nin equation~\\eqref{jhjkhkjhkj765675233}, it can be seen after some computations that the corresponding spectral equation is \n\\begin{equation}\\label{ApHom}\n\t\t\t\\left\\{\n\t\t\t\\begin{array}{l}\n\t\t\t\t-{\\Big( {\\rm div}_{\\rm {QP}} + 2i\\pi \\theta \\Big)} {\\left[ A _{\\rm per}{\\left(\\cdot \\right)} {\\Big( \\nabla^{\\rm {QP}} + 2i\\pi\\theta \\Big)} {\\Psi}_{\\rm per}(\\cdot) \\right]}\n\\\\ [7.5pt]\n \\hspace{2.0cm}+ V_{\\rm per}{\\left(\\cdot \\right)} {\\Psi}_{\\rm per}(\\cdot) = \\lambda {\\Psi}_{\\rm per}(\\cdot) \\;\\; \\text{in} \\,\\; [0,1)^m, \\\\ [7.5pt]\n\t\t\t\t\\hspace{1.5cm} {\\Psi}_{\\rm per} \\;\\, \\text{is a $[0,1)^m-$periodic function},\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\t\\end{equation}\nwhere the operators ${\\rm div}_{\\rm {QP}}$ and $\\nabla^{\\rm{QP}}$ are defined as \n\\begin{itemize}\n\\item $\\left(\\nabla^{\\rm {QP}}u_{\\rm per}\\right)(y):=B_{\\rm per}^{-1}(y)M^{\\ast}\\left(\\nabla u_{\\rm per}\\right)(y)$;\n\\item $\\left(\\rm{div}_{\\rm{QP}}\\,a\\right)(y):=\\rm{div}\\left(M B_{\\rm per}^{-1}(\\cdot)a(\\cdot)\\right)(y)$.\n\\end{itemize}\nAlthough the coefficients of the spectral equation~\\eqref{ApHom} can be seen as periodic functions, its analysis is possible thanks to the results developed in \nSection \\ref{4563tgf5fd3}. \nThis happens because the bilinear form associated with the problem~\\eqref{ApHom} may lose its coercivity, which prevents us \nfrom applying the classical theory. 
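To see, at least formally, where the operator $\\nabla^{\\rm QP}$ comes from, take a $[0,1)^m-$periodic function $u_{\\rm per}:\\mathbb R^m\\to\\mathbb C$ and set $u(x):=u_{\\rm per}(s+Mx)$; the chain rule then gives \n$$\n\\nabla_{\\! x}\\, u(x)= M^{\\ast}\\left(\\nabla u_{\\rm per}\\right)(s+Mx). \n$$\nComposing with the deformation $\\Upsilon$, whose gradient is $B_{\\rm per}$, the inverse function theorem formally produces the extra factor $B_{\\rm per}^{-1}$, which leads to the expression $B_{\\rm per}^{-1}M^{\\ast}\\nabla$ appearing in the definition of $\\nabla^{\\rm QP}$ above.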
\n\nAssume that for some $\\theta^{\\ast}\\in\\mathbb R^n$, the spectral equation~\\eqref{ApHom} admits a solution \n$\\big(\\lambda(\\theta^{\\ast}),\\Psi_{\\rm per}(\\theta^{\\ast})\\big)\\in \\mathbb R\\times H^1\\left([0,1)^m\\right)$, such that \n\\begin{itemize}\n\\item $\\lambda(\\theta^{\\ast})$ is a simple eigenvalue;\n\\item $\\nabla \\lambda(\\theta^{\\ast})=0$.\n\\end{itemize}\nNow, we consider the problem~\\eqref{jhjkhkjhkj765675233} with the new coefficients highlighted above and with well-prepared initial data, that is, \n$$\nu_{\\varepsilon}^0(x,\\omega):=e^{2\\pi i \\frac{\\theta^{\\ast}\\cdot x}{\\varepsilon}}\\,{\\Psi}_{\\rm per}\\Big(T\\left(\\Phi^{-1}\\left(\\frac{x}{\\varepsilon},\\omega\\right)\\right)\\omega,\\theta^{\\ast}\\Big)\nv^0(x),\n$$ \nfor $(x,\\omega)\\in \\mathbb R^n\\times \\Omega$ and $v^0\\in C^{\\infty}_c(\\mathbb R^n)$. Applying Theorem~\\ref{876427463tggfdhgdfgkkjjlmk}, the function \n\\begin{equation*}\n\t\t\tv_\\varepsilon(t,x,\\omega) := e^{ -{\\left( i \\frac{\\lambda(\\theta^\\ast) t}{\\varepsilon^2} + 2i\\pi \\frac{\\theta^\\ast \\! 
\\cdot x}{\\varepsilon} \\right)} } u_\\varepsilon(t,x,\\omega), \\;\\, (t,x) \\in \\mathbb{R}^{n+1}_T, \\; \\omega \\in \\Omega, \n\t\t\\end{equation*}\n$\\Phi_\\omega-$two-scale converges strongly to ${ v(t,x) \\, {\\Psi}_{\\rm per}\\Big(T\\left( \\Phi^{-1}(z,\\omega)\\right)\\omega, \\theta^\\ast \\Big) }$,\nwhere \n${ v \\in C([0,T], L^2(\\mathbb{R}^n)) }$ is the unique solution of the homogenized Schr\\\"odinger equation \n\\begin{equation*}\n\t\t\t\\left\\{\n\t\t\t\\begin{array}{c}\n\t\t\t\ti \\displaystyle\\frac{\\partial v}{\\partial t} - {\\rm div} {\\left( A^\\ast \\nabla v \\right)} + U^\\ast v = 0 \\, , \\;\\, \\text{in} \\;\\, \\mathbb{R}^{n+1}_T, \\\\ [7.5pt]\n\t\t\t\tv(0,x) = v^0(x) \\, , \\;\\, x\\in \\mathbb{R}^n,\n\t\t\t\\end{array}\n\t\t\t\\right.\n\\end{equation*}\nwith effective matrix ${ A^\\ast = \\frac{1}{8\\pi^2} D_\\theta^2 \\lambda(\\theta^\\ast) }$ and effective potential \n\\begin{equation*}\n\t\t\tU^\\ast = c^{-1}_\\psi \\int_{[0,1)^m} U_{\\rm per}{\\left(y \\right)}\\, {\\left\\vert {\\Psi}_{\\rm per} {\\left(y, \\theta^\\ast \\right)} \\right\\vert}^2 \n\t\t\t|\\det \\left(B_{\\rm per}(y)\\right)|\\,dy,\n\t\t\\end{equation*}\n\t\twhere\n\t\t\\begin{equation*}\n\t\t\tc_\\psi = \\int_{[0,1)^m} {\\left\\vert {\\Psi}_{\\rm per} {\\left(y, \\theta^\\ast \\right)} \\right\\vert}^2 \\,|\\det \\left(B_{\\rm per}(y)\\right)|\\,dy .\n\t\t\\end{equation*}\n\t\t\nIt is worth highlighting that this single example encompasses the settings considered by Allaire-Piatnitski in~\\cite{AllairePiatnitski}. 
For this, it is enough to take \n$$\n n= m,\\,\\lambda_j=e_j,\\,\\Upsilon(\\cdot,s)\\equiv I_{n \\times n}, \\; \\text{and ${\\bf X}(\\cdot)\\equiv 0$.}\n$$ \nMoreover, we consider $[0,1)^m-$periodic functions: $V_{\\rm per}, U_{\\rm per}: \\mathbb R^m\\to\\mathbb R$, \nand $A_{\\rm per}:\\mathbb R^m\\to\\mathbb R^{n^2}$, such that \n\\begin{itemize}\n\\item There exist $a_0,a_1>0$ such that for all $\\xi\\in\\mathbb R^n$ and for a.e. $y\\in\\mathbb R^m$ we have \n$$\na_0|\\xi|^2\\le A_{\\rm per}(y)\\xi\\cdot \\xi\\le a_1|\\xi|^2;\n$$\n\\item $V_{\\rm per},\\,U_{\\rm per}\\in L^{\\infty}(\\mathbb R^m)$.\n\\end{itemize}\n\n\\section{\\! \\! \\!Homogenization of Quasi-Perfect Materials} \n\\label{6775765ff0090sds}\n\nPerfect materials (which represent the periodic setting) are rare in nature. However, there is a huge class of materials which deviate only slightly from perfect ones, called \nhere quasi-perfect materials. We consider in this section an interesting context, namely the small random perturbation of the periodic setting. In particular, this context is \nimportant for numerical applications. To begin, we remind the reader that, \nas seen in the previous section, our homogenization analysis (see Theorem~\\ref{876427463tggfdhgdfgkkjjlmk}) \nof equation~\\eqref{jhjkhkjhkj765675233} relies on the spectral study of the operator \n$L^{\\Phi}(\\theta)$ $(\\theta\\in\\mathbb R^n)$, posed in the dual space ${ \\mathcal{H}^\\ast }$, with domain ${ D(L^{\\Phi}(\\theta))=\\mathcal{H} }$, and defined by\n\\begin{equation}\\label{OperL}\n\\begin{array}{l}\n\tL^\\Phi(\\theta)[f] := - {\\big({\\rm div}_{\\! z} + 2i\\pi \\theta \\big)} {\\Big[ A {\\big( \\Phi^{-1} (\\cdot, {\\cdot\\cdot} ), {\\cdot\\cdot} \\big)} {\\big( \\nabla_{\\!\\! 
z} + 2i\\pi\\theta \\big)} f{\\big( \\Phi^{-1}(\\cdot, {\\cdot\\cdot} ),{\\cdot\\cdot} \\big)} \\Big]} \\\\ [10pt]\n\t\\hspace{4cm} + \\, V{\\big( \\Phi^{-1} (\\cdot, {\\cdot\\cdot} ), {\\cdot\\cdot} \\big)} f{\\big( \\Phi^{-1}(\\cdot, {\\cdot\\cdot} ), {\\cdot\\cdot} \\big)}, \n\\end{array}\n\\end{equation}\nwhere $\\Phi:\\mathbb R^n\\times\\Omega\\to\\mathbb R^n$ is a stochastic deformation, $A:\\mathbb R^n\\times\\Omega\\to\\mathbb R^{n^2}$ and $V:\\mathbb R^n\\times\\Omega\\to\\mathbb R$ are stationary functions. Also, remember that the variational formulation of the operator $L^{\\Phi}(\\theta)$ is given by:\n\n\\begin{equation*}\n\\begin{split}\n\t& {\\left\\langle L^\\Phi(\\theta)[f], g \\right\\rangle} := \\int_\\Omega \\int_{\\Phi ([0,1)^n, \\omega)} A {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} {\\big( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\big)} f{\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\cdot \\\\\n\t& \\hspace{7cm} \\overline{ {\\big( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\big)} g{\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} } \\, dz \\, d\\mathbb{P}(\\omega) \\\\\n\t& + \\int_\\Omega \\int_{\\Phi ([0,1)^n, \\omega)} V {\\left( \\Phi^{-1} ( z, \\omega),\\omega \\right)} f{\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} \\, \\overline{ g{\\left( \\Phi^{-1}(z,\\omega),\\omega \\right)} } \\, dz \\, d\\mathbb{P}(\\omega),\n\\end{split}\n\\end{equation*}\nfor ${ f, g \\in \\mathcal{H} }$.\n\n\n More precisely, it was required the existence of a pair ${ {\\big( \\theta^\\ast,\\lambda(\\theta^\\ast) \\big)} \\in \\mathbb{R}^n \\times \\mathbb{R} }$ \n that satisfies\n\\begin{equation}\\label{conds}\n\t\\left\\{ \\,\n\t\\begin{split}\n\t\t& \\lambda(\\theta^\\ast) \\; \\text{is a simple eigenvalue of} \\; L^\\Phi(\\theta^\\ast), \\\\\n\t\t&\\theta^\\ast \\; \\text{is a critical point of} \\; \\lambda(\\cdot), \\, \\text{that is}, \\nabla_{\\!\\! \\theta} \\lambda(\\theta^\\ast) = 0. 
\n\t\\end{split}\n\t\\right.\n\\end{equation}\n\nAs observed before, it is not clear the existence of a pair $(\\theta^{\\ast},\\lambda(\\theta^{\\ast}))$,\nin general stochastic environments, satisfying the two above \nconditions. \nThe reason is due mainly to the lack of compact embedding of ${ \\mathcal{H} }$ in ${ \\mathcal{L} }$. \nHowever, in the periodic settings there are concrete situations where such conditions take place (see, for \ninstance,~\\cite{AllairePiatnitski,BarlettiBenAbdallah,BensoussanLionsPapanicolaou}). \nOur aim in this section is to show realistic models whose spectral nature is inherited from the periodic ones.\n\n\\subsection{Perturbed Periodic Case: Spectral Analysis}\n\\label{PERTUSPECTANALY}\n\nIn this section we shall study the spectral properties of the operator ${ L^\\Phi(\\theta) }$, when the diffeomorphism ${ \\Phi }$ \nis a stochastic perturbation of the identity. This concept was introduced in \\cite{BlancLeBrisLions2},\nand well-developed by T. Andrade, W. Neves, J. Silva \\cite{AndradeNevesSilva} for modelling quasi-perfect materials. \n\n\\medskip\nLet $(\\Omega,\\mathcal{F},\\mathbb{P})$ be a probability space, \n$\\tau:\\mathbb{Z}^n\\times\\Omega\\to\\Omega$ a discrete \ndynamical system, and $Z$ any fixed stochastic deformation.\nThen, we consider the concept of stochastic perturbation of the identity given by the following\n\\begin{definition}\n\\label{37285gdhddddddddddd}\nGiven $\\eta \\in (0,1)$, let $\\Phi_\\eta: \\mathbb{R}^n \\times \\Omega \\to \\mathbb{R}^n$ be a stochastic deformation.\nThen $\\Phi_\\eta$ is said a stochastic perturbation of the identity, when \nit can be written as \n\\begin{equation}\n\\label{DefPertIden}\n\\Phi_\\eta(y,\\omega) = y + \\eta \\, Z(y,\\omega) + \\mathrm{O}(\\eta^2), \n\\end{equation}\nfor some stochastic deformation $Z$. 
\n\\end{definition}\nWe emphasize that the equality~\\eqref{DefPertIden} is understood in the sense of ${\\rm Lip}_{\\loc}\\big(\\mathbb R^n; L^2(\\Omega)\\big)$, i.e. \nfor each bounded open subset ${ \\mathcal{O} \\subset \\mathbb{R}^n }$, \nthere exist $\\delta, C > 0$, such that for all ${ \\eta \\in (0,\\delta) }$\n\\begin{eqnarray*}\n&&\\underset{y \\in \\mathcal{O}}{\\rm sup} \\, {\\left\\Vert \\Phi_\\eta(y,\\cdotp) - y - \\eta Z(y,\\cdotp) \\right\\Vert}_{L^2(\\Omega)}\\\\\n&&\\qquad +\\,\\underset{y \\in \\mathcal{O}}{\\rm ess \\, sup} \\, {\\left\\Vert \\nabla_{\\!\\! y} \\Phi_\\eta(y,\\cdotp) - I \n- \\eta \\, \\nabla_{\\!\\! y} Z(y,\\cdotp) \\right\\Vert}_{L^2(\\Omega)}\n\\leqslant C \\, \\eta^2.\n\\end{eqnarray*}\nMoreover, after some computations, we have\n\\begin{equation}\n\\label{654367ytr6tfclmlml}\n\\left \\{\n\\begin{aligned}\n\t\\nabla_y^{-1} \\Phi_{\\eta}&= I-\\eta\\,\\nabla_y Z+O(\\eta^2), \n\t\\\\[5pt]\n\t\\det \\big(\\nabla_y\\Phi_{\\eta}\\big)&= 1+\\eta\\, {\\rm div}_yZ +O(\\eta^2).\n\\end{aligned}\n\\right.\n\\end{equation}\n\nNow, we consider the periodic functions \n$A_{\\rm per}:\\mathbb R^n\\to\\mathbb R^{n^2},\\,V_{\\rm per}:\\mathbb R^n\\to\\mathbb R$ and $U_{\\rm per}:\\mathbb R^n\\to\\mathbb R$, such that \n\\begin{itemize}\n\\item There exists $a_0,a_1>0$ such that for all $\\xi\\in\\mathbb R^n$ and for a.e $y\\in\\mathbb R^n$ we have \n$$\na_0|\\xi|^2\\le A_{\\rm per}(y)\\xi\\cdot \\xi\\le a_1|\\xi|^2.\n$$\n\\item $V_{\\rm per},\\,U_{\\rm per}\\in L^{\\infty}(\\mathbb R^n)$.\n\\end{itemize}\nThe following lemma is well-known and it is stated explicitly here only for reference. For a proof, we recommend the reader to~\\cite{Evans1}. \n\n\n\\begin{lemma}\n\\label{7836565etyd43tre56rt3e54redgh}\nFor $\\theta \\in \\mathbb{R}^n$ and $f \\in H_{\\rm per}^1([0,1)^n)$, let ${ L_{\\rm per}(\\theta) }$ be the operator defined by\n\\begin{equation}\n\\label{753e6735827tdygetydr5de4se45se5}\nL_{\\rm per}(\\theta){[f]} := -({\\rm div}_{\\! 
y} + 2i\\pi \\theta) {\\big[ A_{\\rm per} (y) {(\\nabla_{\\!\\! y} + 2i\\pi\\theta)} f(y) \\big]} + V_{\\rm per}(y) f(y),\n\\end{equation}\nwith variational formulation\n\\begin{equation*}\n\\begin{array}{c}\n\\displaystyle {\\left\\langle L_{\\rm per}(\\theta){\\big[ f \\big]}, g \\right\\rangle} := \\int_{[0,1)^n} A_{\\rm per}(y) {\\left( \\nabla_{\\!\\! y} + 2i\\pi \\theta \\right)} f(y) \\cdot \\overline{ {\\left( \\nabla_{\\!\\! y} + 2i\\pi \\theta \\right)} g(y) } \\, dy \\\\ [10pt]\n\\displaystyle \\hspace{1.7cm} + \\int_{[0,1)^n} V_{\\rm per}(y) \\, f(y) \\, \\overline{ g(y) } \\, dy, \n\\end{array}\n\\end{equation*}\nfor ${ f,g \\in H_{\\rm per}^1({[0,1)^n}) }$. Then ${ L_{\\rm per}(\\theta) }$ has the following properties:\n\t\t\\begin{enumerate}\n\t\t\t\\item[(i)] There exist ${ \\gamma_0, b_0 > 0 }$, such that ${ L_{\\gamma_0} := L_{\\rm per}(\\theta) + {\\gamma_0}I }$ satisfies\n\t\t\t for all $f \\in H_{\\rm per}^1({[0,1)^n})$, \n\t\t\t\\begin{equation*}\n\t\t\t\t{\\langle L_{\\gamma_0} {\\big[ f \\big]}, f \\rangle} \\geq b_0 {\\Vert f \\Vert}_{H_{\\rm per}^1({[0,1)^n})}^2.\n\t\t\t\\end{equation*}\n\t\t\t\\item[(ii)] The point spectrum of ${ L_{\\rm per}(\\theta) }$ is not empty and their eigenspaces have finite dimension, that is, the set\n\t\t\t\\begin{equation*}\n\t\t\t\t\\sigma_{\\rm point} {\\big( L_{\\rm per}(\\theta) \\big)} = \\{ \\lambda \\in \\mathbb{C} \\; ; \\; \\lambda \\; \\text{an eigenvalue of} \\; L_{\\rm per}(\\theta) \\}\n\t\t\t\\end{equation*}\n\t\t\tis not empty and for all ${ \\lambda \\in \\sigma_{\\rm point} {\\big( L_{\\rm per}(\\theta) \\big)} }$ fixed,\n\t\t\t\\begin{equation*}\n\t\t\t\t{\\rm dim} {\\big\\{ f \\in H^1_{\\rm per}({[0,1)^n}) \\; ; \\; L_{\\rm per}(\\theta){\\big[ f \\big]} = \\lambda f \\big\\}} < \\infty.\n\t\t\t\\end{equation*}\n\t\t\t\n\t\t\t\\item[(iii)] Every point in ${ \\sigma_{\\rm point}\\big( L_{\\rm per}(\\theta) \\big) }$ is isolated. 
\n\t\t\\end{enumerate}\n\t\\end{lemma}\n\n\\begin{remark}\nWe observe that, the properties of the ${ L_{\\rm per}(\\theta) }$, ${ \\theta \\in \\mathbb{R}^n }$,\ngiven by the Lemma~\\ref{7836565etyd43tre56rt3e54redgh} \ncan be conveyed to the space ${ \\mathcal{H} }$ in a natural way.\n\\end{remark}\n\t\n\nIn whats follow, we are interested in the study of spectral properties of the operator ${ L^{\\Phi_\\eta}(\\theta) }$ whose variational formulation is given by \n\t\\begin{equation}\\label{VarFor1}\n\t\\begin{split}\n\t\t& {\\left\\langle L^{\\Phi_\\eta}(\\theta)[f], g \\right\\rangle} := \\int_\\Omega \\int_{\\Phi_\\eta ([0,1)^n, \\omega)} A_{\\rm per} {\\left( \\Phi_\\eta^{-1} ( z, \\omega) \\right)} {\\big( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\big)} f{\\left( \\Phi_\\eta^{-1}(z,\\omega),\\omega \\right)} \\cdot \\\\\n\t\t& \\hspace{7cm} \\overline{ {\\big( \\nabla_{\\!\\! z} + 2i\\pi \\theta \\big)} g{\\left( \\Phi_\\eta^{-1}(z,\\omega),\\omega \\right)} } \\, dz \\, d\\mathbb{P}(\\omega) \\\\\n\t\t& + \\int_\\Omega \\int_{\\Phi_\\eta ([0,1)^n, \\omega)} V_{\\rm per} {\\left( \\Phi_\\eta^{-1} ( z, \\omega) \\right)} f{\\left( \\Phi_\\eta^{-1}(z,\\omega),\\omega \\right)} \\, \\overline{ g{\\left( \\Phi_\\eta^{-1}(z,\\omega),\\omega \\right)} } \\, dz \\, d\\mathbb{P}(\\omega),\n\t\\end{split}\n\t\\end{equation}\n\tfor ${ f,g \\in \\mathcal{H} }$. As we shall see in the next theorem, some of the spectral properties of the operator ${ L^{\\Phi_\\eta}(\\theta) }$ are inherited from the periodic case. \n\t\n\t\n\t\n\\begin{theorem}\n\\label{4087865567576ghghj}\n\t\tLet ${ \\Phi_\\eta }$, ${ \\eta \\in (0,1) }$ be a stochastic perturbation of identity and ${ \\theta_0 \\in \\mathbb{R}^n }$. 
If ${ \\lambda_0 }$ is an eigenvalue of ${ L_{\\rm per}(\\theta_0) }$ with multiplicity ${ k_0 \\in \\mathbb{N} }$, that is,\n\t\t\\begin{equation*}\n\t\t\t{\\rm dim} {\\big\\{ f \\in H^1_{\\rm per}([0,1)^n) \\; ; \\; L_{\\rm per}(\\theta_0){\\big[ f \\big]} = \\lambda_0 f \\big\\}} = k_0,\t\t\t\t\n\t\t\\end{equation*}\n\t\tthen there exist a neighbourhood ${ \\mathcal{U} }$ of ${ (0,\\theta_0) }$, ${ k_0 }$ real analytic functions\n\t\t\\begin{equation*}\n\t\t\t(\\eta,\\theta) \\in \\mathcal{U} \\; \\mapsto \\; \\lambda_k(\\eta,\\theta) \\in \\mathbb{R}, \\;\\; k\\in \\{1,\\ldots,k_0\\},\n\t\t\\end{equation*}\n\t\tand ${ k_0 }$ vector-value analytic maps \n\t\t\\begin{equation*}\n\t\t\t(\\eta,\\theta) \\in \\mathcal{U} \\; \\mapsto \\; \\psi_k(\\eta,\\theta) \\in \\mathcal{H} \\setminus \\{0\\}, \\;\\; k\\in \\{1,\\ldots,k_0\\},\n\t\t\\end{equation*}\n\t\tsuch that, for all ${ k\\in\\{1,\\ldots,k_0\\} }$,\n\t\t\\begin{itemize}\n\t\t\t\\item[(i)] ${ \\lambda_k(0,\\theta_0) = \\lambda_0 }$,\n\t\t\t\\item[(ii)] ${ L^{\\Phi_\\eta}(\\theta) {\\big[ \\psi_k(\\eta,\\theta) \\big]} = \\lambda_k(\\eta,\\theta) \\, \\psi_k(\\eta,\\theta) }$, ${ \\forall (\\eta,\\theta) \\in \\mathcal{U} }$,\n\t\t\t\\item[(iii)] ${ {\\rm dim}{\\big\\{ f \\in \\mathcal{H} \\; ; \\; L^{\\Phi_\\eta}(\\theta){\\big[ f \\big]}=\\lambda_k(\\eta,\\theta) f \\big\\}} \\leqslant k_0 }$, ${ \\forall (\\eta,\\theta) \\in \\mathcal{U} }$.\n\t\t\\end{itemize}\n\\end{theorem}\n\n\\begin{proof}\n1. The aim of this step is to rewrite the operator ${ L^{\\Phi_\\eta}(\\theta) \\in \\mathcal{B}(\\mathcal{H},\\mathcal{H}^\\ast) }$, for ${ \\eta\\in (0,1) }$ and ${ \\theta\\in\\mathbb{R}^n }$ \nas an expansion in the variable ${ (\\eta,\\theta) }$ of operators in ${ \\mathcal{B}(\\mathcal{H},\\mathcal{H}^\\ast) }$ around the point ${ (\\eta,\\theta)=(0,\\theta_0) }$. 
For this, \nusing the variational formulation~\eqref{VarFor1}, a change of variables, and the expansions~\eqref{654367ytr6tfclmlml}, we obtain\n \begin{equation*}\n\t\t\begin{split}\n\t\t\t& \!\!\! {\langle L^{\Phi_\eta}(\theta) {\big[ f \big]}, g \rangle} = \\\n\t\t\t& {\left[ \int_{[0,1)^n} \int_\Omega A_{\rm per}(y) {\left( \nabla_{\!\! y} + 2i\pi \theta \right)} f \cdot \overline{{\left( \nabla_{\!\! y} + 2i\pi \theta \right)} g} \, d\mathbb{P} \, dy + \int_{[0,1)^n} \int_\Omega V_{\rm per}(y) \, f \, \overline{g} \, d\mathbb{P} \, dy \right]} \\\n\t\t\t& \hspace{0.25cm} + \eta {\left[ \int_{[0,1)^n} \int_\Omega A_{\rm per}(y) {\left( -[\nabla_{\!\! y} Z](y,\omega)\nabla_{\!\! y} f \right)} \cdot \overline{{\left( \nabla_{\!\! y} + 2i\pi \theta \right)} g} \, d\mathbb{P} \, dy \right.} \\\n\t\t\t& \hspace{1.5cm} { + \int_{[0,1)^n} \int_\Omega A_{\rm per}(y) {\left( \nabla_{\!\! y} + 2i\pi \theta \right)} f \cdot \overline{{\left( -[\nabla_{\!\! y} Z](y,\omega) \nabla_{\!\! y} g \right)}} \, d\mathbb{P} \, dy } \\\n\t\t\t& \hspace{2.1cm} {\left. + \int_{[0,1)^n} \int_\Omega A_{\rm per}(y) {\left( \nabla_{\!\! y} + 2i\pi \theta \right)} f \cdot \overline{{\left( \nabla_{\!\! y} + 2i\pi \theta \right)} g} \,\, {\rm div}_{\! y} Z(y,\omega) \, d\mathbb{P} \, dy \right]} \\\n\t\t\t& \hspace{6.25cm} + \mathrm{O}(\eta^2),\n\t\t\end{split}\n\t\t\end{equation*}\n\t\tin $\mathbb{C}$ as ${ \eta \to 0 }$, for ${ f,g \in \mathcal{H} }$. \n\t\t\n\t\t\nNow, in order to expand in the variable ${ \theta }$ about the point ${ \theta=\theta_0 }$, it is convenient to rewrite the above expression in the form \t\n\t\t\begin{equation*}\n\t\t\t\begin{split}\n\t\t\t\t& \!
{\\left\\langle L^{\\Phi_\\eta}(\\theta) {\\big[ f \\big]}, g \\right\\rangle} = \\\\\n\t\t\t\t& ((\\eta,\\theta)-(0,\\theta_0))^{(0,\\boldsymbol{0})}{\\left( \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) {( \\nabla_{\\!\\! y} + 2i\\pi \\theta_0)} f \\cdot \\overline{ {( \\nabla_{\\!\\! y} + 2i\\pi \\theta_0)} g} \\, d\\mathbb{P} \\, dy \\right.} \\\\\n\t\t\t\t& \\hspace{9cm} {\\left. + \\int_{[0,1)^n} \\int_\\Omega V_{\\rm per}(y) \\, f \\, \\overline{g} \\, d\\mathbb{P} \\, dy \\right)} \\\\\n\t\t\t\t& + \\sum_{k=1}^n ((\\eta,\\theta)-(0,\\theta_0))^{(0,e_k)} {\\left( \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) {( \\nabla_{\\!\\! y} + 2i\\pi \\theta_0)} f \\cdot \\overline{(2i\\pi e_k g)} \\, d\\mathbb{P} \\, dy \\right.} \\\\\n\t\t\t\t& \\hspace{5.25cm} {\\left. + \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) (2i\\pi e_k f) \\cdot \\overline{ {( \\nabla_{\\!\\! y} + 2i\\pi \\theta_0)} g} \\, d\\mathbb{P} \\, dy \\right)}\n\t\t\t\\end{split}\n\t\t\\end{equation*}\n\t\t\\begin{equation*}\n\t\t\t\\begin{split}\n\t\t\t\t& + \\sum_{k, \\ell=1}^n ((\\eta,\\theta)-(0,\\theta_0))^{(0,e_k+e_\\ell)} {\\left( \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) (2i\\pi e_k f) \\cdot \\overline{(2i\\pi e_\\ell g)} \\, d\\mathbb{P} \\, dy \\right)} \\\\\n\t\t\t\t& + ((\\eta,\\theta)-(0,\\theta_0))^{(1,\\boldsymbol{0})} {\\left( \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) {\\left( -[\\nabla_{\\!\\! y} Z](y,\\omega)\\nabla_{\\!\\! y} f \\right)} \\cdot \\overline{ {( \\nabla_{\\!\\! y} + 2i\\pi \\theta_0)} g} \\, d\\mathbb{P} \\, dy \\right.} \\\\\n\t\t\t\t& \\hspace{2.15cm} + \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) {( \\nabla_{\\!\\! y} + 2i\\pi \\theta_0)} f \\cdot \\overline{{\\left( -[\\nabla_{\\!\\! y} Z](y,\\omega) \\nabla_{\\!\\! y} g \\right)}} \\, d\\mathbb{P} \\, dy \\\\ \n\t\t\t\t& \\hspace{2.15cm} {\\left. + \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) {( \\nabla_{\\!\\! y} + 2i\\pi \\theta_0)} f \\cdot \\overline{ {( \\nabla_{\\!\\! 
y} + 2i\\pi \\theta_0)} g} \\,\\, {\\rm div}_{\\! y} Z(y,\\omega) \\, d\\mathbb{P} \\, dy \\right)} \\\\\n\t\t\t\t& + \\sum_{k=1}^n ((\\eta,\\theta)-(0,\\theta_0))^{(1,e_k)} {\\left( \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) {\\left( -[\\nabla_{\\!\\! y} Z](y,\\omega) \\nabla_{\\!\\! y} f \\right)} \\cdot \\overline{(2i\\pi e_k g)} \\, d\\mathbb{P} \\, dy \\right.} \\\\\n\t\t\t\t& \\hspace{3cm} + \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) {(2i\\pi e_k f)} \\cdot \\overline{{\\left( -[\\nabla_{\\!\\! y} Z](y,\\omega) \\nabla_{\\!\\! y} g \\right)}} \\, d\\mathbb{P} \\, dy \\\\\n\t\t\t\t& \\hspace{3cm} + \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) {( \\nabla_{\\!\\! y} + 2i\\pi \\theta_0)} f \\cdot \\overline{(2i\\pi e_k g)} \\,\\, {\\rm div}_{\\! y} Z(y,\\omega) \\, d\\mathbb{P} \\, dy \\\\\n\t\t\t\t& \\hspace{3cm} {\\left. + \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) {(2i\\pi e_k f)} \\cdot \\overline{ {( \\nabla_{\\!\\! y} + 2i\\pi \\theta_0)} g} \\,\\, {\\rm div}_{\\! y} Z(y,\\omega) \\, d\\mathbb{P} \\, dy \\right)} \\\\\n\t\t\t\t& + \\sum_{k, \\ell=1}^n ((\\eta,\\theta)-(0,\\theta_0))^{(1,e_k + e_\\ell)} {\\left( \\int_{[0,1)^n} \\int_\\Omega A_{\\rm per}(y) {(2i\\pi e_k f)} \\cdot \\overline{(2i\\pi e_\\ell g)} \\,\\, {\\rm div}_{\\! y} Z(y,\\omega) \\, d\\mathbb{P} \\, dy \\right)} \\\\ \n\t\t\t\t& \\hspace{6.25cm}+ \\mathrm{O}(\\eta^2),\n\t\t\t\\end{split}\n\t\t\\end{equation*}\n\t\tin ${ \\mathbb{C} }$ as ${ \\eta \\to 0 }$, for ${ f,g \\in \\mathcal{H} }$, which is the expansion in the variable ${ (\\eta,\\theta) }$ around the point ${ (0,\\theta_0) }$. Here, for ${ (\\alpha,\\beta) \\in \\mathbb{N} \\times \\mathbb{N}^n }$ and ${ \\beta=(\\beta_1,\\ldots,\\beta_n) }$, we are using the multi-index notation ${ ((\\eta, \\theta)-(0,\\theta_0))^{(\\alpha,\\beta)} = \\eta^\\alpha \\prod_{k=1}^n (\\theta_k-\\theta_{0k})^{\\beta_k} }$. 
Now, noting that the term of order ${ (\eta,\theta)^{(0,\boldsymbol{0})} }$ is the variational formulation of ${ L_{\rm per}(\theta_0) }$ as in \eqref{753e6735827tdygetydr5de4se45se5}, we can rewrite the above expansion in the form\n\t\begin{equation}\label{iy87678yhghj354g}\n\t\tL^{\Phi_\eta}(\theta) = L_{\rm per}(\theta_0) + \sum_{{\vert (\alpha,\beta) \vert} = 1}^{3} ((\eta,\theta)-(0,\theta_0))^{(\alpha,\beta)}L_{(\alpha,\beta)} + \mathrm{O}(\eta^2),\n\t\end{equation}\n\tin ${ \mathcal{B}(\mathcal{H},\mathcal{H}^\ast) }$ as ${ \eta \to 0 }$, where ${ L_{(\alpha,\beta)} \in \mathcal{B}(\mathcal{H},\mathcal{H}^\ast) }$ and ${ {\vert (\alpha,\beta) \vert} = \alpha + \sum_{k=1}^n \beta_k }$.\n\t\t\nClearly, we can consider the parameters ${ (\eta,\theta) }$ in the set $B(0,1) \times \mathbb{C}^n$.\n\n2. In this step, we shall modify the expansion \eqref{iy87678yhghj354g} conveniently in order to obtain a holomorphic invertible operator in the variable ${ (\eta,\theta) }$. For this, \nrecall that, according to item ${ (i) }$ of Lemma \ref{7836565etyd43tre56rt3e54redgh}, there exists $\gamma_0>0$ such that the operator ${ L_{\rm per}(\theta_0) + {\gamma_0} I }$ is invertible. Then there exists ${ \delta>0 }$ such that the expansion\n\t\t\begin{equation}\label{87tyrtdfdcccdasxzsaxzsa}\n\t\t\tL^{\Phi_\eta}(\theta) + {\gamma_0} I = (L_{\rm per}(\theta_0) + {\gamma_0} I) + \sum_{{\vert (\alpha,\beta) \vert} = 1}^{3} ((\eta,\theta)-(0,\theta_0))^{(\alpha,\beta)}L_{(\alpha,\beta)}+ \mathrm{O}(\eta^2),\n\t\t\end{equation}\n\t\tin ${ \mathcal{B}(\mathcal{H},\mathcal{H}^\ast) }$ as ${ \eta \to 0 }$, is invertible for all ${ (\eta,\theta) \in B(0,\delta) \times B(\theta_0,\delta) }$, since the set of invertible bounded operators ${ GL(\mathcal{H},\mathcal{H}^\ast) }$ is an open subset of ${ \mathcal{B}(\mathcal{H},\mathcal{H}^\ast) }$.
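To make the openness argument more explicit, one can sketch it via the Neumann series. Writing ${ E(\eta,\theta) := L^{\Phi_\eta}(\theta) - L_{\rm per}(\theta_0) }$ (a notation used only in this paragraph), we have ${ L^{\Phi_\eta}(\theta) + {\gamma_0} I = (L_{\rm per}(\theta_0) + {\gamma_0} I) + E(\eta,\theta) }$ and\n\begin{equation*}\n{\big( L^{\Phi_\eta}(\theta) + {\gamma_0} I \big)}^{-1} = \sum_{j=0}^{\infty} {\Big( - (L_{\rm per}(\theta_0) + {\gamma_0} I)^{-1} E(\eta,\theta) \Big)}^{j} (L_{\rm per}(\theta_0) + {\gamma_0} I)^{-1},\n\end{equation*}\nwhich converges in ${ \mathcal{B}(\mathcal{H}^\ast,\mathcal{H}) }$ whenever ${ {\Vert E(\eta,\theta) \Vert}_{\mathcal{B}(\mathcal{H},\mathcal{H}^\ast)} < {\Vert (L_{\rm per}(\theta_0) + {\gamma_0} I)^{-1} \Vert}_{\mathcal{B}(\mathcal{H}^\ast,\mathcal{H})}^{-1} }$; by \eqref{87tyrtdfdcccdasxzsaxzsa}, this smallness condition holds for ${ (\eta,\theta) }$ sufficiently close to ${ (0,\theta_0) }$.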
Now, we denote by ${ S(\eta,\theta) }$ the inverse operator of ${ L^{\Phi_\eta}(\theta) + {\gamma_0} I }$, ${ (\eta, \theta) \in B(0,\delta) \times B(\theta_0,\delta) }$. Since the map ${ L \in GL(\mathcal{H},\mathcal{H}^\ast) \mapsto L^{-1} \in \mathcal{B}(\mathcal{H}^\ast,\mathcal{H}) }$ is continuous, the map\n\t\t\begin{equation*}\n\t\t\t(\eta, \theta) \in B(0,\delta) \times B(\theta_0,\delta) \mapsto S(\eta,\theta) \in \mathcal{B}(\mathcal{H}^\ast, \mathcal{H})\n\t\t\end{equation*}\n\t\tis continuous. As a consequence, for ${ (\widetilde{\eta}, \widetilde{\theta}) \in B(0,\delta) \times B(\theta_0,\delta) }$ fixed, the limit of\n\t\t\begin{equation*}\n\t\t\t\frac{S(\eta, \widetilde{\theta}) - S(\widetilde{\eta}, \widetilde{\theta})}{\eta - \widetilde{\eta}} = -S(\eta, \widetilde{\theta}) {\left[ \frac{(L^{\Phi_\eta}(\widetilde{\theta}) + {\gamma_0} I) - (L^{\Phi_{\widetilde{\eta}}}(\widetilde{\theta}) + {\gamma_0} I)}{\eta - \widetilde{\eta}} \right]} S(\widetilde{\eta}, \widetilde{\theta}),\n\t\t\end{equation*}\n\t\tas ${ \eta \to \widetilde{\eta} }$, ${ \eta \neq \widetilde{\eta} }$, exists. Thus, ${ \eta \in B(0,\delta) \mapsto S(\eta,\widetilde{\theta}) }$ is a holomorphic map. Analogously, for ${ j\in{\{1,\ldots,n\}} }$, we can prove that\n\t\t\begin{equation*}\n\t\t\t\theta_j \mapsto S(\widetilde{\eta}, \widetilde{\theta}_1, \ldots, \widetilde{\theta}_{j-1}, \theta_j, \widetilde{\theta}_{j+1}, \ldots, \widetilde{\theta}_n) \n\t\t\end{equation*}\n\t\tis a holomorphic map. Therefore, by Osgood's Lemma, see for instance \cite{GunningRossi}, we conclude that\n\t\t\begin{equation}\label{9867967689ndyfh}\n\t\t\t(\eta, \theta) \in B(0,\delta) \times B(\theta_0,\delta) \mapsto S(\eta,\theta) \in \mathcal{B}(\mathcal{H}^\ast, \mathcal{H})\n\t\t\end{equation}\n\t\tis a holomorphic function.\n\t\t\n\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n3.
Finally, we are in a position to prove items ${ (i) }$, ${ (ii) }$ and ${ (iii) }$ (the spectral analysis of the operator ${ S(\eta, \theta) }$). First, we shall note that for \n${ (\eta,\theta) }$ in a neighbourhood of ${ (0,\theta_0) }$, the map ${ (\eta, \theta) \mapsto S(\eta, \theta) }$ satisfies the assumptions of Theorem \ref{768746hughjg576}. \nWe begin by recalling that the restriction operator ${ T \in \mathcal{B}(\mathcal{H}^\ast, \mathcal{H}) \mapsto T\big\vert_\mathcal{L} \in \mathcal{B}(\mathcal{L}, \mathcal{L}) }$ is continuous and satisfies\n\t\t\begin{equation}\label{78687326tygd53tegdcx}\n\t\t\t{\Vert T \Vert}_{\mathcal{B}(\mathcal{L}, \mathcal{L})} \leqslant {\Vert T \Vert}_{\mathcal{B}(\mathcal{H}^\ast, \mathcal{H})} \, , \;\, \forall T \in \mathcal{B}(\mathcal{H}^\ast, \mathcal{H}).\n\t\t\end{equation}\n\t\tThen, by \eqref{9867967689ndyfh}, the map ${ (\eta, \theta) \in B(0,\delta) \times B(\theta_0,\delta) \mapsto S(\eta, \theta) \in \mathcal{B}(\mathcal{L}, \mathcal{L}) }$ is holomorphic. Since holomorphic maps are, locally, analytic maps, there exist a neighbourhood ${ \mathcal{U} }$ of ${ (0,\theta_0) }$, ${ (0, \theta_0) \in \mathcal{U} \subset \mathbb{C} \times \mathbb{C}^n }$, and a family ${ \{S_{\sigma}\}_{\sigma \in \mathbb{N} \times \mathbb{N}^n} }$ contained in ${ \mathcal{B}(\mathcal{L}, \mathcal{L}) }$, such that\n\t\t\begin{equation}\label{rtfgrffcfdfdfdfdssdadssss}\n\t\t\tS(\eta, \theta) = S_{0} + \sum_{\substack{\sigma \in \mathbb{N} \times \mathbb{N}^n \\ {\vert \sigma \vert} \neq 0}} ((\eta, \theta)-(0,\theta_0))^\sigma S_\sigma \, , \;\, \forall (\eta, \theta) \in \mathcal{U}.\n\t\t\end{equation}\n\t\t\n\t\t\tUsing \eqref{87tyrtdfdcccdasxzsaxzsa} and \eqref{rtfgrffcfdfdfdfdssdadssss}, it is easy to see that \n${ S_0 = (L_{\rm per}(\theta_0) + {\gamma_0} I)^{-1} \big\vert_\mathcal{L}}$.
Notice also that ${ \mu_0 := {\left( \lambda_0+\gamma_0 \right)}^{-1} }$ is an eigenvalue of ${ S_0 }$ if and only if ${ \lambda_0 }$ is an eigenvalue of ${ L_{\rm per}(\theta_0) }$, that is,\n\t\t\begin{equation*}\n\t\t\tg \in {\{ f \in \mathcal{L} \; ; \; S_0 {\big[ f \big]} = \mu_0 f \}} \; \Leftrightarrow \; g \in {\{ f \in \mathcal{L} \; ; \; L_{\rm per}(\theta_0) {\big[ f \big]} =\lambda_0 f \}}.\n\t\t\end{equation*}\n\t\n\medskip\n\t\t\n\t\tThe final part of the proof is a direct application of Theorem~\ref{768746hughjg576}. By assumption, $\mu_0$ is a real eigenvalue of the operator $S_0$ with \nmultiplicity $k_0$. Hence, by Theorem~\ref{768746hughjg576}, there exist a neighbourhood ${ \widetilde{\mathcal{U}} }$ of ${ (0,\theta_0) }$, with ${ \widetilde{\mathcal{U}} \subset \mathcal{U} }$, and analytic maps\n\t\t\begin{equation*}\n\t\t\begin{array}{l}\n\t\t\t(\eta, \theta) \in \widetilde{\mathcal{U}} \; \longmapsto \; \mu_{0 1}(\eta,\theta), \mu_{0 2}(\eta,\theta), \ldots, \mu_{0 k_0}(\eta, \theta) \in (0,\infty), \\ [5pt]\n\t\t\t(\eta, \theta) \in \widetilde{\mathcal{U}} \; \longmapsto \; \psi_{0 1}(\eta,\theta), \psi_{0 2}(\eta,\theta), \ldots, \psi_{0 k_0}(\eta,\theta) \in \mathcal{L}\setminus\{0\},\n\t\t\end{array}\n\t\t\end{equation*}\n\t\tsuch that\n\t\t\begin{itemize}\n\t\t\t\item ${ \mu_{0 \ell} (0,\theta_0) = \mu_0 }$, \n\t\t\t\item ${ S(\eta, \theta) {\big[ \psi_{0 \ell}(\eta, \theta) \big]} = \mu_{0 \ell}(\eta, \theta) \psi_{0 \ell}(\eta, \theta) }$, \; ${ \forall (\eta, \theta) \in \widetilde{\mathcal{U}} }$,\n\t\t\t\item ${ {\rm dim}{\{ f \in \mathcal{L} \; ; \; S(\eta,\theta){\big[ f \big]} = \mu_{0 \ell}(\eta,\theta) f \}}\leqslant k_0 }$, \; ${ \forall (\eta, \theta) \in \widetilde{\mathcal{U}} }$, \n\t\t\end{itemize}\t\t\n\t\tfor all ${ \ell \in \{1, \ldots, k_0\} }$.
Thus, the proof of item ${ (i) }$ is complete. \n\t\t\n\t\t\n\t\tUsing the second equality above, we obtain \n\t\t\begin{eqnarray*}\n\t\t\t(L^{\Phi_\eta}(\theta) + {\gamma_0} I) {\big[ \psi_{0 \ell}(\eta, \theta) \big]} & = & \frac{1}{\mu_{0 \ell}(\eta, \theta)} (L^{\Phi_\eta}(\theta) + {\gamma_0} I){\left\{ S(\eta, \theta) {\big[ \psi_{0 \ell}(\eta, \theta) \big]} \right\}} \\ [5pt]\n\t\t\t& = & \frac{1}{\mu_{0 \ell}(\eta, \theta)} \psi_{0 \ell}(\eta, \theta), \n\t\t\end{eqnarray*}\nwhich implies that ${ L^{\Phi_\eta}(\theta) {\big[ \psi_{0 \ell}(\eta, \theta) \big]} = \lambda_{0\ell}(\eta,\theta) \psi_{0 \ell}(\eta, \theta) }$, for ${ (\eta, \theta) \in \widetilde{\mathcal{U}} }$, ${ \ell\in \{1, \ldots, k_0\} }$ and ${ \lambda_{0 \ell}(\eta,\theta) := [\mu_{0 \ell}(\eta, \theta)]^{-1} - {\gamma_0} }$. This finishes the proof of item ${ (ii) }$.\n\t\t\n\medskip \n\n\t\tFinally, note that ${ S(\eta,\theta) \big[ \mathcal{L} \big] \subset \mathcal{H}}$ and \n\t\t\begin{equation*}\n\t\t\tg \! \in \! \big\{ f \in \mathcal{H} ; S(\eta,\theta) {\big[ f \big]} \! = \! \mu_{0\ell}(\eta,\theta) f \big\} \Leftrightarrow g \! \in \! \big\{ f \in \mathcal{H} \; ; \; L^{\Phi_\eta}(\theta) {\big[ f \big]} \! = \! \lambda_{0\ell}(\eta,\theta) f \big\},\n\t\t\end{equation*}\n\t\twhich concludes the proof of item ${ (iii) }$. Therefore, the proof is complete.
\n\\end{proof}\n\n\\subsection{Homogenization Analysis of the Perturbed Model}\n\nIn this section, we shall investigate in which way the stochastic perturbation of \nthe identity characterize the form of the coefficients, during the \nasymptotic limit of the Schr\\\"odinger equation \n\t\\begin{multline}\\label{765tdyyuty67tsss}\n\t\t\\left\\{\n\t\t\\begin{array}{l}\n\t\t\ti\\displaystyle\\frac{\\partial u_{\\eta\\varepsilon}}{\\partial t} - {\\rm div} {\\bigg( A_{\\rm per} {\\left( \\Phi_\\eta^{-1} {\\left( \\frac{x}{\\varepsilon}, \\omega \\right)} \\right)} \\nabla u_{\\eta\\varepsilon} \\bigg)} \\\\ [14pt]\n\t\t\t+ {\\bigg( \\displaystyle\\frac{1}{\\varepsilon^2} V_{\\rm per} {\\left( \\Phi_\\eta^{-1} {\\left( \\displaystyle\\frac{x}{\\varepsilon}, \\omega \\right)} \\right)} + U_{\\rm per} {\\left( \\Phi_\\eta^{-1} {\\left( \\displaystyle\\frac{x}{\\varepsilon}, \\omega \\right)} \\right)} \\bigg)} u_{\\eta\\varepsilon} = 0 \\quad \\text{in} \\;\\, \\mathbb{R}^{n+1}_T \\! \\times \\! \\Omega, \\\\ [14pt]\n\t\t\tu_{\\eta\\varepsilon} (0,x,\\omega)=u_{\\eta\\varepsilon}^0(x,\\omega), \\; \\; (x,\\omega) \\in \\mathbb{R}^n \\! \\times \\! \\Omega,\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{multline}\nwhere ${ 0 < T < \\infty }$, ${ \\mathbb{R}^{n+1}_T = (0,T) \\times \\mathbb{R}^n }$. The coefficients are accomplishing of the periodic functions ${ A_{\\rm per}(y) }$, ${ V_{\\rm per}(y) }$, ${ U_{\\rm per}(y) }$ (as defined in the last subsection) with a stochastic perturbation of identity ${ \\Phi_\\eta }$, ${ \\eta \\in (0,1) }$, presenting an rate of oscillation ${ \\varepsilon^{-1} }$, ${ \\varepsilon>0 }$. 
The function ${ u_{\eta\varepsilon}^0(x,\omega) }$ is well-prepared initial data (see~\eqref{well-prep.I}); this well-preparedness relies on natural periodic conditions, \nnamely the existence of a pair \n${ \big( \theta^\ast, \lambda_{\rm per}(\theta^\ast) \big) \in \mathbb{R}^n \times \mathbb{R} }$ such that\n\t\begin{equation}\label{7t8drtys65edsrt3xcvvcxcvb}\n\t\t\begin{split}\n\t\t\t(i) & \;\;\, \lambda_{\rm per}(\theta^\ast) \; \text{is a simple eigenvalue of} \; L_{\rm per}(\theta^\ast), \\\n\t\t\t(ii) & \;\;\, \theta^\ast \; \text{is a critical point of} \; \lambda_{\rm per}(\cdot), \, \text{that is}, \nabla_{\!\! \theta} \lambda_{\rm per}(\theta^\ast)=0.\n\t\t\end{split}\n\t\end{equation}\n\t\n\t\n\tBy condition ${ (i) }$ and Theorem \ref{4087865567576ghghj}, there exist a neighborhood ${ \mathcal{U} }$ of ${ (0,\theta^{\ast}) }$ and analytic maps\n\t\begin{equation}\label{67ty3uhrjefd67tgrefdcx8ur7u}\n\t\t\begin{split}\n\t\t\t(i) & \;\;\, (\eta,\theta) \in \mathcal{U} \; \mapsto \; \lambda(\eta,\theta) \in \mathbb{R}, \\\n\t\t\t(ii) & \;\;\, (\eta,\theta) \in \mathcal{U} \; \mapsto \; \psi(\eta,\theta) \in \mathcal{H}\setminus\{0\},\n\t\t\end{split}\n\t\end{equation}\n\tsuch that ${ \lambda(0,\theta^\ast) = \lambda_{\rm per}(\theta^\ast) }$, ${ L^{\Phi_\eta}(\theta) \big[ \psi(\eta,\theta) \big] = \lambda(\eta,\theta) \, \psi(\eta,\theta) }$ and\n\t\begin{equation*}\n\t\t{\rm dim} \big\{ f \in \mathcal{H} \; ; \; L^{\Phi_\eta} (\theta) \big[ f \big] = \lambda(\eta,\theta) \, f \big\} = 1, \; \forall (\eta,\theta) \in \mathcal{U}.\n\t\end{equation*}\n\t\n\t\tThus,\n\t\begin{equation}\label{7rter44}\n\t\t\lambda(\eta,\theta) \; \text{is a simple eigenvalue of} \; L^{\Phi_\eta}(\theta), \; \forall (\eta,\theta) \in \mathcal{U}.\n\t\end{equation}\n\t\n\t\nAdditionally, as ${
\\lambda(0,\\theta^\\ast)=\\lambda_{\\rm per}(\\theta^\\ast) }$ is an isolated point of ${ \\sigma_{\\rm point} \\big(L_{\\rm per}(\\theta^\\ast) \\big) }$ (any point has this property\t), ${ \\lambda(\\eta,\\theta) }$ is an isolated point of ${ \\sigma_{\\rm point} \\big(L^{\\Phi_\\eta}(\\theta^\\ast) \\big) }$ for each ${ (\\eta,\\theta) \\in \\mathcal{U} }$. Thus, we have \n${ \\lambda(0,\\cdot) = \\lambda_{\\rm per}(\\cdot) }$ in a neighbourhood of ${ \\theta^\\ast }$. We now denote ${ \\psi_{\\rm per}(\\cdot) := \\psi(0,\\cdot) }$. Without loss of generality, we assume ${ \\int_{[0,1)^n} {\\vert \\psi_{\\rm per}(\\theta^\\ast) \\vert}^2 dy = 1 }$. Moreover, we shall assume that the homogenized (periodic) matrix \n${ A_{\\rm per}^\\ast = D_{\\! \\theta}^2 \\lambda_{\\rm per}(\\theta^\\ast) }$ is invertible which happens if $\\theta=\\theta^{\\ast}$ is a point of local minimum or local maximum strict of \n$\\mathbb R^n\\ni \\theta\\mapsto \\lambda_{\\rm per}(\\theta)$. Thus, an immediate application of the Implicit Function Theorem gives us the following lemma:\n\n\\begin{lemma}\\label{6487369847639gfhdghjdftrtrtfgcbvbv}\n\t\tLet the condition \\eqref{7t8drtys65edsrt3xcvvcxcvb} be satisfied and ${ A_{\\rm per}^\\ast}$ be an invertible matrix. Then, there exists a neighborhood ${ \\mathcal{V} }$ of ${ 0 }$, ${ 0 \\in \\mathcal{V} \\subset \\mathbb{R} }$, and a ${ \\mathbb{R}^n }$-value analytic map\n\t\t\\begin{equation*}\n\t\t\t\\theta (\\cdot) : \\eta \\in \\mathcal{V} \\mapsto \\theta(\\eta) \\in \\mathbb{R}^n, \n\t\t\\end{equation*}\n\t\tsuch that ${ \\theta(0)=\\theta^\\ast }$ and\n\t\t\\begin{equation}\\label{67trsdasdsoktig}\n\t\t\t\\nabla_{\\!\\! 
\\theta} \\lambda \\big( \\eta,\\theta(\\eta) \\big) = 0, \\;\\; \\forall \\eta \\in \\mathcal{V}.\n\t\t\\end{equation}\n\\end{lemma}\n\n\nBy the analytic structure of the functions in \\eqref{67ty3uhrjefd67tgrefdcx8ur7u} and the Lemma \\ref{6487369847639gfhdghjdftrtrtfgcbvbv}, there exists a neighborhood ${ \\mathcal{V} }$ of ${ 0 }$, ${ 0 \\in \\mathcal{V} \\subset \\mathbb{R} }$, such that \n\t\\begin{equation}\\label{563gdc}\n\t\t\\begin{split}\n\t\t\t(i) & \\;\\;\\, \\eta \\in \\mathcal{V} \\; \\mapsto \\; \\lambda \\big( \\eta, \\theta(\\eta) \\big) \\in \\mathbb{R}, \\\\\n\t\t\t(ii) & \\;\\;\\, \\eta \\in \\mathcal{V} \\; \\mapsto \\; \\psi \\big( \\eta, \\theta(\\eta) \\big) \\in \\mathcal{H} \\setminus \\{0\\}, \\\\\n\t\t\t(iii) & \\;\\;\\, \\eta \\in \\mathcal{V} \\; \\mapsto \\; \\xi_k \\big( \\eta, \\theta(\\eta) \\big) \\in \\mathcal{H}, \\forall \\{1,\\ldots,n\\},\n\t\t\\end{split}\n\t\\end{equation}\n\tare analytic functions, where ${ \\xi_k(\\eta,\\theta) := (2i \\pi)^{-1}{\\partial_{\\theta_k} \\psi} (\\eta,\\theta) }$, for ${ k\\in\\{1,\\ldots,n\\} }$. We also consider ${ \\xi_{k,{\\rm per}}(\\cdot) = \\xi_k(0,\\cdot) }$. Furthermore, by \\eqref{7rter44} and \\eqref{67trsdasdsoktig}, for each fixed ${ \\eta \\in \\mathcal{V} }$ we have that the pair ${ \\big( \\theta(\\eta),\\lambda \\big( \\eta, \\theta(\\eta) \\big) \\big) \\in \\mathbb{R}^n \\times \\mathbb{R} }$ satisfies: \n\t\\begin{equation}\\label{674tyghd}\n\t\t\\begin{split}\n\t\t\t(i) & \\;\\;\\, \\lambda(\\eta,\\theta(\\eta)) \\; \\text{is a simple eigenvalue of} \\; L^{\\Phi_\\eta}\\big( \\theta(\\eta) \\big), \\\\\n\t\t\t(ii) & \\;\\;\\, \\theta(\\eta) \\; \\text{is a critical point of} \\; \\lambda(\\eta,\\cdot), \\, \\text{that is}, \\nabla_{\\!\\! \\theta} \\lambda(\\eta, \\theta(\\eta)) = 0.\n\t\t\\end{split}\n\t\\end{equation}\n\tThis means that the Theorem \\ref{876427463tggfdhgdfgkkjjlmk} can be used. 
Before doing so, we introduce a simplified notation for the functions in \eqref{563gdc}:\n\t\begin{equation}\label{798723678rtyd5rdftrgdfdfsdssss}\n\t\t\begin{split}\n\t\t\t(i) & \;\;\, \theta_\eta := \theta(\eta), \\\n\t\t\t(ii) & \;\;\, \lambda_\eta := \lambda \big( \eta,\theta(\eta) \big), \\\n\t\t\t(iii) & \;\;\, \psi_\eta := \psi \big( \eta,\theta(\eta) \big), \\\n\t\t\t(iv) & \;\;\, \xi_{k,\eta} := \xi_k \big( \eta,\theta(\eta) \big), \, k\in\{1,\ldots,n\}.\n\t\t\end{split}\n\t\end{equation}\n\n\n\tFinally, from \eqref{674tyghd}, for each fixed ${ \eta \in \mathcal{V} }$, the notion of well-preparedness for the initial data $u_{\eta\varepsilon}^0$ is given as follows. \n\t\n\t\begin{equation}\label{well-prep.I}\n\t\tu_{\eta\varepsilon}^0(x,\omega) = e^{2i\pi \frac{\theta_\eta \cdot x}{\varepsilon}} \, v^0(x) \, \psi_\eta {\left( \Phi_{\eta}^{-1} {\left( \frac{x}{\varepsilon}, \omega \right)}, \omega \right)}, \; (x,\omega) \in \mathbb{R}^n \times \Omega,\n\t\end{equation}\nwhere ${ v^0 \in C_{\rm c}^\infty(\mathbb{R}^n) }$. Thus, applying Theorem \ref{876427463tggfdhgdfgkkjjlmk}, if ${ u_{\eta\varepsilon} }$ is the solution of \eqref{765tdyyuty67tsss}, the sequence in ${ \varepsilon>0 }$\n\t\begin{equation*}\n\t\tv_{\eta\varepsilon}(t,x,\widetilde{\omega}) = e^{ -{\left( i \frac{\lambda_\eta t}{\varepsilon^2} + 2i\pi \frac{\theta_\eta \cdot x}{\varepsilon} \right)} } u_{\eta\varepsilon}(t,x,\widetilde{\omega}), \;\, (t,x,\widetilde{\omega}) \in \mathbb{R}^{n+1}_T \times \Omega, \n\t\end{equation*}\n\t$\Phi_{\omega}-$two-scale converges to the limit ${ v_{\eta}(t,x) \, \psi_{\eta}{\big( \Phi_{\eta}^{-1}(z,\omega),\omega \big)} }$ with\n\t\begin{equation*}\n\t\t\lim_{\varepsilon \to 0} \iint_{\mathbb{R}^{n+1}_T} \!
{\\left\\vert v_{\\eta \\varepsilon} (t,x,\\widetilde{\\omega}) - v_{\\eta}(t,x) \\, \\psi_{\\eta}{\\left( \\Phi^{-1}_{\\eta} {\\left(\\frac{x}{\\varepsilon},\\widetilde{\\omega} \\right)}, \\widetilde{\\omega} \\right)} \\right\\vert}^2 dx \\, dt \\, = \\, 0,\n\t\\end{equation*}\nfor a.e. ${ \\widetilde{\\omega} \\in \\Omega }$, where ${ v_{\\eta} \\in C \\big( [0,T], L^2(\\mathbb{R}^n) \\big) }$ is the unique solution of the homogenized Schr\\\"odinger equation \n\t\\begin{equation}\\label{askdjfhucomojkfdfd}\n\t\t\\left\\{\n\t\t\\begin{array}{c}\n\t\t\ti \\displaystyle\\frac{\\partial v_\\eta}{\\partial t} - {\\rm div} {\\left( A^\\ast_{\\eta} \\nabla v_\\eta \\right)} + U_{\\! \\eta}^\\ast v_\\eta = 0 \\, , \\;\\, \\text{in} \\;\\, \\mathbb{R}^{n+1}_T, \\\\ [7.5pt]\n\t\t\tv_\\eta(0,x) = v^0(x) \\, , \\;\\, x\\in \\mathbb{R}^n,\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{equation}\nwith effective coefficients ${ A^\\ast_{\\eta} = D_{\\! \\theta}^2 \\lambda \\big( \\eta,\\theta(\\eta) \\big) }$ and \n\t\\begin{equation}\\label{783874tgffffg}\n\t\tU^\\ast_{\\! 
\\eta} = c^{-1}_{\\eta} \\int_\\Omega \\int_{\\Phi_{\\eta}([0,1)^n, \\omega)} U_{\\rm per}{\\big( \\Phi^{-1}_{\\eta} (z, \\omega) \\big)} {\\left\\vert \\psi_{\\eta} {\\big( \\Phi^{-1}_{\\eta} (z,\\omega), \\omega \\big)} \\right\\vert}^2 dz \\, d\\mathbb{P}(\\omega),\n\t\\end{equation}\n\twhere\n\t\\begin{equation}\\label{874326984yghedf}\n\t\tc_{\\eta} = \\int_\\Omega \\int_{\\Phi_{\\eta}([0,1)^n, \\omega)} {\\left\\vert \\psi_{\\eta} {\\big( \\Phi^{-1}_{\\eta} (z,\\omega), \\omega \\big)} \\right\\vert}^2 dz \\, d\\mathbb{P}(\\omega).\n\t\\end{equation}\n\t\n\\begin{remark}\n\t\tWe recall that, using the equality~\\eqref{7358586tygfjdshfbvvcc}, for each fixed ${ \\eta }$ the matrix ${ B_\\eta \\in \\mathbb{R}^{n \\times n} }$ must satisfy\n\t\t\\begin{equation}\\label{786587tdyghs7rsdfxsdfsdf}\n\t\t\\begin{split}\n\t\t\t& (B_\\eta)_{k\\ell} := c_\\eta^{-1} \\bigg[ \\int_\\Omega\\int_{\\Phi_\\eta([0,1)^n,\\omega)} A_{\\rm per}{\\left( \\Phi_\\eta^{-1}(z,\\omega) \\right)} {\\left( e_\\ell \\, \\psi_\\eta{\\left( \\Phi_\\eta^{-1}(z,\\omega),\\omega \\right)} \\right)} \\cdot \\\\\n\t\t\t& \\hspace{7.85cm} \\overline{\\left( e_k \\, \\psi_\\eta{\\left( \\Phi_\\eta^{-1}(z,\\omega),\\omega \\right)} \\right)} \\, dz \\, d\\mathbb{P}(\\omega) \\\\\n\t\t\t& + \\int_\\Omega\\int_{\\Phi_\\eta([0,1)^n,\\omega)} A_{\\rm per}{\\left( \\Phi_\\eta^{-1}(z,\\omega) \\right)} {\\left( e_\\ell \\, \\psi_\\eta{\\left( \\Phi_\\eta^{-1}(z,\\omega),\\omega \\right)} \\right)} \\cdot \\\\\n\t\t\t& \\hspace{6cm} \\overline{{\\left( \\nabla_{\\!\\! z} + 2i\\pi\\theta_\\eta \\right)} {\\left( \\xi_{k,\\eta}{\\left( \\Phi_\\eta^{-1}(z,\\omega),\\omega \\right)} \\right)}} \\, dz \\, d\\mathbb{P}(\\omega) \\\\\n\t\t\t& - \\int_\\Omega\\int_{\\Phi_\\eta([0,1)^n,\\omega)} A_{\\rm per}{\\left( \\Phi_\\eta^{-1}(z,\\omega) \\right)} {\\left( \\nabla_{\\!\\! 
z} + 2i\\pi\\theta_\\eta \\right)} {\\left( \\psi_\\eta{\\left( \\Phi_\\eta^{-1}(z,\\omega),\\omega \\right)} \\right)} \\cdot \\\\\n\t\t\t& \\hspace{7.8cm} \\overline{{\\left( e_\\ell \\, \\xi_{k,\\eta}{\\left( \\Phi_\\eta^{-1}(z,\\omega),\\omega \\right)} \\right)}} \\, dz \\, d\\mathbb{P}(\\omega) \\bigg],\n\t\t\\end{split}\n\t\t\\end{equation}\n\t\tfor ${ k,\\ell \\in \\{1,\\ldots,n\\} }$ and the homogenized matrix can be written as ${ A_\\eta^\\ast = 2^{-1} {\\big( B_\\eta + B_\\eta^t \\big)} }$.\t\n\\end{remark}\n\t\n\\subsubsection{Expansion of the effective coefficients}\nAs a consequence of the formulas for the effective coefficients of the homogenized equation~\\eqref{askdjfhucomojkfdfd}, we have the following proposition:\n\n\t\\begin{proposition}\\label{jnchndhbvgfbdtegdferfer}\n\t\tThe maps ${ \\eta \\mapsto A_\\eta^\\ast \\in \\mathbb{R}^{n \\times n} }$, ${ \\eta \\mapsto B_\\eta \\in \\mathbb{R}^{n \\times n} }$ and ${ \\eta \\mapsto U_\\eta^\\ast \\in \\mathbb{R} }$ are analytic in a neighbourhood of ${ \\eta=0 }$.\n\t\\end{proposition}\n\\begin{proof}\n\t\tLet ${ \\mathcal{U} }$ and ${ \\mathcal{V} }$ be as in \\eqref{67ty3uhrjefd67tgrefdcx8ur7u} and \\eqref{563gdc}, respectively. For each ${ \\eta \\in \\mathcal{V} }$, the above arguments give us the formula ${ A^\\ast_{\\eta} = D_{\\! \\theta}^2 \\lambda \\big( \\eta,\\theta(\\eta) \\big) }$. Thus, as ${ (\\eta,\\theta) \\in \\mathcal{U} \\mapsto D_{\\! \\theta}^2\\lambda(\\eta,\\theta) \\in \\mathbb{R}^{n \\times n} }$ and ${ \\eta \\in \\mathcal{V} \\mapsto \\theta(\\eta) \\in \\mathbb{R}^n }$ are analytic maps, we conclude that ${ \\eta \\in \\mathcal{V} \\mapsto D_\\theta^2 \\lambda(\\eta,\\theta(\\eta)) \\in \\mathbb{R}^{n \\times n} }$ is also an analytic map. This means that ${ \\eta \\mapsto A_\\eta^\\ast }$ is an analytic map. 
\n\n\\medskip\n\n\t\tFrom \\eqref{783874tgffffg} and \\eqref{874326984yghedf}, making a change of variables, we have\n\t\t\\begin{equation*}\n\t\t\tU^\\ast_{\\! \\eta} = c^{-1}_{\\eta} \\int_\\Omega \\int_{[0,1)^n} U_{\\rm per}(y) {\\left\\vert \\psi_{\\eta}(y,\\omega) \\right\\vert}^2 {\\rm det} [\\nabla_{\\!\\! y} \\Phi_\\eta (y,\\omega)] \\, dy \\, d\\mathbb{P}(\\omega)\n\t\t\\end{equation*}\n\t\tand\n\t\t\\begin{equation*}\n\t\t\tc_\\eta = \\int_\\Omega \\int_{[0,1)^n} {\\left\\vert \\psi_{\\eta}(y,\\omega) \\right\\vert}^2 {\\rm det} [\\nabla_{\\!\\! y} \\Phi_\\eta (y,\\omega)] \\, dy \\, d\\mathbb{P}(\\omega) \\not= 0.\n\t\t\\end{equation*}\n\t\tThen, as the map ${ \\eta \\mapsto \\psi_\\eta \\in \\mathcal{H} \\setminus\\{0\\} }$ is analytic, the map ${ \\eta \\mapsto c_\\eta \\not= 0 }$ is also analytic. Hence the map ${ \\eta \\mapsto c_\\eta^{-1} }$ is analytic. Therefore, ${ \\eta \\mapsto U_\\eta^\\ast }$ is analytic. The analyticity of ${ \\eta \\mapsto B_\\eta }$ follows in the same way from \\eqref{786587tdyghs7rsdfxsdfsdf}, after the same change of variables.\n\t\\end{proof}\n\t\nAs a consequence of this proposition, there exist ${ \\{ A^{(j)},\\,B^{(j)} \\}_{j \\in \\mathbb{N}} \\subset \\mathbb{R}^{n \\times n} }$ and ${ \\{ U^{(j)} \\}_{j \\in \\mathbb{N}} \\subset \\mathbb{R} }$ such that \n\t\\begin{equation}\\label{678435}\n\t\t\\left\\{\n\t\t\\begin{array}{lll}\n\t\t\tA_\\eta^\\ast & = & A^{(0)} + \\eta A^{(1)} + \\eta^2 A^{(2)} +\\ldots, \\\\ [5pt]\n\t\t\tU_\\eta^\\ast & = & U^{(0)} + \\eta U^{(1)} + \\eta^2 U^{(2)} + \\ldots,\\\\ [5pt]\n\t\t\tB_\\eta & = & B^{(0)} + \\eta B^{(1)} + \\eta^2 B^{(2)} +\\ldots.\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{equation}\n\t\n\nNow, the object of our interest is to determine the terms of order ${ \\eta^0 }$ and ${ \\eta }$ of these homogenized coefficients. For this purpose, guided by~\\eqref{654367ytr6tfclmlml} and \nby the formulas \\eqref{786587tdyghs7rsdfxsdfsdf}, \\eqref{783874tgffffg} and \\eqref{874326984yghedf}, we shall analyse the expansion of the analytic functions in \\eqref{798723678rtyd5rdftrgdfdfsdssss}. 
By analyticity, there exist sequences ${ {\\{ \\theta^{(j)} \\}}_{j \\in \\mathbb{N}} \\subset \\mathbb{R}^n }$, ${ {\\{ \\lambda^{(j)} \\}}_{j \\in \\mathbb{N}} \\subset \\mathbb{R} }$, ${ {\\{ \\psi^{(j)} \\}}_{j \\in \\mathbb{N}} \\subset \\mathcal{H} }$ and ${ {\\{ \\xi_k^{(j)} \\}}_{j \\in \\mathbb{N}} \\subset \\mathcal{H} }$, ${ k\\in\\{1,\\ldots,n\\} }$, such that\n\t\\begin{eqnarray}\n\t\t\\theta_\\eta & = & \\theta^{(0)} + \\eta \\theta^{(1)} + \\eta^2 \\theta^{(2)} + \\ldots = \\theta^{(0)} + \\eta \\theta^{(1)} + \\mathrm{O}(\\eta^2), \\label{9789789794r6ttrtrtr} \\\\\n\t\t\\lambda_\\eta & = & \\lambda^{(0)} + \\eta \\lambda^{(1)} + \\eta^2 \\lambda^{(2)} + \\ldots = \\lambda^{(0)} + \\eta \\lambda^{(1)} + \\mathrm{O}(\\eta^2), \\label{6t8y365873586edtygc} \\\\\n\t\t\\psi_\\eta & = & \\psi^{(0)} + \\eta\\psi^{(1)} + \\eta^2 \\psi^{(2)} + \\ldots = \\psi^{(0)} + \\eta \\psi^{(1)} + \\mathrm{O}(\\eta^2), \\label{099uiuiyujhjchhtydfty} \\\\\n\t\t\\xi_{k,\\eta} & = & \\xi_k^{(0)} + \\eta\\xi_k^{(1)} + \\eta^2 \\xi_k^{(2)} + \\ldots = \\xi_k^{(0)} + \\eta \\xi_k^{(1)} + \\mathrm{O}(\\eta^2), \\label{67tyuxcvbdfgoikjjhbhb}\n\t\\end{eqnarray}\nwhere ${ k\\in\\{1,\\ldots,n\\} }$. \n\nAt first glance, in order to determine the coefficients of the expansions in~\\eqref{678435} we should solve, a priori, auxiliary problems that involve both the deterministic \nand the stochastic variables. This can be a disadvantage from the point of view of numerical analysis. Our aim hereafter is to prove that we can simplify the computation of these \ncoefficients by working in a periodic environment, which is computationally cheaper. 
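To illustrate, in a toy deterministic setting, how the first two terms of such expansions can be extracted numerically, the sketch below recovers $\lambda^{(0)}$ and $\lambda^{(1)}$ in $\lambda_\eta = \lambda^{(0)} + \eta \lambda^{(1)} + \mathrm{O}(\eta^2)$ by central differences on the lowest eigenvalue of a one-dimensional cell operator. The perturbation $A_\eta(y) = A_{\rm per}(y)\,(1 + \eta \sin(2\pi y))$ is our own hypothetical analytic stand-in for the deformed operators, not the stochastic deformation itself, and the name `lam_eta` is ours:

```python
import numpy as np

def lam_eta(eta, theta=0.0, N=64):
    """Smallest eigenvalue of a 1D cell operator whose coefficient is the
    analytic perturbation A_eta(y) = A_per(y) * (1 + eta*sin(2*pi*y)); a toy
    stand-in for the eigenvalue branch eta -> lambda_eta."""
    h = 1.0 / N
    y = np.arange(N) * h
    D = (np.roll(np.eye(N), 1, axis=1) - np.eye(N)) / h   # periodic forward difference
    Dt = D + 2j * np.pi * theta * np.eye(N)               # shifted derivative
    ym = (y + h / 2) % 1.0                                # coefficient at cell midpoints
    A = (2.0 + np.cos(2 * np.pi * ym)) * (1.0 + eta * np.sin(2 * np.pi * ym))
    L = Dt.conj().T @ np.diag(A) @ Dt + np.diag(np.cos(2 * np.pi * y))
    return np.linalg.eigvalsh(L)[0]

# lambda^(0) and lambda^(1) via an O(eta^2)-accurate central difference
eta = 1e-3
lam0 = lam_eta(0.0)
lam1 = (lam_eta(eta) - lam_eta(-eta)) / (2.0 * eta)
```

The remainder $\lambda_\eta - \lambda^{(0)} - \eta \lambda^{(1)}$ then decays like $\mathrm{O}(\eta^2)$, and the same differencing would apply verbatim to $A_\eta^\ast$ and $U_\eta^\ast$ once these can be evaluated.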
In order to do this, \nnote that ${ \\theta^{(0)} = \\theta^\\ast}$, ${ \\lambda^{(0)}=\\lambda_{\\rm per}(\\theta^\\ast) }$, ${ \\psi^{(0)} = \\psi_{\\rm per}(\\theta^\\ast) }$ and ${ \\xi_k^{(0)} = \\xi_{k,{\\rm per}}(\\theta^\\ast) }$, ${ k\\in\\{1,\\ldots,n\\} }$, which satisfy\n\t\\begin{eqnarray}\n\t & & \\left\\{\n\t\t\\begin{array}{l}\n\t\t\t{\\left( L_{\\rm per}(\\theta^\\ast) - \\lambda_{\\rm per}(\\theta^\\ast) \\right)} {\\big[ \\psi_{\\rm per}(\\theta^\\ast) \\big]} = 0 \\;\\, \\text{in} \\;\\, [0,1)^n, \\\\ [6pt]\n\t\t\t\\hspace{1.5cm} \\psi_{\\rm per}(\\theta^\\ast) \\;\\; [0,1)^n\\text{-periodic},\n\t\\end{array}\n\t\t\\right. \\label{yhujtgvjnjnhnvfnvfshjbn} \\\\ [7.5pt]\n\t\t& & \\left\\{\n\t\t\\begin{array}{l}\n\t\t\t{\\left( L_{\\rm per}(\\theta^\\ast) - \\lambda_{\\rm per}(\\theta^\\ast) \\right)} {\\big[ \\xi_{k,{\\rm per}}(\\theta^\\ast) \\big]} = \\mathcal{X} {\\big[ \\psi_{\\rm per}(\\theta^\\ast) \\big]} \\;\\, \\text{in} \\;\\, [0,1)^n, \\\\ [6pt]\n\t\t\t\\hspace{1.5cm} \\xi_{k,{\\rm per}}(\\theta^\\ast) \\;\\; [0,1)^n \\text{-periodic},\n\t\t\\end{array}\n\t\t\\right. \\label{yhfjsgfsfsdyhujtgvjnjnhnvfnvfshjbn}\n\t\\end{eqnarray}\n\twhere\n\t\\begin{equation*}\n\t\t\\mathcal{X} {\\big[ f \\big]} := {\\left( {\\rm div}_{\\! y} + 2i\\pi \\theta^\\ast \\right)} {\\big\\{ A_{\\rm per} (y) {( e_k f )} \\big\\}} + {( e_k )} {\\big\\{ A_{\\rm per}(y) {( \\nabla_{\\!\\! y} + 2i\\pi \\theta^\\ast )} f \\big\\}},\n\t\\end{equation*}\n\tfor ${ f\\in \\mathcal{H} }$. The equation~\\eqref{yhujtgvjnjnhnvfnvfshjbn} is the spectral \n\tcell equation and \\eqref{yhfjsgfsfsdyhujtgvjnjnhnvfnvfshjbn} is the first auxiliary cell equation \nrelated to the periodic case (see Section~\\ref{ACE}). \n\n\\medskip\n\nThe following theorem shows that \nthe terms ${ \\psi^{(1)} }$ and ${ \\xi_k^{(1)} }$, $k\\in\\{1,\\ldots,n\\}$,\ngiven by \\eqref{099uiuiyujhjchhtydfty} and \\eqref{67tyuxcvbdfgoikjjhbhb}\nrespectively, satisfy cell equations of auxiliary type. 
\n\t\\begin{theorem}\n\t\\label{THM58}\n\n\t\tLet ${ \\psi^{(1)} }$ and ${ \\xi_k^{(1)} }$, ${ k\\in\\{1,\\ldots,n\\} }$, be as above. Then these functions satisfy the following equations: \n\t\t\\begin{eqnarray}\n\t\t\t& & \\left\\{\n\t\t\t\\begin{array}{l}\n\t\t\t\t{\\left( L_{\\rm per}(\\theta^\\ast) - \\lambda_{\\rm per}(\\theta^\\ast) \\right)} {\\big[ \\psi^{(1)} \\big]} = \\mathcal{Y} {\\big[ \\psi_{\\rm per}(\\theta^\\ast) \\big]} \\;\\, \\text{in} \\;\\, [0,1)^n \\times \\Omega, \\\\ [6pt]\n\t\t\t\t\\hspace{1.5cm} \\psi^{(1)} \\; \\text{stationary},\n\t\t\t\\end{array}\n\t\t\t\\right. \\label{cvbnmdhfyhryfryhfiajbcjzx} \\\\ [7.5pt]\n\t\t\t& & \\left\\{\n\t\t\t\\begin{array}{l} \n\t\t\t\t{\\left( L_{\\rm per}(\\theta^\\ast) - \\lambda_{\\rm per}(\\theta^\\ast) \\right)} {\\big[ \\xi_k^{(1)} \\big]} = \\mathcal{X}{\\big[ \\psi^{(1)} \\big]} \\\\ [6pt]\n\t\t\t\t\\hspace{2cm} + \\, \\mathcal{Y}{\\big[ \\xi_{k,{\\rm per}}(\\theta^\\ast) \\big]} + \\mathcal{Z}_k{\\big[ \\psi_{\\rm per}(\\theta^\\ast) \\big]} \\;\\, \\text{in} \\;\\, [0,1)^n \\times \\Omega, \\\\ [6pt]\n\t\t\t\t\\hspace{1.5cm} \\xi_k^{(1)} \\; \\text{stationary},\n\t\t\t\\end{array} \n\t\t\t\\right. \\label{cvbnmdhfdwadawdwayhryfryhfiajbcjzx}\n\t\t\\end{eqnarray}\n\t\twhere the operators ${ \\mathcal{Y} }$ and ${ \\mathcal{Z}_k }$, ${ k\\in\\{1,\\ldots,n\\} }$, are defined by\n\t\t\\begin{eqnarray*}\n\t\t\n\t\t\t\\mathcal{Y} {\\big[ f \\big]} & \\!\\! := \\!\\! & {\\left( {\\rm div}_{\\! y} + 2i\\pi \\theta^\\ast \\right)} {\\big\\{ A_{\\rm per} (y) {\\big( -[\\nabla_{\\!\\! y} Z](y,\\omega) \\nabla_{\\!\\! y} f + 2i\\pi \\theta^{(1)} f \\big)} \\big\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & - \\, {\\rm div}_{\\! y} {\\big\\{ [\\nabla_{\\!\\! y} Z]^t(y,\\omega) A_{\\rm per} (y) {(\\nabla_{\\!\\! y} + 2i\\pi\\theta^\\ast)} f \\big\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & + {\\left( 2i\\pi\\theta^{(1)} \\right)} {\\big\\{ A_{\\rm per}(y) {\\left( \\nabla_{\\!\\! 
y} + 2i\\pi\\theta^\\ast \\right)} f \\big\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & + \\, {\\left( {\\rm div}_{\\! y} + 2i\\pi \\theta^\\ast \\right)}{\\big\\{ {\\left[ {\\rm div}_{\\! y} Z (y,\\omega) A_{\\rm per} (y) \\right]} {(\\nabla_{\\!\\! y} + 2i\\pi\\theta^\\ast)} f \\big\\}} + \\lambda^{(1)} f \\\\ [1pt]\n\t\t\t\t\t\t& & + \\, {\\big\\{ {\\rm div}_{\\! y} Z(y,\\omega) \\, {\\left[ \\lambda_{\\rm per}(\\theta^\\ast) - V_{\\rm per}(y) \\right]} \\big\\}} f, \\\\ [6.5pt]\n\t\t\t\\mathcal{Z}_k {\\big[ f \\big]} & \\!\\! := \\!\\! & {\\left( {\\rm div}_{\\! y} + 2i\\pi \\theta^\\ast \\right)} {\\big\\{ {\\left[ {\\rm div}_{\\! y} Z(y,\\omega) A_{\\rm per} (y) \\right]} {( e_k f )} \\big\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & - \\, {\\rm div}_{\\! y} {\\big\\{ [\\nabla_{\\!\\! y} Z]^t(y,\\omega) A_{\\rm per} (y) {( e_k f )} \\big\\}} + {\\left( 2i\\pi\\theta^{(1)} \\right)} {\\left\\{ A_{\\rm per}(y) {( e_k f )} \\right\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & + \\, {\\left( e_k \\right)} {\\big\\{ {\\big[ {\\rm div}_{\\! y} Z(y,\\omega) A_{\\rm per}(y) \\big]} {\\left( \\nabla_{\\!\\! y} + 2i\\pi\\theta^\\ast \\right)} f \\big\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & - \\, {\\left( e_k \\right)} {\\left\\{ A_{\\rm per}(y) {\\left[ \\nabla_{\\!\\! y} Z \\right]}(y,\\omega) \\nabla_{\\!\\! y} f \\right\\}} + {\\left( e_k \\right)} {\\big\\{ A_{\\rm per}(y) {( 2i\\pi \\theta^{(1)} f )} \\big\\}},\n\t\t\\end{eqnarray*}\n\t\tfor ${ f \\in \\mathcal{H} }$.\n\t\\end{theorem}\nFor the proof of this theorem, we shall essentially use the structure of the spectral cell equation~\\eqref{92347828454trfhfd4rfghjls} and \n of the f.a.c. equation~\\eqref{8654873526rtgdrfdrfdrfrd4} with periodic coefficients accomplished by the stochastic deformation of the identity ${ \\Phi_\\eta }$, together with the identities~\\eqref{654367ytr6tfclmlml}.\n\\begin{proof}\n1. To begin, let us consider the set ${ \\mathcal{V} }$ as in \\eqref{563gdc}. 
Then, making a change of variables in the spectral cell equation~\\eqref{92347828454trfhfd4rfghjls} adapted to this context, we find \n\t\t\\begin{eqnarray}\\label{8365287erdtfrewxzqzazaazzaaz}\n&&\\hspace{-0.5cm} \\int_{[0,1)^n} \\int_\\Omega \\Big\\{A_{\\rm per}(y) \\big( [\\nabla_{\\!\\! y} \\Phi_\\eta]^{-1} \\nabla_{\\!\\! y} \\psi_\\eta + 2i\\pi \\theta_\\eta \\psi_\\eta \\big) \\cdot \\overline{ \\big( [\\nabla_{\\!\\! y} \\Phi_\\eta]^{-1} \n\\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta_\\eta \\zeta \\big)} \\, \\nonumber\\\\\n&& \\qquad\\qquad\\qquad+ {\\left( V_{\\rm per}(y) - \\lambda_\\eta \\right)} \\, \\psi_\\eta \\, \\overline{\\zeta} \\, \\Big\\} {\\rm det} [\\nabla_{\\!\\! y} \\Phi_\\eta] \\, d\\mathbb{P}(\\omega) \\, dy = 0,\n\t\t\\end{eqnarray}\nfor all ${ \\eta \\in \\mathcal{V} }$ and ${ \\zeta \\in \\mathcal{H} }$. If we insert the equations~\\eqref{654367ytr6tfclmlml}, \\eqref{9789789794r6ttrtrtr}, \\eqref{6t8y365873586edtygc} and \\eqref{099uiuiyujhjchhtydfty} in equation~\\eqref{8365287erdtfrewxzqzazaazzaaz} and compute the term of order $\\eta$, we arrive at \n\n\\begin{equation*}\n\t\t\\begin{array}{l}\n\t\t\t\\displaystyle { \\int_{[0,1)^n} \\! \\int_\\Omega \\! \\Big\\{ A_{\\rm per}(y) {\\left( \\nabla_{\\!\\! y} \\psi^{(1)} + 2i\\pi \\theta^\\ast \\psi^{(1)} \\right)} \\cdot \\overline{{\\left( \\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta^\\ast \\zeta \\right)}}+{\\left( V_{\\rm per}(y) - \\lambda_{\\rm per}(\\theta^\\ast) \\right)} \\psi^{(1)} \\, \\overline{\\zeta}} \\\\ [15pt]\n\t\t\t\\displaystyle \\qquad\\qquad\\qquad+ \\, \\! A_{\\rm per}(y) {\\left( - [\\nabla_{\\!\\! y} Z] \\, \\nabla_{\\!\\! y} \\psi_{\\rm per}(\\theta^\\ast) + 2i\\pi\\theta^{(1)}\\psi_{\\rm per}(\\theta^\\ast) \\right)} \\! \\cdot \\! \\overline{{\\left( \\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta^\\ast \\zeta \\right)}}\\\\ [15pt]\n\t\t\t\\displaystyle\\qquad\\qquad\\qquad + A_{\\rm per}(y) {\\left( \\nabla_{\\!\\! 
y} \\psi_{\\rm per}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\psi_{\\rm per}(\\theta^\\ast) \\right)} \\cdot \\overline{{\\left( -[\\nabla_{\\!\\! y} Z] \\nabla_{\\!\\! y} \\zeta + 2i\\pi\\theta^{(1)} \\zeta \\right)}} \\\\ [15pt]\n\t\t\t\\displaystyle\\qquad\\qquad\\qquad + A_{\\rm per}(y) {\\left( \\nabla_{\\!\\! y} \\psi_{\\rm per}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\psi_{\\rm per}(\\theta^\\ast) \\right)} \\cdot \\overline{{\\left( \\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta^\\ast \\zeta \\right)}} \\, {\\rm div}_{\\! y} Z \\\\ [15pt]\n\t\t\t\\displaystyle \\qquad\\qquad\\qquad- \\lambda^{(1)} \\, \\psi_{\\rm per}(\\theta^\\ast) \\, \\overline{\\zeta}+ {\\left( V_{\\rm per}(y) - \\lambda_{\\rm per}(\\theta^\\ast) \\right)} \\psi_{\\rm per}(\\theta^\\ast) \\, \\overline{\\zeta} \\, {\\rm div}_{\\! y} Z\\Big\\}\\,d\\mathbb{P}(\\omega) \\, dy=0,\n\t\t\\end{array}\n\t\t\\end{equation*}\n\t\tfor all ${ \\zeta \\in \\mathcal{H} }$. This equation is the variational formulation of the equation \\eqref{cvbnmdhfyhryfryhfiajbcjzx}, which concludes the first part of the proof. \n\n\\medskip\n2. For the second part of the proof, we proceed similarly with respect to the f.a.c. equation~\\eqref{8654873526rtgdrfdrfdrfrd4} and obtain\n\t\t\\begin{equation}\\label{8365wd}\n\t\t\\begin{split}\n\t\t\t& \\displaystyle \\int_{[0,1)^n} \\int_\\Omega\\Big\\{ A_{\\rm per}(y) \\big( [\\nabla_{\\!\\! y} \\Phi_\\eta]^{-1} \\nabla_{\\!\\! y} \\xi_{k,\\eta} + 2i\\pi \\theta_\\eta \\xi_{k,\\eta} \\big) \\cdot \\overline{\\big( [\\nabla_{\\!\\! y} \\Phi_\\eta]^{-1} \\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta_\\eta \\zeta \\big)} \\\\\n\t\t\t& \\displaystyle\\qquad\\qquad\\qquad + A_{\\rm per}(y) {\\left( e_k \\, \\psi_\\eta \\right)} \\cdot \\overline{\\big( [\\nabla_{\\!\\! y} \\Phi_\\eta]^{-1} \\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta_\\eta \\zeta \\big)} \\\\\n\t\t\t& \\displaystyle\\qquad\\qquad\\qquad - A_{\\rm per}(y) \\big( [\\nabla_{\\!\\! 
y} \\Phi_\\eta]^{-1} \\nabla_{\\!\\! y} \\psi_\\eta + 2i\\pi \\theta_\\eta \\psi_\\eta \\big) \\cdot \\overline{{\\left( e_k \\, \\zeta \\right)}} \\\\\n\t\t\t& \\displaystyle\\qquad\\quad + {\\left( V_{\\rm per}(y) - \\lambda_\\eta \\right)} \\, \\xi_{k,\\eta} \\, \\overline{\\zeta} - \\, \\frac{1}{2i\\pi} \\frac{\\partial \\lambda}{\\partial \\theta_k}(\\eta,\\theta(\\eta))\\,\n\t\t\t\\psi_\\eta \\, \\overline{\\zeta}\\Big\\}\\,{\\rm det} [\\nabla_{\\!\\! y} \\Phi_\\eta] \\, d\\mathbb{P}(\\omega) \\, dy = 0,\n\t\t\\end{split}\n\t\t\\end{equation}\n\t\tfor all ${ \\eta \\in \\mathcal{V} }$, ${ \\zeta \\in \\mathcal{H} }$ and ${ k \\in \\{1,\\ldots,n\\} }$. Hence, taking into account Lemma \\ref{6487369847639gfhdghjdftrtrtfgcbvbv} and \ninserting the equations \\eqref{654367ytr6tfclmlml}, \\eqref{9789789794r6ttrtrtr}, \\eqref{6t8y365873586edtygc}, \\eqref{099uiuiyujhjchhtydfty} and \\eqref{67tyuxcvbdfgoikjjhbhb} \nin equation~\\eqref{8365wd}, a computation of the term of order ${ \\eta }$ leads us to\n\n\\begin{equation*}\n\t\t\\begin{array}{l} \n\t\t\t\\displaystyle\\hspace{-0.5cm} \\int_{[0,1)^n} \\int_\\Omega \\Big\\{A_{\\rm per}(y) \\big( \\nabla_{\\!\\! y}\\xi_k^{(1)} + 2i\\pi\\theta^\\ast \\xi_k^{(1)} \\big) \\cdot \\overline{ \\big( \\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta^\\ast \\zeta \\big)}+ {\\left( V_{\\rm per}(y) - \\lambda_{\\rm per}(\\theta^\\ast) \\right)} \\xi_k^{(1)} \\, \\overline{\\zeta} \\\\ [15pt]\n\t\t\t\\displaystyle\\qquad\\qquad + A_{\\rm per}(y) {\\left( e_k \\, \\psi^{(1)} \\right)} \\cdot \\overline{ \\big( \\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta^\\ast \\zeta \\big)} -\n\tA_{\\rm per}(y) \\big( \\nabla_{\\!\\! y} \\psi^{(1)} + 2i\\pi \\theta^\\ast \\psi^{(1)} \\big) \\cdot \\overline{{\\left( e_k \\, \\zeta \\right)}}\\\\ [15pt]\n\t\t\t\t\\displaystyle\\qquad\\qquad + A_{\\rm per}(y) \\big( - [\\nabla_{\\!\\! y} Z] \\nabla_{\\!\\! 
y} \\xi_{k,{\\rm per}}(\\theta^\\ast) + 2i\\pi\\theta^{(1)} \\xi_{k,{\\rm per}}(\\theta^\\ast) \\big) \\cdot \\overline{ \\big( \\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta^\\ast \\zeta \\big)} \\\\ [15pt]\n\t\t\t\\displaystyle\\qquad\\qquad + A_{\\rm per}(y) \\big( \\nabla_{\\!\\! y} \\xi_{k,{\\rm per}}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\xi_{k,{\\rm per}}(\\theta^\\ast) \\big) \\cdot \\overline{ \\big( -[\\nabla_{\\!\\! y} Z] \\nabla_{\\!\\! y} \\zeta + 2i\\pi\\theta^{(1)} \\zeta \\big)} \\\\ [15pt]\n\t\t\t\\displaystyle\\qquad\\qquad + A_{\\rm per}(y) \\big( \\nabla_{\\!\\! y} \\xi_{k,{\\rm per}}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\xi_{k,{\\rm per}}(\\theta^\\ast) \\big) \\cdot \\overline{\\big( \\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta^\\ast \\zeta \\big)} \\, {\\rm div}_{\\! y} Z \\\\ [15pt]\n\t\t\t\\displaystyle\\qquad\\qquad\\qquad\\qquad - \\lambda^{(1)} \\, \\xi_{k,{\\rm per}}(\\theta^\\ast) \\, \\overline{\\zeta} + {\\left( V_{\\rm per}(y) - \\lambda_{\\rm per}(\\theta^\\ast) \\right)} \\xi_{k,{\\rm per}}(\\theta^\\ast) \\, \\overline{\\zeta} \\, {\\rm div}_{\\! y} Z\\\\ [15pt]\n\t\t\t\\displaystyle\\qquad\\qquad\\qquad +\\, A_{\\rm per}(y) {\\left( e_k \\, \\psi_{\\rm per}(\\theta^\\ast) \\right)} \\cdot \\Big(\\overline{ \\big( \\nabla_{\\!\\! y} \\zeta + 2i\\pi \\theta^\\ast \\zeta \\big) \\, {\\rm div}_{\\! y} Z -[\\nabla_{\\!\\! y} Z] \\nabla_{\\!\\! y} \\zeta + 2i\\pi\\theta^{(1)} \\zeta}\\Big) \\\\ [15pt]\n\t\t\t\\displaystyle\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad - A_{\\rm per}(y) \\big( \\nabla_{\\!\\! y} \\psi_{\\rm per}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\psi_{\\rm per}(\\theta^\\ast) \\big) \\cdot \\overline{{\\left( e_k \\, \\zeta \\right)}} \\, {\\rm div}_{\\! y} Z \\\\ [15pt]\n\t\t\t\\displaystyle\\qquad\\qquad -\\, A_{\\rm per}(y) \\big( - [\\nabla_{\\!\\! y} Z] \\, \\nabla_{\\!\\! 
y} \\psi_{\\rm per}(\\theta^\\ast) + 2i\\pi\\theta^{(1)} \\psi_{\\rm per}(\\theta^\\ast) \\big) \\cdot \\overline{{\\left( e_k \\, \\zeta \\right)}}\\Big\\} \\, d\\mathbb{P}(\\omega) \\, dy =0,\n\t\t\\end{array}\n\t\t\\end{equation*}\n\t\tfor all ${ \\zeta \\in \\mathcal{H} }$. Noting that this is the variational formulation of the equation \\eqref{cvbnmdhfdwadawdwayhryfryhfiajbcjzx}, we conclude the proof. \n\n\\end{proof}\n\nWe remind the reader that if $f:\\mathbb R^n\\times\\Omega\\to\\mathbb R$ is a stationary function, then we shall use the following notation \n$$\n \\mathbb{E}[f(x,\\cdot)]= \\int_\\Omega f(x,\\omega) \\,d\\mathbb{P}(\\omega),\n$$\nfor any $x\\in\\mathbb R^n$. Roughly speaking, the theorem below tells us that the homogenized matrix of the problem~\\eqref{765tdyyuty67tsss} can be obtained by solving periodic problems. \n\n\\begin{theorem}\n\\label{873627yuhfdd}\n\t\tLet ${ A_\\eta^\\ast }$ be the homogenized matrix as in \\eqref{askdjfhucomojkfdfd}. Then\n\t\t\\begin{equation*}\n\t\t\tA_\\eta^\\ast = A_{\\rm per}^\\ast + \\eta A^{(1)} + \\mathrm{O}(\\eta^2).\n\t\t\\end{equation*}\nMoreover, the term of order ${ \\eta^0 }$ is given by the homogenized matrix of the periodic case, that is, ${ A_{\\rm per}^\\ast = 2^{-1}{\\left( B^{(0)} + (B^{(0)})^t \\right)} }$, where the matrix ${ B^{(0)} }$ is the term of order ${ \\eta^0 }$ in \\eqref{678435} and it is defined by\n\t\t\\begin{equation*}\n\t\t\\begin{array}{r}\n\t\t\t\\displaystyle (B^{(0)})_{k\\ell} := \\int_{[0,1)^n} A_{\\rm per}(y) {(e_\\ell \\, \\psi_{\\rm per}(\\theta^\\ast))} \\cdot {(e_k \\, \\overline{\\psi_{\\rm per}(\\theta^\\ast)})} \\, dy \\hspace{4cm} \\\\ [10pt]\n\t\t\t\\displaystyle + \\, \\int_{[0,1)^n} A_{\\rm per}(y) {(e_\\ell \\, \\psi_{\\rm per}(\\theta^\\ast))} \\cdot \\overline{\\left( \\nabla_{\\!\\! 
y} \\xi_{k,{\\rm per}}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\xi_{k,{\\rm per}}(\\theta^\\ast) \\right)} \\, dy \\\\ [10pt]\n\t\t\t\\displaystyle - \\, \\int_{[0,1)^n} A_{\\rm per}(y) {\\Big( {\\left( \\nabla_{\\!\\! y} \\psi_{\\rm per}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\psi_{\\rm per}(\\theta^\\ast) \\right)} \\Big)} \\cdot \\overline{{(e_\\ell \\, \\xi_{k,{\\rm per}}(\\theta^\\ast))}} \\, dy.\n\t\t\\end{array} \n\t\t\\end{equation*}\n\t\tThe term of order ${ \\eta }$ is given by ${ A^{(1)} = 2^{-1} {\\left( B^{(1)}+(B^{(1)})^t \\right)} }$, where the matrix ${ B^{(1)} }$ is the term of order ${ \\eta }$ in \\eqref{678435} and it is defined by\n\t\t\\begin{equation*}\n\t\t\\begin{array}{l}\n\t\t\t\\displaystyle (B^{(1)})_{k\\ell} = { {\\bigg[ \\int_{[0,1)^n} A_{\\rm per}(y) {(e_\\ell \\, \\psi_{\\rm per}(\\theta^\\ast))} \\cdot {(e_k \\, \\overline{\\psi_{\\rm per}(\\theta^\\ast)})} \\, \\mathbb{E}\\Big[{\\rm div}_{\\! y} Z(y,\\cdot)\\Big] \\, dy }} \\\\ [11pt]\n\t\t\t\\displaystyle + \\, \\int_{[0,1)^n} A_{\\rm per}(y) {(e_\\ell \\, \\psi_{\\rm per}(\\theta^\\ast))} \\cdot {( e_k \\, \\overline{ \\mathbb{E}\\big[ \\psi^{(1)}(y,\\cdot)\\big]} )} \\, dy \\\\ [11pt]\n\t\t\t\\displaystyle + \\int_{[0,1)^n} A_{\\rm per}(y) {\\left( e_\\ell \\, {\\mathbb{E}\\big[ \\psi^{(1)}(y,\\cdot)\\big]} \\right)} \\cdot {(e_k \\, \\overline{\\psi_{\\rm per}(\\theta^\\ast)})} \\, dy \\\\ [11pt]\n\t\t\t\\displaystyle + \\int_{[0,1)^n} A_{\\rm per}(y) {(e_\\ell \\, \\psi_{\\rm per}(\\theta^\\ast))} \\cdot \n\t\t\t\\overline{\\left( \\nabla_{\\!\\! y} \\xi_{k,{\\rm per}}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\xi_{k,{\\rm per}}(\\theta^\\ast) \\right)} \\, {\\mathbb{E}\\big[{\\rm div}_{\\! y} Z(y,\\cdot)\\big]} \\, dy \\\\ [11pt]\n\t\t\t\\displaystyle + \\int_{[0,1)^n} A_{\\rm per}(y) {(e_\\ell \\, \\psi_{\\rm per}(\\theta^\\ast))} \\cdot \\overline{( \\nabla_{\\!\\! 
y} {\\mathbb{E}\\big[ \\xi_k^{(1)}(y,\\cdot)\n\t\t\t\\big]} + 2i\\pi\\theta^\\ast {\\mathbb{E}\\Big[ \\xi_k^{(1)}(y,\\cdot)\\Big]}} \\\\ [11pt]\n\t\t\t\\hspace{4cm} \\overline{ + \\, 2i\\pi\\theta^{(1)} \\xi_{k,{\\rm per}}(\\theta^\\ast)-{\\mathbb{E}\\Big[[\\nabla_{\\!\\! y} Z](y,\\cdot)\\Big]} \\nabla_{\\!\\! y} \\xi_{k,{\\rm per}}(\\theta^\\ast)}) \\, dy \\\\ [11pt]\n\t\t\t\\displaystyle + \\int_{[0,1)^n} \\!\\!\\! A_{\\rm per}(y) {\\left( e_\\ell \\, {\\mathbb{E}\\Big[ \\psi^{(1)}(y,\\cdot)\\Big]} \\right)} \\cdot \\overline{\\left( \\nabla_{\\!\\! y} \\xi_{k,{\\rm per}}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\xi_{k,{\\rm per}}(\\theta^\\ast) \\right)} \\, dy \\\\ [11pt]\n\t\t\t\\displaystyle - \\int_{[0,1)^n} \\!\\!\\! A_{\\rm per}(y) {( {\\left( \\nabla_{\\!\\! y} \\psi_{\\rm per}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\psi_{\\rm per}(\\theta^\\ast) \\right)})} \\cdot \\overline{{(e_\\ell \\, \\xi_{k,{\\rm per}}(\\theta^\\ast))}} \\, {\\mathbb{E}\\big[{\\rm div}_{\\! y} Z(y,\\cdot)\\big]} \\, dy \\\\ [11pt]\n\t\t\t\\displaystyle - \\int_{[0,1)^n} \\!\\!\\! A_{\\rm per}(y) {\\Big( {\\left( \\nabla_{\\!\\! y} \\psi_{\\rm per}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\psi_{\\rm per}(\\theta^\\ast) \\right)} \\Big)} \\cdot \\overline{{\\Big(e_\\ell \\, {\\mathbb{E}\\Big[ \\xi_k^{(1)}(y,\\cdot)\\Big]} \\Big)}} \\, dy\n\t\t\\end{array}\n\t\t\\end{equation*}\n\t\t\\begin{equation*}\n\t\t\\begin{array}{l}\n\t\t\t\\displaystyle \\hspace{0.75cm} - \\int_{[0,1)^n} A_{\\rm per}(y) {\\left( \\nabla_{\\!\\! y} {\\mathbb{E}\\Big[ \\psi^{(1)}(y,\\cdot)\\Big]} + \n\t\t\t2i\\pi\\theta^\\ast {\\mathbb{E}\\Big[ \\psi^{(1)}(y,\\cdot)\\Big]}\\right.} \\\\ [10pt]\n\t\t\t\\hspace{2cm} {{\\left. + \\, 2i\\pi\\theta^{(1)} \\psi_{\\rm per}(\\theta^\\ast) -{\\mathbb{E}\\Big[[\\nabla_{\\!\\! y} Z](y,\\cdot)\\Big]} \\nabla_{\\!\\! 
y} \\psi_{\\rm per}(\\theta^\\ast)\\right)} \\cdot \\overline{{(e_\\ell \\, \\xi_{k,{\\rm per}}(\\theta^\\ast))}} \\, dy \\bigg]} \\\\ [11pt]\n\t\t\t\\displaystyle - {\\bigg[ \\int_{[0,1)^n} {\\vert \\psi_{\\rm per}(\\theta^\\ast) \\vert}^2 {\\mathbb{E}\\Big[ {\\rm div}_{\\! y} Z(y,\\cdot)\\Big]} \\, dy + \\int_{[0,1)^n} \\psi_{\\rm per}(\\theta^\\ast) \\, \\overline{ \\mathbb{E}\\Big[\\psi^{(1)}(y,\\cdot) \\Big]} } \\, dy \\\\ [11pt]\n\t\t\t\\displaystyle \\hspace{0.75cm} { + \\int_{[0,1)^n} {\\mathbb{E}\\Big[\\psi^{(1)}(y,\\cdot) \\Big]} \\, \\overline{\\psi_{\\rm per}(\\theta^\\ast)} \\, dy \\bigg]} \\cdot {\\bigg[ \\int_{[0,1)^n}\n\t\t\tA_{\\rm per}(y) {(e_\\ell \\, \\psi_{\\rm per}(\\theta^\\ast))} \\cdot {(e_k \\, \\overline{\\psi_{\\rm per}(\\theta^\\ast)})} \\, dy } \\\\ [11pt]\n\t\t\t\\displaystyle \\hspace{0.75cm} + \\int_{[0,1)^n} A_{\\rm per}(y) {(e_\\ell \\, \\psi_{\\rm per}(\\theta^\\ast))} \\cdot \\overline{\\left( \\nabla_{\\!\\! y} \\xi_{k,{\\rm per}}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\xi_{k,{\\rm per}}(\\theta^\\ast) \\right)} \\, dy \\\\ [11pt]\n\t\t\t\\displaystyle \\hspace{0.75cm} - \\, {{ \\int_{[0,1)^n} A_{\\rm per}(y) {\\left[ {\\left( \\nabla_{\\!\\! y} \\psi_{\\rm per}(\\theta^\\ast) + 2i\\pi \\theta^\\ast \\psi_{\\rm per}(\\theta^\\ast) \\right)} \\right]} \\cdot \\overline{{(e_\\ell \\, \\xi_{k,{\\rm per}}(\\theta^\\ast))}} \\, dy \\bigg]}}.\n\t\t\\end{array}\n\t\t\\end{equation*}\n\t\\end{theorem}\n\\begin{proof}\n1. Taking into account ${ \\mathcal{V} }$ as in \\eqref{563gdc}, we get from~\\eqref{786587tdyghs7rsdfxsdfsdf}, for ${ \\eta \\in \\mathcal{V} }$, that the homogenized matrix \nis given by ${ A_\\eta^\\ast = 2^{-1}(B_\\eta + B_\\eta^t) }$. Thus, in order to describe the terms of the expansion of ${ A_\\eta^\\ast }$, we only need to determine the terms in the expansion of ${ B_\\eta }$. \n\n2. 
Using the equations~\\eqref{654367ytr6tfclmlml} and \\eqref{099uiuiyujhjchhtydfty}, the map ${ \\eta \\mapsto c_\\eta \\in (0,+\\infty) }$ has an expansion about ${ \\eta=0 }$. Remembering \nthat ${ \\int_{[0,1)^n} {\\vert \\psi_{\\rm per}(\\theta^\\ast) \\vert}^2 dy = 1 }$, we have\n\n\\begin{equation*}\\label{47647342rfedrdrefrdfer} \n\t\t\\begin{split}\n\t\t\tc_\\eta^{-1} = & 1 - \\eta {\\left[ \\int_\\Omega \\int_{[0,1)^n} {\\vert \\psi_{\\rm per}(\\theta^\\ast) \\vert}^2 {\\rm div}_{\\! y} Z(y,\\omega) \\, dy \\, d\\mathbb{P} \\right.} \\\\\n\t\t\t& {\\left. + \\int_\\Omega \\int_{{[0,1)^n}} \\psi_{\\rm per}(\\theta^\\ast) \\overline{\\psi^{(1)}} \\, dy \\, d\\mathbb{P} + \\int_\\Omega \\int_{[0,1)^n} \\psi^{(1)} \\overline{\\psi_{\\rm per}(\\theta^\\ast)} \\, dy \\, d\\mathbb{P} \\right]} + \\; \\mathrm{O}(\\eta^2),\n\t\t\\end{split}\n\t\t\\end{equation*}\nin ${ \\mathbb{C} }$ as ${ \\eta \\to 0 }$. Thus, using the expansions \\eqref{654367ytr6tfclmlml}, \\eqref{9789789794r6ttrtrtr}, \\eqref{099uiuiyujhjchhtydfty} and \\eqref{67tyuxcvbdfgoikjjhbhb} in the formula \\eqref{786587tdyghs7rsdfxsdfsdf}, the computation of the resulting term of order ${ \\eta^0 }$ of ${ B_\\eta }$ gives us the desired expression for $(B^{(0)})_{k\\ell}$. \nThe same reasoning, with a few more computations that we leave as an exercise to the reader, allows us to obtain the expression for $(B^{(1)})_{k\\ell}$. \n\\end{proof}\n\n\\medskip\n\n\n\n\\begin{remark}\n\t\tWe next record the observation that the computation of the coefficients of ${ A_{\\rm per}^\\ast }$ is performed by solving the equations \\eqref{yhujtgvjnjnhnvfnvfshjbn} and \\eqref{yhfjsgfsfsdyhujtgvjnjnhnvfnvfshjbn}, which are equations with periodic boundary conditions. 
In order to compute the coefficients of ${ A^{(1)} }$, we need to know the functions \n${ \\psi^{(1)} }$ and ${ \\xi_k^{(1)} }$, ${ k\\in\\{1,\\ldots,n\\} }$, which are a priori stochastic in nature (see the equations~\\eqref{cvbnmdhfyhryfryhfiajbcjzx} and \\eqref{cvbnmdhfdwadawdwayhryfryhfiajbcjzx}, respectively). But, as seen in Theorem \\ref{873627yuhfdd}, we only need their expectation values, \n${ \\mathbb{E}\\Big[ \\psi^{(1)}(y,\\cdot)\\Big] }$ and ${ \\mathbb{E}\\Big[ \\xi_k^{(1)}(y,\\cdot) \\Big] }$, ${ k \\in \\{1,\\ldots,n\\} }$, which are ${ [0,1)^n }$-periodic functions and, respectively, solutions of the following equations:\n\t\n\t\t\\begin{eqnarray*}\n\t\t\t& & \\left\\{\n\t\t\t\\begin{array}{l}\n\t\t\t\t{\\Big( L_{\\rm per}(\\theta^\\ast) - \\lambda_{\\rm per}(\\theta^\\ast) \\Big)}\\, {\\mathbb{E}\\Big[ \\psi^{(1)}(y,\\cdot)\\Big]} = \\mathcal{Y}_{\\rm per} {\\big[ \\psi_{\\rm per}(\\theta^\\ast) \\big]} \\;\\, \\text{in} \\;\\, [0,1)^n, \\\\ [6pt]\n\t\t\t\t\\hspace{2cm} {\\mathbb{E}\\Big[ \\psi^{(1)}(y,\\cdot)\\Big]} \\;\\text{is $[0,1)^n$-periodic},\n\t\t\t\\end{array}\n\t\t\t\\right. \\\\ [7.5pt]\n\t\t\t& & \\left\\{\n\t\t\t\\begin{array}{l} \n\t\t\t\t{\\left( L_{\\rm per}(\\theta^\\ast) - \\lambda_{\\rm per}(\\theta^\\ast) \\right)}{\\mathbb{E}\\Big[ \\xi_k^{(1)}(y,\\cdot) \\Big]} = \n\t\t\t\t\\mathcal{X}\\Big[{\\mathbb{E}\\Big[ \\psi^{(1)}(y,\\cdot)\\Big]}\\Big] \\\\ [6pt]\n\t\t\t\t\\hspace{1.75cm} + \\, \\mathcal{Y}_{\\rm per} {\\big[ \\xi_{k,{\\rm per}}(\\theta^\\ast) \\big]} + \\mathcal{Z}_{k,{\\rm per}} {\\big[ \\psi_{\\rm per}(\\theta^\\ast) \\big]} \\;\\, \\text{in} \\;\\, \n\t\t\t\t[0,1)^n, \\\\ [6pt]\n\t\t\t\t\\hspace{2cm} {\\mathbb{E}\\Big[ \\xi_k^{(1)}(y,\\cdot) \\Big]} \\;\\text{is $[0,1)^n$-periodic},\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\t\\end{eqnarray*}\n\t\twhere \n\t\t\\begin{eqnarray*}\n\t\t\t\\mathcal{Y}_{\\rm per} {\\big[ f \\big]} & \\!\\! := \\!\\! & {\\left( {\\rm div}_{\\! 
y} + 2i\\pi \\theta^\\ast \\right)} {\\Big\\{ A_{\\rm per} (y)\n\t\t\t {\\big( - \\mathbb{E}\\Big[[\\nabla_{\\!\\! y} Z](y,\\cdot)\\Big] \\nabla_{\\!\\! y} f + 2i\\pi \\theta^{(1)} f \\big)} \\Big\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & - \\, {\\rm div}_{\\! y} {\\Big\\{ \\mathbb{E}\\Big[[\\nabla_{\\!\\! y} Z](y,\\cdot)\\Big]^t A_{\\rm per} (y) {(\\nabla_{\\!\\! y} + 2i\\pi\\theta^\\ast)} f \\Big\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & + {\\left( 2i\\pi\\theta^{(1)} \\right)} {\\big\\{ A_{\\rm per}(y) {\\left( \\nabla_{\\!\\! y} + 2i\\pi\\theta^\\ast \\right)} f \\big\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & + \\, {\\left( {\\rm div}_{\\! y} + 2i\\pi \\theta^\\ast \\right)}{\\Big\\{ {\\left[ \\mathbb{E}\\Big[{\\rm div}_{\\! y} Z(y,\\cdot)\\Big] A_{\\rm per} (y) \\right]} {(\\nabla_{\\!\\! y} + 2i\\pi\\theta^\\ast)} f \\Big\\}} + \\lambda^{(1)} f \\\\ [1pt]\n\t\t\t\t\t\t& & + \\, {\\Big\\{ \\mathbb{E}\\Big[{\\rm div}_{\\! y} Z(y,\\cdot)\\Big] \\, {\\left[ \\lambda_{\\rm per}(\\theta^\\ast) - V_{\\rm per}(y) \\right]} \\Big\\}} f, \n\t\t\\end{eqnarray*}\n\t\t\\begin{eqnarray*}\n\t\t\t\\mathcal{Z}_{k,{\\rm per}} {\\big[ f \\big]} & \\!\\! := \\!\\! & {\\left( {\\rm div}_{\\! y} + 2i\\pi \\theta^\\ast \\right)} {\\Big\\{ {\\left[ \\mathbb{E}\\Big[{\\rm div}_{\\! y} Z(y,\\cdot)\\Big] A_{\\rm per} (y) \\right]} {( e_k f )} \\Big\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & - \\, {\\rm div}_{\\! y} {\\Big\\{ \\mathbb{E}\\Big[[\\nabla_{\\!\\! y} Z](y,\\cdot)\\Big]^t A_{\\rm per} (y) {( e_k f )} \\Big\\}} + {\\left( 2i\\pi\\theta^{(1)} \\right)} {\\left\\{ A_{\\rm per}(y) {( e_k f )} \\right\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & + \\, {\\left( e_k \\right)} {\\Big\\{ {\\Big[ \\mathbb{E}\\Big[{\\rm div}_{\\! y} Z(y,\\cdot)\\Big] A_{\\rm per}(y) \\Big]} {\\left( \\nabla_{\\!\\! y} + 2i\\pi\\theta^\\ast \\right)} f \n\t\t\t\t\t\t\\Big\\}} \\\\ [1pt]\n\t\t\t\t\t\t& & - \\, {\\left( e_k \\right)} {\\left\\{ A_{\\rm per}(y) \\mathbb{E}\\Big[[\\nabla_{\\!\\! y} Z](y,\\cdot)\\Big] \\nabla_{\\!\\! 
y} f \\right\\}} + {\\left( e_k \\right)} {\\big\\{ A_{\\rm per}(y) {( 2i\\pi \\theta^{(1)} f )} \\big\\}},\n\t\t\\end{eqnarray*}\n\t\tfor ${ f\\in H^1_{\\rm per}([0,1)^n) }$.\n\t\n\t\\end{remark}\nSumming up, the determination of the homogenized coefficients for~\\eqref{jhjkhkjhkj765675233} is by nature a stochastic problem. However, when we consider the \ninteresting context of materials which have a small deviation from perfect ones (modeled by periodic functions), this problem, in the specific case~\\eqref{37285gdhddddddddddd}, \nreduces, at the first two orders in $\\eta$, to solving the two simpler periodic problems above. Both of them are of the same nature. Importantly, note that \n$Z$ in~\\eqref{37285gdhddddddddddd} is only present through $\\mathbb{E}\\Big[{\\rm div}_{\\! y} Z(y,\\cdot)\\Big]$ and $\\mathbb{E}\\Big[[\\nabla_{\\!\\! y} Z](y,\\cdot)\\Big]$.\n\n\\medskip\nIn the theorem below, we assume that the homogenized matrix of the periodic case satisfies the uniform coercivity condition, that is, \n$$\n A_{\\rm per}^\\ast \\xi \\cdot \\xi\\ge \\Lambda |\\xi|^2,\n$$\nfor some $\\Lambda>0$ and for all $\\xi\\in\\mathbb R^n$, for which there is experimental evidence in metals and semiconductors. \nTherefore, due to Theorem~\\ref{873627yuhfdd}, the homogenized matrix of the perturbed case ${ A_\\eta^\\ast }$ has a similar \nproperty for $\\eta\\sim 0$. \n\t\n\\begin{theorem}\n\\label{THM511}\nLet ${ v_\\eta }$ be the solution of the homogenized equation \\eqref{askdjfhucomojkfdfd}. 
Then\n\t\t\\begin{equation*}\n\t\t\tv_\\eta\\Big(t,\\sqrt{A_\\eta^\\ast}\\,x\\Big) = v_{\\rm per}\\Big(t,\\sqrt{A_{\\rm per}^\\ast}\\,x\\Big) + \\eta\\, v^{(1)}\\Big(t,\\sqrt{A_{\\rm per}^\\ast}\\,x\\Big) + \\mathrm{O}(\\eta^2),\n\t\t\\end{equation*}\nweakly in ${ L^2(\\mathbb{R}^{n+1}_T) }$ as ${ \\eta \\to 0 }$, that is,\n\t\t\\begin{eqnarray*}\n\t\t\t&& \\int_{\\mathbb{R}^{n+1}_T} {\\Bigg( v_\\eta\\Big(t,\\sqrt{A_\\eta^\\ast}\\,x\\Big)-v_{\\rm per}\\Big(t,\\sqrt{A_{\\rm per}^\\ast}\\,x\\Big) \n-\\eta\\, v^{(1)}\\Big(t,\\sqrt{A_{\\rm per}^\\ast}\\,x\\Big) \\Bigg)} \\, h(t,x) \\, dx \\, dt\\\\\n&&\\qquad\\qquad\\qquad= \\mathrm{O}(\\eta^2),\n\t\t\\end{eqnarray*}\nfor each ${ h \\in L^2(\\mathbb{R}^{n+1}_T) }$, where ${ v_{\\rm per} }$ is the solution of the periodic homogenized problem \n\t\t\\begin{equation}\\label{987tr7tef76756g7rg5467g546r7g5}\n\t\t\t\\left\\{\n\t\t\t\\begin{array}{c}\n\t\t\t\ti \\displaystyle\\frac{\\partial v_{\\rm per}}{\\partial t} - {\\rm div} {\\left( A_{\\rm per}^\\ast \\nabla v_{\\rm per} \\right)} + U_{\\! \\rm per}^\\ast v_{\\rm per} = 0, \\;\\, \\text{in} \\;\\, \\mathbb{R}^{n+1}_T, \\\\ [7.5pt]\n\t\t\t\tv_{\\rm per}(0,x) = v_0(x), \\;\\, x\\in \\mathbb{R}^n,\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\t\\end{equation}\n\t\tand ${ v^{(1)} }$ is the solution of\n\t\t\\begin{equation}\\label{73547tr764tr63tr4387tr8743tr463847}\n\t\t\t\\left\\{\n\t\t\t\\begin{array}{c}\n\t\t\t\ti \\displaystyle\\frac{\\partial v^{(1)}}{\\partial t} - {\\rm div} {\\left( A_{\\rm per}^\\ast \\nabla v^{(1)} \\right)} + U_{\\! 
\\rm per}^\\ast v^{(1)} = {\\rm div} {\\left( A_{\\rm per}^\\ast \\nabla v_{\\rm per} \\right)} - U^{(1)}v_{\\rm per}, \\;\\, \\text{in} \\;\\, \\mathbb{R}^{n+1}_T, \\\\ [7.5pt]\n\t\t\t\tv^{(1)}(0,x) = v^{1}_0(x), \\;\\, x\\in \\mathbb{R}^n,\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\t\\end{equation}\n\t\twhere ${ U^{(1)} }$ is the coefficient of the term of order ${ \\eta }$ of the expansion of ${ U_\\eta^\\ast }$ and $v_0^{1}\\in C_c^{\\infty}(\\mathbb R^n)$ is given by the limit\n$$\nv_0^1\\Big(\\sqrt{A_{\\rm per}^\\ast}\\,x\\Big):= \\lim_{\\eta\\to 0}\\frac{v_0\\Big(\\sqrt{A_\\eta^\\ast}\\,x\\Big)-v_0\\Big(\\sqrt{A_{\\rm per}^\\ast}\\,x\\Big)}{\\eta}.\n$$\n\\end{theorem}\n\n\\begin{proof}\n1. Taking into account the set ${ \\mathcal{V} }$ as in \\eqref{563gdc}, for ${ \\eta \\in \\mathcal{V} }$ we have, from the conservation of energy of the homogenized Schr\\\"odinger equation \\eqref{askdjfhucomojkfdfd}, that the solution ${ v_\\eta : \\mathbb{R}^{n+1}_T \\to \\mathbb{C} }$ satisfies\n\n\t\t\\begin{equation*}\n\t\t\t{\\Vert v_\\eta \\Vert}_{L^2(\\mathbb{R}^{n+1}_T)} = \\sqrt{T} \\, {\\Vert v_0 \\Vert}_{L^2(\\mathbb{R}^n)}, \\;\\, \\forall \\eta \\in \\mathcal{V}.\n\t\t\\end{equation*}\nThus, after possible extraction of a subsequence, we have the existence of a function ${ v^{(0)} \\in L^2(\\mathbb{R}^{n+1}_T) }$ such that \n\t\t\\begin{equation}\\label{tvdfvcfvfvfvfcdcdcdcdc}\n\t\t\tv_{\\eta} \\; \\xrightharpoonup[\\eta \\to 0]{} \\; v^{(0)} \\; \\text{in} \\; L^2(\\mathbb{R}^{n+1}_T).\n\t\t\\end{equation}\n\nBy the variational formulation of the equation \\eqref{askdjfhucomojkfdfd}, we find\n\t\t\\begin{equation}\\label{yhnygbtgbrvftgbdfervd}\n\t\t\\begin{array}{l}\n\t\t\t0 = \\displaystyle i \\int_{\\mathbb{R}^n} v_0(x) \\, \\overline{\\varphi}(0,x) \\, dx - i \\int_{\\mathbb{R}^{n+1}_T} v_\\eta(t,x) \\, \\frac{\\partial \\overline{\\varphi}}{\\partial t} (t,x) \\, dx \\, dt \\\\ [15pt]\n\t\t\t\\displaystyle \\qquad\\qquad+ \\int_{\\mathbb{R}^{n+1}_T}\\Bigg\\{- {\\left\\langle 
A_\\eta^\\ast v_\\eta(t,x), D^2 {\\varphi}(t,x) \\right\\rangle} + U_\\eta^\\ast v_\\eta(t,x) \\, \\overline{\\varphi}(t,x)\\Bigg\\} \\, dx \\, dt,\n\t\t\\end{array}\n\t\t\\end{equation}\nfor all ${ \\varphi \\in C_{\\rm c}^1((-\\infty,T)) \\otimes C_{\\rm c}^2(\\mathbb{R}^n) }$. Recall that ${ {\\left\\langle P,Q \\right\\rangle} := {\\rm tr}(P \\overline{Q}^t) }$, for ${ P,Q }$ in \n${ \\mathbb{C}^{n \\times n} }$. Then, using~\\eqref{tvdfvcfvfvfvfcdcdcdcdc} and Theorem~\\ref{jnchndhbvgfbdtegdferfer}, letting ${ \\eta \\to 0 }$ and invoking the \nuniqueness property of the equation~\\eqref{987tr7tef76756g7rg5467g546r7g5}, we conclude that ${ v^{(0)}=v_{\\rm per} }$.\n\n2. Now, using that ${ U_\\eta^\\ast = U_{\\rm per}^\\ast + \\eta\\, U^{(1)} + \\mathrm{O}(\\eta^2) }$ as ${ \\eta \\to 0 }$, defining $V_{\\eta}(t,x):=v_\\eta\\Big(t,\\sqrt{A_\\eta^\\ast}\\,x\\Big)$ and \nusing the homogenized equation \\eqref{askdjfhucomojkfdfd}, we arrive at \n\\begin{equation}\\label{PertCase1}\n\t\t\\left\\{\n\t\t\\begin{array}{c}\n\t\t\ti \\displaystyle\\frac{\\partial V_\\eta}{\\partial t} - \\Delta V_{\\eta} + U_{\\rm per}^\\ast V_\\eta = -\\Big(\\eta\\,U^{(1)} + \\mathrm{O}(\\eta^2)\\Big)\\,V_{\\eta}, \\;\\, \\text{in} \\;\\, \\mathbb{R}^{n+1}_T, \\\\ [7.5pt]\n\t\t\tV_\\eta(0,x) = v_0\\Big(\\sqrt{A_\\eta^\\ast}\\,x\\Big), \\;\\, x\\in \\mathbb{R}^n.\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{equation}\nProceeding similarly with respect to $V(t,x):=v_{\\rm per}\\Big(t,\\sqrt{A_{\\rm per}^\\ast}\\,x\\Big)$, we obtain\n\n\\begin{equation}\\label{PertCase2}\n\t\t\\left\\{\n\t\t\\begin{array}{c}\n\t\t\ti \\displaystyle\\frac{\\partial V}{\\partial t} - \\Delta V + U_{\\rm per}^\\ast V = 0, \\;\\, \\text{in} \\;\\, \\mathbb{R}^{n+1}_T, \\\\ [7.5pt]\n\t\t\tV(0,x) = v_0\\Big(\\sqrt{A_{\\rm per}^\\ast}\\,x\\Big), \\;\\, x\\in \\mathbb{R}^n.\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{equation}\nNow, the difference between the equations~\\eqref{PertCase1} 
and~\\eqref{PertCase2} yields\n\\begin{equation}\\label{PertCase3}\n\t\t\\left\\{\n\t\t\\begin{array}{c}\n\t\t\ti \\displaystyle\\frac{\\partial (V_\\eta-V)}{\\partial t} - \\Delta (V_{\\eta}-V) + U_{\\rm per}^\\ast (V_\\eta-V) = -\\Big(\\eta\\,U^{(1)} + \\mathrm{O}(\\eta^2)\\Big)\\,V_{\\eta}, \\;\\, \\text{in} \\;\\, \\mathbb{R}^{n+1}_T, \\\\ [7.5pt]\n\t\t\t(V_\\eta-V)(0,x) = v_0\\Big(\\sqrt{A_\\eta^\\ast}\\,x\\Big)-v_0\\Big(\\sqrt{A_{\\rm per}^\\ast}\\,x\\Big), \\;\\, x\\in \\mathbb{R}^n.\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{equation}\nHence, multiplying the last equation by $\\overline{V_{\\eta}-V}$, integrating over $\\mathbb R^n$ and taking the imaginary part yields\n$$\n\\frac{d}{dt}\\|V_{\\eta}-V{\\|}_{L^2(\\mathbb{R}^n)}\\le \\mathrm{O}(\\eta),\n$$\nfor $\\eta\\in \\mathcal{V}$. Since, due to the expansion of $A_\\eta^\\ast$ in Theorem~\\ref{873627yuhfdd} and the definition of $v_0^1$, the initial difference $(V_\\eta-V)(0,\\cdot)$ is itself of order $\\eta$ in $L^2(\\mathbb{R}^n)$, an integration in time shows that ${\\|V_\\eta-V\\|}_{L^2(\\mathbb{R}^n)}(t) = \\mathrm{O}(\\eta)$ uniformly in $t\\in(0,T)$. Defining \n\\begin{equation*}\n\t\t\tW_\\eta(t,x) := \\frac{V_\\eta(t,x)-V(t,x)}{\\eta}, \\;\\, \\eta \\in \\mathcal{V},\n\t\t\\end{equation*}\nthis estimate provides \n$$\n\\sup_{\\eta\\in \\mathcal{V}} \\| W_{\\eta}{\\|}_{L^2(\\mathbb{R}^{n+1}_T)}< +\\infty.\n$$\nThus, taking a subsequence if necessary, there exists ${ v^{(1)} \\in L^2(\\mathbb{R}^{n+1}_T) }$ such that\n\t\t\\begin{equation}\\label{67dtguystrt6756456rt3yd}\n\t\t\tW_{\\eta}(t,x) \\; \\xrightharpoonup[\\eta \\to 0]{} \\; v^{(1)}\\Big(t,\\sqrt{A_{\\rm per}^\\ast}\\,x\\Big), \\; \\text{in} \\; L^2(\\mathbb{R}^{n+1}_T).\n\t\t\\end{equation}\n\t\t\nHence, multiplying the equation~\\eqref{PertCase3} by $\\eta^{-1}$, letting $\\eta\\to 0$ and performing a change of variables, we reach \nequation~\\eqref{73547tr764tr63tr4387tr8743tr463847}, finishing the proof of the theorem. \n\n\n\\end{proof}\n\n\n\\section*{Acknowledgements}\nConflict of Interest: Author Wladimir Neves has received research grants from CNPq\nthrough the grant 308064\/2019-4. 
Author Jean Silva has received research grants from CNPq through the grant 302331\/2017-4.\n\n\\section{Introduction}\n\nGravitationally unstable (GI) disks are expected in the early phases of star formation, when the disk mass can still\nbe an appreciable fraction of the stellar mass, during the Class 0\/Class I stage.\nWhether disk fragmentation is a common outcome of GI and results in long-lived objects that contract to become gas giant\nplanets (Mayer et al. 2004; Boss 2005), or even lower mass planets via tidal mass loss (Boley et al. 2010; Galvagni \\& Mayer 2014), \nis still debated (Helled et al. 2014). Yet disk instability offers a natural\nexplanation for the massive planets on wide orbits discovered via imaging surveys in the last decade\n(e.g. Marois et al. 2008) because the conditions required for disk fragmentation, namely a Toomre instability parameter $Q < 1.4$ \nand short radiative cooling timescales, should be satisfied in the disk at $R > 30$ AU (Durisen et al. 2007; Rafikov 2007; \nClarke 2009; Boley et al. 2010; Meru \\& Bate 2010; 2012).\nYet direct evidence that disk fragmentation into planetary-sized objects can take place is still\nlacking. In disk instability, when protoplanets form from condensations in overdense spiral arms, they are massive and extended, spanning\n2-6 AU in size for a typical mass of a few Jupiter masses (Boley et al. 2010). \nThe first phase of clump collapse should last $10^3-10^4$ yr (Galvagni et al. 2012),\nafter which rapid contraction to Jupiter-like densities should occur owing to H$_2$ dissociation \n(Helled et al. 
2006).\nThe initial slow phase of collapse, in which\nthe protoplanet is still very extended, should be the easiest to detect due to less stringent angular resolution constraints.\nThe huge step in sensitivity and angular resolution made\npossible with the advent of the ALMA observatory prompted us to consider the possible detection of such early stages of planet\nformation by disk instability.\n\nRecently, several works have indeed focused on detecting spiral structure in gravitationally unstable disks using ALMA \n(Cossins et al. 2010; Douglas et al. 2013; Dipierro et al. 2014; Evans et al. 2015). Similar studies of marginally unstable\ndisks exhibiting a strong spiral pattern have also been carried out with near-infrared observations of scattered light (Dong et al. \n2015). These studies have been \nmotivated\nby the recent discovery of several disks with prominent spiral arms, such as MWC 758 (Benisty et al. 2015) and SAO 206462\n(Garufi et al. 2013). They do not focus on the detectability of the extreme outcome of GI, namely fragmentation into gas giant\nplanets or more massive sub-stellar companions, thus they do not address whether ALMA could detect a \"smoking gun\" signature\nof planet formation by GI. \nIn addition, spiral arms can also be produced by migrating planets (Zhu et al. 2015; Dong et al. 2015; \nPohl et al. 2015), or perturbations by nearby\nstellar companions, while extended clumps are a unique feature of disk instability. \n\nWith the exception of Douglas et al. (2013), previous works studying the detectability of spiral structure in GI disks\nhave employed simulations with simple radiative cooling prescriptions for the disk rather than coupling the hydro solver with radiative\ntransfer.\nThis also applies to the only previous study of the detectability of GI clumps, which employed 2D simulations of \nvery massive embedded protostellar disks fragmenting predominantly into brown-dwarf sized objects (Vorobyov, Zakhozhay \\& \nDunham 2013). 
The limited treatment of radiation\ncan strongly affect the resulting temperature structure in the disk (Durisen et al. 2007; Boley et al. 2006; Evans et al. 2015), and hence any inference \nconcerning detectability.\n\nHere we report on the first study of the detectability of massive protoplanets formed by disk instability \nwith ALMA, which employs state-of-the-art 3D radiation-hydro SPH simulations. The latter are used to generate \nALMA images by means of the ray-tracing code RADMC-3D \\footnote { \nhttp:\/\/www.ita.uni-heidelberg.de\/$\\sim$dullemond\/software\/radmc-3d\/} combined with the ALMA simulator.\n\n\n\\section{Methods}\n\n\n\\subsection{The simulations}\n\nWe perform very high resolution 3D radiation hydrodynamics simulations using the\nGASOLINE SPH code (Wadsley et al. 2004) with the implementation of an implicit \nscheme for flux-limited diffusion and photospheric\ncooling described and thoroughly tested in Rogers \\& Wadsley (2011; 2012). \nWe solve the mono-frequency radiation-hydro equations using\nRosseland mean and Planck mean opacities by means of a look-up table. We use the opacities\nof d'Alessio et al. (2001). Stellar irradiation is included in the computation of the initial equilibrium\nof the disk but not in the simulation. We adopt an equation of state with a variable adiabatic index\nas a function of temperature, which includes the effect of hydrogen ionization and molecular dissociation\n(Boley et al. 2007; Galvagni et al. 2012).\n\nWe model self-gravitating protoplanetary disks without an embedding envelope.\nThe disk parameters are very similar to those adopted in Boley et al. (2010).\nThe central star has a mass of $1.35 M_{\\odot}$, comparable to that in the HR8799 \nexoplanetary system (Marois et al. 2008).\nThe disk mass is $0.69 M_{\\odot}$ out to a radius of $200$ AU. \nA high disk mass,\ncomprising a significant fraction of the mass of the host star, should be typical\nof Class 0-I disks (Greaves \\& Rice 2010; Dunham et al. 
2014), although occasionally massive systems are found also in\nthe Class II\/T-Tauri stage (Miotello et al. 2014).\n\n\n\\begin{figure*}\n\\epsscale{0.8}\n\\plotone{fig1.pdf}\n\\caption{Disk surface density (top) and temperature (bottom) at two representative times, immediately\nbefore (early) and one rotation period after fragmentation (late). A slice with thickness equal to one grid cell, namely 0.16 AU,\nis shown.}\n\\end{figure*}\n\n\nThe initial conditions are constructed using an iterative procedure to ensure local balance between pressure, gravity\nand centrifugal forces, taking into account the actual gravitational potential of each gas\nparticle as determined by both the disk and the central star (Rogers \\& Wadsley 2011). \nThe temperature profile is determined by imposing an initial Toomre Q parameter that \nreaches a minimum of $Q_{min} \\sim 1.4$ at $R \\sim 60$ AU (see e.g. Durisen et al. 2007).\nThe simulations comprise 1 million particles in the disk,\nwith a fixed gravitational softening of $0.16$ AU and a variable SPH smoothing length\nwhich is comparable to the softening at the beginning but can become as small as $0.05$ AU in the highest density regions.\nThe simulation employed in this paper is part of a set of simulations with different disk masses, stellar\nmasses and opacities. Here we focus on one particular simulation, which produces a few massive clumps, one of which is \ngravitationally bound by the end of the simulation, thereby lending itself naturally to the analysis that we intend to carry out.\n\n\n\n\\subsection{Post-processing radiative transfer}\n\n\nWe map the SPH data to a homogeneous grid with dimensions $2500 \\times 2500 \\times\n1250$\nand a cell size of $0.16\\,$AU. We assume a constant gas-to-dust ratio of $100$ and\nthat gas and dust are collisionally coupled, so that the gas and dust temperatures\nare identical. We use the radiative-transfer code\nRADMC-3D\nto produce synthetic dust emission maps from these data. 
We follow Dipierro et al. (2014),\nwho used the opacity law adopted in Cossins et al. (2010):\n\n\\begin{equation}\n\\kappa_\\nu = 0.025 \\left( \\frac{\\nu}{10^{12}\\,\\mathrm{Hz}} \\right)\n\\,\\mathrm{cm}^2\\,\\mathrm{g}^{-1}\n\\end{equation}\nfor dust in solar metallicity gas. Note that in this step we adopt a frequency-dependent\nopacity law while the simulations simply employed frequency-integrated opacities\nto limit the computational burden of the radiative calculation (see previous section).\nWe produce images of thermal dust emission at four frequencies ($230\\,$GHz, $345\\,$GHz,\n$460\\,$GHz and $690\\,$GHz) and for five different inclination angles (face-on,\n$30^\\circ$,\n$45^\\circ$, $60^\\circ$ and edge-on). \n\n\n\n\\subsection{ALMA synthetic observations}\n\nWe simulate the ALMA full array observations of the dust continuum emission using\ntasks \\verb+simobserve+ and \\verb+simanalyze+ in CASA 4.1.0 (McMullin et al. 2007).\nWe assume the disk is at the distance of the Ophiuchus star forming\nregion, 125 pc.\nWe simulate the continuum ALMA observations for a 10-minute on-source time, with a\n2\\,GHz bandwidth, and using 5 different array configurations (alma.out01, alma.out07,\nalma.out14, alma.out21, alma.out28). We chose these parameters because they represent\nfeasible parameters of future snapshot surveys for young disks. The different array\nconfigurations allow for a clear display of the trade-off between sensitivity and angular\nresolution. \nNote that the chosen integration time is realistic and at the same time allows us to achieve\na good signal-to-noise ratio. For comparison, Dipierro et al. (2014) have considered \nlonger integration times (typically 30-120 minutes) in their analysis of spiral structure\ndetection, while we preferred to be conservative. Clearly, for systems located at significantly\nlarger distances (e.g. 
400 pc for Orion) longer integration times will be required in order to approach\nthe quality of the results presented here.\nIn addition, we have also combined the synthetic observations of two array configurations:\nalma.out14 and alma.out28. The imaging of these combined datasets was done using\nstandard clean and multi-scale clean. The images created using multi-scale clean show an\nimproved image fidelity when compared to standard clean, as we show in the next section.\n\n\n\\section{Results}\n\nThe disk quickly develops a prominent spiral pattern that grows in amplitude, rapidly leading to overdensities\nalong the arms. \nThe disk fragments into two clumps with masses of several $M_J$ after a few orbits,\nat a radius of about 80 AU, where the orbital time is $\\sim 2000$ years (Figure 1). A third \noverdensity begins to form along one of the spiral arms near the end of the simulation.\nThe simulation is stopped once the first clump that forms, seen at 10 o'clock in Figure 1 \n(right panel), \nbecomes gravitationally bound and collapses\nfurther, reaching extremely high central densities that render the time-integration prohibitively slow.\nThe bound mass of the latter clump is $\\sim 6.2 M_{J}$ at the last\nsnapshot, and it has been measured using the SKID group finder with an unbinding procedure (see \n\\footnote{http:\/\/hpcforge.org\/projects\/skid\/}).\nThe strong spiral pattern, dominated by low-order modes, $m=2-4$, and the masses of the clumps, are fairly typical\nof GI-unstable disks undergoing fragmentation (Mayer et al. 2004; 2007; Durisen et al. 
While the clump has a mass\nat the high end of the mass distribution of extrasolar gas giants, we note that small-scale\nsimulations with much higher resolution, capable of following the collapse of clumps to near-planetary densities, have found that \nthe planetary mass resulting at the end of the collapse is at least a factor of 2 lower since a significant fraction\nof the mass resides in an extended, loosely bound circumplanetary disk which can be easily stripped by stellar tides\nas the protoplanet migrates inward (Galvagni et al. 2012; Galvagni \\& Mayer 2014; Malik et al. 2015). \n\n\nOur most important result is that massive clumps formed by GI are detectable. Figure 2 shows \na comparison of the resulting ALMA images for the face-on disk projection. In general,\nthe higher-frequency channels, 460 GHz and 690 GHz, are those that best capture the\nactual substructure in the disk and its density contrast, correctly separating the clumps, even the more\ndiffuse ones, from the spiral arms. At lower frequencies the noisy maps render the interpretation\nmore uncertain, making it difficult to single out even the bound clump. Note that in all \nthese images multi-scale clean has been adopted. Its adoption, as well as the combination of both\nhigh- and mid-resolution configurations, is crucial, as shown by the comparison in Figure 3. Interestingly,\nFigure 3 shows that high resolution alone produces severe artifacts that prevent any identification\nof clumps or spiral structure.\nFinally, Figure 4 shows the comparison of images\nobtained for different inclination angles for our best configuration and frequency band.\nIt is clear that\nsubstructure and its relative contrast can be identified for a range of inclinations.\n\n\n\n\\begin{figure*}\n\\epsscale{1.0}\n\\plotone{fig2.pdf}\n\\caption{Comparison of observed dust continuum emission using two ALMA configurations (C34-14 and C34-28) and multiscale clean. 
Left and \nright columns\npresent the emission in the Early and Late stages of the disk evolution. From top to bottom, each panel shows the observation at a different \nfrequency, covering the main bands\navailable with ALMA. The colorscale is the same for images at the same frequency using an arcsinh stretch; the color bar is shown at the \nright-hand edge. Beam size is shown at the\nbottom left corner.}\n\\end{figure*}\n\n\n\nA key piece of information that one would like to extract from the ALMA images is the mass of the clumps. This is\nfor two reasons. First, the inferred masses can be used to \nsupport or refute the hypothesis that the clumps are candidate gas giants rather than e.g. brown dwarf-sized \nobjects or simply false detections. Second, by combining the information on mass and size in \nthe ALMA images the comparison with simulations can allow one to assess whether clumps are \ngravitationally bound objects rather than transient over-densities. We thus measured fluxes on the \nmaps shown in Figure 2 with an aperture of radius $0.06\\arcsec$, corresponding to 7.5 AU at the distance\nof Ophiuchus, which visually picks out the (bound) clump in Figure 1. \nThis is about a factor of 2 larger than the radius of the bound clump estimated with SKID, so it should yield an upper limit on its\nmass. Assuming optically-thin emission, a dust-to-gas ratio of 0.01, a temperature of 300 K, and\nthe Cossins et al. (2010) opacities, we estimate a mass (in increasing frequency) of\n18.3, 17.7, 17.1, and 16.5 $M_J$ for this clump.\nUsing another popular choice of the opacity law,\n$\\kappa _{\\nu}=0.1 (\\nu\/1.2\\,THz) cm^2$\/g (Hildebrand, 1983), the estimated masses would be a factor of 3.3 lower,\nhence nearly identical to the actual bound mass of the clump.\n\n\nMass estimates are extremely sensitive to the assumed gas temperature. 
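This sensitivity can be made explicit. Although the estimate is not written out above, the quoted masses are consistent with the standard optically-thin dust-emission relation (our sketch; here $F_\\nu$ is the measured flux density, $d$ the source distance and $g$ the gas-to-dust ratio):\n\\begin{equation*}\nM_{\\rm gas} \\simeq g \\, \\frac{F_\\nu \\, d^2}{\\kappa_\\nu \\, B_\\nu(T)},\n\\end{equation*}\nwhere $B_\\nu(T)$ is the Planck function. Since the inferred mass scales as $1\/B_\\nu(T)$, underestimating the temperature directly inflates the mass estimate.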
If we adopt 30 K, \nwhich is close to the background disk temperature rather than to the temperature of the gas in the\nclump region, the inferred mass is largely overestimated, in the range 70-90 $M_J$ even for the\nlowest opacities. This would lead to the erroneous conclusion that the clump is a brown dwarf rather than a gas giant.\nTherefore the simulations are instrumental in providing a good estimate of important parameters such as the local temperature.\nThe temperature is, however, well constrained, as spiral shocks in massive gravitationally unstable disks\nyield temperatures of order 200-400 K quite irrespective of the details of the disk model, hydro code\nand radiation solver adopted (see e.g. Mayer et al. 2007; Podolak, Mayer \\& Quinn 2011; Boley et al. 2006; Rogers\n\\& Wadsley 2011). Since spiral arms are the sites of clump formation, this is the temperature expected for\nclumps soon after their formation. As a gravitationally bound clump collapses further its core temperature eventually approaches\nthe $H_2$ dissociation temperature of 2000 K, but this occurs on scales of a few Jupiter radii (Helled et al.\n2014) that are not resolved with ALMA. Before that happens, however, high resolution simulations of clump\ncollapse show that the mean temperature rapidly increases to 500-600 K (Galvagni et al. 2012).\nUsing a temperature of 500 K would yield masses of\n$3-10 M_J$ depending on opacity choice, hence very close to the actual bound mass.\n\nFinally, we verified that the inferred mass estimates, for the apparent size of $\\sim 7.5$ AU\n and for the reference temperature of 300 K, automatically yield that the clump is \nvirialized (assuming the scalar virial theorem and spherical symmetry), hence bound. \nThe second clump at $\\sim 5$ o'clock in Figure 1 (right panel) is not\nbound according to our SKID analysis, and less so is the over-density at 6 o'clock. 
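A rough version of the virialization check above (our own back-of-the-envelope sketch, assuming a uniform sphere of molecular gas with mean molecular weight $\\mu \\simeq 2.3$) is that the clump is bound when its gravitational energy is comparable to or exceeds its thermal energy, namely when\n\\begin{equation*}\n\\frac{G M \\mu m_{\\rm H}}{k_B T R} \\gtrsim 1;\n\\end{equation*}\nwith $M \\sim 17 M_J$, $R \\sim 7.5$ AU and $T = 300$ K the left-hand side is of order unity, consistent with a marginally virialized, bound object.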
Note that these\nother structures, while weaker in contrast, do show up quite well in the ALMA images at the \nhighest frequencies (690 GHz). They appear as extended as they are in the actual simulation at 690 \nGHz (and marginally at 460 GHz), while at lower frequencies they are less clear and blend with \nspiral arms, making it difficult to determine their presence as physical substructure in the disk \n(especially for the over-density at 6 o'clock). Note that marginally bound, transient \nover-densities that are easily dissolved by shear are a recurrent feature of GI disks, hence \nrecovering their presence is almost as important as\nbeing able to identify a single bound clump, since it is never observed in simulations that \na fragmenting disk produces only a single clump (Mayer et al. 2004; Meru 2015).\nApplying our flux-based\nmass estimates across the different frequency bands, we obtain masses in the gas\ngiant planet range also for these two overdensities, varying in the range $1-4 M_J$,\ndepending on the assumed opacity.\n\n\n\n\\begin{figure}\n\\epsscale{1.2}\n\\plotone{fig3.pdf}\n\\caption{Comparison of the simulated disk with three mock observations from ALMA at 690 GHz.\nBottom left: dust continuum emission at 690 GHz obtained from the radiative transfer of the numerical simulation; this is used as input for \nthe ALMA simulated observations.\nTop left: ALMA simulated image of the highest angular resolution possible (C34-28) using standard clean. Notice the severe imaging artifacts \ndue to the limited uv-coverage, where the\ndisk emission is more extended.\nTop right: ALMA simulated image of the combined high- and medium-angular resolution configurations (C34-14 and C34-28) using standard clean.\nBottom right: ALMA simulated image of the combined high- and medium-angular resolution configurations (C34-14 and C34-28) using multiscale \nclean.\nIn all panels the color stretch is between 0 and the peak of the image using an arcsinh stretch. 
Beam size is shown in the bottom left \ncorner.}\n\\end{figure}\n\n\n\n\\section{Discussion and Conclusions}\n\nWe have reported a proof-of-concept study which combines high-resolution radiation-hydro simulations of GI disks with synthetic observations. \nOur study shows that ALMA can detect GI clumps on the scale of gas giants in the early stages of their collapse. This \nfinding extends the results of Dipierro et al. (2014), who showed how even fairly complex spiral structure can be detected by ALMA for a variety of \nfrequencies.\nWhile the possibility of clump detection by ALMA was already suggested by Vorobyov et al. (2013), we note that the 2D simulations\nin their work considered extremely massive disks, arising soon after the collapse of the molecular cloud core, that \nwere almost \nentirely shattered by fragmentation into very massive objects on the scale of brown dwarfs. Such a violent disk instability \nphase, if it ever happens, would last only a couple of rotations, after which the disk itself would disappear.\nIt would then be hardly observable, and it would not\nlead to long-lasting planetary-sized objects (Helled et al. 2014). Furthermore, their synthetic ALMA maps were obtained\nfrom SED modeling rather than by means of a ray-tracing radiative transfer calculation as we do here.\n\nBased on our results, detection is possible not only for\nvery dense, gravitationally bound clumps, which yield the highest density contrast, but also for more loosely bound overdensities\nwhich, while transient, are expected during a GI phase.\nHigh frequencies, the combination \nof two (or more) ALMA configurations, and imaging with multi-scale clean provide the optimal setup to capture \nvery closely the substructure in the disk at large radii, where the optical depth is relatively low. \nThis even leads to fairly accurate mass estimates for clumps, once these can be clearly identified and provided\nthat a sensible temperature is assumed. 
Radiation-hydro simulations are crucial in constraining the temperature.\n\nNote that\nwe find little dependence of the ability to detect clumps and spiral structure on the inclination angle, which is at variance\nwith Dipierro et al. (2014). However, in the latter work inclination affected the simulated ALMA maps when a high-order, tightly\nwound spiral pattern was present. In our case the disk, being very massive, develops only global, large-scale, large pitch\nangle $m=2-4$ modes (see also Dong et al. 2015), whose morphology is inherently less affected by inclination, as found also\nby Douglas et al. (2013).\n\nOur simulations have some limitations that could have an impact on the ALMA mocks.\nWe do not include a gaseous envelope, which should still embed the disk in Class 0-I phases and still be the source\nof significant accretion. In principle\nprotostellar collapse simulations in realistic turbulent cores should be considered.\nNote that the envelope could have a non-trivial temperature distribution, on average colder than\nthe spiral shocks and clumps (with temperatures of a few tens of K), but with hot spots where accretion shocks hit the disk. Since\naccretion is filamentary and patchy in a turbulent core (e.g. Hayfield et al. 2011), inhomogeneities in density and\ntemperature might arise in the outer disk that could render clump identification more difficult. However, using\nmocks for both continuum and molecular line emission\/absorption, Douglas et al. (2013) have shown that a strong spiral pattern \nshould be detectable with ALMA even in embedded disks.
In particular, in GI unstable disks dust\nwould tend to concentrate in spiral arms due to ensuing negative pressure gradients towards the overdensity peaks \n(Rice et al. 2005), and would do so even more after dense clumps have formed (Boley \\& Durisen 2010). Therefore \nopacities could be significantly higher at clump sites, perhaps by an order of magnitude, and also to some extent in spiral arms, \nrelative to the background flow. In this case, lower-frequency observations may have to be considered. The\nresulting ALMA mocks will have to be investigated in detail by future work.\n \n\n\n\n\\begin{figure}\n\\epsscale{1.4}\n\\plotone{fig4.pdf}\n\\caption{Comparison of simulated observations of the dust continuum emission at 690 GHz of the disk in a Late stage as observed by ALMA using \ntwo array configurations (C34-14 and\nC34-28) and imaged using multiscale clean. The inclination angles shown are 0 (face-on), 45 and 60 \ndegrees, and are listed in the top left \ncorners. The beam size is shown in the bottom\nleft corner.}\n\\end{figure}\n\n\\bigskip\n\n\\acknowledgements\n\n\\smallskip\n\nThe authors thank Patrick Rogers for deploying the new disk initial condition generator\nused, Marina Galvagni for running the GASOLINE simulations used in this paper, and Joachim Stadel\nfor improving the TIPGRID code employed to map particle datasets onto grids.\nWe thank Ravit Helled, Aaron Boley and Farzana Meru for useful comments during preparation of the \nfinal manuscript.\nL.M. thanks the Munich Institute for Astro and Particle Physics (MIAPP)\nfor hospitality during a crucial phase of this work during summer 2015. \nJ.E.P. was supported\nby the SINERGIA grant \"STARFORM\" of the Swiss National Science Foundation during the\nearly stages of this work, which also enabled the collaboration with L.M. and T.P.\nJ.E.P. acknowledges the financial support of the European Research Council (ERC; project PALs 320620).\nT.P. 
acknowledges support by a \"Forschungskredit\" grant of the\nUniversity of Z\\\"urich and by the DFG Priority Program 1573 {\\em\nPhysics of the Interstellar Medium}.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\\label{sec:intro}\nStrongly correlated electron systems, in which the Coulomb interaction between electrons plays an essential role, can exhibit a variety of phenomena such as ferromagnetism, antiferromagnetism, and superconductivity. The Hubbard model has been introduced as a minimal model to describe such systems \\cite{Kanamori1963, Gutzwiller1964, Hubbard1963}. \nDespite its apparent simplicity, the intricate competition between the kinetic and the \non-site Coulomb terms in the model is hard to deal with analytically. \nSo far, exact\/rigorous results have been mostly limited to one dimension~\\cite{essler2005one} or systems with special hopping and filling~\\cite{Tasaki1992, Mielke1993, Mielke1993a, Tasaki1997, Tasaki2019mb, Derzhko2015}.\\par\n\n\nRecently, it has become possible to simulate the Hubbard model using ultracold atoms in optical lattices~\\cite{Kohl2005, Jordens2008, Schneider2008}. \nFurthermore, it was proposed theoretically~\\cite{Honerkamp2004} and demonstrated experimentally~\\cite{Taie2012} that multi-component fermionic systems with SU($n$) symmetry can be realized in cold-atom setups.\nThese systems are well described by the SU($n$) Fermi-Hubbard model, in which each atom carries $n$ internal degrees of freedom. When $n=2$, the model reduces to the original Hubbard model with spin-independent interaction. \nAlthough the SU($n$) ($n>2$) symmetry has been less explored in the condensed matter literature, there has been growing interest in recent years in studying the SU($n$) Hubbard model theoretically. 
\nFor example, it is argued that the SU($n$) Hubbard model can exhibit exotic \nphases that do not appear in the SU($2$) counterpart~\\cite{cazalilla2009ultracold, Honerkamp2004, Chung2019}.\nBesides, enlarged symmetries other than SU($n$), such as SO(5), \nhave also been discussed in higher-spin systems~\\cite{wu2003exact}.\n\\par\nThe SU($n$) Hubbard model is, in general, harder to study theoretically than the SU(2) Hubbard model.\nIt has been reported that the Nagaoka ferromagnetism~\\cite{Nagaoka1966, Tasaki1989}, which is the first rigorous result for the SU($2$) Hubbard model, can be generalized to the case of SU($n$)~\\cite{Katsura2013, Bobrow2018}.\nFlat-band ferromagnetism is another example of rigorous results for the SU(2) Hubbard model.\nHere, a flat band refers to a structure of the single-particle energy spectrum with a macroscopic degeneracy.\nA tight-binding model with a flat band can be constructed using standard methods such as the line-graph~\\cite{mielke1991ferromagnetic} and the cell constructions~\\cite{Tasaki1992}.\nIn the SU(2) case, it is known that if the system has a flat band at the bottom of the single-particle spectrum and the particle number is the same as the number of unit cells, the ground state of the model exhibits ferromagnetism~\\cite{Tasaki1992, Mielke1993, Tasaki1997, Li2004}.\nAn SU($n$) counterpart of the flat-band ferromagnetism has also been discussed recently~\\cite{Liu2019}.\\par\n\nIn this paper, \nwe consider the SU($n$) Hubbard model on a one-dimensional (1D) lattice called the railroad-trestle lattice\nand derive rigorous results. We first treat the model with a flat band at the bottom and prove that the model exhibits SU($n$) ferromagnetism in its ground states provided that the on-site interaction is repulsive and the total fermion number is the same as the number of unit cells. This is a slight generalization of the result obtained by Liu {\\it et al}. 
in~\\cite{Liu2019}, in the sense that our hopping Hamiltonian has one more parameter. \nWe then discuss SU($n$) ferromagnetism in a perturbed model obtained by adding extra hopping terms that make the flat band dispersive. We prove that this particular perturbation leaves the SU($n$) ferromagnetic ground states unchanged when the width of the bottom band is sufficiently small. This is our main result and can be thought of as an SU($n$) extension of the previous theorem for the SU($2$) Hubbard model with nearly flat bands~\\cite{Tasaki1995}.\\par \nThe rest of this paper is organized as follows. In Sec.~\\ref{sec:thm1}, we introduce the SU($n$) Hubbard model with a completely flat band and prove that its ground states exhibit SU($n$) ferromagnetism.\nIn Sec.~\\ref{sec:thm2}, we study a model with a nearly flat band and prove that the ground states remain SU($n$) ferromagnetic when the repulsive interaction and the band gap are sufficiently large. \nWe present our conclusions in Sec.~\\ref{sec:conclusion}.\n\\section{Model with a completely flat band} \\label{sec:thm1}\nLet $M$ be an arbitrary positive integer and $\\Lambda = \\{1, 2, \\dots , 2M\\}$ be a set of $2M$ sites on the railroad-trestle lattice [Fig. \\ref{fig:deltachain}].\nWe impose periodic boundary conditions, so that sites $j$ and $j + 2M$ are identified. 
\nWe denote by $\\mathcal{E}$ and $\\mathcal{O}$ the subsets of $\\Lambda$ consisting of even sites and odd sites, respectively.\nWe define creation and annihilation operators $c_{x, \\alpha}^{\\dag}$ and $c_{x, \\alpha}$ for a fermion at site $x \\in \\Lambda$ with color $\\alpha = 1, \\dots, n$.\nThey satisfy $\\{c_{x, \\alpha}, c_{y, \\beta}^{\\dag}\\} = \\delta_{\\alpha, \\beta} \\delta_{x, y}$.\nThe number operator for a fermion at site $x$ with color $\\alpha$ is denoted by $n_{x, \\alpha} = c_{x, \\alpha}^{\\dag} c_{x, \\alpha}$.\nWe consider the SU($n$) Hubbard Hamiltonian\n\\begin{align}\nH_{1} \n&=H_{\\mathrm{hop}} + H_{\\mathrm{int}} \\label{hamiltonian1}, \\\\ \nH_{\\mathrm{hop}}\n&= \\sum^n_{\\alpha=1} \\sum_{x,y \\in \\Lambda} t_{x, y} c_{x, \\alpha}^{\\dag} c_{y, \\alpha}, \\\\\nH_{\\mathrm{int}}\n&= U \\sum_{1 \\le \\alpha < \\beta \\le n} \\, \\sum_{x \\in \\Lambda} n_{x, \\alpha}n_{x, \\beta}, \\label{hint}\n\\end{align}\nwhere $t_{2x-1,2x-1} = t$, $t_{2x, 2x} = 2\\nu^{2} t$, $t_{2x-1,2x} = t_{2x, 2x-1}= \\nu t$, $t_{2x-2,2x}= t_{2x, 2x-2} = \\nu^{2} t$, and the remaining elements of $t_{x, y}$ are zero (see Fig. \\ref{fig:deltachain}).\nThe parameters $t, \\nu$ and $U$ are positive.\n\\begin{figure}[H]\n\t\\begin{tabular}{c}\n\t\t\\centering\n\t\t\\begin{minipage}{0.5 \\hsize}\n\t\t\\subcaption{}\\label{fig:deltachain}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\columnwidth]{fig1a.eps}\n\t\t\\end{minipage}\n\t\t\n\t\t\\begin{minipage}{0.5 \\hsize}\n\t\t\\subcaption{}\\label{fig:Energyband}\t\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\columnwidth]{fig1b.eps}\n\t\t\\end{minipage}\n\t\\end{tabular}\n\t\\caption{\\subref{fig:deltachain} The railroad-trestle lattice with hopping amplitudes $t_{1} = \\nu t$ and $t_{2} = \\nu^{2} t$. 
\n\tOdd (black) and even (white) sites have on-site potentials $t$ and $2 \\nu^{2} t$, respectively.\n\tThe shaded region indicates the unit cell.\n\t\\subref{fig:Energyband} The energy bands for $t = 1, \\nu = 1\/\\sqrt{2}$. The lowest band at zero energy is completely flat. }\n\t\\label{fig:flatband}\n\\end{figure}\nWhen $U = 0$, the model reduces to a tight-binding model and we see that it has two bands with $\\epsilon_{1}(k) = 0, \\epsilon_{2}(k) = t(2\\nu^{2}+1) + 2\\nu^{2} t \\cos{k}$.\nClearly, the lowest band is dispersionless as shown in Fig. \\ref{fig:Energyband}.\\par\nWe define the operators $ F^{\\alpha, \\beta} = \\sum_{x \\in \\Lambda} c_{x, \\alpha}^{\\dag} c_{x, \\beta} $; the diagonal ones, $F^{\\alpha, \\alpha}$, are the total number operators of fermions with color $\\alpha$, while those with $\\alpha \\neq \\beta$ are the color-raising and -lowering operators.\nSince the Hamiltonian $H_{1}$ has SU($n$) symmetry, they commute with $H_{1}$.\nWe denote the eigenvalue of $F^{\\alpha, \\alpha}$ by $N_{\\alpha}$.\nSince each $F^{\\alpha, \\alpha}$ commutes with the Hamiltonian $H_{1}$, the eigenstates of $H_{1}$ are separated into different sectors labeled by $(N_{1}, \\dots, N_{n})$.\nIf the total fermion number $N_{\\mathrm{f}} = \\sum_{x \\in \\Lambda} \\sum_{\\alpha = 1}^{n} n_{x, \\alpha}$ is fixed, $N_{\\alpha}$ must satisfy $\\sum_{\\alpha=1}^{n} N_{\\alpha} = N_{\\mathrm{f}}$. 
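The two-band dispersion quoted above can be verified numerically. The $2\times 2$ Bloch Hamiltonian below (basis: the odd and the even site of one unit cell) is our own reconstruction from the hopping amplitudes $t_{x,y}$; diagonalizing it reproduces $\epsilon_{1}(k)=0$ and $\epsilon_{2}(k)=t(2\nu^{2}+1)+2\nu^{2}t\cos k$:

```python
import numpy as np

# Bloch Hamiltonian of the hopping part (one unit cell = one odd + one even
# site); reconstructed by us from the amplitudes t_{x,y} given in the text.
t, nu = 1.0, 1.0 / np.sqrt(2.0)

def bloch_h(k):
    return t * np.array(
        [[1.0,                         nu * (1.0 + np.exp(-1j * k))],
         [nu * (1.0 + np.exp(1j * k)), 2.0 * nu**2 * (1.0 + np.cos(k))]])

ks = np.linspace(-np.pi, np.pi, 201)
bands = np.array([np.linalg.eigvalsh(bloch_h(k)) for k in ks])

# Lowest band completely flat at zero energy, upper band dispersive
assert np.allclose(bands[:, 0], 0.0, atol=1e-12)
assert np.allclose(bands[:, 1], t * (2 * nu**2 + 1) + 2 * nu**2 * t * np.cos(ks))
```

The flat band can be traced to $\det H(k) = 0$ for all $k$: the determinant $t^{2}\nu^{2}\,[\,2(1+\cos k) - |1+e^{ik}|^{2}\,]$ vanishes identically, so one eigenvalue is always zero and the other equals the trace.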
\\par\nNow we define a new set of operators \n\\begin{align}\n&a_{x, \\alpha} := -\\nu c_{x-1, \\alpha} + c_{x, \\alpha} -\\nu c_{x+1, \\alpha} \\ \\ \\ &\\text{for} \\ x \\in \\mathcal{E}, \\\\\n&b_{x, \\alpha} := \\nu c_{x-1, \\alpha} + c_{x, \\alpha} + \\nu c_{x+1, \\alpha} \\ \\ \\ &\\text{for} \\ x \\in \\mathcal{O}, \n\\end{align}\nwhich satisfy \n\\begin{align}\n\\{a_{x, \\alpha}, b_{y, \\beta}^{\\dag}\\} &= 0, \\label{ab}\\\\\n\\{a_{x, \\alpha}, a_{y, \\beta}^{\\dag}\\} &= \n\\begin{cases}\n\\delta_{\\alpha, \\beta} (2\\nu^{2} + 1) \\ \\ &\\text{if}\\ x = y , \\\\\n\\delta_{\\alpha, \\beta} \\ \\nu^{2} \\ \\ &\\text{if}\\ x = y \\pm 2, \\\\\n0 \\ \\ &\\text{otherwise},\n\\end{cases} \\\\\n\\{b_{x, \\alpha}, b_{y, \\beta}^{\\dag}\\} & = \n\\begin{cases}\n\\delta_{\\alpha, \\beta} (2\\nu^{2} + 1) \\ \\ &\\text{if}\\ x = y, \\\\\n\\delta_{\\alpha, \\beta} \\ \\nu^{2}\\ \\ &\\text{if}\\ x = y \\pm 2, \\\\\n0 \\ \\ &\\text{otherwise}.\n\\end{cases}\n\\end{align} \nThe hopping Hamiltonian $H_{\\mathrm{hop}}$ is rewritten in terms of $b_{x, \\alpha}$ and $b_{x, \\alpha}^{\\dag}$ as \n\\begin{align}\nH_{\\mathrm{hop}} = t \\sum_{\\alpha=1}^{n} \\sum_{x \\in \\mathcal{O}} b_{x,\\alpha}^{\\dag} b_{x, \\alpha} \\label{hop1}\n\\end{align}\nand hence positive semi-definite.\nThe interaction term $H_{\\mathrm{int}}$ is also positive semi-definite because $n_{x, \\alpha} n_{x, \\beta} = \\left(c_{x, \\alpha} c_{x, \\beta}\\right)^{\\dag} c_{x, \\alpha} c_{x, \\beta}$.\nTherefore, the total Hamiltonian $H_{1} = H_{\\mathrm{hop}} + H_{\\mathrm{int}}$ is positive semi-definite as well.\nFrom now on, we fix the total fermion number as $N_{\\mathrm{f}} = |\\mathcal{E}| = M$ and define a fully polarized state as $\\ket{\\Phi_{\\mathrm{all}, \\alpha}} := \\prod_{x \\in \\mathcal{E}} a_{x, \\alpha}^{\\dag} \\ket{\\Phi_{\\mathrm{vac}}}$, where $\\ket{\\Phi_{\\mathrm{vac}}}$ is the vacuum state of $c_{x, \\alpha}$.\nFrom the anti-commutation relation (\\ref{ab}), we find that 
$\\ket{\\Phi_{\\mathrm{all}, \\alpha}}$ is an eigenstate of $H_{1}$ with eigenvalue zero.\nSince $H_{1} \\geq 0$, the fully polarized states are ground states of $H_{1}$.\nDue to the SU($n$) symmetry, one obtains a general form of the degenerate ground states as\n\\begin{align}\n\\ket{\\Phi_{N_{1}, \\dots, N_{n}}} = \\left(F^{n, 1}\\right)^{N_{n}} \\dots \\left(F^{2, 1}\\right)^{N_{2}} \\ket{\\Phi_{\\mathrm{all},1}}, \\label{fully polarized}\n\\end{align}\nwhere $N_{1} = M -\\sum_{\\alpha=2}^{n}N_{\\alpha}$.\nWe also refer to states of the form Eq. (\\ref{fully polarized}) as fully polarized states~\\footnote{\nThe total number of such states is $\\frac{(M+n-1)!}{M! (n-1)!}$.\n}.\n\nThe first result of this paper is the following: \\par\n{\\it Theorem 1.}---Consider \nthe Hubbard Hamiltonian (\\ref{hamiltonian1}) with the total fermion number $N_{\\mathrm{f}} = M$.\nFor arbitrary $t>0$ and $U>0$, the ground states of the Hamiltonian (\\ref{hamiltonian1}) are the fully polarized states and are unique apart from the trivial degeneracy due to the SU($n$) symmetry.\n\n{\\it Proof of Theorem 1.}---Let $\\ket{\\Phi_{\\mathrm{GS}}}$ be an arbitrary ground state of $H_{1}$ with $N_{\\mathrm{f}} = M$.\nSince the ground state energy is zero, we have $H_{1} \\ket{\\Phi_{\\mathrm{GS}}} =0$.\nThe inequalities $H_{\\mathrm{hop}} \\geq 0$ and $H_{\\mathrm{int}} \\geq 0$ imply that $H_{\\mathrm{hop}} \\ket{\\Phi_{\\mathrm{GS}}} =0$ and $H_{\\mathrm{int}} \\ket{\\Phi_{\\mathrm{GS}}}=0$, which means that\n\\begin{align}\n&b_{x, \\alpha} \\ket{\\Phi_{\\mathrm{GS}}} \n=0\\ \\ \\text{for any $x \\in \\mathcal{O}$ and $\\alpha = 1, \\dots, n$}, \\label{condition1}\\\\\n&c_{x,\\alpha} c_{x, \\beta} \\ket{\\Phi_{\\mathrm{GS}}} \n= 0\\ \\ \\text{for any $x \\in \\Lambda$ and $\\alpha \\neq \\beta$}. 
\\label{condition2}\n\\end{align}\nSince $a_{x, \\alpha}$ and $b_{x, \\alpha}$ obey the anti-commutation relation (\\ref{ab}), the condition (\\ref{condition1}) implies that $\\ket{\\Phi_{\\mathrm{GS}}}$ does not contain any $b_{x, \\alpha}^{\\dag}$ operator when it is constructed by acting with creation operators on the vacuum state.\nTherefore, it is written as\n\\begin{align}\n&\\ket{\\Phi_{\\mathrm{GS}}} \\nonumber \\\\\n& = \\! \\sum_{\\substack{A_{1}, A_{2} ,\\dots A_{n} \\subset \\mathcal{E}\\\\\n\\sum_{\\alpha =1}^{n} |A_{\\alpha}|= M}} \\!\nf(\\{A_{\\alpha}\\}) \\left( \\prod_{x \\in A_{1}} a_{x,1}^{\\dag}\\right) \\! \\!\n\\dots\n\\! \\left( \\prod_{x \\in A_{n}} a_{x,n}^{\\dag}\\right) \\! \\!\n\\ket{\\Phi_{\\mathrm{vac}}},\n\\end{align}\nwhere $A_{\\alpha}$ is a subset of $\\mathcal{E}$ and $f(\\{A_{\\alpha}\\})$ is a certain coefficient.\\par\nNext, we make use of the condition (\\ref{condition2}).\nWe take an even site $x \\in \\mathcal{E}$. \nUsing the anti-commutation relation $\\{c_{x, \\alpha}, a_{y,\\beta}^{\\dag} \\} = \\delta_{\\alpha, \\beta} \\delta_{x,y}$ and\nEq. (\\ref{condition2}) we see that \n$f(\\{A_{\\alpha}\\})=0$ if there exist $A_{\\alpha}$ and $A_{\\beta}$ such that $A_{\\alpha} \\cap A_{\\beta} \\neq \\emptyset$.\nSince $\\sum_{\\alpha=1}^{n}|A_{\\alpha}| = M$ and $A_{\\alpha} \\cap A_{\\beta} = \\emptyset$ for $\\alpha \\neq \\beta$, we find that $\\cup_{\\alpha=1}^{n} A_{\\alpha} = \\mathcal{E}$.\nThis means that the ground state is rewritten as \n\\begin{align}\n\\ket{\\Phi_{\\mathrm{GS}}}\n= \\sum_{\\bm{\\alpha}}C(\\bm{\\alpha}) \\left(\\prod_{x \\in \\mathcal{E}} a_{x, \\alpha_{x}}^{\\dag}\\right)\n\\ket{\\Phi_{\\mathrm{vac}}}, \n\\end{align}\nwhere the sum is over all \npossible color configurations $\\bm{\\alpha} = (\\alpha_{x})_{x \\in \\mathcal{E}}$ with $\\alpha_{x} = 1, \\dots ,n$. 
\nThen we consider the condition (\\ref{condition2}) for $x \\in \\mathcal{O}$.\nBy using \n\\begin{align}\n\\{c_{x,\\alpha}, a_{y, \\beta}^{\\dag}\\} =\n\\begin{cases}\n-\\nu \\delta_{\\alpha, \\beta} \\ \\ &\\text{if $y = x\\pm1$}, \\\\\n0 \\ \\ &\\text{otherwise},\n\\end{cases} \n\\label{anticom1}\n\\end{align}\nwe get\n\\begin{align}\n&c_{x, \\alpha}c_{x, \\beta} \\ket{\\Phi_{\\mathrm{GS}}} \\nonumber \\\\\n&= \\sum_{\\substack{\\bm{\\alpha}\\\\\n\t\\mathrm{s.t.} \\alpha_{p} = \\beta, \\\\\n\t\\alpha_{q} = \\alpha}} \n\t\\nu^{2} \\left[C(\\bm{\\alpha} ) - C(\\bm{\\alpha}_{p\\leftrightarrow q})\\right]\n\\left(\\prod_{y \\in \\mathcal{E}\\backslash \\{x\\pm 1\\}} a_{y, \\alpha_{y}}^{\\dag}\\right)\\ket{\\Phi_{\\mathrm{vac}}}, \n\\end{align}\nwhere $p=x-1$ and $q = x+1$.\nThe color configuration $\\bm{\\alpha}_{p \\leftrightarrow q}$ is obtained from $\\bm{\\alpha}$ by swapping $\\alpha_{p}$ and $\\alpha_{q}$.\nSince all the states in the sum are linearly independent, we find from the condition (\\ref{condition2}) that $C(\\bm{\\alpha}) = C(\\bm{\\alpha}_{p\\leftrightarrow q})$ for all $\\bm{\\alpha}$ and all $x \\in \\mathcal{O}$.\nAs the two localized states on \nneighboring even sites share an odd site between them, we see that\n\\begin{align}\nC(\\bm{\\alpha}) = C(\\bm{\\alpha}_{x \\leftrightarrow y}), \\label{symmetric}\n\\end{align}\nwhere $x, y$ are arbitrary different sites in $\\mathcal{E}$.\\par\nTo show that \nstates satisfying Eq. (\\ref{symmetric}) are the fully polarized states, i.e., SU($n$) ferromagnetic, we introduce the concept of a word~\\cite{kitaev2011patterns}. \nA {\\it word} $w = (w_{1}, \\dots, w_{M})$ is a sequence of integers where $w_{i} \\in \\{1,\\dots,n\\}$ for all $i$. \nWe denote by $|w|_{\\alpha}$ the number of occurrences of $\\alpha$ in $w$. 
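The combinatorics of words with fixed color multiplicities is straightforward to enumerate; a small sketch of our own, listing the distinct arrangements of a multiset of colors (the count is the multinomial coefficient $M!/\prod_{\alpha} N_{\alpha}!$):

```python
from itertools import permutations
from math import factorial, prod

def words(counts):
    """All words w over colors 1..n with |w|_alpha = counts[alpha-1]."""
    letters = [alpha + 1 for alpha, m in enumerate(counts) for _ in range(m)]
    return sorted(set(permutations(letters)))

# Multiplicities (N_1, N_2, N_3) = (2, 0, 1) give three words of length M = 3
W = words((2, 0, 1))
assert W == [(1, 1, 3), (1, 3, 1), (3, 1, 1)]

M = sum((2, 0, 1))
assert len(W) == factorial(M) // prod(factorial(m) for m in (2, 0, 1))
```

Enumerating via `set(permutations(...))` is exponentially wasteful for large $M$ but is transparent for an illustration of this size.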
\nWe define the set of words for which $|w|_{\\alpha} = N_{\\alpha}$ holds as follows: $W(N_{1}, \\dots , N_{n}) = \\{w | \\ |w|_{\\alpha} = N_{\\alpha}, \\ \\alpha = 1, \\dots , n \\}$.\nFor example, $W(2, 0, 1)$ consists of $(1, 1, 3), (1, 3, 1)$ and $(3, 1, 1)$.\nIt follows from \nEq. (\\ref{symmetric}) that the ground state of $H_{1}$ in the sector labeled by $(N_{1}, \\dots , N_{n})$ can be written as\n\\begin{align}\n\\ket{\\widetilde{\\Phi}_{N_{1}, \\dots , N_{n}}} = \\sum_{w \\in W(N_{1}, \\dots , N_{n})} a_{2, w_{1}}^{\\dag} a_{4, w_{2}}^{\\dag} \\dots a_{2M, w_{M}}^{\\dag} \\ket{\\Phi_{\\mathrm{vac}}}.\n\\end{align}\nNow, using the commutation relations $[F^{\\beta, \\alpha}, a_{x, \\gamma}^{\\dag}] = \\delta_{\\alpha, \\gamma} a_{x, \\beta}^{\\dag}$ for all $x \\in \\mathcal{E}$, we see that \n\\begin{align}\n\\left(F^{2, 1}\\right)^{N_{2}} \\! \\ket{\\Phi_{\\mathrm{all}, 1}} \n= \\! \\! \\! \\! \\sum_{w \\in W(M-N_{2}, N_{2})} \\! \\! \\! \\! a_{2, w_{1}}^{\\dag} a_{4, w_{2}}^{\\dag} \\dots a_{2M, w_{M}}^{\\dag} \\ket{\\Phi_{\\mathrm{vac}}}.\n\\end{align}\nBy repeating the procedure, we have the desired result $\\ket{\\widetilde{\\Phi}_{N_{1}, \\dots , N_{n}}} = \\ket{\\Phi_{N_{1}, \\dots, N_{n}}}$.\nThis proves that the ground states of $H_{1}$ are fully polarized states.\n\\hspace{\\fill}$\\blacksquare$\n\\section{Model with a nearly flat band}\n\\label{sec:thm2}\nSo far, we have considered the flat-band model, but this is an idealized case in which the lowest energy band becomes completely dispersionless.\nAs a more realistic model, we consider a model with a nearly flat band by adding a perturbation to the model in the previous section~\\footnote{\nIt was proposed that the hopping part of the Hamiltonian with a nearly flat band can be realized with ultracold atoms in a sawtooth lattice~\\cite{Zhang2015}.\n}.\nHere we define another Hubbard model on the same lattice as in Theorem 1:\n\\begin{align}\nH_{2} \n&= H_{\\mathrm{hop}}' + H_{\\mathrm{int}}, 
\\label{ham2}\n\\end{align}\nwhere $H_{\\mathrm{hop}}'$ is defined as \n\\begin{align}\nH_{\\mathrm{hop}}' \n&= -s \\sum_{\\alpha=1}^{n} \\sum_{x \\in \\mathcal{E}} a_{x,\\alpha}^{\\dag} a_{x, \\alpha}\n+ t \\sum_{\\alpha=1}^{n} \\sum_{x \\in \\mathcal{O}} b_{x,\\alpha}^{\\dag} b_{x, \\alpha}, \\label{hop2}\n\\end{align}\nand $H_{\\mathrm{int}}$ is defined in Eq.~(\\ref{hint}) with parameters $s, t, U > 0$.\nWhen the hopping Hamiltonian $H_{\\mathrm{hop}}'$ is written in terms of the original fermion operators $c_{x, \\alpha}$, \nit takes the form $H_{\\mathrm{hop}}' = \\sum_{\\alpha}\\sum_{x,y \\in \\Lambda} t'_{xy} c_{x, \\alpha}^{\\dag} c_{y, \\alpha}$, \nwhere $t'_{2x-1, 2x-1} = t-2\\nu^{2}s$, $t'_{2x,2x} = -s + 2\\nu^{2}t$, $t'_{2x-1,2x} = t'_{2x,2x-1} = \\nu(t+s)$, $t'_{2x-2, 2x} = t'_{2x, 2x-2} = \\nu^{2}t$, $t'_{2x-1, 2x+1} = t'_{2x+1, 2x-1} = -\\nu^{2}s$, and the remaining elements of $t'_{x, y}$ are zero.\nWhen we consider the single-particle problem, we obtain two bands with $\\epsilon_{1}(k) = -s(2\\nu^{2}+1) - 2\\nu^{2}s \\cos{k}$, $\\epsilon_{2}(k) = t(2\\nu^{2}+1) + 2\\nu^{2}t \\cos{k}$ (see Fig. 
\\ref{fig:Energy band2}).\nWe see that the lowest band is no longer flat; however, it can be regarded as a nearly flat band when $s$ is small enough.\nWe focus on this model in the following and prove a theorem on the ferromagnetism.\\par\n\\begin{figure}[H]\n\t\\begin{tabular}{c}\n\t\t\\centering\n\t\t\\begin{minipage}{0.5\\hsize}\n\t\t\\subcaption{}\\label{fig:deltachain2}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\columnwidth]{fig2a.eps}\n\t\t\\end{minipage}\n\t\t\\begin{minipage}{0.5\\hsize}\n\t\t\\subcaption{}\\label{fig:Energy band2}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\columnwidth]{fig2b.eps}\n\t\t\\end{minipage}\n\t\t\n\t\\end{tabular} \n\t\\caption{\\subref{fig:deltachain2} The lattice geometry of $H_{\\mathrm{hop}}'$.\n\tThe hopping amplitudes are given by $t_{1} = \\nu(t+s)$, $t_{2} = \\nu^{2} t$, and $t_{2}' = -\\nu^{2} s$.\n\tOdd (black) and even (white) sites have on-site potentials $t-2\\nu^{2}s$ and $-s + 2\\nu^{2} t$, respectively.\n\tThe corresponding energy bands are shown in \\subref{fig:Energy band2} for $t = 1, \\nu = 1\/\\sqrt{2}, s = 1\/10$.\n\t}\\label{fig:flatband2}\n\\end{figure}\n{\\it Theorem 2.}---Consider the Hamiltonian (\\ref{ham2}) with the total fermion number $N_{\\mathrm{f}} = M$.\nFor sufficiently large $t\/s >0$ and $U\/s > 0$, the ground states are the fully polarized states and are unique apart from the trivial degeneracy due to the SU($n$) symmetry.\n\n\\smallskip\n\n{\\it Proof of Theorem 2.}---\nFirst, we decompose the Hamiltonian (\\ref{ham2}) into the sum of local Hamiltonians as \n\\begin{align}\nH_{2} = -sM(2\\nu^{2}+1) + \\lambda H_{\\mathrm{flat}} + \\sum_{x\\in \\mathcal{E}} h_{x}, \\label{hamdecomposed}\n\\end{align}\nwhere \n\\begin{align}\nH_{\\mathrm{flat}} = \\sum_{\\alpha=1}^{n}\\sum_{x \\in \\mathcal{O}} b_{x, \\alpha}^{\\dag} b_{x, \\alpha} + \\sum_{x \\in \\Lambda} \\sum_{\\alpha < \\beta} n_{x,\\alpha}n_{x, \\beta}\n\\end{align}\nand \n\\begin{align}\n&h_{x} \\nonumber \\\\ \n&= 
\\sum_{\\alpha=1}^{n} \\left(-s a_{x, \\alpha}^{\\dag} a_{x, \\alpha} \n+ \\frac{t-\\lambda}{2} (b_{x-1, \\alpha}^{\\dag} b_{x-1, \\alpha} + b_{x+1, \\alpha}^{\\dag} b_{x+1, \\alpha})\n\\right) \\nonumber \\\\\n& + \\frac{\\kappa(U -\\lambda)}{4}n_{x-2}(n_{x-2}-1) + \\frac{U-\\lambda}{4} n_{x-1}(n_{x-1}-1) \\nonumber\\\\\n& + \\frac{(1-\\kappa)(U-\\lambda)}{2} n_{x}(n_{x}-1) + \\frac{U-\\lambda}{4} n_{x+1}(n_{x+1}-1) \\nonumber\\\\\n& + \\frac{\\kappa(U-\\lambda)}{4}n_{x+2}(n_{x+2}-1) + s(2\\nu^{2}+1), \n\\label{local_ham}\n\\end{align}\nwhere $n_{x}$ is defined as $n_{x} = \\sum_{\\alpha} n_{x, \\alpha}$.\nThe two parameters $\\lambda$ and $\\kappa$ satisfy $0 < \\lambda < \\min\\{t,U\\}$ and $0 \\leq \\kappa < 1$.\nTo prove Theorem 2, we use the following lemmas.\\par\n{\\it Lemma 1.}---Suppose the local Hamiltonian $h_{x}$ is positive semi-definite for any $x \\in \\mathcal{E}$. \nThen the ground states of the Hamiltonian (\\ref{hamdecomposed}), and hence Eq. (\\ref{ham2}), are fully polarized states and unique apart from the trivial degeneracy due to the SU($n$) symmetry.\\par\n\n\n\n\n\n\n{\\it Lemma 2.}---Suppose that $t, U$ are infinitely large and $0 < \\kappa < 1$.\nThen the local Hamiltonian (\\ref{local_ham}) is positive semi-definite.\n(We take $\\lambda$ and $\\kappa$ to be proportional to $s$.)\\par\n\nWe note that $h_{x}$ can be regarded as a finite dimensional matrix independent of the system size since the local Hamiltonian $h_{x}$ acts nontrivially only on a finite number of sites.\nThis means that the energy levels of $h_{x}$ depend continuously on the parameters.\nTherefore, Lemma 2 guarantees that $h_{x}$ is positive semi-definite when $t, U$ are finite but sufficiently large.\nThen Lemma 1 implies that the ground states of the Hamiltonian (\\ref{ham2}) are fully polarized states, which proves Theorem 2.\n\\hspace{\\fill}$\\blacksquare$\n\\par\nBelow, we prove Lemmas 1 and 2.\n\\par\n{\\it Proof of Lemma 1.}---First, it is noted that a fully 
polarized state $\\ket{\\Phi_{\\mathrm{all},1}} = \\left(\\prod_{x \\in \\mathcal{E}} a_{x, 1}^{\\dag}\\right) \\ket{\\Phi_{\\mathrm{vac}}}$ satisfies $h_{x} \\ket{\\Phi_{\\mathrm{all},1}} = 0$ for each $h_{x}$.\nSince $h_{x}$ is SU($n$) invariant, all fully polarized states have zero energy.\nWe assume that $h_{x} \\geq 0$ for all $x \\in \\mathcal{E}$. \nLet $\\ket{\\Phi_{\\mathrm{GS}}^{\\mathrm{flat}}}$ be an arbitrary ground state of $H_{\\mathrm{flat}}$.\nSince $H_{\\mathrm{flat}} \\ket{\\Phi_{\\mathrm{GS}}^{\\mathrm{flat}}} = 0$ and $h_{x} \\ket{\\Phi_{\\mathrm{GS}}^{\\mathrm{flat}}} = 0$, we see that $H_{2} \\ket{\\Phi_{\\mathrm{GS}}^{\\mathrm{flat}}} = -s M(2\\nu^{2} +1) \\ket{\\Phi_{\\mathrm{GS}}^{\\mathrm{flat}}}$.\nSince $H_{\\mathrm{flat}} \\geq 0$ and $h_{x} \\geq 0$, the ground-state energy of $H_{2}$ is $-s M (2\\nu^{2} +1)$.\nIf $\\ket{\\Phi_{\\mathrm{GS}}}$ is an arbitrary ground state of $H_{2}$, it satisfies $H_{2} \\ket{\\Phi_{\\mathrm{GS}}} = -s M(2\\nu^{2} + 1) \\ket{\\Phi_{\\mathrm{GS}}}$.\nFrom $H_{\\mathrm{flat}} \\geq 0$ and $h_{x} \\geq 0$, we find $H_{\\mathrm{flat}} \\ket{\\Phi_{\\mathrm{GS}}} = 0$ and $h_{x} \\ket{\\Phi_{\\mathrm{GS}}} = 0$.\nThis shows that any ground state of $H_{2}$ must be a ground state of $H_{\\mathrm{flat}}$.\nThe Hamiltonian $H_{\\mathrm{flat}}$ is nothing but the Hamiltonian $H_{1}$ with $t = U = 1$.\nThus, the ground states of $H_{2}$ are fully polarized and unique.\n\\hspace{\\fill} $\\blacksquare$\n\\par\n\nWe remark that one can check whether $h_{x}$ is positive semi-definite by numerically diagonalizing a finite-dimensional matrix.\nThe result for the SU(4) case is shown in Fig.~\\ref{fig:SU4boundary}.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.6\\columnwidth]{fig3.eps}\n\t\\caption{The positive semi-definiteness of $h_{x}$ ($n=4$) holds in the shaded region for $\\nu=1\/\\sqrt{2}, \\kappa = 0$.\n\tThe plot is obtained by diagonalizing $h_{x}$ numerically.\n\tLemma 1 says that the ground states of the full Hamiltonian $H_{2}$ are fully polarized states 
in the shaded region.\n\tFor example, the ferromagnetism is established if $t\/s \\geq 4.5$ when $U\/s = 25$.\n\t}\n\t\\label{fig:SU4boundary}\n\\end{figure}\n{\\it Proof of Lemma 2.}---Due to the translational invariance, it suffices to consider the case of $h_{0}$.\nThe local Hamiltonian $h_{0}$ is regarded as an operator defined on five sites $\\{-2, -1, 0, 1, 2\\}$, where $x = -2, -1$, and $0$ are identified with $x = 2M-2, 2M-1$, and $2M$, respectively.\nOn these sites, we define operators \n\\begin{align}\n\\tilde{a}_{-2,\\alpha}&:= \\frac{1}{\\sqrt{\\nu^{2}+1}}(c_{-2, \\alpha} - \\nu c_{-1, \\alpha}), \\\\\n\\tilde{b}_{-1, \\alpha}&:= \\nu c_{-2, \\alpha} + c_{-1, \\alpha} + \\nu c_{0, \\alpha}, \\\\ \n\\tilde{a}_{0, \\alpha}&:= -\\nu c_{-1, \\alpha} + c_{0, \\alpha} - \\nu c_{1, \\alpha}, \\\\ \n\\tilde{b}_{1, \\alpha}&:= \\nu c_{0, \\alpha} + c_{1, \\alpha} + \\nu c_{2, \\alpha}, \\\\ \n\\tilde{a}_{2, \\alpha}&:= \\frac{1}{\\sqrt{\\nu^{2}+1}}(-\\nu c_{1, \\alpha} + c_{2, \\alpha}).\n\\end{align}\nThese operators satisfy \n\\begin{align}\n\\{\\tilde{a}_{y, \\alpha}, \\tilde{a}_{y', \\beta}^{\\dag}\\}\n&= \\begin{cases}\n\\delta_{\\alpha,\\beta} (2\\nu^{2}+1) \\ \\ &\\text{for}\\ \\ y=y'=0, \\\\\n\\delta_{\\alpha,\\beta} \\frac{\\nu^{2}}{\\sqrt{\\nu^{2}+1}}\\ \\ &\\text{for}\\ \\ | y - y'| = 2, \\\\\n\\delta_{\\alpha,\\beta} \\delta_{y,y'}\\ \\ &\\text{for}\\ \\ y,y' = \\pm2, \n\\end{cases} \\label{anticom2}\\\\ \n\\{\\tilde{a}_{y,\\alpha}, \\tilde{b}_{y', \\alpha}^{\\dag}\\} &= 0.\n\\end{align}\nSingle-fermion states corresponding to these operators are linearly independent.\nTo show the lemma, we only need to consider states $\\ket{\\Phi}$ which have finite energy in this limit, i.e., $\\lim_{t,U \\rightarrow \\infty} \\bra{\\Phi} h_{0} \\ket{\\Phi} < \\infty$.\nThe condition that $\\ket{\\Phi}$ has finite energy is equivalent to the following: \n\\begin{align}\n\\tilde{b}_{y, \\alpha} \\ket{\\Phi} &= 0 \\ \\ \\text{for $y= \\pm 1$}, 
\\label{cond_b}\\\\\nc_{y, \\alpha} c_{y, \\beta} \\ket{\\Phi} &= 0 \\ \\ \\text{for $y = 0, \\pm1, \\pm2$}. \\label{cond_c}\n\\end{align}\nLet $\\ket{\\Phi}$ be a state which has finite energy.\nFrom Eqs. (\\ref{cond_b}) and (\\ref{cond_c}) with $y=-2, 0, 2$, $\\ket{\\Phi}$ is written as \n\\begin{align}\n\\ket{\\Phi} = \\! \\! \\! \\sum_{\\substack{A_{1} , \\dots , A_{n} \\subset \\widetilde{\\mathcal{E}}\\\\\nA_{\\alpha }\\cap A_{\\beta} = \\emptyset\n}} \\! \\! f(\\{ A_{\\alpha}\\}) \n\\! \\left(\\prod_{y \\in A_{1}} \\tilde{a}_{y,1}^{\\dag}\\right)\n\\! \\dots \\!\n\\left(\\prod_{y \\in A_{n}} \\tilde{a}_{y,n}^{\\dag}\\right)\n\\! \\ket{\\Phi_{\\mathrm{vac}}},\n\\end{align}\nwhere $\\widetilde{\\mathcal{E}} = \\{-2, 0, 2\\}$ and $A_{\\alpha}$ is an arbitrary subset of $\\widetilde{\\mathcal{E}}$.\nSince $\\widetilde{\\mathcal{E}}$ contains three sites, the particle number of finite-energy states must be less than or equal to three. \nUsing the condition Eq. (\\ref{cond_c}) with $y = \\pm 1$, we see that, when the particle number is three, every finite-energy state $\\ket{\\Phi}$ must be a fully polarized state \nover the five sites and have zero energy.\nFor one-particle states, all the eigenvalues of $h_{0}$ are nonnegative.\nThus we only need to verify the positive semi-definiteness for the two-particle sectors labeled by $(N_{\\alpha}, N_{\\beta}) = (2,0)$ and $(N_{\\alpha}, N_{\\beta}) = (1,1)$.\nTo this end, we solve the eigenvalue problem for $Ph_{0}P$ where $P$ denotes the projection operator onto the space of finite energy states.\nIn the sector $(2, 0)$, we find that there are three eigenstates \n\\begin{align}\n\\ket{\\Phi_{1}} &= \\tilde{a}_{-2, \\alpha}^{\\dag} \\tilde{a}_{0, \\alpha}^{\\dag} \\ket{\\Phi_{\\mathrm{vac}}}, \\\\\n\\ket{\\Phi_{2}} &= \\tilde{a}_{0, \\alpha}^{\\dag} \\tilde{a}_{2, \\alpha}^{\\dag} \\ket{\\Phi_{\\mathrm{vac}}}, \\\\\n\\ket{\\Phi_{3}} \\! &= \\! 
\\left[\n\t\t\\frac{\\nu^{2}}{\\nu^{2}+1} \n(\\tilde{a}_{-2, \\alpha}^{\\dag} \\!-\\! \\tilde{a}_{2, \\alpha}^{\\dag}) \\tilde{a}_{0, \\alpha}^{\\dag}\n\t\t\\! - \\! (2\\nu^{2}+1) \\tilde{a}_{-2, \\alpha}^{\\dag} \\tilde{a}_{2, \\alpha}^{\\dag} \n\t\t\\right] \\ket{\\Phi_{\\mathrm{vac}}}\n\\end{align}\nand their corresponding eigenenergies are 0, 0, and $s(2\\nu^{2} + 1)$, respectively.\nIn the sector $(1,1)$, there are four eigenstates.\nThree of them can be obtained by applying $F^{\\beta, \\alpha}$ to the states $\\ket{\\Phi_{1}}, \\ket{\\Phi_{2}}$ and $\\ket{\\Phi_{3}}$.\nAs a state orthogonal to them, we get a singlet state \n\\begin{align}\n\\ket{\\Phi_{4}} = \\left(\\tilde{a}_{-2, \\alpha}^{\\dag}\\tilde{a}_{2, \\beta}^{\\dag} - \\tilde{a}_{-2, \\beta}^{\\dag} \\tilde{a}_{2, \\alpha}^{\\dag} \\right) \\ket{\\Phi_{\\mathrm{vac}}},\n\\end{align}\nand we find that this state satisfies \n\\begin{align}\nP h_{0} P \\ket{\\Phi_{4}} = s(2\\nu^{2} + 1) \\ket{\\Phi_{4}}.\n\\end{align}\nClearly, the state $\\ket{\\Phi_{4}}$ has a positive energy.\nHence, we see that all the eigenvalues of $h_{0}$ are nonnegative.\nThus, we have proved Lemma 2.\n\\hspace{\\fill} $\\blacksquare$\n\\par\n\\section{Conclusion} \\label{sec:conclusion}\nWe have presented an extension of flat-band ferromagnetism to the SU($n$) Hubbard model on the railroad-trestle lattice.\nFurthermore, we proved that in the nearly flat-band case, all the ground states are fully polarized if $t$ and $U$ are sufficiently large. \nOne can similarly construct and analyze models in higher dimensions, in which \nthe ground states are fully polarized if the lowest band is completely flat. \nThe previous results for the SU(2) Hubbard models in higher dimensions suggest that the parameter $\\nu$ has to be larger than a threshold value $\\nu_{c}>0$ when the lowest band is nearly flat~\\cite{Shen1998, Tasaki2003}. The details will be discussed elsewhere. 
\n\n\nAlthough we have focused on models with a nonzero band gap, it would be interesting to see if the method developed in this paper can be extended to include SU($n$) Hubbard models with gapless flat or nearly flat bands~\\cite{tanaka2003stability}. \nIt would also be interesting to study SU($n$) ferromagnetism in systems with topological flat bands carrying nontrivial Chern number, as its SU($2$) counterpart has been discussed in~\\cite{katsura2010ferromagnetism}. \nAnother direction for future research is to explore ferromagnetism in multiorbital Hubbard models, including the one with SU($n$) symmetry. In such systems, rigorous~\\cite{li2014exact, li2015exact} and numerical~\\cite{xu2015sign} results on ferromagnetism, based on mechanisms different from the flat-band scenario, have been obtained recently. It is thus interesting to see to what extent our results can be generalized to the multiorbital case. \n\n\\acknowledgments\nWe would like to thank Hal Tasaki and Akinori Tanaka for valuable discussions. H.K. was supported in part by JSPS Grant-in-Aid for Scientific Research on Innovative Areas: No. JP18H04478 and JSPS KAKENHI Grant No. 
JP18K03445.\n\\bibliographystyle{apsrev4-1}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\label{sec:introduction}\n\nSpinning neutron stars with nonaxisymmetric deformations are expected to emit quasi-monochromatic and\nlong-lasting gravitational waves (GWs), commonly referred to as \\emph{continuous waves}\n(CWs)~\\cite{prix06:_cw_review, bildsten1998:_gwns, ushomirsky2000:_deformations, JMcD2013:_maxelastic}.\nOne of the main search methods for CWs was developed for ground-based detectors (such as LIGO~\\cite{LIGORef:2009},\nVirgo~\\cite{VirgoRef:2011}, GEO\\,600~\\cite{GEORef:2010}) and it is the so-called $\\F$-statistic.\nThe $\\F$-statistic was originally derived as a maximum-likelihood detection statistic\n\\cite{jks98:_data,cutler05:_gen_fstat}; it was later shown that it can also be derived as a Bayes\nfactor using somewhat unphysical priors for the signal amplitude parameters \\cite{prix09:_bstat}.\n\nIn the context of CW searches, the GW data is reasonably well described by an underlying Gaussian noise\ndistribution with additional non-Gaussian disturbances (see, e.g., Fig.~3 in\nRef.~\\cite{abbott2004:_geoligo}, Fig.~3 in Ref.~\\cite{aasi13:_eathS5} and Sec.~5.8 in\nRef.~\\cite{behnke2013:_phdthesis}).\nThe $\\F$-statistic corresponds to a binary hypothesis test between a signal hypothesis and a Gaussian-noise\nhypothesis.\nAs a consequence, it is possible to obtain large $\\F$-statistic values due to non-Gaussian disturbances in the\ndata, even if they are not well matched to the signal model. 
Large $\\F$-statistic values only imply that the\nsignal hypothesis is a \\emph{better} fit to the data than pure Gaussian noise, but they do not imply a\n\\emph{good} fit.\n\nThe most problematic instrumental artifacts for any specific analysis of GW data are typically\nthose that resemble the signal family it searches for, i.e., disturbances with non-negligible projection\nonto the signal templates of a search.\nFor example, searches for short transient signals, such as bursts (e.g., from core-collapse\nsupernovae~\\cite{fryer2011:_collapsereview}) or compact binary coalescences\n(CBCs~\\cite{abadie2010:_cbcrates}),\nare most affected by ``glitches'' in the data, i.e., short broad-band\ndisturbances~\\cite{Blackburn2008:_glitch, Aasi2012:_virgochar, Slutsky2010:_falsealarms,\nprestegard2012:_transartifact}.\n\nOn the other hand, searches for CW signals are mainly affected by so-called ``lines'', i.e., narrow-band\ndisturbances that are present for a sizable fraction of the observation time.\nExamples include the so-called \\emph{mains} lines (i.e., lines at multiples of the 60\\,Hz electrical\npower system frequency for LIGO, or 50\\,Hz for Virgo and GEO600), the resonance frequencies of the\ndetector suspensions (different for each detector), and lines from digital components -- see\nRef.~\\cite{aasi13:_eathS5} for more details and a list of known instrumental lines identified in the data from\nthe fifth LIGO science run (S5), and Refs.~\\cite{Aasi2012:_virgochar, Accadia2012:_noemi} for line\nidentification in Virgo data.\n\nIn this article, we apply a Bayesian model-selection approach using an additional alternative noise hypothesis\nfor lines.\nSince the characteristics of the population of instrumental lines affecting CW searches are not well\nunderstood, we use a very simple line model. 
The model is based on an observed\ndistinguishing feature of many lines, namely that they do not affect all detectors in the same way.\nHence, we define our line model as any feature in the data that resembles the signal template \\emph{in only\none detector}.\n\nThis approach can also be seen as adding a coincidence criterion to the coherent multidetector\n$\\F$-statistic. Such a method is applicable only to multidetector CW searches, and in practice the most\nrecent $\\F$-statistic-based searches have all used data from multiple detectors.\nBy employing this approach, we obtain a new line-robust detection statistic, generalizing the\n$\\F$-statistic.\n\nThe plan of this paper is as follows: In Sec.~\\ref{sec:lit-review} we give a short review of existing\nmethods to deal with glitches and lines in GW data.\nIn Sec.~\\ref{sec:hypotheses} we describe signal-noise hypotheses relevant for the detection problem at hand:\nthe standard Gaussian-noise hypothesis, in Sec.~\\ref{sec:hypo_gauss}, the CW\nsignal hypothesis, in Sec.~\\ref{sec:hypo_signal} and a new simple line hypothesis in Sec.~\\ref{sec:hypo_line}.\nIn Sec.~\\ref{sec:line-veto-stats-coh} we use these hypotheses to construct two new detection statistics, a\n``line-veto'' statistic and a more general line-robust statistic.\nWe generalize the hypotheses and statistics to the case of semicoherent searches in\nSec.~\\ref{sec:semicoherent}. Next we discuss the choice of prior parameters for the line-robust statistic and we\npresent a simple method, albeit somewhat ad hoc, to choose decent priors in Sec.~\\ref{sec:tuning}. This\nconcludes the analytical part of the paper. In Sec.~\\ref{sec:tests} we assess the performance of the new\nstatistics through a series of numeric tests: on fully synthetic data in Sec.~\\ref{sec:tests_simdata} and on\nLIGO S5 data in\nSec.~\\ref{sec:tests_realdata}. 
We summarize our findings in Sec.~\\ref{sec:conclusions} and give a short\noutlook on applications and future generalizations of this approach.\nAppendix~\\ref{sec:expect-f-stat} contains a short derivation of the expected $\\F$-statistic value under the\nsimple line hypothesis.\n\n\\section{Existing methods to mitigate detector artifacts}\n\\label{sec:lit-review}\n\nThe problem of non-Gaussian artifacts in the data affects both searches for long-lived signals (e.g., the CWs\nwhich are the topic of this paper) and searches for short-lived signals.\nShort-lived signals are expected from the late phase of the inspiral of binaries of compact objects such as\nneutron stars and black holes as well as from catastrophic events such as supernovae.\nFor short-lived signal searches, the artifacts that are responsible for an increase in the false alarm rates\nwith respect to purely Gaussian noise manifest themselves as loud glitches in the time-domain data.\nOn the other hand, for long-lived signal searches, the most troublesome artifacts are, broadly speaking, those\nthat appear in the Fourier spectra on the typical time scales of the search.\n\nAn interesting distinction in search pipelines is the order in which multidetector coherence and coincidence\nare used. If the first step in the search is a coherent multidetector statistic (as\nfor the CW searches that we consider here), then the noise-artifact-mitigation strategy\nmay use subsequent consistency checks on the statistics from the individual detectors. If,\non the other hand, the first step is a single-detector search followed by a selection of triggers\nwhere only coincidence in the various detectors is required, an additional multidetector coherent statistic\ncan serve as an artifact-mitigation technique.\n\nA wide range of methods has been developed in order\nto deal with instrumental artifacts. 
In the following subsections, we give a short review of\nsuch methods for CBC, burst and CW searches.\n\nGenerally, we can distinguish between two fundamentally different approaches to artifact mitigation: Bayesian\nmodel-selection and heuristic methods.\nThe former is based on \\emph{explicit} alternative models, whereas the latter consists in constructing\nad-hoc statistics to detect certain observed deviations from the GW signal model, which in the\nBayesian picture corresponds to a test against \\emph{implicit} (and often unknown) alternative hypotheses.\nA third ``hybrid'' approach uses Bayesian inference to directly construct empirical noise and\nsignal likelihoods \\cite{cannon2008:_bayescoinc} using the actual data and simulated GW signals,\nso-called \\emph{injections}.\n\n\n\\subsection{Instrumental glitches in burst and CBC searches}\n\\label{sec:glitch-short-trans}\n\nIn searches for short-lived GW signals, popular ad-hoc glitch-veto methods are the\n$\\chi^2$-veto \\cite{allen2005:_chi2}, the null-stream veto \\cite{wen2005:_nullstream}, and signal amplitude\nconsistency vetoes \\cite{abbott2005:_cbcS2}.\nThese (among others) are commonly used in CBC searches (e.g., see Refs.~\\cite{babak2013:_searchcbc,\nharry2011:_targetcbc, abadie2012:_lowmasscbc}) and in burst searches (e.g., see\nRefs.~\\cite{sutton2010:_xpipeline, abbott2009:_s5burst, abadie2012:_allskyburst}).\n\nFor instance, in low-mass CBC searches the first step is a separate search in each detector.\nAfter a cut on single-detector $\\chi^2$ values, the glitch mitigation strategy consists in applying a\ncoincidence criterion and then\nconstructing a new multidetector statistic from the surviving candidates. This statistic folds in the single-detector\nstatistics and $\\chi^2$ values. 
Significance thresholds are set based on Monte-Carlo studies on actual data and\ninjections.\n\nIn searches for signals for which we lack a waveform model (i.e., generic bursts), a main multidetector\nstatistic is constructed that accounts appropriately for time delays and antenna responses of the different\ndetectors to the same putative GW. This statistic is then augmented by other statistics\n(see Ref.~\\cite{sutton2010:_xpipeline} for details) specifically designed to\nfurther check for signal consistency across the detectors by means of appropriate veto conditions.\n\nVarious explicit glitch models have been considered, including Sine-Gaussians \\cite{clark2007:_ringdown,\ndalcanton2014:_sinegaussian} and wavelets \\cite{littenberg2010:_artifacts}, and Bayesian approaches have also\nbeen proposed to use these in constructing glitch-robust searches\n\\cite{clark2007:_ringdown, littenberg2010:_artifacts}. Notably, Veitch and Vecchio\n\\cite{veitch2010:_bayesian} have defined a glitch model describing coincident single-detector candidates with\nindependent amplitude parameters in different detectors.\nOn the other hand, the signal model requires candidates to be both coincident and coherent across all\ndetectors.\nBoth hypotheses would fit a true signal equally well, but the glitch hypothesis would be weighed down by its\nlarger prior volume (``Occam's razor''). In the case of glitches, however, the glitch hypothesis will\ngenerally provide a much better fit, allowing it to overcome its larger prior volume.\n\n\\subsection{Instrumental lines in CW searches}\n\\label{sec:deal-with-instr}\n\nThe most commonly used approaches to deal with instrumental lines in CW searches are all heuristic and can be\nsummarized as follows:\n\\begin{enumerate}[(i)] \\itemsep1pt \\parskip0pt\n\\item \\emph{Line cleaning} This is a widely used approach with many variants. 
It consists in effectively\n excluding frequency bands from the search when they are known or believed to be affected by instrumental\n lines.\n This could be either as a result of previous detector characterization work or because the frequency-domain\n data was flagged as particularly disturbed (referred to as {\\emph{line flagging}}).\n Among the examples of this approach are the LIGO\/Virgo searches\n \\cite{aasi13:_eathS5, abadie12:_powerflux,aasi2013:_gc-search, abbot2008:_s4cw, abadie2010:_casa}.\n\n A downside of this method is the relatively large fraction of the total frequency band it\n typically vetoes. For example, in Ref.~\\cite{abadie12:_powerflux} it vetoed a total of 270\\,Hz out of the\n 1140\\,Hz searched, i.e., $\\sim24\\%$ of the data.\n Furthermore, this method is either limited to known instrumental lines or, when the line-flagging variant\n is used, its efficacy is limited to strong disturbances.\n Weaker disturbances can only be identified with time baselines much longer than the ones typically\n used by the line-flagging algorithms. Furthermore, the Fourier-transform-based line-flagging algorithm is not\n optimally suited to detect lines with nonconstant frequency.\n\n\\item \\emph{S-veto} This is a method to remove candidates from a (frequency and spin-down dependent) region of\n the sky.\n This region is typically around the poles, where the corresponding signal templates are not well\n distinguished from typical instrumental line artifacts. 
This method was initially developed in PowerFlux\n \\cite{abbot2008:_s4cw} and subsequently adapted to $\\F$-statistic searches \\cite{abbott2009:_s4eath}.\n\n The fraction of the total parameter space vetoed a priori through this approach can again be\n quite large, for example about $\\sim 30\\%$ in Ref.~\\cite{abbott2009:_s4eath}.\n\n\\item \\emph{$\\F$-statistic consistency veto} If a candidate from a multidetector search has a\n single-detector $\\F$-statistic value exceeding its multidetector $\\F$-statistic, then it is\n vetoed as a likely instrumental line. This approach was described in more detail and tested in\n Ref.~\\cite{aasi13:_eathS5} and Refs.~\\cite{behnke2013:_phdthesis, aasi2013:_gc-search}.\n\\end{enumerate}\n\nThe approach proposed in the present work is not a heuristic method like the ones described above. Instead, it\nshares some similarities with the glitch-robust method proposed in Ref.~\\cite{veitch2010:_bayesian}, but it\ndiffers in the following:\nIn the incoherent CBC pipeline, any candidate is already required to be \\emph{coincident} between detectors,\nso the method of Ref.~\\cite{veitch2010:_bayesian} adds the requirement of multidetector \\emph{coherence} to\ndistinguish GW signals from glitches.\nIn our case, instead, we start from the \\emph{coherent} multidetector $\\F$-statistic and add a\n\\emph{coincidence} requirement to distinguish CW signals from (noncoincident) lines.\n\nCurrently we do not include coincident lines in the alternative hypothesis, as we expect that this would\nsubstantially weaken the detection power of this method.\nMore work is required to deal with coincident lines that trigger the same templates in multiple detectors.\n\nHowever, the prevalence of coincident lines in detector data appears to be limited.\nFor example, the lines of known instrumental origin identified in the LIGO S5 data from the two detectors (see\nTables VI and VII in Ref.~\\cite{aasi13:_eathS5}) overlap by 1.6~Hz, corresponding to about 
11\\% of the\ncontaminated bandwidth.\nFurthermore, in the $\\F$-statistic-based analysis~\\cite{aasi13:_eathS5}, 0.46\\% of final high-significance\ncandidates passed the $\\F$-statistic consistency veto and therefore could be considered to be caused by\ncoincident lines.\n\nThe approach taken here is that in a full CW search pipeline the noncoincident line model would serve as a\ncheap and simple ``first line of defense'' to reduce the number of spurious candidates, while more\nsophisticated methods can be applied to the surviving candidates at later stages.\n\n\\section{Hypotheses about the observed data}\n\\label{sec:hypotheses}\n\nLet $x^X(t)$ be the time series of GW strain measured in a detector $X$, where we use\n$X,Y,\\ldots$ as detector indices. Following the multidetector notation from\nRefs.~\\cite{cutler05:_gen_fstat,prix06:_searc}, we use boldface to indicate multidetector vectors, i.e.,\nwe write $\\detVec{x}(t)$ for the multidetector data vector with components $x^X(t)$.\n\nWe will consider three different\nhypotheses about the observed data $\\detVec{x}$ and derive their posterior probabilities: the Gaussian noise\nhypothesis $\\Hyp_\\Gauss$, the CW signal hypothesis $\\Hyp_\\Signal$ and a simple ``line'' hypothesis $\\Hyp_\\Line$.\n\n\\subsection{The Gaussian noise hypothesis \\texorpdfstring{$\\Hyp_\\Gauss$}{HG}}\n\\label{sec:hypo_gauss}\n\nThe Gaussian-noise hypothesis $\\Hyp_\\Gauss$ states that the measured multidetector time series $\\detVec{x}(t)$ only\ncontains stationary Gaussian noise, which we denote as $\\detVec{n}(t)$, i.e.,\n\\begin{equation}\n \\label{eq:hypG}\n \\Hyp_\\Gauss: \\detVec{x}(t) = \\detVec{n}(t)\\,,\n\\end{equation}\nwith a single-sided power-spectral density (PSD) $\\detVec{S}_n$ that is assumed to be known.\nThe corresponding likelihood for measuring the data $\\detVec{x}$ can therefore be written\nas\n\\begin{equation}\n \\label{eq:gaussian}\n \\prob{\\detVec{x}}{\\Hyp_\\Gauss} = 
\\kappa\\,\\eto{-\\frac{1}{2}\\scalar{\\detVec{x}}{\\detVec{x}}}\\,,\n\\end{equation}\nwhere $\\kappa$ is a data-independent normalization constant, and the scalar product is defined as\n\\begin{equation}\n \\label{eq:scalarproduct}\n \\scalar{\\detVec{x}}{\\detVec{y}} \\equiv \\sum_X \\frac{1}{\\SnX}\\int_0^T x^X(t)\\,y^X(t)\\,dt\\,,\n\\end{equation}\nassuming that the noise spectra $\\SnX$ are uncorrelated between different detectors $X$ and constant over the\n(narrow) frequency band of interest.\nFor simplicity of notation we omit the sometimes customary notation of a conditional ``$I$'' denoting all\nimplicit and explicit model assumptions, i.e., we write $\\prob{a}{b}$ as a shortcut for $\\prob{a}{b\\,,I}$, and\n$\\probI{a}$ as an abbreviation for $\\prob{a}{I}$.\n\nThe posterior probability for $\\Hyp_\\Gauss$ given the observed data $\\detVec{x}$ follows from Bayes' theorem as\n\\begin{equation}\n \\label{eq:pHG}\n \\prob{\\Hyp_\\Gauss}{\\detVec{x}} = \\frac{\\probI{\\Hyp_\\Gauss}}{\\probI{\\detVec{x}}}\\,\\kappa\\,\\eto{-\\frac{1}{2}\\scalar{\\detVec{x}}{\\detVec{x}}} \\,,\n\\end{equation}\nwhere $\\probI{\\Hyp_\\Gauss}$ is the prior probability for the Gaussian-noise hypothesis.\nThe normalization $\\probI{\\detVec{x}}$ depends on the full set of assumed hypotheses $\\{\\mathcal{H}_i\\}$, i.e.,\n\\mbox{$\\probI{\\detVec{x}} = \\sum_i \\prob{\\detVec{x}}{\\mathcal{H}_i}\\,\\probI{\\mathcal{H}_i}$}, but in the following we will only consider\nthe \\emph{odds} between different hypotheses, where this term drops out.\n\n\\subsection{The CW signal hypothesis \\texorpdfstring{$\\Hyp_\\Signal$}{HS}}\n\\label{sec:hypo_signal}\n\nThe hypothesis $\\Hyp_\\Signal$ for CW signals \\cite{jks98:_data,prix06:_cw_review} states that the data $\\detVec{x}$\ncontains a CW signal $\\detVec{h}$ in addition to Gaussian noise $\\detVec{n}$, namely $\\detVec{x} = \\detVec{n} + \\detVec{h}$.\n\nThe signal model $\\detVec{h}$ depends on a number of (generally unknown) signal 
parameters. For\npractical reasons, we usually distinguish between the set of four \\emph{amplitude parameters} $\\mathcal{A}$ and the\nremaining \\emph{phase-evolution parameters} $\\lambda$, i.e., we write the CW signal family as $\\detVec{h}(t;\\mathcal{A},\\lambda)$.\n\nTo fully specify the signal hypothesis, we therefore need a prior probability distribution\n$\\prob{\\mathcal{A},\\lambda}{\\Hyp_\\Signal}$ for the signal parameters, i.e.,\n\\begin{equation}\n \\begin{split}\n \\label{eq:hypS}\n \\Hyp_\\Signal: \\detVec{x}(t) = \\detVec{n}(t) + \\detVec{h}(t;\\mathcal{A},\\lambda)\\\\\n \\text{with prior}\\;\\;\\prob{\\mathcal{A},\\lambda}{\\Hyp_\\Signal}\\,.\n\\end{split}\n\\end{equation}\n\nThe amplitude parameters $\\mathcal{A}$ describe the signal amplitude $h_0$, the inclination angle $\\iota$, the\npolarization angle $\\psi$ and the initial phase $\\phi_0$. As first shown in Ref.~\\cite{jks98:_data}, a\nparticular parametrization $\\mathcal{A}^\\mu = \\mathcal{A}^\\mu ( h_0,\\cos\\iota,\\psi,\\phi_0)$, with $\\mu = 1\\ldots4$, allows one to\nwrite the signal model in the factorized form\n\\begin{equation}\n \\label{eq:Amuhmu}\n \\detVec{h}(t;\\mathcal{A},\\lambda) = \\mathcal{A}^\\mu\\,\\detVec{h}_\\mu(t;\\lambda)\\,,\n\\end{equation}\nin terms of four basis functions $\\detVec{h}_\\mu(t;\\lambda)$ and using the automatic summation convention over repeated\nindices.\n\nIn order to simplify the following discussion and notation, we follow the approach\nof Refs.~\\cite{prix09:_bstat,prix11:_transient} and formally restrict ourselves to a single-template statistic\nin $\\lambda$. This is equivalent to the assumption of known phase parameters, i.e., $\\lambda = \\lambda_\\mathrm{s}$. 
This can\nbe done without loss of generality, as for unknown $\\lambda\\in\\mathbb{P}$ this analysis would apply for each\ntemplate $\\lambda_i\\in\\mathbb{P}$, and one would then marginalize over the prior parameter space $\\mathbb{P}$.\nStudying this in further detail is outside the scope of the present work.\nWe will therefore assume a prior of the form\n\\begin{equation}\n \\label{eq:priorAlambda}\n \\prob{\\mathcal{A},\\lambda}{\\Hyp_\\Signal} = \\prob{\\mathcal{A}}{\\Hyp_\\Signal}\\,\\delta(\\lambda - \\lambda_\\mathrm{s})\\,,\n\\end{equation}\nand drop the phase-evolution parameters $\\lambda$ from the following expressions.\n\nWe can obtain the likelihood for \\emph{a particular} signal $\\detVec{h}(t;\\mathcal{A})$ by noting\nthat, according to $\\Hyp_\\Signal$, the combination $\\left[\\detVec{x} - \\detVec{h}(\\mathcal{A})\\right]$ is described by Gaussian noise.\nIn fact, by inserting the signal factorization from Eq.~\\eqref{eq:Amuhmu} and by factoring out terms\nequivalent to the Gaussian noise likelihood from Eq.~\\eqref{eq:pHG}, we obtain\n\\begin{align}\n \\prob{\\detVec{x}}{\\Hyp_\\Signal,\\mathcal{A}} &= \\kappa \\, \\eto{-\\frac{1}{2} \\scalar{\\detVec{x}-\\detVec{h}(\\mathcal{A})}{\\detVec{x}-\\detVec{h}(\\mathcal{A})}} \\notag\\\\\n &= \\kappa \\, \\eto{-\\frac{1}{2} \\scalar{\\detVec{x}}{\\detVec{x}}} \\,\n \\eto{\\scalar{\\detVec{x}}{\\mathcal{A}^\\mu\\detVec{h}_\\mu} - \\frac{1}{2}\n \\scalar{\\mathcal{A}^\\mu\\detVec{h}_\\mu}{\\mathcal{A}^\\nu\\detVec{h}_\\nu}} \\label{eq:likeli_HSA}\\\\\n &= \\prob{\\detVec{x}}{\\Hyp_\\Gauss} \\exp\\left[\\mathcal{A}^\\mu \\, x_\\mu - \\frac{1}{2} \\mathcal{A}^\\mu\n \\mathcal{M}_{\\mu\\nu}\\mathcal{A}^\\nu\\right],\\notag\n\\end{align}\nwhere we introduced the four projections $x_\\mu$ of the data and the (symmetric positive-definite) matrix\n$\\mathcal{M}_{\\mu\\nu}$ as\n\\begin{equation}\n \\label{eq:xmuMmunu}\n x_\\mu \\equiv \\scalar{\\detVec{x}}{\\detVec{h}_\\mu} \\quad \\text{and} \\quad\n 
\\mathcal{M}_{\\mu\\nu} \\equiv \\scalar{\\detVec{h}_\\mu}{\\detVec{h}_\\nu}\\,.\n\\end{equation}\n\nThe \\emph{marginal} likelihood $\\prob{\\detVec{x}}{\\Hyp_\\Signal}$ (sometimes referred to as ``evidence'') for the\nsignal hypothesis from Eq.~\\eqref{eq:hypS} can be obtained by marginalizing over the unknown\namplitudes $\\mathcal{A}$, namely\n\\begin{equation}\n \\prob{\\detVec{x}}{\\Hyp_\\Signal} = \\int \\prob{\\detVec{x}}{\\Hyp_\\Signal,\\mathcal{A}}\\,\\prob{\\mathcal{A}}{\\Hyp_\\Signal}\\,d\\mathcal{A}\\,. \\label{eq:likeli_HSmarg}\n\\end{equation}\nThis integral can be solved analytically for certain choices of amplitude priors $\\prob{\\mathcal{A}}{\\Hyp_\\Signal}$.\nIn particular, as discussed in Refs.~\\cite{prix09:_bstat,prix11:_transient}, for the (somewhat unphysical)\nprior that is uniform in $\\mathcal{A}^\\mu$, we can recover the standard $\\F$-statistic, namely, assuming\n\\begin{equation}\n \\label{eq:priorA}\n \\prob{\\{\\mathcal{A}^\\mu\\}}{\\Hyp_\\Signal} = \\left\\{\\begin{array}{ll}\n C & \\text{for} \\quad h_0^4(\\mathcal{A}) < \\frac{70\\,c_*}{\\sqrt{|\\mathcal{M}|}}\\,,\\\\\n 0 & \\text{otherwise}\\,.\n \\end{array}\\right.\n\\end{equation}\nwhere $|\\mathcal{M}|$ is the determinant of $\\mathcal{M}_{\\mu\\nu}$ and $c_*$ is an ad-hoc cutoff\\footnote{This\ntranslates to the notation of Ref.~\\cite{prix11:_transient} via $c_* = \\frac{{\\widehat{\\rho}_{\\mathrm{max}}}^4}{70}$.}\nused to normalize the prior, namely, \\mbox{$C = \\frac{\\sqrt{|\\mathcal{M}|}}{(2\\pi)^2}\\, c_*^{-1}$}.\n\nUsing this prior and taking the integration boundary to infinity, $c_*\\rightarrow\\infty$, we obtain\nthe (marginal) signal likelihood, from Eq.~\\eqref{eq:likeli_HSmarg}, in the form\n\\begin{equation}\n \\label{eq:likeli_HS}\n \\prob{\\detVec{x}}{\\Hyp_\\Signal} = \\prob{\\detVec{x}}{\\Hyp_\\Gauss} \\, c_*^{-1}\\,\\eto{\\F(\\detVec{x})}\\,,\n\\end{equation}\nwhere we define the (coherent) multidetector $\\F$-statistic as\n\\begin{equation}\n 
\\label{eq:Fstat}\n 2\\F(\\detVec{x}) \\equiv x_\\mu\\,\\mathcal{M}^{\\mu\\nu}\\,x_\\nu\\,,\n\\end{equation}\nand $\\mathcal{M}^{\\mu\\nu}$ denotes the inverse matrix to $\\mathcal{M}_{\\mu\\nu}$, i.e.,\n$\\mathcal{M}_{\\mu\\alpha}\\mathcal{M}^{\\alpha\\nu} = \\delta_{\\mu}^{\\nu}$.\nWe obtain the posterior probability for the signal hypothesis as\n\\begin{equation}\n \\label{eq:pHS_final}\n \\prob{\\Hyp_\\Signal}{\\detVec{x}} = \\prior{\\OSG}\\,c_*^{-1}\\,\\prob{\\Hyp_\\Gauss}{\\detVec{x}}\\,\\eto{\\F(\\detVec{x})}\\,,\n\\end{equation}\nwhere $\\prior{\\OSG}\\equiv \\probI{\\Hyp_\\Signal}\/\\probI{\\Hyp_\\Gauss}$ denotes the prior odds between the signal- and\nGaussian-noise hypotheses.\n\nThe posterior odds between signal hypothesis $\\Hyp_\\Signal$ and Gaussian-noise hypothesis $\\Hyp_\\Gauss$ are therefore\nequivalent to the standard multidetector $\\F$-statistic\\footnote{\\emph{Equivalence} in the Neyman-Pearson\nsense: the same false-dismissal as a function of false-alarm probability.}, as we see by writing\n\\begin{equation}\n \\label{eq:OSG}\n \\OSG(\\detVec{x}) \\equiv \\frac{\\prob{\\Hyp_\\Signal}{\\detVec{x}}}{\\prob{\\Hyp_\\Gauss}{\\detVec{x}}}\n = \\prior{\\OSG}\\,c_*^{-1}\\, \\eto{\\F(\\detVec{x})}\\,.\n\\end{equation}\nNote that the corresponding (marginal) likelihood ratio\n\\begin{equation}\n \\label{eq:2}\n \\Bayes_{\\Signal\\Gauss}(\\detVec{x}) \\equiv \\frac{\\prob{\\detVec{x}}{\\Hyp_\\Signal}}{\\prob{\\detVec{x}}{\\Hyp_\\Gauss}} = c_*^{-1}\\, \\eto{\\F(\\detVec{x})}\\,,\n\\end{equation}\nis generally known as the \\emph{Bayes factor}, and is closely related to the odds via $\\OSG(\\detVec{x}) = \\prior{\\OSG}\\,\\Bayes_{\\Signal\\Gauss}(\\detVec{x})$.\n\nWhile this statistic is close to optimal for detecting signals in pure Gaussian noise \\cite{prix09:_bstat}, it\nis vulnerable to various signal-like instrumental artifacts in the data.\nAs discussed in Sec.~\\ref{sec:introduction}, we see from Eqs.~\\eqref{eq:OSG} and \\eqref{eq:2} that 
detector\nartifacts can trigger $\\OSG(\\detVec{x})$ or $\\Bayes_{\\Signal\\Gauss}(\\detVec{x})$, provided they resemble $\\Hyp_\\Signal$ \\emph{more} than $\\Hyp_\\Gauss$,\neven if the agreement with $\\Hyp_\\Signal$ is poor.\nIn order to deal with this problem, we need to introduce an alternative hypothesis, which describes\ninstrumental lines \\emph{better} than $\\Hyp_\\Signal$.\n\n\\subsection{Simple line hypothesis: A CW-like disturbance in a single detector}\n\\label{sec:hypo_line}\n\nHere we introduce a simple line hypothesis designed to match one prominent feature of many instrumental lines,\ndistinguishing them from CW signals: the fact that they appear only in one detector.\nInspired by this, we reuse the signal hypothesis from Eq.~\\eqref{eq:hypS} in order to define a line in\ndetector $X$:\n\\begin{equation}\n \\begin{split}\n \\label{eq:hypLX}\n \\Hyp_\\Line^{X} : x^{X}(t) = n^{X}(t) + h^{X}(t;\\mathcal{A}^{X})\\\\\n \\text{with prior}\\quad\\prob{\\mathcal{A}^{X}}{\\Hyp_\\Line^{X}}\\,.\n\\end{split}\n\\end{equation}\nWe would expect lines to have a different amplitude distribution from real signals, but in the absence of any\nmore detailed knowledge on this point, we choose to reuse the signal amplitude prior given by\nEq.~\\eqref{eq:priorA} for $\\prob{\\mathcal{A}^X}{\\Hyp_\\Line^X}$. This choice simplifies the following\ncalculations. In analogy to Eq.~\\eqref{eq:pHS_final}, we directly obtain the probability for $\\Hyp_\\Line^X$:\n\\begin{equation}\n \\label{eq:pHLX}\n \\prob{\\Hyp_\\Line^X}{x^X} = c_*^{-1}\\,\\prob{\\Hyp_\\Gauss^X}{x^X}\\,\\prior{\\OLG}^X\\,\\eto{\\F^X(x^X)}\\,.\n\\end{equation}\nHere we define the per-detector prior line odds \\mbox{$\\prior{\\OLG}^X \\equiv {\\probI{\\Hyp_\\Line^X}}\/{\\probI{\\Hyp_\\Gauss^X}}$},\nwhich encode prior knowledge about how likely a line is, compared to pure Gaussian noise, in a given\ntemplate $\\lambda$ and detector $X$. 
The detector-specific $\\F$-statistic $\\F^X(x^X)$ is simply given by\nEq.~\\eqref{eq:Fstat} restricted to detector $X$.\n\nFor multiple detectors we can now formulate the simple line hypothesis $\\Hyp_\\Line$ as a CW-like disturbance\n$\\Hyp_\\Line^{X}$ in any one detector $X$ and data consistent with Gaussian noise $\\Hyp_\\Gauss^Y$ in all other detectors\n$Y\\not=X$:\n\\begin{equation}\n \\begin{split}\n \\Hyp_\\Line \\equiv& \\left( \\Hyp_\\Line^1 \\;\\mathrm{and}\\; \\Hyp_\\Gauss^2 \\;\\mathrm{and}\\; \\Hyp_\\Gauss^3 \\ldots\\right) \\;\\mathrm{or}\\; \\\\\n & \\left( \\Hyp_\\Gauss^1 \\;\\mathrm{and}\\; \\Hyp_\\Line^2 \\;\\mathrm{and}\\; \\Hyp_\\Gauss^3 \\ldots \\right) \\;\\mathrm{or}\\; \\ldots\\,. \\label{eq:hypL}\n \\end{split}\n\\end{equation}\nNote that in this approach $\\Hyp_\\Line$ does not include lines that are coincident across different detectors,\nwhich is postponed to future work.\n\nWe assume the different detectors to be independent to the extent that\nknowing $\\Hyp_\\Gauss^X$ or $\\Hyp_\\Line^X$ for detector $X$\ndoes not inform us about $\\Hyp_\\Gauss^Y$ or $\\Hyp_\\Line^Y$ for other detectors $Y\\not=X$. We also assume the different\nalternatives in Eq.~\\eqref{eq:hypL} to\nbe mutually exclusive. The laws of probability therefore yield\n\\begin{align}\n \\prob{\\Hyp_\\Line}{\\detVec{x}} &= \\prob{\\Hyp_\\Line^1}{x^1} \\prob{\\Hyp_\\Gauss^2}{x^2} \\prob{\\Hyp_\\Gauss^3}{x^3}\\times \\ldots \\nonumber \\\\\n & + \\prob{\\Hyp_\\Gauss^1}{x^1} \\prob{\\Hyp_\\Line^2}{x^2} \\prob{\\Hyp_\\Gauss^3}{x^3}\\times \\ldots \\nonumber \\\\\n & + \\ldots \\nonumber \\\\\n &= \\sum_{X} \\prob{\\Hyp_\\Line^{X}}{x^{X}} \\prod_{Y\\not=X} \\prob{\\Hyp_\\Gauss^Y}{x^Y}\\,. 
\\label{eq:pHL_initial}\n\\end{align}\n\nBy combining Eqs.~\\eqref{eq:hypL}, \\eqref{eq:pHLX} and the (per-detector) Gaussian-noise probability from\nEq.~\\eqref{eq:pHG}, we find the posterior probability for the line hypothesis $\\Hyp_\\Line$ as\n\\begin{equation}\n \\label{eq:pHL_inserted}\n \\prob{\\Hyp_\\Line}{\\detVec{x}} = c_*^{-1}\\,\\prob{\\Hyp_\\Gauss}{\\detVec{x}} \\, \\sum_X \\prior{\\OLG}^X\\,\\eto{\\F^X(x^X)}\\,,\n\\end{equation}\nwhere we used the fact that $\\prod_X \\prob{\\Hyp_\\Gauss^X}{x^X} = \\prob{\\Hyp_\\Gauss}{\\detVec{x}}$. Note that\n\\begin{equation}\n \\label{eq:sumlX}\n \\sum_X \\prior{\\OLG}^X = \\frac{\\probI{\\Hyp_\\Line}}{\\probI{\\Hyp_\\Gauss}} \\equiv \\prior{\\OLG}\\,,\n\\end{equation}\nwhere $\\prior{\\OLG}$ denotes the prior odds for a line versus Gaussian noise (in the present template $\\lambda$)\nincluding all detectors.\n\nIt will be convenient to define relative detector weights $r^X$ for the prior line odds, namely for\n${N_{\\mathrm{det}}}$ detectors:\n\\begin{equation}\n \\label{eq:rX}\n r^X \\equiv \\frac{\\prior{\\OLG}^X}{\\prior{\\OLG} \/ {N_{\\mathrm{det}}}}\\,,\\quad\\text{such that}\\quad \\sum_X r^X = {N_{\\mathrm{det}}}\\,.\n\\end{equation}\nIf all detectors are equally likely to contain a line, then $r^X = 1$ for all $X$.\nWe further denote the average of a quantity $Q^X$ over detectors as\n\\begin{equation}\n \\label{eq:1}\n \\avgX{Q^X} \\equiv \\frac{1}{{N_{\\mathrm{det}}}} \\sum_X Q^X\\,,\n\\end{equation}\nand hence $\\avgX{r^X} = 1$.\nBy using these definitions, we can write Eq.~\\eqref{eq:pHL_inserted} as follows:\n\\begin{equation}\n \\label{eq:pHL_avg}\n \\prob{\\Hyp_\\Line}{\\detVec{x}} = c_*^{-1} \\, \\prob{\\Hyp_\\Gauss}{\\detVec{x}} \\, \\prior{\\OLG}\\, \\avgX{r^X\\,\\eto{\\F^X(x^X)}}\\,.\n\\end{equation}\n\n\\section{Coherent line-robust statistics}\n\\label{sec:line-veto-stats-coh}\n\nWe use the posterior line probability of Eq.~\\eqref{eq:pHL_avg} to compute the odds for\nadditional model 
comparisons, thereby extending the standard multidetector $\\F$-statistic\ngiven by Eq.~\\eqref{eq:OSG}.\nIn particular, we consider two approaches:\n\\begin{enumerate}[(i)] \\itemsep1pt \\parskip0pt\n\\item Define a ``line-veto'' statistic as the odds between the signal hypothesis $\\Hyp_\\Signal$ and\n the line hypothesis $\\Hyp_\\Line$.\n This may be useful, for example, as a follow-up statistic for strong candidates from\n an initial $\\F$-statistic search, which compared $\\Hyp_\\Signal$ versus Gaussian noise $\\Hyp_\\Gauss$.\n In such a two-stage approach, one would test the signal hypothesis against the line-hypothesis\n if the Gaussian-noise hypothesis has been ruled out with sufficient confidence.\n\n\\item \\emph{Extend} the standard signal-versus-Gaussian-noise odds $\\OSG(\\detVec{x})$ to a more\n line-robust statistic $\\OSN(\\detVec{x})$ by allowing the noise hypothesis to include either pure Gaussian\n noise $\\Hyp_\\Gauss$ or a line $\\Hyp_\\Line$.\n\\end{enumerate}\n\n\\subsection{Line-veto statistic \\texorpdfstring{$O_{\\Signal\\Line}(\\detVec{x})$}{OSL(x)}}\n\\label{sec:line-veto-stat}\n\nUsing the posterior probabilities given by Eqs.~\\eqref{eq:pHS_final} and \\eqref{eq:pHL_avg}, we obtain the\nposterior signal-versus-line odds as\n\\begin{equation}\n \\label{eq:OSL}\n O_{\\Signal\\Line}(\\detVec{x}) \\equiv \\frac{\\prob{\\Hyp_\\Signal}{\\detVec{x}}}{\\prob{\\Hyp_\\Line}{\\detVec{x}}} =\n \\prior{\\OSL} \\; \\frac{ \\eto{\\F(\\detVec{x})} }{\\avgX{r^X\\,\\eto{\\F^X(x^X)}}}\\,,\n\\end{equation}\nwith the prior odds $\\prior{\\OSL} \\equiv \\probI{\\Hyp_\\Signal}\/\\probI{\\Hyp_\\Line} = \\prior{\\OSG}\/\\prior{\\OLG}$. 
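As an illustrative aside, the odds of Eq.~\\eqref{eq:OSL} are best evaluated in log space, since $\\eto{\\F}$ overflows for strong candidates. The following Python sketch (function and argument names are ours, purely illustrative, not from any CW search pipeline) computes $\\ln O_{\\Signal\\Line}$ from given multidetector and per-detector $\\F$-statistic values:

```python
import math

def ln_odds_signal_line(F_multi, F_per_det, r=None, ln_prior_odds=0.0):
    # ln O_SL = ln o_SL + F - ln <r^X exp(F^X)>_X ; subtracting the
    # per-detector maximum before exponentiating avoids overflow for
    # large F^X values.
    n_det = len(F_per_det)
    if r is None:
        r = [1.0] * n_det  # equal prior line odds in all detectors
    f_max = max(FX + math.log(rX) for FX, rX in zip(F_per_det, r))
    avg = sum(rX * math.exp(FX - f_max) for FX, rX in zip(F_per_det, r)) / n_det
    return ln_prior_odds + F_multi - f_max - math.log(avg)
```

For a single detector this reduces to $\\F - \\F^X$ plus the prior offset, as expected.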
Note\nthat the amplitude-prior cutoff $c_*$ has disappeared, as we have used the same amplitude\nprior on lines and signals.\n\nIn the following we will often suppress the dependence on $\\detVec{x}$ and $x^X$ to simplify notation.\nIt is instructive to consider the log-odds, which we can write as\n\\begin{align}\n \\ln O_{\\Signal\\Line} &= \\ln \\prior{\\OSL} + \\F - \\ln \\avgX{r^X\\eto{\\F^X}} \\nonumber \\\\\n &= \\ln \\prior{\\OSL} + \\F - \\F'_{\\mathrm{max}} - \\ln \\avgX{r^X\\eto{\\left(\\F^X -\n \\F'_{\\mathrm{max}}\\right)}}\\,,\\label{eq:logOSL}\n\\end{align}\nwhere we define\n\\begin{equation}\n \\label{eq:Fmaxp}\n \\F'_{\\mathrm{max}} \\equiv \\max_X \\left( \\F^X + \\ln r^X \\right) \\,.\n\\end{equation}\nThe terms in the detector-average in Eq.~\\eqref{eq:logOSL} are bounded within $[0, 1]$, with at least one term\nbeing equal to $1$.\nHence, the logarithmic average $\\ln\\avgX{\\ldots}$ is bounded within\n$[-\\ln{N_{\\mathrm{det}}},0]$, i.e., of order 1.\n\nFor strong $\\F$-statistic candidates, i.e., $\\F\\gg1$, the logarithmic correction is therefore\nnegligible, and we can approximate\n\\begin{equation}\n \\label{eq:logOSL_approx}\n \\ln O_{\\Signal\\Line}(\\detVec{x}) \\approx \\F(\\detVec{x}) - \\F'_{\\mathrm{max}}(\\detVec{x}) + \\ln\\prior{\\OSL} \\,.\n\\end{equation}\nWithout prior knowledge about one detector being more affected by instrumental lines than others,\nwe would have $r^X = 1$ and therefore $\\F'_{\\mathrm{max}}(\\detVec{x}) = \\max_X \\F^X(x^X)$.\nConsidered as a detection statistic, $O_{\\Signal\\Line}(\\detVec{x})$ is therefore approximately equivalent to the\ndifference between the multidetector $\\F$-statistic and the largest $\\F$-statistic value from the\nindividual detectors.\n\nBy choosing a special threshold of $O_{\\Signal\\Line}(\\detVec{x})=\\prior{\\OSL}$ and assuming equal prior line probabilities for all\ndetectors, we recover the well-known \\emph{$\\F$-statistic consistency veto}, namely\n\\begin{equation}\n 
\\label{eq:F-veto}\n \\textrm{If}\\quad \\F(\\detVec{x}) < \\max_{X}\\{\\F^X(x)\\}\\;\\;\\implies\\;\\textrm{veto the candidate}\\,,\n\\end{equation}\nwhich has been successfully used and tested in\nRefs.~\\cite{aasi13:_eathS5, behnke2013:_phdthesis, aasi2013:_gc-search}.\nCombining this veto with $\\F$-statistic ranking corresponds to defining a new statistic:\n\\begin{equation}\n \\label{eq:Fveto}\n \\F^{\\mathrm{+veto}}(\\detVec{x}) \\equiv \\left\\{\n \\begin{array}{cc}\n \\F(\\detVec{x})\\quad & \\textrm{if } \\F(\\detVec{x}) \\ge \\max_{X}\\{\\F^X(x)\\}\\,,\\\\\n 0 & \\textrm{otherwise}\\,,\n \\end{array}\n \\right.\n\\end{equation}\nwhich we will refer to as the $\\Fveto$-statistic.\n\n\n\\subsection{Line-robust detection statistic \\texorpdfstring{$\\OSN(\\detVec{x})$}{OSN(x)}}\n\\label{sec:extend-detect-stat}\n\nFrom the standpoint of probability theory it is more natural to use the line hypothesis to extend\nwhat we mean by ``noise'', namely, either pure Gaussian noise $\\Hyp_\\Gauss$ or a line $\\Hyp_\\Line$.\nHence, we introduce an extended noise hypothesis as\n\\begin{equation}\n \\label{eq:hypN}\n \\Hyp_\\Noise : \\left( \\Hyp_\\Gauss \\;\\mathrm{or}\\; \\Hyp_\\Line \\right) \\,.\n\\end{equation}\nSince we take $\\Hyp_\\Gauss$ and $\\Hyp_\\Line$ to be mutually exclusive, the posterior probability for $\\Hyp_\\Noise$ is\n\\begin{align}\n \\prob{\\Hyp_\\Noise}{\\detVec{x}} &= \\prob{\\Hyp_\\Gauss}{\\detVec{x}} + \\prob{\\Hyp_\\Line}{\\detVec{x}} \\nonumber \\\\\n &= \\prob{\\Hyp_\\Gauss}{\\detVec{x}} \\left( 1 + c_*^{-1} \\,\\prior{\\OLG} \\,\\avgX{r^X \\eto{\\F^X(x^X)}} \\right) \\,,\n\\label{eq:pHN}\n\\end{align}\nwhere we have used Eq.~\\eqref{eq:pHL_avg} for the explicit line posterior.\n\nInterestingly, we can express the odds $\\OSN(\\detVec{x})$ of the signal versus extended noise hypotheses as\n\\begin{equation}\n \\label{eq:OSN_initial}\n \\OSN(\\detVec{x}) \\equiv 
\\frac{\\prob{\\Hyp_\\Signal}{\\detVec{x}}}{\\prob{\\Hyp_\\Noise}{\\detVec{x}}}\n = \\left[ \\OSG^{-1}(\\detVec{x}) + O_{\\Signal\\Line}^{-1}(\\detVec{x}) \\right]^{-1}\\,.\n\\end{equation}\nWe can compare this result with the ad-hoc two-stage approach discussed previously, where one would\nset two independent thresholds on $\\OSG$ and on $O_{\\Signal\\Line}$. As we see from Eq.~\\eqref{eq:OSN_initial}, the\nlaws of probability instead tell us to compute the harmonic sum of $\\OSG$ and $O_{\\Signal\\Line}$ and to set a single threshold on\nthe resulting statistic.\n\nInserting the explicit expressions provided by Eqs.~\\eqref{eq:pHN} and \\eqref{eq:pHS_final}, we obtain\n\\begin{equation}\n \\label{eq:OSN_cstar}\n \\OSN(\\detVec{x}) = \\frac{\\prior{\\OSG} \\, \\eto{\\F(\\detVec{x})}}\n {c_* + \\prior{\\OLG} \\avgX{r^X \\eto{\\F^X}}}\\,.\n\\end{equation}\nThe amplitude-prior cutoff parameter $c_*$ from Eq.~\\eqref{eq:priorA} is only a scale\nfactor in $\\OSG$ and thus not relevant for the performance as a detection statistic, and it is canceled\nout completely in $O_{\\Signal\\Line}$.\nHowever, in $\\OSN$ this parameter does affect the properties of the resulting statistic.\n\nWe can rewrite Eq.~\\eqref{eq:OSN_cstar} by introducing the prior odds $\\prior{\\OSN} \\equiv \\probI{\\Hyp_\\Signal}\/\\probI{\\Hyp_\\Noise}$;\nnoting that \\mbox{$\\prior{\\OSG} = \\prior{\\OSN}\\left( 1 + \\prior{\\OLG}\\right)$}, we obtain\n\\begin{equation}\n \\label{eq:OSN_final}\n \\OSN(\\detVec{x}) = \\prior{\\OSN} \\, \\frac{\\eto{\\F(\\detVec{x})}}\n {(1-p_\\Line)\\,\\eto{\\Fth^{(0)}} + p_\\Line\\, \\avgX{r^X \\eto{\\F^X(x^X)}}}\\,,\n\\end{equation}\nwhere we define the prior line probability $p_\\Line$ as\n\\begin{equation}\n \\label{eq:lineprob}\n p_\\Line \\equiv \\frac{\\prior{\\OLG}}{1 + \\prior{\\OLG}} = \\frac{\\probI{\\Hyp_\\Line}}{\\probI{\\Hyp_\\Noise}} = \\prob{\\Hyp_\\Line}{\\Hyp_\\Noise}\\in [0,1]\\,,\n\\end{equation}\nand we used a more natural reparametrization of 
$c_*$ by defining\n\\begin{equation}\n \\label{eq:Fth0}\n \\Fth^{(0)} \\equiv \\ln c_*\\,.\n\\end{equation}\n\n\\subsubsection{Limiting cases of \\texorpdfstring{$\\OSN(\\detVec{x})$}{OSN}}\n\\label{sec:limiting-behavior}\n\nWe now consider the limiting behavior of $\\OSN$ as a function of the line prior $p_\\Line$ and of the\nsingle-detector $\\F^X(x)$ values.\nWe see from Eq.~\\eqref{eq:OSN_final} that $\\OSN(\\detVec{x})$ reduces to the $\\F$-statistic if we are certain that\nthere are no lines, i.e., $\\OSN(\\detVec{x}) \\rightarrow\\OSG(\\detVec{x})\\propto\\eto{\\F(\\detVec{x})}$ for $p_\\Line\\rightarrow0$.\nOn the other hand, it reduces to the pure line-veto statistic of Eq.~\\eqref{eq:OSL} when we believe the noise to\nbe completely dominated by lines, i.e., $\\OSN(\\detVec{x}) \\rightarrow O_{\\Signal\\Line}(\\detVec{x})$ for $p_\\Line \\rightarrow 1$.\n\nFor fixed $p_\\Line$ we see that the transition between these two extremes depends on\nthe $\\F^X(x)$ values compared to the prior scale $\\Fth^{(0)}$. 
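These two limits can also be checked numerically. The Python sketch below (names are ours, purely illustrative) evaluates $\\ln\\OSN$ from Eq.~\\eqref{eq:OSN_final} up to the constant prior offset $\\ln\\prior{\\OSN}$, using a log-sum-exp guard in the denominator:

```python
import math

def ln_odds_signal_noise(F_multi, F_per_det, p_line, F_star0, r=None):
    # ln O_SN up to the constant prior offset: the denominator is
    # (1 - p_L) e^{F*0} + p_L <r^X e^{F^X}>_X, evaluated stably by
    # factoring out the largest exponent.
    n_det = len(F_per_det)
    if r is None:
        r = [1.0] * n_det
    terms = [math.log(1.0 - p_line) + F_star0] if p_line < 1.0 else []
    if p_line > 0.0:
        terms += [math.log(p_line * rX / n_det) + FX
                  for FX, rX in zip(F_per_det, r)]
    m = max(terms)
    return F_multi - m - math.log(sum(math.exp(t - m) for t in terms))
```

For $p_\\Line = 0$ this returns $\\F - \\Fth^{(0)}$, i.e., pure $\\F$-statistic ranking; for $p_\\Line = 1$ it reproduces the line-veto statistic (up to the prior offset).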
To illustrate this more clearly, we first rewrite\nEq.~\\eqref{eq:OSN_final} using the relations $\\prior{\\OSN}=p_\\Line\\,\\,\\prior{\\OSL}$ and $(1-p_\\Line)\/p_\\Line=\\prior{\\OLG}^{-1}$.\nIntroducing the ``transition scale'' $\\F_*$ as\n\\begin{equation}\n \\label{eq:Fth}\n \\F_* \\equiv \\Fth^{(0)} - \\ln {\\prior{\\OLG}}\\,,\n\\end{equation}\nwe obtain\n\\begin{equation}\n \\label{eq:OSN_Fstar}\n \\OSN(\\detVec{x}) = \\prior{\\OSL}\\,\\frac{\\eto{\\F(\\detVec{x})}}{\\eto{\\F_*} + \\avgX{r^X \\eto{\\F^X(x^X)}}}\\,.\n\\end{equation}\nFrom this reparametrization, we see that $\\F_*$ defines the scale of a smooth transition of $\\OSN(\\detVec{x})$\nbetween $\\OSG(\\detVec{x})\\propto\\eto{\\F(\\detVec{x})}$ and $O_{\\Signal\\Line}(\\detVec{x})$ depending on the values of $\\F^X$: namely the\n``line-veto term'' $\\avgX{r^X \\eto{\\F^X}}$ in Eq.~\\eqref{eq:OSN_Fstar} only starts to play a role when it is\ncomparable to $\\eto{\\F_*}$.\n\nTo see this more explicitly, we write the log-odds as\n\\begin{equation}\n \\begin{split}\n \\label{eq:logOSN}\n \\ln &\\OSN(\\detVec{x}) = \\ln \\prior{\\OSL} + \\F(\\detVec{x}) - \\F''_{\\mathrm{max}}(\\detVec{x})\\\\\n & - \\ln \\left( \\eto{\\F_*-\\F''_{\\mathrm{max}}} + \\avgX{r^X \\eto{\\F^X(x^X)-\\F''_{\\mathrm{max}}(\\detVec{x})} } \\right)\\,,\n \\end{split}\n\\end{equation}\nwhere we define $\\F''_{\\mathrm{max}}(\\detVec{x}) \\equiv \\max\\left( \\F_*,\\, \\F^X(x^X) + \\ln r^X\\right)$.\nThe logarithmic correction is of order unity; therefore, this effectively corresponds to\n$\\ln O_{\\Signal\\Line}(\\detVec{x})$ when \\mbox{$\\max( \\F^X(x) + \\ln r^X ) > \\F_*$}, and to $\\ln\\OSG(\\detVec{x})$ otherwise.\n\nIn practice it can be difficult to determine good prior values for $\\Fth^{(0)}$, due to the unphysical choice of\namplitude priors in Eq.~\\eqref{eq:priorA}. 
We will discuss this issue in more detail in\nSec.~\\ref{sec:choosing-prior-value}.\n\nThis transitioning behavior is reminiscent of the two-stage line-veto approach discussed in\nSec.~\\ref{sec:line-veto-stats-coh}. There one applies a line veto only to candidates that are ``strong'' in\nterms of $\\OSG(\\detVec{x})\\propto\\eto{\\F(\\detVec{x})}$, which means that the Gaussian-noise hypothesis\nis already considered sufficiently unlikely.\nNote, however, that for $\\OSN(\\detVec{x})$ the transition from $\\OSG(\\detVec{x})$ to $O_{\\Signal\\Line}(\\detVec{x})$ is smooth and depends on\nthe strength of the single-detector statistics $\\F^X(x^X)$ rather than the multidetector statistic $\\F(\\detVec{x})$.\n\n\n\\section{Semicoherent line-robust statistics}\n\\label{sec:semicoherent}\n\nFor unknown signal parameters $\\lambda$, the use of the fully coherent (in time) $\\F$-statistic is usually\nprohibitive in terms of computing cost.\nThus, \\emph{semicoherent} methods are typically used, being more sensitive at fixed computing cost\n\\cite{brady2000:_hierarchical,prix12:_optimal}.\nIn this approach the data $\\detVec{x}$ is divided into ${N_{\\mathrm{seg}}}$ segments of shorter duration, denoted as\n$\\{\\detVec{x}_\\segk\\}_{k=1}^{{N_{\\mathrm{seg}}}}$. The coherent statistic $\\F_\\segk(\\detVec{x}_\\segk;\\lambda)$ in a template $\\lambda$ is\ncomputed for each segment $\\segk$ separately and then combined \\emph{incoherently}, typically by summing over\nall data segments. This is often referred to as the ``StackSlide'' method. 
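The incoherent combination step itself is simple to express in code; the following Python sketch (the array layout is an assumption made for illustration) forms the StackSlide sum over segments, both for the multidetector statistic and per detector:

```python
def stackslide_F(F_segments):
    # Incoherent StackSlide combination: sum the per-segment coherent
    # F-statistic values of a single template lambda.
    return sum(F_segments)

def stackslide_F_per_detector(F_segments_per_det):
    # Same combination applied per detector: F_segments_per_det[k][X]
    # holds the coherent F-statistic of segment k in detector X.
    n_det = len(F_segments_per_det[0])
    return [sum(seg[X] for seg in F_segments_per_det) for X in range(n_det)]
```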
Other incoherent combinations\nsuch as the ``Hough transform'' method \\cite{krishnan04:_hough} will not be discussed here.\nThe following discussion refers to the statistic in a single template $\\lambda$, and we will therefore simplify\nthe notation again by dropping $\\lambda$.\n\nAs shown in Ref.~\\cite{prix11:_transient}, the semicoherent StackSlide $\\F$-statistic can be derived by\nrelaxing the requirement of consistent signal amplitudes $\\mathcal{A}$ across different segments, i.e., allowing for\na set of ${N_{\\mathrm{seg}}}$ independent amplitude parameters $\\mathcal{A}_\\segk$ in Eq.~\\eqref{eq:hypS}. This defines the\nsemicoherent signal hypothesis $\\sc{\\Hyp}_{\\Signal}$ as\n\\begin{equation}\n \\label{eq:hypSsc}\n \\sc{\\Hyp}_{\\Signal} : \\detVec{x}_\\segk = \\detVec{n}_\\segk + \\mathcal{A}_\\segk^\\mu\\,\\detVec{h}_\\mu\\,,\\quad\n \\text{for } k = 1, \\ldots, {N_{\\mathrm{seg}}}\\,,\n\\end{equation}\nwhere here and in the following the hat $\\sc{\\;\\;}$ notation refers to semicoherent quantities.\n\nFor the per-segment amplitude priors $\\prob{\\mathcal{A}_\\segk}{\\sc{\\Hyp}_{\\Signal}}$, we reuse the amplitude prior given by\nEq.~\\eqref{eq:priorA}. 
Hence, by marginalization as in Eq.~\\eqref{eq:likeli_HSmarg}, we obtain\nthe posterior\n\\begin{equation}\n \\label{eq:pHSsc}\n \\prob{\\sc{\\Hyp}_{\\Signal}}{\\detVec{x}} = \\prior{\\OSGsc}\\,\\prob{\\Hyp_\\Gauss}{\\detVec{x}}\\, c_*^{-{N_{\\mathrm{seg}}}}\\, \\eto{\\sc{\\F}(\\detVec{x})}\\,,\n\\end{equation}\nwhere we define the StackSlide $\\F$-statistic $\\sc{\\F}$ in parameter-space point $\\lambda$ as\n\\begin{equation}\n \\label{eq:avFstat}\n \\sc{\\F}(\\detVec{x};\\lambda) \\equiv \\sum\\limits_{k=1}^{{N_{\\mathrm{seg}}}} \\F_\\segk(\\detVec{x}_\\segk;\\lambda)\\,.\n\\end{equation}\nFor Gaussian noise we have $\\sc{\\Hyp}_{\\Gauss}=\\Hyp_\\Gauss$, but for consistency of notation we still write $\\sc{\\Hyp}_{\\Gauss}$ throughout\nthis section.\nThe posterior odds between the signal and Gaussian-noise hypotheses across the ${N_{\\mathrm{seg}}}$ segments is\n\\begin{equation}\n \\label{eq:OSGsc}\n \\OSGsc(\\detVec{x}) \\equiv \\frac{ \\prob{\\sc{\\Hyp}_{\\Signal}}{\\detVec{x}} }{ \\prob{\\sc{\\Hyp}_{\\Gauss}}{\\detVec{x}} } =\n \\prior{\\OSGsc}\\,c_*^{-{N_{\\mathrm{seg}}}}\\, \\eto{\\sc{\\F}(\\detVec{x})}\\,.\n\\end{equation}\n\nWe can now generalize the single-detector line hypothesis of Eq.~\\eqref{eq:hypL} to the semicoherent case as\nwas done for the signal hypothesis in Eq.~\\eqref{eq:hypSsc}, namely,\n\\begin{equation}\n \\begin{split}\n \\label{eq:hypLsc}\n \\sc{\\Hyp}_{\\Line} =& \\left(\\sc{\\Hyp}_{\\Line}^{1} \\;\\mathrm{and}\\; \\sc{\\Hyp}_{\\Gauss}^{2} \\;\\mathrm{and}\\; \\sc{\\Hyp}_{\\Gauss}^{3}\\ldots\\right) \\;\\mathrm{or}\\;\\\\\n &\\left(\\sc{\\Hyp}_{\\Gauss}^{1} \\;\\mathrm{and}\\; \\sc{\\Hyp}_{\\Line}^{2} \\;\\mathrm{and}\\; \\sc{\\Hyp}_{\\Gauss}^{3} \\ldots\\right) \\;\\mathrm{or}\\; \\ldots\n \\end{split}\n\\end{equation}\nThe probability of the line hypothesis in detector $X$ across all segments is\n\\begin{equation}\n \\label{eq:pHLXsc}\n \\prob{\\sc{\\Hyp}_{\\Line}^X}{x^X} = 
\\prob{\\sc{\\Hyp}_{\\Gauss}^X}{x^X}\\,c_*^{-{N_{\\mathrm{seg}}}} \\,\\prior{\\OLGsc}^X\\, \\eto{\\scF^X(x^X)}\\,,\n\\end{equation}\nwhere the semicoherent line-odds in detector $X$ is\n\\mbox{$\\prior{\\OLGsc}^X \\equiv \\probI{\\sc{\\Hyp}_{\\Line}^X}\/\\probI{\\sc{\\Hyp}_{\\Gauss}^X}$}.\nSimilarly to Eq.~\\eqref{eq:pHL_avg}, the posterior probability for the semicoherent\nline-hypothesis $\\sc{\\Hyp}_{\\Line}$ is obtained as\n\\begin{equation}\n \\label{eq:pHLsc}\n \\prob{\\sc{\\Hyp}_{\\Line}}{\\detVec{x}} = \\prob{\\sc{\\Hyp}_{\\Gauss}}{\\detVec{x}} \\, c_*^{-{N_{\\mathrm{seg}}}} \\,\\prior{\\OLGsc}\\, \\avgX{\\sc{r}^X\\,\\eto{\\scF^X(x^X)}}\\,,\n\\end{equation}\nwhere in analogy to Eqs.~\\eqref{eq:sumlX} and \\eqref{eq:rX} we define\n\\begin{align}\n \\prior{\\OLGsc} &\\equiv \\frac{\\probI{\\sc{\\Hyp}_{\\Line}}}{\\probI{\\sc{\\Hyp}_{\\Gauss}}} = \\sum_X \\prior{\\OLGsc}^X\\,, \\label{eq:oLG_sc}\\\\\n \\sc{r}^X &\\equiv \\frac{\\prior{\\OLGsc}^X}{\\prior{\\OLGsc}\/{N_{\\mathrm{det}}}}\\,.\\label{eq:rX_sc}\n\\end{align}\nThe posterior probability for the extended noise hypothesis,\n\\begin{equation}\n \\label{eq:4}\n \\sc{\\Hyp}_{\\Noise} \\equiv \\left( \\sc{\\Hyp}_{\\Gauss} \\;\\mathrm{or}\\; \\sc{\\Hyp}_{\\Line} \\right) \\,,\n\\end{equation}\nis therefore given by\n\\begin{equation}\n \\label{eq:pHNsc}\n \\prob{\\sc{\\Hyp}_{\\Noise}}{\\detVec{x}} = \\prob{\\Hyp_\\Gauss}{\\detVec{x}} \\left( 1 + c_*^{-{N_{\\mathrm{seg}}}} \\,\\prior{\\OLGsc} \\,\\avgX{\\sc{r}^X \\eto{\\scF^X(x^X)}}\n\\right).\n\\end{equation}\n\nWe can now define a semicoherent line-veto statistic, namely,\n\\begin{equation}\n \\label{eq:OSLsc}\n \\sc{O}_{{\\Signal\\Line}}(\\detVec{x}) \\equiv \\frac{\\prob{\\sc{\\Hyp}_{\\Signal}}{\\detVec{x}}}{\\prob{\\sc{\\Hyp}_{\\Line}}{\\detVec{x}}} =\n \\prior{\\OSLsc}\\, \\frac{ \\eto{\\sc{\\F}(\\detVec{x})}}{\\avgX{\\sc{r}^X\\,\\eto{\\scF^X(x^X)} } }\\,,\n\\end{equation}\nand a semicoherent line-robust detection statistic as\n\\begin{equation}\n 
\\label{eq:OSNsc_initial}\n \\OSNsc(\\detVec{x}) \\equiv \\frac{\\prob{\\sc{\\Hyp}_{\\Signal}}{\\detVec{x}}}{\\prob{\\sc{\\Hyp}_{\\Noise}}{\\detVec{x}}} = \\left[\\OSGsc^{-1}(\\detVec{x}) +\n \\sc{O}_{{\\Signal\\Line}}^{-1}(\\detVec{x})\\right]^{-1}\\!\\!.\n\\end{equation}\nThe latter can be written explicitly as\n\\begin{equation}\n \\label{eq:OSNsc_final}\n \\OSNsc(\\detVec{x}) = \\prior{\\OSNsc}\\,\\frac{ \\eto{\\sc{\\F}(\\detVec{x}) } }\n {(1-\\sc{p}_{{\\Line}})\\,e^{\\scF_*^{(0)}} + \\sc{p}_{{\\Line}} \\avgX{\\sc{r}^X \\eto{\\scF^X(x^X)} } }\\,,\n\\end{equation}\nwith (semicoherent) line probability\n\\begin{equation}\n \\label{eq:lineprob_sc}\n \\sc{p}_{{\\Line}} \\equiv \\frac{\\prior{\\OLGsc}}{1 + \\prior{\\OLGsc}} = \\prob{\\sc{\\Hyp}_{\\Line}}{\\sc{\\Hyp}_{\\Noise}} \\;\\in\\; [0, 1]\n\\end{equation}\nand, in analogy to Eq.~\\eqref{eq:Fth0}, a prior cutoff parametrization of\n\\begin{equation}\n \\label{eq:scFtho}\n \\scF_*^{(0)} \\equiv \\ln c_*^{{N_{\\mathrm{seg}}}}\\,.\n\\end{equation}\nSimilarly to Eq.~\\eqref{eq:OSN_Fstar}, we can therefore write this equivalently as\n\\begin{equation}\n\\label{eq:OSNsc_Fstar}\n\\OSNsc(\\detVec{x}) = \\prior{\\OSLsc}\\,\\frac{\\eto{\\sc{\\F}(\\detVec{x})}}{\\eto{\\scF_*} + \\avgX{\\sc{r}^X \\eto{\\scF^X(x^X)}}}\\,,\n\\end{equation}\nwhere the semicoherent transition scale $\\scF_*$ is defined as\n\\begin{equation}\n \\label{eq:scFth}\n \\scF_* \\equiv \\scF_*^{(0)} - \\ln {\\prior{\\OLGsc}}\\,,\n\\end{equation}\nby generalizing Eq.~\\eqref{eq:Fth}. 
Hence, we find that $\\OSNsc(\\detVec{x})$ transitions from the standard\nsemicoherent statistic $\\OSGsc(\\detVec{x})\\propto\\eto{\\sc{\\F}}$ to the line-veto statistic $\\sc{O}_{{\\Signal\\Line}}(\\detVec{x})$ when\n\\begin{equation}\n \\label{eq:denomTermsTransition_sc}\n \\avgX{\\sc{r}^X \\eto{\\scF^X}} \\sim \\eto{\\scF_*}\\,.\n\\end{equation}\n\nWe can rewrite the log-odds as\n\\begin{equation}\n \\begin{split}\n \\label{eq:logOSNsc}\n \\ln &\\OSNsc(\\detVec{x}) = \\ln \\prior{\\OSLsc} + \\sc{\\F}(\\detVec{x}) - \\scF''_{\\mathrm{max}}(\\detVec{x})\\\\\n &- \\ln \\left( \\eto{\\scF_*-\\scF''_{\\mathrm{max}}(\\detVec{x})} + \\avgX{\\sc{r}^X \\eto{\\scF^X(x^X)-\\scF''_{\\mathrm{max}}(\\detVec{x})} } \\right)\\,,\n \\end{split}\n\\end{equation}\nwith $\\scF''_{\\mathrm{max}}(\\detVec{x}) \\equiv \\max\\left( \\scF_*,\\,\\scF^X(x^X) + \\ln \\sc{r}^X \\right)$.\nNote that in the semicoherent case we typically deal with much larger numerical values of $\\sc{\\F}$ [due to its\ndefinition as a sum over segments in Eq.~\\eqref{eq:avFstat}]. However, the\nlogarithmic correction term is still of order unity. This implies that the transition from\n$\\OSGsc(\\detVec{x})$ to the line-veto odds $\\sc{O}_{{\\Signal\\Line}}(\\detVec{x})$ is expected to be sharper than in the coherent case of\nEq.~\\eqref{eq:logOSN}.\n\nIncorporating the ad-hoc $\\F$-statistic consistency veto discussed in\nSec.~\\ref{sec:line-veto-stat}, we can define a semicoherent $\\scFveto$-statistic{} as\n\\begin{equation}\n \\label{eq:scFveto}\n \\sc{\\F}^{\\mathrm{+veto}}(\\detVec{x}) \\equiv \\left\\{\n \\begin{array}{cc}\n \\sc{\\F}(\\detVec{x})\\; & \\textrm{if } \\sc{\\F}(\\detVec{x}) \\ge \\max_{X}\\{\\sc{\\F}^X(x)\\}\\,,\\\\\n 0 & \\textrm{otherwise}\\,.\n \\end{array}\n \\right.\n\\end{equation}\n\n\\section{Choice of priors}\n\\label{sec:tuning}\n\nThe new line-veto and line-robust statistics derived in this paper depend on some prior\nparameters which need to be specified. 
We will now discuss a way to set their values.\n\nThe coherent statistics described in Sec.~\\ref{sec:line-veto-stats-coh} are simply special cases of the\nsemicoherent expressions given in Sec.~\\ref{sec:semicoherent} for ${N_{\\mathrm{seg}}}=1$. Hence, in the\nfollowing we can use the semicoherent notation without loss of generality.\n\nThe pure line-veto statistic $\\sc{O}_{{\\Signal\\Line}}(\\detVec{x})$ of Eq.~\\eqref{eq:OSLsc} seems, at first glance, to have ${N_{\\mathrm{det}}}$\nfree parameters.\nHowever, with the sum constraint \\eqref{eq:rX} on the line-probability weights $\\sc{r}^X$ and the fact that\nthe overall prior odds $\\prior{\\OLGsc}$ only enter through the proportionality factor $\\prior{\\OSLsc}$, this reduces to an\neffective ${N_{\\mathrm{det}}}-1$ parameters.\nHere we make use again of the fact that all monotonic functions of a test statistic are equivalent in the\nNeyman-Pearson sense.\n\nThe line-robust statistic $\\OSNsc(\\detVec{x})$ depends on the prior odds $\\prior{\\OLGsc}$ and on the amplitude-prior cutoff\nparameter $c_*$, not just as mere prefactors.\nHowever, these two prior parameters only appear in $\\OSNsc$ through the combination\n$\\scF_* \\equiv \\scF_*^{(0)} - \\ln {\\prior{\\OLGsc}}$ as defined in Eq.~\\eqref{eq:scFth}. 
Therefore, $\\OSNsc$ effectively has\n${N_{\\mathrm{det}}}$ free parameters.\n\nWhile the prior odds $\\prior{\\OLGsc}$ have a clear intuitive interpretation, this is not the case for the\nprior amplitude cutoff parameter $c_*$ and thus for $\\scF_*^{(0)} $, as defined in Eq.~\\eqref{eq:scFtho}.\nThis parameter results from the rather unphysical choice of the amplitude prior in Eq.~\\eqref{eq:priorA}, as\ndiscussed in more detail in Refs.~\\cite{prix09:_bstat,prix11:_transient}.\nHence, a certain amount of empirical ``tuning'' will be required to determine a reasonable value\nfor $\\scF_*^{(0)}$, which we will discuss in Sec.~\\ref{sec:choosing-prior-value}.\n\n\\subsection{Proxy estimate of prior line probabilities from the data}\n\\label{sec:estimate-line-probs}\n\nA maximally uninformative choice for the line-priors would be $\\sc{r}^X = 1$ and $\\prior{\\OLGsc}=1$, where the presence\nof lines is considered just as likely as pure Gaussian noise and all detectors are equally likely to be\naffected by lines.\nA more informed choice should be based on prior characterization of the detectors.\n\nA practical way to achieve this is to judiciously use the observed data $\\detVec{x}$ for a simple ``proxy'' estimate\nof $\\prior{\\OLGsc}^X$.\nEmpirically we find promising results when adopting the {line-flagging} method of Ref.~\\cite{wette09:_thesis}.\nWe use data from all frequency bins potentially contributing to the detection statistics in a given\nsearch band.\nWe compute the time-averaged normalized power over these bins and count how many exceed a\npredetermined threshold. 
The measured fraction of such outliers is used as a proxy estimate for the\nprior line probability.\n\nMore specifically, the data for $\\F$-statistic searches is usually prepared in the form of \\emph{Short Fourier\nTransforms} (SFTs) of the original time-domain data, conventionally spanning\nstretches of duration $T_\\sft = 1800\\,\\sec$ (e.g., see Ref.~\\cite{krishnan04:_hough}).\nWe compute the normalized average SFT power $\\Psft^X(f)$ for each detector $X$ as (e.g., see\nRef.~\\cite{abbott2004:_geoligo})\n\\begin{equation}\n \\label{eq:Psft}\n \\Psft^X(f) \\equiv \\frac{2}{N_\\sft\\,T_\\sft} \\sum_{\\alpha=1}^{N_\\sft}\n \\frac{ \\left| \\widetilde{x}_\\alpha^X(f)\\right|^2 }{\\SnXal(f)}\\,,\n\\end{equation}\nwhere the sum runs over all $N_\\sft$ SFTs, and $\\widetilde{x}_\\alpha^X(f)$ and $\\SnXal(f)$ denote the\nFourier-transformed data and the noise PSD in the $\\alpha$th SFT, respectively.\n\nWe estimate the prior line probability $\\sc{p}_{{\\Line}}^X$ for that frequency band as\n\\begin{equation}\n \\label{eq:lineestimator}\n \\sc{p}_{{\\Line}}^X = \\frac{N_{\\Psft > \\Psftthr}^X}{N_{\\mathrm{bins}}}\\,,\n\\end{equation}\nwhere $N_{\\Psft > \\Psftthr}^X \\in [0, N_{\\mathrm{bins}}]$ is the number of bins for which $\\Psft^X(f)$ crosses the threshold\n$\\Psftthr^X$. 
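A minimal Python sketch of this proxy estimate (function and argument names are ours, purely illustrative; the truncation range follows the values quoted later in this section) could look as follows:

```python
def estimate_line_prior_odds(P_sft, P_thr, floor=1e-3, ceil=1e3):
    # Fraction of frequency bins whose normalized average SFT power
    # exceeds the threshold -> proxy line probability -> prior line
    # odds, truncated to [floor, ceil] to avoid overconfident priors.
    n_bins = len(P_sft)
    n_cross = sum(1 for p in P_sft if p > P_thr)
    p_line = n_cross / n_bins
    odds = p_line / (1.0 - p_line) if p_line < 1.0 else ceil
    return min(max(odds, floor), ceil)
```

With $127$ bins and a single threshold crossing this yields odds of $1\/126 \approx 0.008$, and with no crossings it returns the floor value $0.001$.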
A typical band is of the order of $100\\,\\mathrm{mHz}$ wide, corresponding to a few hundred bins.\nThe threshold $\\Psftthr^X$ is chosen empirically to be safely above the typical noise fluctuations in the data.\n\nFrom $\\sc{p}_{{\\Line}}^X$ the prior line odds may be computed as\n\\begin{equation}\n \\label{eq:olg_estimate}\n \\prior{\\OLGsc}^X = \\frac{\\sc{p}_{{\\Line}}^X}{1 - \\sc{p}_{{\\Line}}^X}\\,,\n\\end{equation}\nwhich also fully specifies $\\sc{r}^X$ and $\\prior{\\OLGsc}$ via Eqs.~\\eqref{eq:oLG_sc} and \\eqref{eq:rX_sc}.\n\nWe determine the threshold $\\Psftthr^X$ by fixing a certain false-alarm probability $p_{\\mathrm{FA},\\Psft}$.\nFor large $N_\\sft$ this can be computed approximately from a Gaussian distribution with unit mean and standard\ndeviation $\\sigma=1\/\\sqrt{N_\\sft}$.\n\nAs an illustrative example, Fig.~\\ref{fig:tuning_normSFTpower_example} shows $\\Psft^X(f)$ for a\n$\\sim60\\,\\mathrm{mHz}$ wide band of simulated Gaussian data consisting of 50 SFTs. The data is\ngenerated with a noise amplitude spectral density of $\\sqrt{\\Sn^X} = 3 \\times 10^{-22}\\,\\mathrm{Hz}^{-1\/2}$ in two detectors\n$X\\in\\{\\textrm{H1},\\textrm{L1}\\}$, where $\\textrm{H1}$ and $\\textrm{L1}$ stand for the LIGO detectors at Hanford and Livingston,\nrespectively. A monochromatic stationary line of amplitude $h_0 = 2 \\times 10^{-23}\\,\\mathrm{Hz}^{-1\/2}$ at\n$50\\,\\mathrm{Hz}$ is injected in $\\textrm{H1}$ only. 
More examples from real data are presented in\nSec.~\\ref{sec:tests_realdata}.\n\nWe stress that the line-flagging procedure proposed here is not meant to yield a direct estimator of\n$\\sc{p}_{{\\Line}}^X$ but rather to provide an indication for the presence of lines based on spectral\nfeatures that can be robustly identified.\n\nFor instance, observing no threshold crossings in the average SFT power $\\Psft$ does not necessarily imply\nthat the $\\F$-statistic could not be affected by instrumental artifacts, while seeing many outliers in $\\Psft$\ndoes not always yield high values of $\\F$.\nHence we will not consider values of $\\prior{\\OLGsc}^X$ that suggest more confidence than seems justifiable, and\ntruncate its range to\n\\begin{equation}\n \\label{eq:olg_trunc}\n \\prior{\\OLGsc}^X \\in [0.001,\\; 1000]\\,.\n\\end{equation}\n\n\\begin{figure}[h!tbp]\n \\includegraphics[width=\\columnwidth]{fake_50dot00Hz_1seg_normSFT_H1L1}\n \\caption{\n \\label{fig:tuning_normSFTpower_example}\n Example of the normalized SFT power $\\Psft^X(f)$ as a function of frequency $f$ for LIGO $\\textrm{H1}$\n (solid) and $\\textrm{L1}$ (dashed) for simulated Gaussian data containing a line in $\\textrm{H1}$.\n The horizontal line shows the threshold $\\Psftthr$ at a false-alarm level of $p_{\\mathrm{FA},\\Psft}=10^{-9}$.}\n\\end{figure}\n\nFor the example simulated data set used in Fig.~\\ref{fig:tuning_normSFTpower_example} we can detail the\nmethod as follows: there are 127 frequency bins in the band considered, and for the threshold\n$\\Psftthr^\\textrm{H1}=\\Psftthr^\\textrm{L1}=\\Psftthr(p_{\\mathrm{FA},\\Psft}=10^{-9},N_\\sft=50)\\approx1.84$ there is a single crossing in $\\textrm{H1}$\nand none in $\\textrm{L1}$. 
Hence, we estimate the line priors as\n$\\prior{\\OLGsc}^\\textrm{H1}=\\mathrm{max}\\left(0.001,\\tfrac{1\/127}{1-1\/127}\\right)\\approx0.008$ and\n$\\prior{\\OLGsc}^\\textrm{L1}=\\mathrm{max}\\left(0.001,\\tfrac{0\/127}{1-0\/127}\\right)=0.001$.\n\nWe believe that this data-dependent prior estimation is not prone to the\n``sample reuse fallacy'' \\cite{jaynes:_logic_of_science}.\nThe reason is that the proxy estimate for $\\prior{\\OLGsc}^X$ is\nsufficiently \\emph{independent} from the posterior for the line\nhypothesis $\\Hyp_\\Line$, as they are derived from data sets with effectively very little data in common.\nThe line hypothesis $\\Hyp_\\Line$ (being based on the signal hypothesis $\\Hyp_\\Signal$) describes a\nnarrow-band signal, which in each half-hour SFT is confined to a few bins.\nIn fact the current $\\F$-statistic implementation \\cite{prix:_cfsv2} uses only $16$ frequency bins per SFT to\nconstruct the detection statistic, and they are very heavily weighted toward a few central ones.\nOn the other hand, the line-flagging prior estimate uses $\\sim\\Ord{100-200}$ frequency bins and each counts\nequally in the estimate.\nFurthermore, the results in Sec.~\\ref{sec:tests_realdata} show that this procedure appears to be ``safe'' also\nin the presence of (injected) signals.\n\n\n\\subsection{Empirical choice of transition scale \\texorpdfstring{$\\scF_*$}{F*}}\n\\label{sec:choosing-prior-value}\n\nAn additional free parameter in the line-robust statistic $\\OSNsc(\\detVec{x})$, as expressed in\nEq.~\\eqref{eq:OSNsc_Fstar}, is the transition scale $\\scF_*= \\scF_*^{(0)} - \\ln\\prior{\\OLGsc}$ of Eq.~\\eqref{eq:scFth}.\n\nAs discussed in Sec.~\\ref{sec:limiting-behavior}, $\\scF_*$ sets the scale (in terms of $\\scF^X$) for the\ntransition of $\\OSNsc$ from the signal-versus-Gaussian-noise odds $\\OSGsc\\propto\\eto{\\sc{\\F}}$ (for\n$\\scF^X \\ll \\scF_*$) to the signal-versus-line odds $\\sc{O}_{{\\Signal\\Line}}$ (for $\\scF^X \\gg \\scF_*$).\n\nThus 
we can interpret $\\scF_*^{(0)}$ as the transition scale in the case of even prior odds between the line and\nGaussian-noise hypotheses, i.e., $\\prior{\\OLGsc}=1$.\n\nThe effect of $\\prior{\\OLGsc}$, which we estimate with the method described in the previous section, is to shift the\ntransition scale up or down from this baseline, depending on whether prior knowledge gives lines lower or\nhigher odds, respectively.\n\nWe can also express $\\scF_*^{(0)}$ in terms of a Gaussian-noise false-alarm probability, denoted as\n$p_{\\mathrm{FA}*}^{(0)}$:\n \\begin{equation}\n \\label{eq:scFtho_pFA}\n p_{\\mathrm{FA}*}^{(0)} = \\prob{\\sc{\\F}^X > \\scF_*^{(0)}}{\\Hyp_\\Gauss}\\,,\n \\end{equation}\nwhere in Gaussian noise $2\\sc{\\F}^X$ follows a central $\\chi^2$ distribution with $4{N_{\\mathrm{seg}}}$ degrees of freedom.\nWe find it useful to fix a value for $p_{\\mathrm{FA}*}^{(0)}$ and use it to determine $\\scF_*^{(0)}\\left(p_{\\mathrm{FA}*}^{(0)},{N_{\\mathrm{seg}}}\\right)$.\n\nWe want $\\scF_*^{(0)}$ to be low enough ($p_{\\mathrm{FA}*}^{(0)}$ high enough) to suppress even weak lines, but\nnot so low as to compromise the performance in Gaussian noise.\nWhen most of the data is approximately Gaussian (as is typically the case for CW\nsearches~\\cite{abbott2004:_geoligo,aasi13:_eathS5,behnke2013:_phdthesis}), a reasonable choice is to\nuse the lowest $\\scF_*^{(0)}$ (highest $p_{\\mathrm{FA}*}^{(0)}$) that does not yet adversely affect the detection power in\nGaussian noise.\nIn practice, we resort to an empirical choice of $p_{\\mathrm{FA}*}^{(0)}$ based on Monte-Carlo simulations on a small subset\nof Gaussian or near-Gaussian data.\n\n\n\\section{Performance tests}\n\\label{sec:tests}\n\nHere we will discuss the detection efficiency of the statistics introduced in\nthe previous sections for a population of signals embedded in different types of noise.\nIn order to do this we use two different and somewhat complementary approaches:\n(i) fully ``synthetic'' simulations, which allow 
for efficient large-scale explorations under idealized\nconditions, and (ii) injections of simulated signals into LIGO S5 data containing instrumental\nartifacts.\n\nWe compare the performance of the following statistics (the second equation always refers to the\ncorresponding semicoherent version):\n\\begin{enumerate}[(1)] \\itemsep1pt \\parskip0pt\n \\item Standard multidetector $\\F$-statistic, Eqs.~\\eqref{eq:Fstat} and \\eqref{eq:avFstat}\n \\item $\\Fveto$-statistic{}, Eqs.~\\eqref{eq:Fveto} and \\eqref{eq:scFveto}\n \\item Line-veto statistic $O_{\\Signal\\Line}$, Eqs.~\\eqref{eq:OSL} and \\eqref{eq:OSLsc}\n \\item Line-robust statistic $\\OSN$, Eqs.~\\eqref{eq:OSN_Fstar} and \\eqref{eq:OSNsc_Fstar}\n\\end{enumerate}\nIn the case of the line-robust statistic $\\OSN$ we use different transition scales $\\Fth^{(0)}$ corresponding to\nfalse-alarm levels $p_{\\mathrm{FA}*}^{(0)}$, which we denote as\n\\begin{equation}\n \\label{eq:7}\n \\OSNpFA{-n}(\\detVec{x}) \\equiv \\OSN(\\detVec{x};\\; p_{\\mathrm{FA}*}^{(0)}=10^{-n})\\,.\n\\end{equation}\nIn the following tests we use $\\OSNpFA{-1}$, $\\OSNpFA{-3}$, and $\\OSNpFA{-6}$, corresponding to\ntransition-scale false-alarm levels of $p_{\\mathrm{FA}*}^{(0)}=10^{-1},10^{-3},10^{-6}$, respectively.\n\nIn order to assess the importance of the choice of prior line odds $\\prior{\\OLG}^X$, we consider two cases:\n\\begin{enumerate}[(i)] \\itemsep1pt \\parskip0pt\n \\item Uninformative priors, i.e., $\\prior{\\OLG}^{X} = 1$ for all $X$: the corresponding ``untuned'' statistics are\n denoted as $\\OSL^{(0)}$ and $\\utOSNpFA{-n}$.\n \\item Line priors $\\prior{\\OLG}^X$ using prior information on the line population: the corresponding ``tuned''\n statistics are denoted as $O_{\\Signal\\Line}$ and $\\OSNpFA{-n}$, respectively.\n\\end{enumerate}\n\n\\subsection{Tests using synthetic draws}\n\\label{sec:tests_simdata}\n\nIn this section, for simplicity, we consider only the coherent case 
(cf.~Sec.~\\ref{sec:line-veto-stats-coh}).\nUsing the synthesizing approach described in Refs.~\\cite{prix09:_bstat,prix11:_transient}, one can directly\ngenerate random draws of the various statistics of interest for pure noise and for noise containing a signal.\n\nThe synthesizing method consists in generating random draws of the $\\{x^X_\\mu\\}$ of Eq.~\\eqref{eq:xmuMmunu}\nusing their known (multivariate) Gaussian distribution.\nFrom these we compute the $\\F$- and $\\F^X$-statistics from Eq.~\\eqref{eq:Fstat}, $O_{\\Signal\\Line}$ from\nEq.~\\eqref{eq:OSL} and $\\OSN$ from Eq.~\\eqref{eq:OSN_Fstar}.\nIn the following we refer to each draw of $\\{x^X_\\mu\\}$ together with the resulting statistics as a\n\\emph{candidate}.\n\nWe generate the noise draws in such a way that a fraction $f_\\Line$ contains a line according to\n$\\Hyp_\\Line$ of Eq.~\\eqref{eq:hypL}, namely a CW signal in a single detector. The remaining fraction $1-f_\\Line$\nof noise draws follows the Gaussian-noise hypothesis $\\Hyp_\\Gauss$ of Eq.~\\eqref{eq:gaussian}.\nIn the following we refer to $f_\\Line$ as the \\emph{line contamination}.\n\nFrom the noise draws we estimate for each statistic a threshold corresponding to a particular false-alarm\nprobability $p_{\\mathrm{FA}}$. 
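This thresholding step can be sketched numerically. The following is a simplified stand-in (assuming \texttt{numpy} and \texttt{scipy} are available), which draws scalar $2\F$ values directly from their known distributions in Gaussian noise and with a signal present, rather than synthesizing the full multivariate quantities:

```python
import numpy as np
from scipy.stats import chi2, ncx2

rng = np.random.default_rng(42)
n_draws = 10**6   # 10^7 draws are used in the paper; fewer suffice here
snr_s = 6.0       # fixed signal SNR, as in the text

# In Gaussian noise 2F follows a central chi^2 with 4 degrees of freedom;
# with a signal of SNR snr_s it is noncentral with noncentrality snr_s^2.
twoF_noise = chi2.rvs(df=4, size=n_draws, random_state=rng)
twoF_signal = ncx2.rvs(df=4, nc=snr_s**2, size=n_draws, random_state=rng)

# Threshold at a chosen false-alarm probability, estimated from the noise
# draws, and the detection probability of the signal draws at that threshold.
p_fa = 1e-3
threshold = np.quantile(twoF_noise, 1.0 - p_fa)
p_det = np.mean(twoF_signal > threshold)

# Analytic cross-check, valid only when the noise is purely Gaussian:
threshold_exact = chi2.isf(p_fa, df=4)
p_det_exact = ncx2.sf(threshold_exact, df=4, nc=snr_s**2)
```

Sweeping the false-alarm probability traces out the ROC curve; in the actual synthetic tests the noise draws are a mixture of Gaussian and line candidates, so the threshold must come from the empirical quantile of the mixed sample rather than from the analytic $\chi^2$ expression.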
Applying this threshold to the signal candidates yields the detection\nprobability $\\pDet(p_{\\mathrm{FA}})$ for each statistic at the false-alarm level $p_{\\mathrm{FA}}$.\nThis is known as the \\emph{receiver operating characteristic} (ROC).\n\nThe strength of the injected signals is characterized by the (multidetector) \\emph{signal-to-noise ratio}\n$\\snr_{\\Signal}$, defined in the usual way \\cite{jks98:_data} as\n\\begin{equation}\n \\label{eq:snrS}\n \\snr_{\\Signal}^2 \\equiv \\scalar{h}{h} = \\mathcal{A}^\\mu \\mathcal{M}_{\\mu\\nu} \\mathcal{A}^\\nu \\,.\n\\end{equation}\nThis is related to the expectation value of the $\\F$-statistic as $\\expect{2\\F}_{\\Hyp_\\Signal}=4+\\snr_{\\Signal}^2$.\nAs shown in Appendix \\ref{sec:expect-f-stat}, for a line according to $\\Hyp_\\Line$ in detector $Y$, the\nexpectation value of the multidetector $\\F$-statistic is approximately\n\\begin{equation}\n \\label{eq:exp_F_line}\n \\expect{2\\F}_{\\Hyp_\\Line} \\approx 4 + \\frac{1}{{N_{\\mathrm{det}}}}\\,\\snr_{\\Line}^2\\;\\;\\text{with}\\;\\;\n \\snr_{\\Line}^2 \\equiv \\mathcal{A}_Y^\\mu\\mathcal{M}^Y_{\\mu\\nu}\\mathcal{A}_Y^\\nu\\,,\n\\end{equation}\nwhere we refer to the (single-IFO) SNR $\\snr_{\\Line}$ as the ``line SNR''.\n\nThe signal candidates are generated for a fixed SNR of $\\snr_{\\Signal}=6$, and a data length of $T=25\\,\\mathrm{h}$ is\nassumed.\nThis signal strength is chosen to be representative of reasonably detectable signals in a wide-parameter-space\nsearch.\nIn such a search we would require a low (single-trial) false-alarm threshold $p_{\\mathrm{FA}}$ in order to consider a\ncandidate as significant.\nThe choice of $\\snr_{\\Signal}=6$ corresponds to a detection probability of $\\pDet\\approx70\\%$ at a false-alarm\nprobability of $p_{\\mathrm{FA}}=10^{-6}$ in Gaussian noise (for example, see Fig.~\\ref{fig:newSynth_Gauss}).\n\nThe signal amplitude parameters are drawn uniformly in $\\cos\\iota\\in[-1,1]$, $\\psi\\in[-\\pi\/4,\\pi\/4]$ 
and\n$\\phi_0\\in[0,2\\pi]$.\nThe sky position is drawn isotropically over the sky, and $(h_0\/\\sqrt{\\Sn})$ is determined by the fixed signal\nSNR of $\\snr_{\\Signal}=6$ according to Eq.~\\eqref{eq:snrS}.\nThe line draws use the same prior distributions, but the signal is added to only one detector, and\n$(h_0\/\\sqrt{\\Sn})$ is determined by fixing a (single-IFO) line SNR $\\snr_{\\Line}$ according to\nEq.~\\eqref{eq:exp_F_line}.\n\nIn each simulation we generate $10^7$ noise candidates and $10^7$ noise+signal candidates for two detectors,\nLIGO $\\textrm{H1}$ and $\\textrm{L1}$. These detectors are assumed here to have identical sensitivity.\nLines are only injected into $\\textrm{H1}$ without loss of generality.\nWe consider three examples of noise populations:\n\\begin{enumerate}[(i)] \\itemsep1pt \\parskip0pt\n \\item pure Gaussian noise without lines ($f_\\Line=0$, $\\snr_{\\Line}=0$),\n \\item 10\\% line contamination in H1 ($f_\\Line^{\\textrm{H1}}=0.1,\\;f_\\Line^{\\textrm{L1}}=0$) with line SNR of $\\snr_{\\Line}=9$,\n \\item 10\\% line contamination in H1 ($f_\\Line^{\\textrm{H1}}=0.1,\\;f_\\Line^{\\textrm{L1}}=0$) with line SNR of $\\snr_{\\Line}=15$.\n\\end{enumerate}\nThe line SNR of $\\snr_{\\Line}=9$ corresponds to lines that are marginally stronger than the injected signals (namely,\n$\\expect{2\\F}_{\\Hyp_\\Line}\\approx 44.5$ from Eq.~\\eqref{eq:exp_F_line}, while $\\expect{2\\F}_{\\Hyp_\\Signal} = 40$ from\nEq.~\\eqref{eq:14}).\nThe lines with $\\snr_{\\Line}=15$ are substantially stronger (namely $\\expect{2\\F}_{\\Hyp_\\Line} \\approx 117$) than the\ninjected signals.\n\nNote that for the synthesized statistics we cannot use the line-prior estimation method for $\\prior{\\OLG}^X$ of\nSec.~\\ref{sec:estimate-line-probs}. 
Instead we assume ``perfect tuning'': in\nthe Gaussian-noise example we set $\\prior{\\OLG}^X=0.001$ for $X=\\textrm{H1},\\textrm{L1}$, and in the two line examples we use\n$p_\\Line^{\\textrm{H1}}=f_\\Line^{\\textrm{H1}}=0.1$ (therefore $\\prior{\\OLG}^\\textrm{H1}=1\/9$) and $\\prior{\\OLG}^\\textrm{L1}=0.001$ (no lines were\ninjected into L1).\n\n\\begin{figure}[b!]\n \\includegraphics[width=1.05\\columnwidth,clip]{newSynth_snrS6_pL0_snrL0_N1e07-final}\n \\caption{\n \\label{fig:newSynth_Gauss}\n Detection probability $\\pDet$ as a function of false-alarm $p_{\\mathrm{FA}}$ of different synthesized statistics,\n for a signal population of fixed SNR of $\\snr_{\\Signal}=6$ in pure Gaussian noise ($f_\\Line=0$, $\\snr_{\\Line}=0$).\n Statistical errors are similar to the line width.\n }\n\\end{figure}\n\nIn Gaussian noise the coherent $\\F$-statistic is close to optimal \\cite{jks98:_data,prix09:_bstat}, and follows\na $\\chi^2$ distribution with 4 degrees of freedom and noncentrality parameter $\\snr_{\\Signal}^2$, which we denote as\n$\\chi^2_4(\\snr_{\\Signal})$. 
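These distributional statements can be verified numerically. The sketch below (assuming \texttt{scipy} is available) reproduces the numbers used in this section: the signal expectation $4+36=40$, the line expectation $44.5$ for a line SNR of 9 in two detectors, and the $\approx70\%$ detection probability at a false-alarm probability of $10^{-6}$ quoted earlier:

```python
from scipy.stats import chi2, ncx2

snr_s, snr_l, n_det = 6.0, 9.0, 2

# Expected 2F for a signal: the noncentral chi^2_4 mean is df + nc.
e_twoF_signal = ncx2.mean(df=4, nc=snr_s**2)   # 4 + 36 = 40

# Expected multidetector 2F for a single-detector line (approximate
# relation from the text): 4 + snr_l^2 / n_det.
e_twoF_line = 4.0 + snr_l**2 / n_det           # 44.5 for snr_l = 9

# Theoretical Gaussian-noise ROC point for the F-statistic:
p_fa = 1e-6
thr = chi2.isf(p_fa, df=4)                  # threshold on 2F
p_det = ncx2.sf(thr, df=4, nc=snr_s**2)     # close to 0.70
```

Incidentally, $\texttt{thr}/2 \approx 16.7$ is also the single-segment transition scale quoted later for the transition-scale false-alarm level of $10^{-6}$.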
This is plotted as a thick solid line in Figs.~\\ref{fig:newSynth_Gauss} and\n\\ref{fig:newSynth_Lines} for the signal population of $\\snr_{\\Signal}=6$.\n\nIn the Gaussian-noise example shown in Fig.~\\ref{fig:newSynth_Gauss}, the $\\F$-statistic follows closely\nthe theoretical prediction, while the (untuned) line-veto statistic $\\OSL^{(0)}$ is notably less powerful.\nThe line-robust statistics $\\OSNpFA{-n}$ increasingly approach the $\\F$-statistic performance with decreasing\n$p_{\\mathrm{FA}*}^{(0)}$, i.e., increasing transition scale $\\Fth^{(0)}$.\nIn particular, starting from $\\OSNpFA{-3}$ (corresponding to a transition scale of $\\Fth^{(0)}\\approx 9.23$),\nthere are no appreciable losses in detection probability $\\pDet$ over the false-alarm range $p_{\\mathrm{FA}}\\in\n[10^{-6},1]$.\n\nAt low $p_{\\mathrm{FA}}$, the $\\Fveto$-statistic{} performs almost optimally, while\nthere are some losses above $p_{\\mathrm{FA}}\\gtrsim10^{-4}$. These are due to $\\F^{\\mathrm{+veto}}$ containing intrinsic upper\nbounds on the achievable $p_{\\mathrm{FA}}$ and $\\pDet$ as a result of vetoing a finite fraction of candidates.\nFor a practical GW analysis, where low $p_{\\mathrm{FA}}$ are required, this behavior is not particularly\nrelevant.\n\n\\begin{figure}[b]\n \\raggedright (a)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth,clip]{newSynth_snrS6_pL0dot1_snrL9_N1e07-final}\n \\raggedright(b)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth]{newSynth_snrS6_pL0dot1_snrL15_N1e07-final}\n \\caption{\n \\label{fig:newSynth_Lines}\n Detection probability $\\pDet$ as a function of false-alarm $p_{\\mathrm{FA}}$ for different synthesized statistics,\n for a signal population with fixed SNR of $\\snr_{\\Signal}=6$ in Gaussian noise with 10\\% line contamination,\n with line-SNR of (a) $\\snr_{\\Line}=9$ and (b) $\\snr_{\\Line}=15$.\n Statistical errors are similar to the line width.\n }\n\\end{figure}\n\nThe performance in the two examples with 
$10\\%$ line contamination is shown in\nFig.~\\ref{fig:newSynth_Lines}.\nHere the $\\F$-statistic is found to perform substantially worse than in Gaussian noise at false-alarm\nprobabilities below $p_{\\mathrm{FA}}\\lesssim 0.1$. This is due to the fact that in $10\\%$ of the noise cases the\nfalse-alarm threshold is set by the line population, which is either difficult (for $\\snr_{\\Line}=9$, panel (a)) or\nalmost impossible (for $\\snr_{\\Line}=15$, panel (b)) for the $\\F$-statistic to cross for signals with SNR of\n$\\snr_{\\Signal}=6$.\n\n\\begin{figure}[b]\n \\raggedright (a)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth,clip]{newSynth_snrS6_pL0dot1_snrL9_N1e07-finalAdaptive}\n \\raggedright(b)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth]{newSynth_snrS6_pL0dot1_snrL15_N1e07-finalAdaptive}\n \\caption{\n \\label{fig:newSynth_LinesTuning}\n Comparison of ``tuned'' statistics $\\OSNpFA{-n}$ (solid lines) using\n ``perfect knowledge'' line-priors $\\{\\prior{\\OLG}^\\textrm{H1}=1\/9,\\,\\prior{\\OLG}^\\textrm{L1}=10^{-3}\\}$ versus ``untuned'' statistic\n $\\utOSNpFA{-n}$ (dashed lines) using uninformative line-priors $\\prior{\\OLG}^X=1$.\n Detection probability $\\pDet$ as a function of false-alarm $p_{\\mathrm{FA}}$ of different synthesized statistics,\n for a signal population with fixed SNR of $\\snr_{\\Signal}=6$ in Gaussian noise with 10\\% line contamination,\n with line-SNR of (a) $\\snr_{\\Line}=9$ and (b) $\\snr_{\\Line}=15$.\n Statistical errors are similar to the line width.\n }\n\\end{figure}\n\nWe observe that the $\\Fveto$-statistic{} starts to fail below false-alarm levels of\n$p_{\\mathrm{FA}}\\lesssim 10^{-4}$ in the case of weaker lines with $\\snr_{\\Line}=9$ (see Fig.~\\ref{fig:newSynth_Lines}(a)).\nThis can be understood as follows:\nFor the $\\snr_{\\Line}=9$ line population, we find that a fraction of $\\sim6\\times10^{-4}$ of line candidates survive\nthe veto.\nGiven that lines are present in 10\\% of 
the noise cases, this means that a fraction of\n$\\sim6\\times10^{-5}$ of total noise candidates are line candidates surviving the consistency veto.\nGiven that these have high $\\F$-statistic values, signal candidates can hardly surpass them, and thus the\ndetection probability drops toward zero at false-alarm probabilities below $\\sim6\\times10^{-5}$.\n\n\nThe same effect is also present for stronger lines, but the corresponding ``failure'' threshold is\npushed to lower values.\nFor example, for $\\snr_{\\Line}=12$ it would happen only below $p_{\\mathrm{FA}}\\lesssim10^{-6}$, while for $\\snr_{\\Line}=15$ it\nis too low to be resolvable by $10^7$ random draws.\n\nThe behavior of the line-robust statistics $\\OSNpFA{-n}$ depends on the choice of transition scale.\nIn the case of lines with $\\snr_{\\Line}=9$, shown in Fig.~\\ref{fig:newSynth_Lines}(a), the statistic $\\OSNpFA{-3}$\nperforms best, while using either lower or higher values of $p_{\\mathrm{FA}*}^{(0)}$ is less powerful at low false-alarm\nprobabilities.\nIn the case of stronger lines with $\\snr_{\\Line}=15$, shown in Fig.~\\ref{fig:newSynth_Lines}(b), the statistic\n$\\OSNpFA{-6}$ performs almost optimally, with $\\OSNpFA{-3}$ performing only slightly worse.\n\nThe line-veto statistic $\\OSL^{(0)}$ performs somewhat poorly in all three examples\nshown (Figs.~\\ref{fig:newSynth_Gauss} and \\ref{fig:newSynth_Lines}). This is not surprising, given that at\nmost $10\\%$ of noise draws contain a line, while $O_{\\Signal\\Line}$ would only be optimal for a noise population\nconsisting exclusively of lines.\n\nFigure~\\ref{fig:newSynth_LinesTuning} shows the effect of ``tuning'' the prior line odds\n$\\prior{\\OLG}^X$, using the same line populations as in Fig.~\\ref{fig:newSynth_Lines}. 
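For reference, the ``perfect knowledge'' line priors used in this comparison are simply the odds form of the injected line fractions; a minimal helper (purely illustrative, not part of the analysis code) makes the conversion explicit:

```python
def line_odds(p_line: float) -> float:
    """Convert a line probability p into prior line odds o = p / (1 - p)."""
    return p_line / (1.0 - p_line)

# Perfect-knowledge tuning for the synthetic line examples:
o_h1 = line_odds(0.1)   # injected line fraction 0.1 in H1 gives odds 1/9
o_l1 = 0.001            # no lines injected into L1: small fixed odds
```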
We see that the untuned\nstatistics $\\OSL^{(0)}$ and $\\utOSNpFA{-n}$ using uninformative line-odds $\\prior{\\OLG}^X=1$ perform reasonably well\ncompared to $O_{\\Signal\\Line}$ and $\\OSNpFA{-n}$, which are based on ``perfect-knowledge'' tuning.\nNote that tuning of $\\prior{\\OLG}^X$ can sometimes also {decrease} the detection power of a statistic,\nparticularly in cases where the choice of the transition scale $\\Fth^{(0)}(p_{\\mathrm{FA}*}^{(0)})$ is a poor fit to the actual\nline population.\nThis can be seen in the case of $\\OSNpFA{-6}$ with lines of SNR $\\snr_{\\Line}=9$, as shown in\nFig.~\\ref{fig:newSynth_LinesTuning}(a).\nIn cases where $\\Fth^{(0)}(p_{\\mathrm{FA}*}^{(0)})$ is a good match to the line population, the tuning of $\\prior{\\OLG}^X$ can yield gains\nin detection power of up to 5--10\\%.\n\n\\subsection{Tests using LIGO S5 data}\n\\label{sec:tests_realdata}\n\nHere we conduct a study using LIGO S5 data sets as noise.\nWe inject signals and search for them using methods similar to those of actual CW searches.\nInstead of ROC curves, i.e., $\\pDet(p_{\\mathrm{FA}})$, we present results in terms of the detection efficiency as\na function of signal strength scaled by the total multidetector noise PSD $\\Sn$, i.e.,\n$\\pDet(h_0\/\\sqrt{\\Sn})$.\nThis form is more suitable to assess improvements in sensitivity, which is typically expressed as the\nweakest signal $h_0$ detectable with a certain confidence $\\pDet$.\nTo compute an astrophysically motivated detection probability, these results could in principle be\nconvolved with an astrophysical prior on $h_0$, if available.\n\nThe injection and detection procedure used here is modeled after those commonly employed for estimating upper\nlimits on $h_0$ in CW searches such as Refs.~\\cite{aasi13:_eathS5, aasi2013:_gc-search}.\n\nSimulated CW signals are added to the data using the \\texttt{Makefakedata\\_v4} code \\cite{lalsuite}.\nThe resulting data set is analyzed both coherently and 
semicoherently using\n\\texttt{HierarchSearchGCT} \\cite{lalsuite}, a StackSlide implementation based on\nthe ``global correlations'' method of Ref.~\\cite{pletsch2009:_gct}. We have extended this code to also compute\nthe new statistics $\\sc{O}_{{\\Signal\\Line}}(\\detVec{x})$ and $\\OSNsc(\\detVec{x})$, in addition to $\\sc{\\F}(\\detVec{x})$.\nFor the coherent search we use shorter subsets of the data, and the coherent statistics are simply obtained as\nthe special case ${N_{\\mathrm{seg}}}=1$.\n\nThe tuning of the line priors $\\prior{\\OLGsc}^X$ in $\\sc{O}_{{\\Signal\\Line}}$ and $\\OSNscpFA{-n}$ is based on the method described in\nSec.~\\ref{sec:estimate-line-probs}, namely, Eqs.~\\eqref{eq:lineestimator},~\\eqref{eq:olg_estimate}, and\n\\eqref{eq:olg_trunc}.\nAs explained in Sec.~\\ref{sec:choosing-prior-value}, we fix the transition scale $\\scF_*^{(0)}$ of $\\OSNsc$\naccording to its performance in Gaussian noise.\nSpecifically, we perform injections on simulated Gaussian noise and analyze them as\ndescribed below for several values of $p_{\\mathrm{FA}*}^{(0)}$.\nWe then choose the highest $p_{\\mathrm{FA}*}^{(0)}$ value such that the achieved performance is indistinguishable within\nstatistical uncertainties from that of the $\\F$-statistic.\nAs a result we select $\\OSNpFA{-6}$,\nwith the false-alarm level of $p_{\\mathrm{FA}*}^{(0)}=10^{-6}$ corresponding to transition scales of\n\\mbox{$\\Fth^{(0)}({N_{\\mathrm{seg}}}=1)\\approx16.7$} and \\mbox{$\\scF_*^{(0)}({N_{\\mathrm{seg}}}=84)\\approx237.0$}, respectively.\n\n\\subsubsection{Data selection}\n\\label{sec:data-selection}\nWe use four\\ narrow frequency bands of LIGO S5 data.\nThese bands are chosen depending on how severely they appear to be affected by lines:\n\\begin{enumerate}[(a)]\n\\item a ``quiet'' band where the distribution of the data is very close to Gaussian,\n\\item a band with a single line in $\\textrm{L1}$,\n\\item a band with a single line in $\\textrm{L1}$, narrower 
than in (b),\n\\item a band with multiple disturbances in $\\textrm{H1}$.\n\\end{enumerate}\nThe normalized SFT power $\\Psft^X(f)$ of Eq.~\\eqref{eq:Psft} for each of the four\\ bands is shown\nin Fig.~\\ref{fig:tests_realdata_normSFT_coh} for the coherent case, and in\nFig.~\\ref{fig:tests_realdata_normSFT_semicoh} for the semicoherent case.\nMore details about these sample frequency bands are given in Tables~\\ref{tbl:PsftthrCoh} and\n\\ref{tbl:PsftthrSC}, respectively.\n\nThe data sets are taken from the first year of the LIGO S5 science run.\nFor the semicoherent searches we use ${N_{\\mathrm{seg}}}=84$ data segments, spanning $T=25\\,\\mathrm{h}$ each, while the\ncoherent searches use only a single segment.\nThese segments were originally selected for the Einstein@Home~\\cite{EatH} search described in\nRef.~\\cite{aasi13:_eathS5}.\nSince CW searches on this data have not found any signals\n\\cite{abbott09:_earlyS5,abadie12:_powerflux,aasi13:_eathS5}, we consider it as a \\emph{pure noise} set\nfor the purpose of this study.\n\n\n\\begin{table*}[h!tbp]\n \\input{table_coherent_data_four.tex}\n \\caption{\n \\label{tbl:PsftthrCoh}\n Data used for tests of the coherent statistics in Sec.~\\ref{sec:tests_realdata}.\n \n All data is taken from the first year of the LIGO S5 run.\n \n CW signals are injected with frequencies $f\\in f_{\\mathrm{inj}}$, while $\\fsft$ denotes the SFT frequency\n range used for the search and the prior line estimation.\n \n Each data set starts at a GPS time of $t_{\\mathrm{start}}$ and spans 25 hours, containing $N_\\sft^X$\n SFTs of duration $T_\\sft=1800\\,\\sec$ from each detector.\n \n The multidetector noise PSD $\\Sn$ was obtained as the harmonic mean over SFTs and arithmetic\n mean over frequency bins.\n \n The column labeled $\\maxnoise{2\\F}$ shows the corresponding highest multidetector $2\\F$ value without\n injections.\n \n The noise PSD per detector is $\\SnX$.\n \n The column $\\Psftthr^X$ gives the threshold on the 
normalized SFT power $\\Psft^X$ at $p_{\\mathrm{FA},\\Psft}=10^{-9}$,\n which is used to estimate the prior line-odds $\\prior{\\OLG}^X$ as described in Sec.~\\ref{sec:estimate-line-probs}.\n }\n\\end{table*}\n\n\n\\begin{table*}[h!tbp]\n \\input{table_semicoherent_data_four.tex}\n \\caption{\n \\label{tbl:PsftthrSC}\n Data used for tests of the semicoherent statistics in Sec.~\\ref{sec:tests_realdata}.\n All data is taken from the first year of the LIGO S5 run, corresponding to the segment\n selection used in an Einstein@Home search (S5R3) \\cite{aasi13:_eathS5}, spanning 381.04~days\n starting from GPS epoch $t_{\\mathrm{start}}=818845553$, containing ${N_{\\mathrm{seg}}}=84$ segments, each 25 hours long.\n The column labeled $\\maxnoise{2\\avgSeg{\\F}}$ refers to the highest average multidetector\n $2\\avgSeg{\\F}$ value without injections (the average is over segments).\n The remaining labels are identical to those in Table~\\ref{tbl:PsftthrCoh}.\n }\n\\end{table*}\n\n\\subsubsection{Signal injection and detection criterion}\n\\label{sec:sign-inject-detect}\n\nThe search setup used here is different from that of Ref.~\\cite{aasi13:_eathS5}, and employs the\n\\texttt{HierarchSearchGCT} code instead of the Hough-transform~\\cite{krishnan04:_hough}.\nThis code is used in recent and ongoing wide-parameter-space searches such as\nRefs.~\\cite{aasi2013:_gc-search,EatH}.\n\nThe grid spacings in frequency and spin-down are $\\delta f \\approx 1.6 \\times 10^{-6}\\,\\mathrm{Hz}$ and $\\delta\\dot{f}\n\\approx 5.8 \\times 10^{-11}\\,\\mathrm{Hz}\/s$, respectively.\nThe angular sky-grid spacings are approximately $0.15\\,\\mathrm{rad}$ at $f = 54\\,\\mathrm{Hz}$, and scale with\nfrequency as $1\/f$.\n\nWe find that this template bank yields an average relative loss of SNR$^2$ (also known as mismatch) of $m\\sim\n0.6$ in the\nsemicoherent searches and of $m\\lesssim 0.05$ in the coherent searches.\n\n\nWe first perform searches on the data without any injections,\ncovering 
the whole sky in each of the four{} frequency bands of width $\\Delta f = 50\\,\\mathrm{mHz}$ (see\n$f_{\\mathrm{inj}}$ in Tables \\ref{tbl:PsftthrCoh} and \\ref{tbl:PsftthrSC}), and a fixed band $[-\\Delta\\dot{f},\\,0]$ in\nspin-down ${\\dot{\\Freq}}$, with $\\Delta \\dot{f} \\approx 2.6 \\times 10^{-9}\\,\\mathrm{Hz}\/\\sec$.\n\nFor each of the four statistics $\\{\\sc{\\F},\\sc{\\F}^{\\mathrm{+veto}},\\sc{O}_{{\\Signal\\Line}},\\OSNsc\\}$ we record the loudest noise candidate over\nthe whole template grid.\nA signal will be considered as detected with a given statistic if its highest value exceeds this noise value.\nThis definition of detection is equivalent to the common method of setting loudest-event upper\nlimits, employed for example in Ref.~\\cite{aasi13:_eathS5}.\n\nThe signals are injected using the \\texttt{Makefakedata\\_v4} code, with signal parameters randomly drawn\nfrom uniform distributions in the sky coordinates $\\{\\alpha,\\delta\\}$,\ninclination $\\cos\\iota$ and polarization angle $\\psi$, and at varying signal amplitude $h_0$.\nThe signal frequency and spin-down are drawn uniformly from the bands used in the noise search.\nFor each value of $h_0$ we perform 1000 injections.\nFor each injection we search a small parameter-space volume containing the signal.\nThis search region consists of a frequency band of $\\Delta f = 1\\,\\mathrm{mHz}$, a spin-down band of\n$\\Delta \\dot{f} \\approx\n2.3 \\times 10^{-10}\\,\\mathrm{Hz}\/\\sec$ and the 10 sky-grid points closest (in the metric sense \\cite{prix06:_searc})\nto the injection.\n\nNote that in (b) and (c) some of these injection searches do not use any data containing the narrow\ndisturbances. 
Hence, the statements in this section apply to \\emph{bands that contain disturbances}, and not\nonly to \\emph{sets of disturbed candidates}.\n\n\n\n\\begin{figure}[h!tbp]\n \\raggedright ($\\widetilde{\\textrm{a}}$)\\\\\\vspace*{-1cm}\n \\includegraphics[width=\\columnwidth]{S5R3_54dot20Hz_seg26_normSFT_H1L1} \\\\\n \\raggedright ($\\widetilde{\\textrm{b}}$)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth]{S5R3_66dot50Hz_seg64_normSFT_H1L1}\\\\\n \\raggedright ($\\widetilde{\\textrm{c}}$)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth,clip]{S5R3_69dot70Hz_seg9_normSFT_H1L1} \\\\\n \\raggedright ($\\widetilde{\\textrm{d}}$)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth]{S5R3_58dot50Hz_seg13_normSFT_H1L1}\n \\caption{\n \\label{fig:tests_realdata_normSFT_coh}\n Normalized average SFT power $\\Psft^X(f)$ of Eq.~\\eqref{eq:Psft} as a function of frequency $f$\n for LIGO H1 (solid) and L1 (dashed) data used in the coherent searches.\n The horizontal lines mark, for each detector, the threshold $\\Psftthr^X$ at $p_{\\mathrm{FA},\\Psft}=10^{-9}$ used in the\n line prior estimation.\n The panels show:\n ($\\widetilde{\\textrm{a}}$) a quiet band,\n ($\\widetilde{\\textrm{b}}$), ($\\widetilde{\\textrm{c}}$) two bands with lines,\n ($\\widetilde{\\textrm{d}}$) a band with multiple disturbances. 
See Table~\\ref{tbl:PsftthrCoh} for more details on these data\n sets.}\n\\end{figure}\n\\begin{figure}[h!tbp]\n \\raggedright ($\\widetilde{\\textrm{a}}$)\\\\\\vspace*{-1cm}\n \\includegraphics[width=\\columnwidth]{gct_injections_detprobs_54dot20Hz_seg26} \\\\\n \\raggedright ($\\widetilde{\\textrm{b}}$)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth,clip]{gct_injections_detprobs_66dot50Hz_seg64} \\\\\n \\raggedright ($\\widetilde{\\textrm{c}}$)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth]{gct_injections_detprobs_69dot70Hz_seg9}\n \\raggedright ($\\widetilde{\\textrm{d}}$)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth]{gct_injections_detprobs_58dot50Hz_seg13}\n \\caption{\n \\label{fig:tests_realdata_detprobs_coherent}\n Detection efficiency $\\pDet$ as a function of scaled signal amplitude $h_0\/\\sqrt{\\Sn}$ for four\n different coherent statistics:\n $\\F$,\n $\\F^{\\mathrm{+veto}}$,\n $\\OSL^{(0)}$,\n and $\\OSNpFA{-6}$.\n Statistical errors are similar to the size of the symbols.\n The dashed horizontal line marks the $95\\%$ detection probability level.\n \n The panels show:\n ($\\widetilde{\\textrm{a}}$) a quiet band,\n ($\\widetilde{\\textrm{b}}$), ($\\widetilde{\\textrm{c}}$) two bands with lines,\n ($\\widetilde{\\textrm{d}}$) a band with multiple disturbances. 
See Fig.~\\ref{fig:tests_realdata_normSFT_coh} and\n Table~\\ref{tbl:PsftthrCoh} for more details on these data sets.\n }\n\\end{figure}\n\n\n\\begin{figure}[h!tbp]\n \\raggedright($\\sc{\\textrm{a}}$)\\\\\\vspace*{-1cm}\n \\includegraphics[width=\\columnwidth]{S5R3_54dot20Hz_84seg_normSFT_H1L1} \\\\\n \\raggedright ($\\sc{\\textrm{b}}$)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth]{S5R3_66dot50Hz_84seg_normSFT_H1L1} \\\\\n \\raggedright (\\csc)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth,clip]{S5R3_69dot70Hz_84seg_normSFT_H1L1}\n \\raggedright ($\\sc{\\textrm{d}}$)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth]{S5R3_58dot50Hz_84seg_normSFT_H1L1}\n \\caption{\n \\label{fig:tests_realdata_normSFT_semicoh}\n Normalized average SFT power $\\Psft^X(f)$ of Eq.~\\eqref{eq:Psft} as a function of frequency $f$\n for LIGO H1 (solid) and L1 (dashed) data used in the semicoherent searches.\n The horizontal lines mark, for each detector, the threshold $\\Psftthr^X$ at $p_{\\mathrm{FA},\\Psft}=10^{-9}$ used in the\n line prior estimation.\n The panels show:\n ($\\sc{\\textrm{a}}$) a quiet band,\n ($\\sc{\\textrm{b}}$), (\\csc) two bands with lines,\n ($\\sc{\\textrm{d}}$) a band with multiple disturbances.\n See Table~\\ref{tbl:PsftthrSC} for more details on these data sets.\n }\n\\end{figure}\n\\begin{figure}[h!tbp]\n \\raggedright ($\\sc{\\textrm{a}}$)\\\\\\vspace*{-1cm}\n \\includegraphics[width=\\columnwidth,clip]{gct_injections_detprobs_54dot20Hz_84seg} \\\\\n \\raggedright ($\\sc{\\textrm{b}}$)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth,clip]{gct_injections_detprobs_66dot50Hz_84seg} \\\\\n \\raggedright (\\csc)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth,clip]{gct_injections_detprobs_69dot70Hz_84seg}\n \\raggedright ($\\sc{\\textrm{d}}$)\\\\\\vspace*{-0.5cm}\n \\includegraphics[width=\\columnwidth,clip]{gct_injections_detprobs_58dot50Hz_84seg}\n \\caption{\n 
\\label{fig:tests_realdata_detprobs_semicoherent}\n Detection efficiency $\\pDet$ as a function of scaled signal amplitude $h_0\/\\sqrt{\\Sn}$ for four\n different semicoherent statistics:\n $\\sc{\\F}$,\n $\\sc{\\F}^{\\mathrm{+veto}}$,\n $\\OSLsc^{(0)}$,\n and $\\OSNscpFA{-6}$.\n Statistical errors are similar to the size of the symbols.\n The dashed horizontal line marks the $95\\%$ detection probability level.\n \n The panels show:\n ($\\sc{\\textrm{a}}$) a quiet band,\n ($\\sc{\\textrm{b}}$), (\\csc) two bands with lines,\n ($\\sc{\\textrm{d}}$) a band with multiple disturbances. See Fig.~\\ref{fig:tests_realdata_normSFT_semicoh} and\n Table~\\ref{tbl:PsftthrSC} for more details on these data sets.\n }\n\\end{figure}\n\n\n\\subsubsection{Results for coherent statistics}\n\\label{sec:results-using-coher}\n\nFigure~\\ref{fig:tests_realdata_detprobs_coherent} shows the detection efficiency $\\pDet$ as a function of the\nscaled signal amplitude $h_0\/\\sqrt{\\Sn}$, for the single-segment coherent statistics.\n\nIn the quiet band, shown in Fig.~\\ref{fig:tests_realdata_detprobs_coherent}~($\\widetilde{\\textrm{a}}$),\nwe find that the line-veto statistic $\\OSL^{(0)}$ has less detection power than the\n$\\F$-statistic, as would be expected since it does not match the noise population.\nThe conventional $\\Fveto$-statistic{} is safer than $\\OSL^{(0)}$ and performs just as well as the pure\n$\\F$-statistic.\nThe line-robust statistic $\\OSN$ performs equally well as $\\F$ and $\\F^{\\mathrm{+veto}}$ on this line-free data set.\n\nIn the disturbed bands shown in\nFig.~\\ref{fig:tests_realdata_detprobs_coherent}~($\\widetilde{\\textrm{b}}$)-($\\widetilde{\\textrm{d}}$),\nall statistics lose detection power to varying degrees.\nWe find that the $\\Fveto$-statistic{} is often able to recover most of the losses of the pure $\\F$-statistic.\nThe line-veto statistic $\\OSL^{(0)}$ performs similarly in case ($\\widetilde{\\textrm{b}}$) and yields an improvement over 
$\\F^{\\mathrm{+veto}}$ in\ncases ($\\widetilde{\\textrm{c}}$) and ($\\widetilde{\\textrm{d}}$).\nHowever, these cases show $\\OSN$ to be more robust than either of the simpler vetoes.\n\nSummarizing these results, we see that the line-robust statistic $\\OSN$ consistently shows the best\nperformance over the different types of data: it is more robust to varying kinds of disturbances than $\\F^{\\mathrm{+veto}}$\nand safer in Gaussian noise than $\\OSL^{(0)}$.\n\n\\subsubsection{Results for semicoherent statistics}\n\\label{sec:results-using-semi}\n\nFigure~\\ref{fig:tests_realdata_detprobs_semicoherent} shows the detection efficiency $\\pDet$ as a function of\n$h_0\/\\sqrt{\\Sn}$ for the semicoherent statistics over the full data set.\nQualitatively, we find very similar results to the coherent case of\nFig.~\\ref{fig:tests_realdata_detprobs_coherent}.\n\nFor the quiet band, shown in Fig.~\\ref{fig:tests_realdata_detprobs_semicoherent}~($\\sc{\\textrm{a}}$),\nwe find that the simple line-veto $\\OSLsc^{(0)}$ loses a significant fraction of detection power compared to the\nsemicoherent $\\sc{\\F}$-statistic and to $\\sc{\\F}^{\\mathrm{+veto}}$, while the line-robust statistic $\\OSNsc$ does not show any\nsignificant degradation.\n\nIn the bands with noise disturbances\n(Fig.~\\ref{fig:tests_realdata_detprobs_coherent}~($\\sc{\\textrm{b}}$)-($\\sc{\\textrm{d}}$)),\nit is again the $\\sc{\\F}$-statistic which suffers the most.\nThese examples show the line-robust statistic $\\OSNsc$ consistently performing better than $\\sc{\\F}$ and as well\nas or better than either $\\OSLsc^{(0)}$ or $\\sc{\\F}^{\\mathrm{+veto}}$ in all the disturbed bands.\nThe largest improvement is found in the example shown in\nFig.~\\ref{fig:tests_realdata_detprobs_semicoherent}~(\\csc), where the signal amplitude at $95\\,\\%$ detection\nprobability is nearly two times smaller for $\\OSNsc$ compared to $\\sc{\\F}^{\\mathrm{+veto}}$.\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nWe 
have extended the standard derivation of the $\\F$-statistic by adding an explicit simple line hypothesis to\nthe standard Gaussian-noise hypothesis, namely a CW-signal-like disturbance in a single detector.\nMore work would be required to deal with coincident disturbances in multiple detectors.\n\nUsing the Bayesian framework we have derived two new detection statistics:\na ``line-veto'' statistic $O_{\\Signal\\Line}$, which complements the $\\F$-statistic and may be appropriate\nfor the follow-up of strong outliers, and a new line-robust detection statistic $\\OSN$, which\ncontains both $\\F$ and $O_{\\Signal\\Line}$ as limiting cases.\nWe have also generalized both statistics to semicoherent searches.\n\nThe line-robust $\\OSN$ requires choosing several prior parameters.\nWe have found in particular that the performance of $\\OSN$ is sensitive to $\\Fth^{(0)}$, which regulates the\ntransition scale between $\\F$ and $O_{\\Signal\\Line}$.\nThis parameter stems from a rather unphysical prior in the $\\F$-statistic derivation \\cite{prix09:_bstat}, and\nwe could therefore only provide an ad-hoc empirical prescription for choosing it.\nFurther work to improve on this prior could also result in increased robustness when the detectors are not\nequally sensitive.\n\nThe remaining parameters are more straightforward to interpret, as they encode the prior probability of line\nartifacts.\nFor these we have tested both an ignorance prior and a simple adaptive tuning method.\n\nWe have tested the detection power of the new statistics on synthetic candidates, where both signal and noise\nmatch our hypotheses, and on simulated signals injected into LIGO S5 data.\nIn both cases we have found that, with a reasonable choice of transition scale, $\\OSN$ is consistently the\nmost robust in the presence of various types of instrumental artifacts.\nIn particular, it consistently equals or surpasses the performance of the popular ad-hoc\n$\\F$-statistic consistency veto, reaching up 
to a factor of two improvement in detectable signal\nstrength at 95\\% confidence in example \\csc{} in Fig.~\\ref{fig:tests_realdata_detprobs_semicoherent}.\n\nCombined with its close-to-optimal performance in undisturbed data, this makes $\\OSN$ a promising statistic\nfor analyzing broadband, diverse data sets.\n\n\\section*{Acknowledgments}\nThis work has benefited from numerous discussions and comments from colleagues, in particular John T. Whelan,\nKarl Wette, Evan Goetz, Berit Behnke, Heinz-Bernd Eggenstein and Thomas Dent.\nWe acknowledge the LIGO Scientific Collaboration for providing the data from the LIGO S5 run.\nThe injection studies were carried out on the ATLAS cluster at AEI Hannover.\nP.L. and M.A.P. acknowledge support of the ``Sonderforschungsbereich'' Collaborative Research\nCentre (SFB\/TR7). D.K. was supported by the IMPRS on Gravitational Wave Astronomy.\nThis paper has been assigned LIGO document number LIGO-P1300167{} and AEI-preprint number AEI-2013-260{}.\n\n\\section{I. Introduction} \n\\label{sec1}\nThe fundamental work of\nEinstein-Podolsky-Rosen (EPR) \\cite{epr} on the distant entanglement of \na pair of non-interacting distinguishable \nparticles, and its effect on measurements,\nnow lies at the foundations of long-distance quantum communication.\nThe entanglement concept, coined by Schr\\\"odinger \\cite{schrodinger}\nwith his gedanken experiment of a cat that is dead or alive,\nhas become a resource of modern quantum computation \\cite{chuang,karol}.\nThe impressive modern progress of quantum information, \ncomputation and communication\nis described in \\cite{deutsch}.
\n\nAn overview of various experimental\nrealizations of EPR pairs is given in \\cite{eprexprev}.\nVarious forms of propagating EPR pairs have been studied\nexperimentally, but in most cases the propagation\nwas rather simple, similar to propagation on a line,\nand always integrable.\nHere we consider theoretically the situation where \nthe two non-interacting but entangled particles of an EPR pair propagate\nin a regime of quantum chaos \\cite{haake}.\nIn the classical limit the dynamics of these particles is chaotic, \ncharacterized by an exponential local divergence\nof trajectories with a positive Kolmogorov-Sinai entropy $h$\n\\cite{arnold,sinai,chirikov1979,lichtenberg}. The exponential instability\nof chaotic dynamics leads to an exponential growth of round-off errors\nand to the breaking of the time reversibility of the classical evolution,\neven though it is described by reversible equations of motion.\nThus chaos resolves the famous Loschmidt-Boltzmann dispute\non time reversibility and the emergence of statistical\nlaws from reversible dynamical equations \\cite{boltzmann1,loschmidt,boltzmann2}\n(see also \\cite{mayer}). \nPrior to classical chaos theory, the problem of the time reversal of \nthe laws of nature was also discussed by such leading scientists as \nSchr\\\"odinger \\cite{schrodinger1931} \n(see English translation and overview in \\cite{schrodtrans}) and \nKolmogorov \\cite{kolmogorov}.\n\nHowever, in quantum mechanics chaotic mixing in phase space cannot go \ndown to exponentially small scales, being restricted by the quantum scale of the \nPlanck constant $\\hbar$. Thus in the regime of quantum chaos\nan exponential instability exists only during a logarithmically \nshort Ehrenfest time\nscale $\\tau_E \\sim |\\ln \\hbar|\/h$ \\cite{chi1981,dls1981,chi1988,ehrenfestime}\n(here $\\hbar$ is a dimensionless effective Planck constant \nrelated to typical quantum numbers).
\nDue to the absence of exponential instability at\ntimes beyond $\\tau_E$, the quantum evolution remains reversible \nin the presence of quantum errors, in drastic contrast to\nthe classical dynamics, as was demonstrated\nin \\cite{dls1983} for the quantum Chirikov standard map,\nalso known as the kicked rotator \\cite{chi1981,chi1988,stmap}.\nThis system has been experimentally realized with cold atoms\nin kicked optical lattices, and in particular \nthe quantum dynamical localization of\nchaotic diffusion has been observed in these experiments\n\\cite{raizen,garreau}. This dynamical localization of chaotic diffusion\nappears due to quantum interference and is analogous \nto the Anderson localization \\cite{anderson}\nof electron diffusion in disordered solids (see e.g. \\cite{fishman1,fishman2,dls1987}).\n\nIn \\cite{martin} it was shown that the time evolution of cold atoms in \nkicked optical lattices, described by the quantum Chirikov standard map,\ncan be reversed in time in the regime of quantum chaos.\nThis proposal was indeed experimentally realized by the Hoogerland \ngroup \\cite{hoogerland}.\nThus this system represents an efficient experimental platform which\nallows one to investigate nontrivial effects of quantum mechanics, localization,\nchaos and time reversal.\n\nIn this work we investigate the properties of chaotic EPR pairs\nevolving in this fundamental system of quantum chaos\nand show that a measurement of one of the entangled particles\nbreaks the exact time reversal of the other particle but \npreserves its approximate time reversibility.\nWe explain this unusual effect on the basis of \nthe Schmidt decomposition \\cite{schmidt}\n(see also the review \\cite{fedorov} and Refs.
therein)\nand the Feynman path integral formulation of quantum mechanics \\cite{feynman}.\n\nThis article is composed as follows: the model is described in Section II,\nthe results are presented in Section III,\nand the discussion and conclusions are given in Section IV;\nadditional Figures and data are given in the Appendix.\n\n\\section{II. Model description} \n\\label{sec2}\n\nThe classical dynamics of one particle is described by\nthe Chirikov standard map \\cite{chirikov1979}:\n\\begin{equation}\n\\label{stmap}\n\\bar{p} = p + k \\sin{x} \\; , \\;\\; \n\\bar{x} = x + T \\bar{p} \\; .\n\\end{equation}\nHere $x$ represents the position of an atom\non the infinite $x$-axis of the kicked optical lattice,\nor a cyclic variable $0 \\leq x < 2\\pi$\nin the case of the kicked rotator; $p$ is the momentum of the particle.\nThe bars denote the new values of the variables after one iteration of this \nsymplectic map. \nThe physical process described by this map corresponds to\na sharp change of momentum, generated \ne.g. by a kick of the optical lattice \\cite{raizen,garreau},\nfollowed by free particle propagation during a period $T$ between kicks.\nThe classical dynamics depends on a single chaos parameter\n$K=kT$, with a transition from integrability to unlimited chaotic\ndiffusion in momentum for $K > K_c =0.9715...$ \\cite{chirikov1979,lichtenberg}.\nThe system dynamics is reversible in time, e.g. by\ninverting all velocities in the middle of the free rotation between two kicks.\n\nInside a chaotic component the dynamics is characterized by \nan exponential divergence of trajectories with the positive\nKolmogorov-Sinai entropy $h$.
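As a minimal numerical sketch (our illustration, not code from the paper; the parameter values are arbitrary), the map (\\ref{stmap}) can be iterated for an ensemble of trajectories to observe the chaotic diffusion in momentum: for $K \gg K_c$ the spread grows as $\langle(\Delta p)^2\rangle \approx D t$ with $D$ of order $k^2/2$, up to known oscillating corrections.

```python
import numpy as np

# Illustrative parameters (our choice): K = k*T = 10, deep in the chaotic regime.
k, T = 10.0, 1.0
rng = np.random.default_rng(0)

M = 4000                          # ensemble of trajectories
x = rng.uniform(0.0, 2*np.pi, M)  # random initial phases
p = np.zeros(M)                   # all trajectories start at p = 0

steps = 100
for _ in range(steps):
    p = p + k*np.sin(x)           # kick:          p_bar = p + k sin(x)
    x = (x + T*p) % (2*np.pi)     # free rotation: x_bar = x + T p_bar

# Diffusive growth of the momentum spread; D is of order k^2/2 for K >> 1
D_est = np.var(p)/steps
```

The estimate fluctuates around the quasilinear value $k^2/2$ within a factor of order one, reflecting the oscillating corrections to the diffusion coefficient.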
For $K>4$ the measure of stability\nislands is small and we have $h \\approx \\ln(K\/2)$ \\cite{chirikov1979}.\nFor $K > K_c$ the dispersion of momentum\ngrows diffusively with time, $\\langle(\\Delta p)^2\\rangle = D t$, \nwith a diffusion coefficient\n$ D \\approx k^2\/2$ (see more details in \\cite{chirikov1979,dls1987}).\nHere and below the time $t$ is measured in the number of map iterations.\nThe map captures a variety of universal features\nof dynamical chaos and appears in the description\nof various physical systems \\cite{stmap}. \n\nThe quantum evolution of the state $\\ket{\\psi}$ over a period is given \nby a unitary operator\n $\\hat{U}$ \\cite{chi1981,chi1988}:\n\\begin{eqnarray} \n\\label{qmap}\n\\ket{\\bar{\\psi}} = \\hat{U} \\ket{\\psi} = \ne^{-iT\\hat{p}^2\/2} e^{-ik\\cos{\\hat{x}}} \\ket{\\psi} \\; .\n\\end{eqnarray} \nHere the momentum $p$ is measured in recoil units of the optical lattice with \n$\\hat{p}=-i \\partial \/ \\partial x $. Thus $T=\\hbar$ plays the role of\nan effective dimensionless Planck constant, and the classical limit\ncorresponds to $T=\\hbar \\rightarrow 0$, $k \\rightarrow \\infty$,\n$K=kT = {\\rm const}$. Due to the periodicity of the optical lattice potential\nthe momentum operator $\\hat{p}=-i \\partial \/ \\partial x $ has eigenvalues\n$p=n+\\beta$, where $n$ is an integer and\n$\\beta$ is a quasimomentum conserved by the kick potential \n($0 \\leq \\beta < 1$). The value $\\beta=0$\ncorresponds to the case of the kicked rotator\nwith a wave function (in position representation) \n$\\psi(x)=\\langle x\\ket{\\psi}$ being \nperiodic on a circle, $\\psi(x+2\\pi)=\\psi(x)$.\nIn this case the free rotation corresponds (in momentum representation) \nto the phase shift\n${\\bar {\\psi}}_{n,0} = \\exp(-iTn^2\/2) \\psi_{n,0}$, with \n$\\psi_{n,\\beta}=\\langle p\\ket{\\psi}$ being the wave function \n(in momentum representation) at $p=n+\\beta$.
\nIrrational values of $\\beta$ appear for\nparticle propagation on the infinite $x$-axis;\nhere $\\beta$ is conserved,\nand a free propagation of the momentum wave function $\\psi_{n,\\beta}$\ngives the phase shift ${\\bar {\\psi}}_{n,\\beta} = \n\\exp(-iT(n+\\beta)^2\/2)\\,\\psi_{n,\\beta}$.\nThe effects of quantum interference lead to dynamical localization\nof chaotic diffusion on a time scale $t_D \\approx D\/\\hbar^2 \\gg \\tau_E$\nand an exponential localization of quasienergy eigenstates\nwith a localization length $\\ell = D\/(2 \\hbar^2) \\approx k^2\/4$ \\cite{dls1987,chi1988}.\n\n\nIn \\cite{martin} it was pointed out that the time reversal of the quantum evolution\nafter $t_r$ map iterations\ncan be realized by using a period between kicks\nof $T=4\\pi+\\epsilon$ for $t \\leq t_r$\nand $T'=4\\pi - \\epsilon$ for $t_r < t \\leq 2 t_r$.\nThe time reversal is done at the middle of the free propagation \nafter $t_r$ kicks (it is convenient to use a symmetrized scheme with a\nhalf-period of free propagation, then the kick, and then again \na half-period of free propagation).\nThe inversion of the kick amplitude,\n$k \\cos x \\rightarrow - k \\cos x$, can be realized \nby a $\\pi$-translational shift of the optical lattice potential.\nSuch a time reversal is exact for $\\beta=0$ (the kicked rotator case)\nand also works approximately for small $\\beta$ values\nin the case of the kicked particle \\cite{martin}. \nThe time reversal for cold atoms in a kicked optical lattice\nwas experimentally demonstrated in \\cite{hoogerland}.\n\nHere we consider the time reversal of two\nnon-interacting distinguishable particles in an initial\nentangled state. We concentrate our analysis on the case where\nboth particles evolve in the regime of quantum chaos.\nThus we obtain the new case of chaotic EPR pairs.
Following (\\ref{qmap}),\nthe evolution of the two-particle state $\\ket{\\psi}$ \n(with wave function $\\psi(x_1,x_2)=\\langle x_1,x_2\\ket{\\psi}$) \nof such pairs is given by the quantum map\n\\begin{eqnarray} \n\\label{qmappair}\n\\ket{\\bar{\\psi}} = (\\hat{U}_1\\otimes \\hat{U}_2) \\ket{\\psi} \\; ,\n\\end{eqnarray} \nwhere $\\hat{U}_1$ and $ \\hat{U}_2$ are the one-period\nevolution operators of the first and second particle. \nIn the absence of interaction between the particles\nthe entropy of entanglement $S$ is preserved during this time\nevolution. It is convenient to use the Schmidt decomposition \n\\cite{schmidt,fedorov} \nfor an initial entangled state,\n\\begin{eqnarray} \n\\label{schmidt}\n\\ket{\\psi}=\\sum_{i=1}^m \\alpha_i \\ket{u_i}\\otimes\\ket{v_i}\n\\end{eqnarray} \nwhere $\\ket{u_i}$, $\\ket{v_i}$ are one-particle states satisfying \nthe orthogonality relations \n$\\langle u_i\\ket{u_j}=\\langle v_i\\ket{v_j}=\\delta_{ij}$. \nThe number $m$ of Schmidt components can be as large as $m=N$, where $N$ is the \ndimension of the one-particle Hilbert space. However, for ``less'' entangled \nstates $m$ may be smaller, and in this work we will consider the case \n$m=2$.\nThe entropy of entanglement is then \ngiven by (see e.g.
\\cite{chuang,fedorov}):\n \\begin{eqnarray} \n\\label{entropy}\nS = -Tr(\\rho_1 \\log_2 \\rho_1) = - \\sum_i |\\alpha_i|^2 \\log_2 |\\alpha_i|^2 \\; ,\n\\end{eqnarray} \nwhere $\\rho_1$ is the reduced density matrix of the first particle,\nobtained by tracing over the second particle.\nDuring the time evolution of the EPR pair given by (\\ref{qmappair})\nthe wave functions of each particle evolve independently,\nwith $\\ket{u_i(t)} = {\\hat{U}_1^t} \\ket{u_i(t=0)}$\nand $\\ket{v_i(t)} = {\\hat{U}_2^t} \\ket{v_i(t=0)}$.\nThus the coefficients $\\alpha_i$ of the Schmidt decomposition and the \nentropy of entanglement $S$ remain unchanged.\n\nHowever, since the particles are entangled,\na measurement of the second particle after the time $t_r$ affects the \nwave function of the first particle, and thus the time reversal evolution of this \nparticle is modified so that the exact time reversibility\nis broken by the measurement. Nevertheless, we will see that an approximate\ntime reversal of the first particle still takes place.\nWe describe this effect in detail in the next section.\n\n\\section{III. Time evolution of chaotic EPR pairs} \n\\label{sec3}\n\nThe numerical simulations of the quantum map (\\ref{qmap}),~(\\ref{qmappair})\nare done in the usual way \\cite{chi1981,chi1988} \nby using the fact that the free propagation and the kick \nare diagonal in the momentum and coordinate representations respectively.\nConcerning the eigenphases $T n^2\/2$ of the free propagation operator \nwe mention an important technical detail: we compute these phases \nfor $n=-N\/2,\\,\\ldots,\\,N\/2-1$ \n(with $N$ being the dimension of the one-particle Hilbert space) \nand the values for $n<0$ are stored at the positions $N+n$ while the \nvalues for $n\\ge 0$ are stored at positions $n$.
In this way, \nif the initial states are localized close to small values of $n\\approx 0$ \n(or $n\\approx N$, which is topologically close to $n\\approx 0$ due \nto the periodic boundary conditions) and \nif during the time evolution the states do not touch the borders at \n$n\\approx \\pm N\/2$, the results are independent of the exact choice of $N$, \nprovided $N$ is sufficiently large. In other words, the momentum phases \nexhibit a smooth transition between $n\\approx 0$ and $n\\approx N$ according \nto the quadratic formula, while at the ``system border'' $n\\approx N\/2$ \nthis transition is not smooth. Otherwise, if the phases were naively \ncomputed for $n=0,\\,\\ldots,N-1$ according to the quadratic formula, the \nresults would depend in a sensitive way on $N$ even if the states remain \nlocalized close to $n\\approx 0$, since the eigenphases for $n\\approx N$ \nwould be very different.\n\nThe transitions from one representation (momentum or position) to the other and\nback are done with the Fast Fourier Transform (FFT).\nFurthermore, we choose the quantum map to be directly symmetric in time, and \ntherefore we present it as a half period of free propagation \n(using the operator $\\hat U_{\\rm half,free}=e^{-iT\\hat{p}^2\/4}$)\nfollowed by the kick (using $\\hat U_{\\rm kick}=e^{-ik\\cos{\\hat{x}}}$) \nand then again a half period of free propagation (using \n$\\hat U_{\\rm half,free}$).\nMoreover, in order to have an exact mathematical equivalence \nbetween the two cases $T=4\\pi+\\epsilon$ and $T=\\epsilon$ \n(at $\\beta=0$), in the first case we also apply \nto the initial states (given below for the different cases\nwe consider) \nan initial half period of free propagation \nwith $T=4\\pi$ (which provides an additional phase factor $(-1)^{n_1+n_2}$ \nin momentum representation). We have numerically verified that this \nequivalence is indeed valid.
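This scheme can be sketched as follows (our illustration, not the authors' code; all names and parameter choices are ours). We use the equivalent value $T=\epsilon$ at $\beta=0$, the FFT-style storage of momenta described above, and the symmetrized half-free/kick/half-free step; forward evolution followed by the reversed evolution ($T\rightarrow -\epsilon$, $k\rightarrow -k$) returns the initial one-particle state up to round-off errors.

```python
import numpy as np

N = 1024
eps, k = 5/8, 8.0                  # hbar_eff = eps, K_eff = k*eps = 5
x = 2*np.pi*np.arange(N)/N
n = np.fft.fftfreq(N, d=1.0/N)     # momenta 0..N/2-1 then -N/2..-1,
                                   # i.e. n < 0 stored at positions N + n

def step(psi, T, kick_amp):
    """Symmetrized step: half free propagation, kick, half free propagation."""
    half = np.exp(-1j*T*n**2/4)                 # half period: exp(-i T p^2/4)
    psi = np.fft.ifft(half*np.fft.fft(psi))     # applied in momentum space
    psi = np.exp(-1j*kick_amp*np.cos(x))*psi    # kick in position space
    return np.fft.ifft(half*np.fft.fft(psi))

# Gaussian wave packet, then t_r steps forward and t_r reversed steps
psi0 = np.exp(-(x - np.pi)**2/(4*0.2**2) + 2j*x)
psi0 /= np.linalg.norm(psi0)

psi, tr = psi0.copy(), 40
for _ in range(tr):
    psi = step(psi, eps, k)        # forward (equivalent to T = 4*pi + eps)
fid_mid = abs(np.vdot(psi0, psi))**2
for _ in range(tr):
    psi = step(psi, -eps, -k)      # reversed: T = 4*pi - eps and k -> -k
fid = abs(np.vdot(psi0, psi))**2   # ~1: exact reversal at beta = 0
```

The reversed step is exactly the Hermitian conjugate of the forward step, which is why the return is exact at $\beta=0$; in the middle of the evolution the overlap with the initial packet is small due to chaotic spreading.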
\n\nWe consider in detail three specific cases:\nA) the kicked rotator case with a moderate dimensionless effective Planck constant\n$\\hbar_{\\rm eff}=\\epsilon = T -4\\pi < 1$ and a wavefunction periodic \non the $2\\pi$-circle (i.e. integer values of $p_i=n_i$ with $\\beta_i=0$ \nand $i=1,2$ for both particles); \nB) the same case but taken in the deep semiclassical regime with \n$\\hbar_{\\rm eff} \\ll 1$; C) the case of kicked particle propagation\non an infinite (or quasi-infinite) line at moderate $\\hbar_{\\rm eff}$,\nwhich corresponds to the case of cold atoms in a kicked\noptical lattice \\cite{raizen,garreau,hoogerland} composed of $L$ periods \nsuch that $x\\in[0,\\,2\\pi L[$. \nThe total computational basis size for one particle,\nused in the numerical simulations,\nwas varied from $N=1024$ up to $N=2^{22}$,\ndepending on the choice of A), B), C), ensuring \nthat the basis size does not affect the obtained results.\nFor two particles the size of the Hilbert space is\n$N_H=N^2$. For moderate values of $N$ (e.g. up to $N=2^{12}$ \nin cases A and B)\nwe used the whole basis with $N_H$ states,\nusing two-dimensional (2D) FFT\ntransitions between momentum and coordinate \nrepresentations in (\\ref{qmappair}).\nFor larger $N$ values we used the fact that \nthe coefficients $\\alpha_i$ of the Schmidt decomposition (\\ref{schmidt})\nremain unchanged during the time evolution,\nso that we propagate each particle independently,\nuse the Schmidt entangled EPR wavefunction\nfor the measurement of the second particle\nat the time moment $t_r$, \nand propagate backward only the first particle\nafter the measurement.
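The equivalence of the two propagation methods, and the way a projective measurement of the second particle acts on the Schmidt form, can be sketched as follows (an illustrative small-$N$ check with random Schmidt vectors; all names and parameters are ours, not from the paper):

```python
import numpy as np

N, tr = 64, 10
eps, k = 5/8, 8.0
x = 2*np.pi*np.arange(N)/N
n = np.fft.fftfreq(N, d=1.0/N)

def step_axis(Psi, axis, T, kick_amp):
    """One symmetrized kicked-rotator step applied along one axis of Psi."""
    half = np.exp(-1j*T*n**2/4)
    kick = np.exp(-1j*kick_amp*np.cos(x))
    if Psi.ndim == 2 and axis == 0:
        half, kick = half[:, None], kick[:, None]
    Psi = np.fft.ifft(half*np.fft.fft(Psi, axis=axis), axis=axis)
    Psi = kick*Psi
    return np.fft.ifft(half*np.fft.fft(Psi, axis=axis), axis=axis)

rng = np.random.default_rng(1)
def rnd():
    v = rng.normal(size=N) + 1j*rng.normal(size=N)
    return v/np.linalg.norm(v)

alpha = [0.8973, 0.4414]               # illustrative Schmidt coefficients
u, v = [rnd(), rnd()], [rnd(), rnd()]  # particle-1 / particle-2 components

# method 1: propagate the full two-particle wavefunction with U1 (x) U2
Psi = sum(a*np.outer(ui, vi) for a, ui, vi in zip(alpha, u, v))
for _ in range(tr):
    Psi = step_axis(Psi, 0, eps, k)
    Psi = step_axis(Psi, 1, eps, k)

# method 2: propagate each Schmidt component independently
def prop1(w):
    for _ in range(tr):
        w = step_axis(w, 0, eps, k)
    return w
u_t, v_t = [prop1(ui) for ui in u], [prop1(vi) for vi in v]
Psi_s = sum(a*np.outer(ut, vt) for a, ut, vt in zip(alpha, u_t, v_t))
same = np.allclose(Psi, Psi_s)         # the two methods agree

# measurement of particle 2 at momentum index m2: the unnormalized collapsed
# state of particle 1, from the full state and from the Schmidt form
m2 = 5
phi_full = np.fft.fft(Psi, axis=1)[:, m2]
phi_schm = sum(a*np.fft.fft(vt)[m2]*ut for a, vt, ut in zip(alpha, v_t, u_t))
same_meas = np.allclose(phi_full, phi_schm)
```

By linearity of the evolution and of the projection, both comparisons agree to machine precision, which is the basis of the large-$N$ method described above.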
\nWe checked, for $N \\leq 2^{12}$, that these two numerical \nmethods of time evolution simulation\ngive the same results up to the computer numerical accuracy.\nSome additional details about the numerical simulations \nand Figures are given in the Appendix.\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{fig1}\n\\end{center}\n\\caption{\\label{fig1} \nTime dependence of the average energy \nof the first particle $E_{1}(t) = \\langle n_{1}^2\/2\\rangle$\nfor the initial state (\\ref{eqinitialstate}) \nwith time evolution given by the quantum Chirikov standard map (\\ref{qmap})-(\\ref{qmappair}).\nThe measurement of the second particle and the time reversal are performed\nafter $t_r =40$ quantum map (\\ref{qmappair}) iterations. \nThe black curves in both panels show the forward time evolution \nfor $0\\le t\\le t_r$; the blue curves show the \nbackward time evolution $t_r \\le t \\leq 2t_r=80$ \nwith the exact time reversal without measurement (using $T=4\\pi-\\epsilon$). \nIn panel (a) the curves of other colors show the backward time evolution\nafter measurement of the second particle at the momentum states $n_2=8$\n(cyan), 12 (green), 20 (magenta), 200 (red).\nIn panel (b) the red curve shows the backward time evolution \nafter detection of the second particle at $n_2$, \naveraged over all possible measurement results $n_2$ \n(black and red curves are shifted up by 50 units for better visibility;\nred and blue curves coincide within numerical \nround-off errors ($\\sim 10^{-13}$)). The system parameters are:\n $N=1024$, $N_H=N^2$ and $\\hbar_{\\rm eff}=\\epsilon=5\/8$, $K_{\\rm eff}=5$, \n$k=K_{\\rm eff}\/\\hbar_{\\rm eff}=8$,\n$T=4\\pi \\pm \\epsilon$.
We have verified that a further increase of $N$ \nto values of $2048$ and $4096$ provides identical results up to numerical \nround-off errors (provided the free propagation eigenphases are properly \ncomputed as explained in the text at the beginning of this section).\n}\n\\end{figure}\n\n\\subsection{IIIA. EPR pairs in the kicked rotator at moderate $\\hbar_{\\rm eff}$ values}\n\\label{subsec3a}\n\nHere we present the results for a case with a moderate effective value \nof the Planck constant.\nAs described above, we use the parameter value \n$T = 4\\pi +\\epsilon$ for the forward\ntime propagation with $t_r$ quantum map iterations and $T=4\\pi - \\epsilon$\nfor the next $t_r$ iterations corresponding to the time reversal. We recall that,\nsince the phase shift $(4\\pi) n^2\/2$ is a multiple of $2\\pi$ for all integer\nvalues of the momentum $p=n$, the evolution is determined by an effective\nPlanck constant $\\hbar_{\\rm eff}=\\epsilon$. Thus the effective \nclassical chaos parameter \nis $K_{\\rm eff} = k \\epsilon = k \\hbar_{\\rm eff}$. The measurement of the \nsecond particle is done after $t_r$ iterations.\nWe consider the case of a projective measurement of the second particle\nin the momentum basis $n_2$, performing the projection \nonto a certain value of $n_2$ after $t_r$ iterations. After that the \nevolution of the first particle continues with $T=4\\pi - \\epsilon$\nand $k \\rightarrow -k$\nfor the next $t_r$ iterations. Without measurement the EPR wavefunction \nof the two particles returns\nexactly to its initial state due to the exact time reversibility\nof the quantum evolution.
Also, in the absence of entanglement between the particles,\nthe measurement of the second particle does not affect the reversibility\nof the first particle, which then returns exactly to its initial state.\nHowever, in the presence of entanglement the measurement\nof the second particle affects the time reversibility of the first \nparticle in a nontrivial manner.\n\nTo illustrate the nontrivial effects of\nmeasurements on the time reversal \nof chaotic EPR pairs we use typical \nsystem parameters with $K_{\\rm eff}=k \\epsilon = k\\hbar_{\\rm eff} =5$\nand $k=8$ (thus $\\hbar_{\\rm eff} =5\/8$). \nSuch a value of $k=8$ is \nnot very high and is well accessible to present experimental facilities\n(see e.g. \\cite{raizen,garreau,hoogerland}).\n\nIn this first part, to characterize the quantum time evolution \nwe compute the one-particle probability of the first particle, \n$w(n_1,t) = \\sum_{n_2}|\\psi(n_1,n_2,t)|^2$\n(with the momentum wave function \n$\\psi(n_1,n_2,t)=\\langle n_1,n_2\\ket{\\psi(t)}$), \nand the one-particle energy of the first particle,\n$E_{1}(t) = \\langle n_{1}^2\/2\\rangle=\\sum_{n_1} (n_1^2\/2)\\,w(n_1,t)$.\n\nAs the initial state we take an entangled EPR pair \nwithout any symmetry and with \nmore or less arbitrary coefficients at two momentum values:\n\\begin{eqnarray}\n\\label{eqinitialstate}\n\\ket{\\psi(t=0)}&=&\\Bigl(\n\\ket{0}\\otimes\\ket{0}+\n0.7\\ket{0}\\otimes\\ket{1}+\\\\\n\\nonumber\n&&\\quad 0.3\\ket{1}\\otimes\\ket{0}-\n2\\ket{1}\\otimes\\ket{1}\\Bigr)\/\\sqrt{5.58} \\; ,\n\\end{eqnarray}\nwhere $\\ket{n_1}\\otimes\\ket{n_2}$ represents the momentum basis states.\nThus initially both particles are distributed over\nthe momentum states $n_{1,2}$ being $0$ or $1$.\n\nThis state can be rewritten in the Schmidt decomposition \\cite{schmidt} as:\n\\begin{equation}\n\\label{eqschmidt}\n\\ket{\\psi(t=0)}=\\sum_{i=1,2} \\alpha_i
\\ket{u_i}\\otimes\\ket{v_i}\n\\end{equation}\nwith\n\\begin{eqnarray}\n\\label{eqschmidt2}\n\\nonumber\n\\alpha_1&=&0.8973\\quad,\\quad\\alpha_2=0.4414,\\\\\n\\nonumber\n\\ket{u_1}&=& 0.3440\\ket{0}-0.9390\\ket{1},\\\\\n\\nonumber\n\\ket{u_2}&=& 0.9390\\ket{0}+0.3440\\ket{1},\\\\\n\\nonumber\n\\ket{v_1}&=& 0.0294\\ket{0}+0.9996\\ket{1},\\\\\n\\ket{v_2}&=& 0.9996\\ket{0}-0.0294\\ket{1}.\n\\end{eqnarray}\nThe entropy of entanglement of this initial state is:\n\\begin{equation}\n\\label{eqentropy_log2}\nS=-\\sum_i \\alpha_i^2\\,\\log_2(\\alpha_i^2)=0.7114 \\; .\n\\end{equation}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{fig2}\n\\end{center}\n\\caption{\\label{fig2} Panel (a) shows the time evolution \nof the probability of the first particle $w(n_1,t)$ (color density plot) \nfor the parameters of Fig.~\\ref{fig1} and $-128\\le n_1<128$ \n($y$-axis), $0\\le t\\le 80$ ($x$-axis), $t_r=40$. \nThe measurement and time reversal are done after $t_r$ map (\\ref{qmappair}) iterations,\nwith the second particle detected at the momentum value $n_2=12$.\nThe thin white vertical \nline marks the time $t_r=40$ of the measurement \nand the beginning of the backward iterations. \nThe numbers of the color bar correspond to \n$[w(n_1,t)\/w_{\\rm max}(t)]^{1\/4}$\nwith $w_{\\rm max}(t)=\\max_{n_1} w(n_1,t)$ being the density maximum \nat a given value of $t$. \nPanels (b) and (c) provide a zoom \nfor $-32\\le n_1<32$ (both panels) \nand $0\\le t<10$ (b) or $70<t\\le 80$ (c).\n}\n\\end{figure}\n\nThe backward evolution for $t>t_r$ is different from the forward one.\nHowever, at the return moment $t=2t_r$ we still\nhave two coherent wave packets for the first particle,\nwhich have the same shape as in the initial state\nbut with different coefficients. \n\nWe also show the initial $t=0$\nand final $t=2t_r=40$ probability distributions of the first particle\nin Fig.~\\ref{fig8} for different measurement results\nof the second particle, detected at $n_2=8, 12, 20, 200$.
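The numbers in (\\ref{eqschmidt2}) and (\\ref{eqentropy_log2}) follow from a singular value decomposition of the $2\times 2$ coefficient matrix of the state (\\ref{eqinitialstate}); a quick numerical check (our sketch, not the authors' code):

```python
import numpy as np

# Coefficient matrix C[n1, n2] of the initial two-particle state
C = np.array([[1.0, 0.7],
              [0.3, -2.0]])/np.sqrt(5.58)

alpha = np.linalg.svd(C, compute_uv=False)  # Schmidt coefficients
S = -np.sum(alpha**2*np.log2(alpha**2))     # entropy of entanglement

# alpha ~ (0.8973, 0.4414) and S ~ 0.7114, as quoted above
```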
\nThe weights of each coherent state at $t=2t_r$ \nare determined from the Schmidt components of a theoretical state \nconstructed in the same way as in the case of Fig.~\\ref{fig4}.\nThe density of the theoretical state coincides with the final density \nat $t=2t_r$ up to usual numerical round-off errors (only the maximum of \neach theoretical state is shown in Fig.~\\ref{fig8} by a blue star).\n\nSimilar to the case of Fig.~\\ref{fig7}\nwith measured $n_2=8$ we show the time evolution $w(n_1,t)$\nfor other measured values $n_2 = 12, 20$ in Appendix Fig.~\\ref{figA3}.\nFor comparison we show in Appendix Fig.~\\ref{figA4}\nalso the case of exact time reversal without measurements\n(i.e. with average over all measured $n_2$ values):\nhere the distribution $w(n_1,t)$ is exactly symmetric with respect to\ntime reversal at the moment $t=t_r=20$.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{fig8}\n\\end{center}\n\\caption{\\label{fig8} Densities $w(n_1,t)$ of the first particle\nfor the case of Fig.~\\ref{fig5} are\nshown at initial $t=0$ (black curves)\nand final $t=2t_r=40$ (red curves) time moments;\n the second particle is measured \nat $n_2=8$ (a), $n_2=12$ (b), $n_2=20$ (c) and $n_2=200$ (d). \nThe blue stars provide for each case the maximum values of the \ndensity of the theoretical state obtained from the Schmidt decomposition \n(see text) and predicting the final density at $t=40$.\nThe theoretical density curves are identical to the red curves within \nnumerical precision $\\sim 10^{-13}$ but only their values at the two maximum \npositions are shown for a better visibility.\n}\n\\end{figure}\n\n\n\\subsection{IIIC. EPR pairs of cold atoms in a kicked optical lattice}\n\\label{subsec3c}\n\nAbove we studied the properties of measurements and time reversal of EPR pairs\nin the regime of kicked rotator when the evolution takes place on a ring \nof size $2\\pi$. 
However, the experiments with cold atoms \nin a kicked optical lattice \\cite{raizen,garreau,hoogerland}\ncorrespond to the situation when an EPR pair propagates on the \ninfinite $x$-axis containing many periods of size $2\\pi$. Due to \nthe periodicity of the potential, the wavefunction of each\nparticle is characterized by a quasimomentum with irrational values\n$\\beta$ (with $p=n+\\beta$), which reduces the probability of the \nsingle atom time reversal, as discussed in detail in \\cite{martin}.\nThus, to model this experimental setup, we consider the EPR-pair propagation on an \n$x$ interval of size $2\\pi L$ containing $L$ periods $2\\pi$ of the \noptical lattice. We use periodic boundary conditions in $x$,\nbut during the time evolution the wave packet \ndoes not reach the boundaries, so that the \nboundary conditions are not important.\nIn this case the free propagation of a particle between kicks\nis given by the same unitary operator\nas in (\\ref{qmap}), but now in the numerical simulations\nthe momentum takes the discrete values\n$p=m\/L$ with integers $m=-N\/2,\\,\\ldots,\\,N\/2-1$\nand $N=L N_r$, where $L$ gives the number of different\nquasimomentum values $\\beta$ and $N_r$ gives the number of integer\nvalues of the momentum $p$.\nThe integer $p$ values correspond to the rotator case.\nThe kick operator remains the same as in (\\ref{qmap}), but the position \noperator now takes the discrete values $x=2\\pi m L\/N$ ($m$ having the \nsame integer values as above) corresponding to the interval $[-\\pi L,\\pi L[$. \nAs in the previous Sections, the numerical simulations are done \nwith the propagation of the full wavefunction using its Schmidt components.\nThis allows one to reach the very high $N$ and $L$ values required to\neliminate boundary effects.
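This momentum grid can be sketched as follows (illustrative small values of $L$ and $N_r$, chosen by us; the decomposition $p=n+\beta$ is recovered directly from the grid):

```python
import numpy as np

L, Nr = 2**4, 2**6               # illustrative sizes (the paper uses up to 2^14, 2^8)
N = L*Nr
m = np.fft.fftfreq(N, d=1.0/N)   # integers -N/2 .. N/2-1 in FFT order
p = m/L                          # momentum grid p = m/L of the kicked particle
beta = p - np.floor(p)           # quasimomentum beta in [0,1), conserved by the kick
n_int = np.floor(p)              # integer part n of p = n + beta

n_beta = len(np.unique(np.round(beta*L).astype(int)))  # number of distinct beta
```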
\nAs it was shown in previous Sections this\ncomputational method gives the same results\nas the full wavefunction propagation with 2D FFT\n(up to numerical precision).\nWe use as maximal values $N=2^{22}$ with $L=2^{14}$, $N_r=2^8$.\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{fig9}\n\\end{center}\n\\caption{\\label{fig9} Time dependence of rescaled IPR\n$\\xi_t\/\\xi_0$ (a) and peak probability (b) of first particle\nfor a chaotic EPR pair in a kicked optical lattice:\nthe time forward evolution is marked by black points;\nfull backward evolution is marked by blue stars\n(time reversal is done at $t=t_r=10$ without measurement),\nbackward evolution with measurement of the moment $p_2$ of the \nsecond particle done at $t_r$ is shown by different \ncolor symbols for different measured $p_2$ values\nwith $p_2=8$ (cyan full squares), $p_2=12$ (green crosses),\n$p_2=20$ (magenta open squares), $p_2=200$ (red pluses).\nSystem parameters are $\\hbar_{\\rm eff}=\\epsilon=5\/8$, \n$K_{\\rm eff}=5$, $k=K_{\\rm eff}\/\\hbar_{\\rm eff}=8$,\n$T=4\\pi \\pm \\epsilon$ (as in Fig.~\\ref{fig1})\nand $N=L N_r =2^{22}$, $L=2^{14}$, $N_r=2^8$.\nThe initial state of the EPR pair is described in the text.\n}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{fig10}\n\\end{center}\n\\caption{\\label{fig10} (a) Husimi function of the first Schmidt component \n$\\ket{u_1(t)}$ of the first particle \nat the return time $t=2t_r=20$ without measurement at time reversal $t=t_r=10$;\n(b) Husimi function of the first particle at return time $t=2t_r=20$\nwith measurement of the second particle at $t=t_r=10$ with detected \nmomentum $p_2=12$. Parameters and initial state are as in Fig.~\\ref{fig9}.\nThe color bar is the same as in Figs.~\\ref{fig2} and \\ref{fig3} where the \nnumbers correspond to $[H(x,p)\/H_{\\rm max}]^{1\/8}$. 
Furthermore, the \ncontrast of the image files has been artificially enhanced to increase the \nvisibility of the regions with non-vanishing values of the Husimi function. \nThe $x$-axis shows the coordinate interval $-L\/2 \\leq x_1\/(2\\pi) < L\/2$ \nfor $L=2^{14}$;\nthe $y$-axis shows the momentum interval $-N_r\/2 \\leq p_1 < N_r\/2$ \nwith $N_r=2^8$. \n}\n\\end{figure}\n\nBelow we present results for the time reversal \nof a chaotic EPR pair in a kicked optical lattice.\nThe momenta and energies are measured in recoil units, \nas described in \\cite{martin}, which corresponds\nto the dimensionless units of $p$ used above.\nAs in the last subsection, the initial state is an entangled state \ngiven as the Schmidt decomposition of two pairs of coherent \nGaussian states with equal coefficients $\\alpha_1=\\alpha_2=1\/\\sqrt{2}$. \nHowever, now the parameter $G$ in (\\ref{eqcoherent}) is given \nby $G=1\/(4\\Delta p)^2$ with $\\Delta p=0.01$, and for notational reasons \nthe parameter $h_{\\rm eff}$ in (\\ref{eqcoherent}) is replaced with unity \n(not to be confused with $h_{\\rm eff}=5\/8$ mentioned below). \nThe corresponding width of the Gaussian packet in position representation \nis $\\Delta x=1\/(2\\Delta p)=50 \\approx 8\\times 2\\pi$, corresponding roughly \nto $8$ periods of the optical lattice.\nThe center and phase parameters of (\\ref{eqcoherent}) \nof the two Schmidt components for the \nfirst particle are $p_0^{(1)}=1$, $p_0^{(2)}=2$, $x_0^{(1)}=\\pi$ \n(in the middle of the cell of index $m=0$) and $x_0^{(2)}=3\\pi$ \n(in the middle of the cell of index $m=1$). The values for the two \ncorresponding Schmidt components of the second particle are \n$p_0^{(1)}=-1$, $p_0^{(2)}=-2$, $x_0^{(1)}=\\pi$ and $x_0^{(2)}=3\\pi$, \ni.e.
negative $p_0^{(j)}$ values and the same $x_0^{(j)}$ values with \nrespect to the first particle.\n\nConcerning the Chirikov map, we use the same parameters as in \nsubsection IIIA, i.e.: \n$\\hbar_{\\rm eff}=\\epsilon = 5\/8$, \n$K_{\\rm eff}=5$, $k=K_{\\rm eff}\/\\hbar_{\\rm eff}=8$, $T=4\\pi \\pm \\epsilon$.\nThe time reversal is done after $t_r=10$ iterations, followed by a measurement \nof the second particle and the observation of the first\nparticle at the return moment $t=2t_r=20$.\n\nAs in \\cite{martin}, we characterize the quantum evolution of the\nfirst particle \nby the Inverse Participation Ratio (IPR), defined \nby $\\xi_t= [\\sum_{p_1} w(p_1,t)]^2\/\\sum_{p_1} w^2(p_1,t)=1\/\\sum_{p_1} w^2(p_1,t)$, \nwhere $w(p_1,t)=\\sum_{p_2} |\\langle p_1,p_2 |\\psi (t)\\rangle |^2$\nare the probabilities of the first particle in momentum space at time $t$, \nafter summing over the second particle momentum $p_2$ (the second \nidentity in the expression for $\\xi_t$ \nholds if the probabilities $w(p_1,t)$ are properly normalized).\nIn addition we also compute the time variation of the relative peak probability\n$W_{\\rm peak}(0)\/W_{\\rm peak}(t)$, where \n$W_{\\rm peak}(t)=\\sum_{j=1,\\,2} w(p_0^{(j)},t)$ \nis the sum of the probabilities at the two initial peak positions $p_0^{(j)}=j$\n(with $j=1,\\,2$) in momentum space.\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{fig11}\n\\end{center}\n\\caption{\\label{fig11} Zoom of the Husimi functions shown \nin the range $-3.5\/256 \\leq x_{1,2}\/(2\\pi L) \\leq 3.5\/256$ (corresponding \nto $448$ periods of the optical lattice),\n$-3.5 \\leq p_{1,2} \\leq 3.5$; the top three rows show the Husimi functions of\nthe Schmidt components $\\ket{u_1(t)}$ (a), $\\ket{u_2(t)}$ (b), $\\ket{v_1(t)}$ \n(c) at $t=0$ (1st row), $t=10$ (2nd row) and the return time $t=20$ for the \ncase with no measurement (3rd row).
The last row ($t=20$, meas) shows the Husimi functions of the first particle
at the return time $t=20$ with the measured momentum of the second
particle at $t=10$
being $p_2=8$ (a), $p_2=12$ (b), $p_2=20$ (c).
The color bar is the same as in Figs.~\ref{fig2} and \ref{fig3}, where the
numbers correspond to $[H(x,p)/H_{\rm max}]^{1/4}$.
The dashed white horizontal lines in the top row mark integer momentum values.
}
\end{figure}

The time dependence of the relative IPR value $\xi_t/\xi_0$ is shown in
Fig.~\ref{fig9}(a).
Up to the reversal time $t_r=10$ we have an approximately diffusive growth
of the IPR, $\xi_t/\xi_0 \propto \sqrt{t}$, corresponding to the energy
diffusion well seen in Fig.~\ref{fig1}. After the time reversal this growth
is stopped, but at the return time $t=2t_r=20$ there is no real return to the
initial IPR value at $t=0$. The reason is that the time reversal
is exact only for the quasimomentum value $\beta=0$ (integer $p$ values)
and only approximate for rather small $\beta$ close to zero or unity.
This point is discussed in detail in \cite{martin}.
In fact the inversion of the IPR is better for the case presented in
\cite{martin} (see Fig.~1 there),
since the kick amplitude $k$ is significantly smaller there ($k=4.5$ vs.
$k=8$ here).
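The IPR diagnostic used above is straightforward to compute numerically. The following is a minimal sketch (our own toy helper, not the production code behind the figures): it traces out the second-particle momentum to obtain $w(p_1)$ and returns $\xi = 1/\sum_{p_1} w^2(p_1)$.

```python
import numpy as np

def ipr_first_particle(psi):
    """IPR xi = 1 / sum_{p1} w(p1)^2 for a two-particle wavefunction
    psi[p1, p2] given in the momentum basis."""
    w = np.sum(np.abs(psi)**2, axis=1)   # first-particle probabilities w(p1)
    w = w / w.sum()                      # normalize to unity
    return 1.0 / np.sum(w**2)

# A flat distribution over N momentum values gives the maximal IPR xi = N,
# while a state localized at a single p1 gives xi = 1.
N = 8
flat = np.full((N, N), 1.0 / N, dtype=complex)
localized = np.zeros((N, N), dtype=complex)
localized[0, 0] = 1.0
xi_flat = ipr_first_particle(flat)       # = N
xi_loc = ipr_first_particle(localized)   # = 1
```

Thus $\xi_t$ directly measures the effective number of momentum states occupied by the first particle, which is why its growth tracks the chaotic energy diffusion.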
The new feature well seen in Fig.~\ref{fig9}(a) is that
the measurement of the momentum
of the second particle after $t=t_r=10$ map iterations significantly affects
the return behavior of the IPR.

To demonstrate that certain characteristics have an exact return to
the initial value (up to numerical precision) we show in Fig.~\ref{fig9}(b)
the time dependence of the probability ratio $W_{\rm peak}(0)/W_{\rm peak}(t)$.
Due to the conservation of quasimomentum $\beta$ the probability
$W_{\rm peak}(t)$ is influenced only by the components of the wavefunction
with $\beta=0$, which have an exact time reversal, and hence the final value
$W_{\rm peak}(t=2t_r)$ is identical to its initial value $W_{\rm peak}(0)$
(up to numerical precision).
However, the measurement of the second particle at $t=t_r=10$ affects
the time evolution of $W_{\rm peak}(0)/W_{\rm peak}(t)$ at intermediate times
$11\leq t\leq 18$, as is well seen in Fig.~\ref{fig9}(b).
Note that $W_{\rm peak}(t)$ is given by the sum of the probabilities at the
two initial peak positions of the first particle, located at integer
values of $p$; due to that the return of $W_{\rm peak}(t)$ is exact.
However, at the return moment $t=2t_r=20$ the relative
distribution of the return probability over the two initial peak positions
is strongly affected by the measurement of the second particle, as we
show below.

We illustrate the global spreading of the initial wavefunction
by showing the Husimi function in the $(x,p)$ plane
in Fig.~\ref{fig10}.
The top panel shows the Husimi function of
the first Schmidt component $v_1(p_1)$ at the return moment
$t=2t_r=20$ (time reversal is done at $t_r=10$ without
measurement of the second particle).
In the bottom panel we show the Husimi function of the first particle
at $t=2t_r=20$ for the case when a measurement
detected the second particle at $p_2=12$ at $t_r=10$.
This figure shows that the main part of the probability is not affected by
the time reversal and continues to spread in phase space.
Due to the conservation of quasimomentum
$\beta$ the Husimi function is composed
of narrow distributions (appearing as a set of parallel lines)
located at integer momentum values. This is a result
of quasimomentum conservation and of the narrow width
$\Delta p=0.01$ of the initial distribution in $\beta$ at $t=0$.

This line-type structure is better visible
in the zoom of Fig.~\ref{fig10} shown in Fig.~\ref{fig11}. Here we show
time snapshots of the Husimi function of the Schmidt components
$\ket{u_1(t)}$, $\ket{u_2(t)}$ of the first particle
and also $\ket{v_1(t)}$ of the second particle at $t=0, 10, 20$
(from left to right columns and top to bottom rows).
In the bottom row we show the Husimi function
of the first particle at the return time $t=20$
with the measured momentum of the second particle
being $p_2=8,12,20$ (left to right)
at the reversal time $t=t_r=10$.
Here we see the part of the probability which returns to the
initial distribution.

However, globally we see that the main fraction of the wave packet
is not affected by the time reversal.
Indeed, as was shown in \cite{martin},
only a relatively small fraction
of the wave packet returns to the initial distribution
(this was associated with the Loschmidt cooling).
The reason is that the described procedure
of time reversal is exact only for
the quasimomentum value $\beta=0$ and works only approximately
for other values $|\beta| \ll 1$ and $|\beta-1| \ll 1$.
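The exactness of the reversal at $\beta=0$ can be checked in a few lines. The sketch below (a toy single-rotator version with small assumed parameters, not the two-particle code of the figures) evolves a state on an integer-momentum grid forward with period $T_+=4\pi+\epsilon$ and backward with $T_-=4\pi-\epsilon$ and inverted kick sign; since $\exp(-i\,4\pi p^2/2)=1$ for integer $p$, the backward map exactly inverts the forward one.

```python
import numpy as np

N = 32                                   # number of momentum states
p = np.fft.fftfreq(N, d=1.0 / N)         # integer momentum grid (beta = 0)
x = 2 * np.pi * np.arange(N) / N         # conjugate position grid
eps, k, tr = 5 / 8, 2.0, 5               # assumed toy values of epsilon, kick, steps

def kick(psi_p, strength):
    # apply exp(-i * strength * cos x) in position space, return to momentum space
    return np.fft.fft(np.exp(-1j * strength * np.cos(x)) * np.fft.ifft(psi_p))

def rot(psi_p, T):
    # free rotation exp(-i * T * p^2 / 2) in momentum space
    return np.exp(-1j * T * p**2 / 2) * psi_p

psi0 = np.zeros(N, dtype=complex)
psi0[2] = 1.0                            # initial state at integer momentum p = 2

psi = psi0.copy()
for _ in range(tr):                      # forward: kick, then free rotation (T+)
    psi = rot(kick(psi, k), 4 * np.pi + eps)
for _ in range(tr):                      # backward: rotation (T-), then inverted kick
    psi = kick(rot(psi, 4 * np.pi - eps), -k)

fidelity = abs(np.vdot(psi0, psi))**2    # ~1: exact return for beta = 0
```

For a non-integer momentum offset $\beta \neq 0$ the $4\pi$ phases no longer cancel and the return fidelity degrades, which is the origin of the Loschmidt cooling discussed above.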
\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{fig12}\n\\end{center}\n\\caption{\\label{fig12} Probability distribution $w(p_1+\\beta)$ of the \nfirst particle over quasimomentum $\\beta$ for two \ninteger offsets $p_1=1$ or $p_1=2$; the initial Gaussian probability \n(of width $\\Delta p=0.01$) $t=0$ is shown by the black dashed curve\nrepresenting the first initial peak at $p_1=1$ (curve for the \nsecond initial peak at $p_1=2$ is identical). \nAll shown distributions are rescaled by the maximum amplitude of the \ninitial Gaussian distribution at $\\beta=0$. The rescaled probabilities at \nreturn the time $t=2t_r=20$ are shown\nby red and blue curves for the initial peaks at $p_1=1$ and $p_1=2$ \nrespectively; the different panels correspond to: \n(a) time reversal at $t_r=10$ without measurement;\nmeasurement at $t_r=10$ detecting the second particle at $p_2=8$ (b),\n$p_2=12$ (c), $p_2=20$ (d). System parameters are as in Fig.~\\ref{fig9}.\n}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{fig13}\n\\end{center}\n\\caption{\\label{fig13} Similar as in Fig.~\\ref{fig12} but the results \nare shown on a larger momentum range of the \nfirst particle $-20 \\leq p_1 \\leq 20$; \nblue curves show the probability at the moment of time reversal $t=t_r=10$, \nred curves show the probability at return time $t=2t_r=20$; \nthe different panels correspond to the same cases of Fig.~\\ref{fig12} \n(without or with measurement and detected $p_2$ values) and \nall densities are rescaled as in Fig.~\\ref{fig12}.\nThe shown curves also integrate the data for non-integer values of $p_1$ \nbut the density values at non-integer $p_1$ are essentially zero (in graphical \nprecision).\n}\n\\end{figure}\n\nTo see in a better way the fraction of the wave packet \nreturning\nto the initial distribution we show in Fig.~\\ref{fig12}\nthe probability distribution in quasimomentum $\\beta$\nof the first 
particle
at $t=0$ and at the return time $t=2t_r=20$.
In panel Fig.~\ref{fig12}(a) the time reversal is done
without measurement of the second particle.
The initial distribution has two peaks
at $p_1=1,2$ and the return probability
exactly recovers its initial values
at $\beta=0$. However, the width of the
return distribution in $\beta$ is significantly narrowed,
since the time reversal is only approximate for
$\beta$ different from (but close to) zero. This effect,
called Loschmidt cooling,
is discussed in detail in \cite{martin}.
The new feature present in Fig.~\ref{fig12}
is that a measurement of the second particle at $t=t_r=10$
significantly affects the peak probabilities
at the two initial positions $p_1=1,2$ due to the entanglement
of the EPR pair. At the same time the sum of the probabilities
of the two peaks at $p_1=1,2$ remains exactly equal to the initial
peak probability sum at $t=0$, since the time reversal is exact for $\beta=0$
(see also Fig.~\ref{fig9}(b)). As for the above case of the kicked rotator,
we interpret this as the measurement of the entangled second particle
at the moment of time reversal selecting a specific Feynman path
\cite{feynman} for the return of the first particle.

The distribution of the probabilities of the first particle
at times $t=t_r=10$ and $t=2t_r=20$ is also shown in Fig.~\ref{fig13}
on a larger scale of momentum $p_1$. We see that there is
a broad background of probability of the first particle
which spreads diffusively in momentum due to quantum chaos
and which is not significantly affected by the time reversal.
However, we also see that at the return time $t=2t_r=20$
two very high peaks appear near the momentum positions of the initial
distribution.
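The branch selection by a measurement of the second particle can be illustrated with a toy Schmidt state (the vectors and grid below are our own hypothetical illustration, not the actual wave packets of the figures): projecting particle 2 onto a momentum $p_2$ weights each Schmidt component of particle 1 by $\alpha_j \langle p_2 | v_j \rangle$, so a $p_2$ value lying in only one component collapses particle 1 onto that branch.

```python
import numpy as np

# Hypothetical two-branch Schmidt state on a 4-point momentum grid:
# particle-1 vectors u1, u2 and particle-2 vectors v1, v2.
u1 = np.array([1, 0, 0, 0], dtype=complex)
u2 = np.array([0, 1, 0, 0], dtype=complex)
v1 = np.array([0, 0, 1, 0], dtype=complex)
v2 = np.array([0, 0, 0, 1], dtype=complex)
a1 = a2 = 1 / np.sqrt(2)

psi = a1 * np.outer(u1, v1) + a2 * np.outer(u2, v2)  # psi[p1, p2]

def measure_p2(psi, p2):
    """Project particle 2 onto momentum p2 and return the normalized
    conditional state of particle 1."""
    phi = psi[:, p2]
    return phi / np.linalg.norm(phi)

phi = measure_p2(psi, 2)   # detecting p2 = 2 selects the u1 branch
```

In the paper's setting the $v_j$ overlap in momentum space, so the measured $p_2$ reweights rather than fully selects the branches, which is why the two return peaks change amplitude while their sum is conserved.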
The amplitudes of these two peaks are
strongly affected by a measurement of the second particle at the time
reversal $t_r=10$.
Even if the total probability in these two peaks at $t=20$ is
small compared to the total probability, their very high peak amplitudes
allow one to detect them in a very robust way. In fact, as was shown in
\cite{fink1,fink2}
for the reversal of acoustic waves, chaotic dynamics
enhances the time-reversal signal, making it much more visible
in the presence of a chaotic background.
Here we have a similar situation,
which potentially allows one to realize and detect the time reversal of
entangled quantum cold atoms. The time reversal of cold atoms
without measurement at the moment of time reversal has been
realized in \cite{hoogerland}.

Here we presented results for measurements which
detect a specific momentum value of the second particle.
Additional results for a measurement projection
on a broader distribution of momentum $p_2$ with
a certain width $\Delta p_2$ are presented in
Appendix Figs.~\ref{figA5} and \ref{figA6}.
In this case the time reversal also reproduces the
peaks of probability of the first particle
near their initial positions. These results show that a measurement device,
which is modeled by a width $\Delta p_2$,
affects the probability distribution
of the first particle at the return moment $t=2t_r$.

Above we considered an initial entangled state
with a narrow probability distribution
near two integer momentum values of the
EPR pair. We suppose that in an experimental
setup ultra-cold atoms can initially
be trapped at very low temperatures
corresponding to $p$ values close to zero.
Then a field pulse can move the momentum
to higher $p$ values close to
integer values (in recoil units).
The entanglement between the atoms
can be created by their initial interactions,
which are later switched off,
e.g.
with the help of a Feshbach resonance.
It is also possible that both atoms
have an initial momentum close to zero
but, being entangled, they may have a certain
spatial separation.
Here we consider the case of distinguishable atoms,
which can be realized by taking two identical atoms
in different hyperfine states.
Such a difference in the internal atomic structure
makes it possible to measure one atom without
affecting the other one.
Of course, such experiments are very
challenging, but technological progress now allows
operations with
two entangled atoms (see e.g. \cite{jorg})
and we expect that the experimental
investigation of chaotic EPR pairs
can be realized soon in cold-atom experiments.

\section{IV. Discussion}
\label{sec4}

In this work we analyzed the case when the evolution of an
EPR pair is chaotic in the classical limit of small Planck constant.
At the same time the system dynamics is reversible in time
both in the classical and quantum cases. In the classical case
the errors grow exponentially with time due to
dynamical chaos, which breaks the time reversal
of the evolution in the presence of even very small errors.
In contrast, the quantum evolution remains relatively stable with respect to
quantum errors, since the instability exists only during a
logarithmically short Ehrenfest time scale.
Our main objective was to analyze
how a measurement of one particle of a chaotic and entangled EPR pair
affects the time reversal of the remaining particle.
We find that this particle retains an approximate time reversal,
returning to one of the configurations
representing the initial entangled EPR state.
We explain this approximate time reversal
on the basis of the Feynman path integral formulation
of quantum mechanics, according to which a measurement selects
a specific configuration which returns to its initial
state via a specific time-inverted pathway.
We show that the Schmidt decomposition of the initially entangled EPR state
allows one to identify the final quantum state at the return time.

Here we considered chaotic EPR pairs in the case of the quantum
Chirikov standard map. This system has already been realized
in experiments with cold atoms in kicked optical lattices
\cite{raizen,garreau}. Moreover, the time reversal proposed in
\cite{martin} has been realized experimentally by the Hoogerland
group \cite{hoogerland}. However, in this experiment the interplay
of entanglement and measurement
in the time reversal was not studied. At present, advanced
cold-atom techniques allow the investigation of various
quantum correlations of entangled pairs of atoms (see e.g.
\cite{jorg})
and we expect that experimental investigations
of the time reversal of chaotic EPR pairs, discussed here, are possible.
It may also be interesting to consider the time reversal of
two entangled Bose-Einstein condensates (BECs)
with their chaotic evolution in a kicked optical lattice,
following the proposal of time reversal for a single BEC
described in \cite{martin2}.


\section{Acknowledgments}
This research was supported in part through the grant
NANOX $N^o$ ANR-17-EURE-0009 (project MTDINA) in the frame of the Programme des Investissements d'Avenir, France;
this work is also done as a part of the prospective ANR France project OCTAVES.
This work was granted access to the HPC resources of
CALMIP (Toulouse) under the allocation 2021-P0110.