diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzctac" "b/data_all_eng_slimpj/shuffled/split2/finalzzctac" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzctac" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{sec:introduction}\n\\noindent\nIn this paper, we shall develop a data-driven method to solve the following multiscale elliptic PDEs with random coefficients $a(x,\\omega)$, \n\\begin{align}\n\\mathcal{L}(x,\\omega) u(x,\\omega) \\equiv -\\nabla\\cdot\\big(a(x,\\omega)\\nabla u(x,\\omega)\\big) &= f(x), \n\\quad x\\in D, \\quad \\omega\\in\\Omega, \\label{MsStoEllip_Eq}\\\\\nu(x,\\omega)&= 0, \\quad \\quad x\\in \\partial D, \\label{MsStoEllip_BC}\n\\end{align}\nwhere $D \\in \\mathbb{R}^d$ is a bounded spatial domain and $\\Omega$ is a sample space. The forcing function $f(x)$ is assumed to be in $L^2(D)$.\nWe also assume that the problem is uniformly elliptic almost surely; see Section \\ref{sec:randomproblem} for precise definition of the problem.\n\nIn recent years, there has been an increased interest in quantifying the uncertainty in systems with randomness, i.e., solving stochastic partial differential equations (SPDEs, i.e., PDEs driven by Brownian motion) or partial differential equations with random coefficients (RPDEs). Uncertainty quantification (UQ) is an emerging research \narea to address these issues; see \\cite{Ghanem:91,Xiu:03,babuska:04,matthies:05,WuanHou:06,Wan:06,Babuska:07,Webster:08,Xiu:09,Najm:09,sapsis:09,Zabaras:13,Grahamquasi:2015} and references therein. \nHowever, when SPDEs or RPDEs involving multiscale features and\/or high-dimensional random inputs, the problems become challenging due to high computational cost.\n\nRecently, some progress has been made in developing numerical methods for multiscale PDEs with random coefficients; see \\cite{Kevrekidis:2003,Zabaras:06,Ghanem:08,graham2011quasi,abdulle2013multilevel,hou2015heterogeneous,ZhangCiHouMMS:15,efendiev2015multilevel,chung2018cluster} and references therein. \nFor example, data-driven stochastic methods to solve PDEs with random and\/or multiscale coefficients were proposed in \\cite{ChengHouYanZhang:13,ZhangCiHouMMS:15,ZhangHouLiu:15,hou2019model}. They demonstrated through numerical experiments that those methods were efficient in solving RPDEs with many different force functions. However, the polynomial chaos expansion \\cite{Ghanem:91,Xiu:03} is used to represent the randomness in the solutions. Although the polynomial chaos expansion is general, it is a priori instead of problem specific. Hence many terms may be required in practice for an accurate approximation which induces the curse of dimensionality. \n\nWe aim to develop a new data-driven method to solve multiscale elliptic PDEs with random coefficients based on intrinsic dimension reduction. The underlying low-dimensional structure for elliptic problems is implied by the work \\cite{bebendorf2003}, in which high separability of the Green's \nfunction for uniformly elliptic operators with $L^{\\infty}$ coefficients and the structure of blockwise low-rank approximation to the inverses of FEM matrices were established. We show that under the uniform ellipticity assumption, the family of Green's functions parametrized by a random variable $\\omega$ is still highly separable, which reveals the approximate low dimensional structure of the family of solutions to \\eqref{MsStoEllip_Eq} (again parametrized by $\\omega$)\nand motivates our method. \n \nOur method consists of two stages. 
In the offline stage, a set of data-driven basis \nis constructed from solution samples. For example, the data can be generated by solving \\eqref{MsStoEllip_Eq}-\\eqref{MsStoEllip_BC} corresponding to a sampling of the coefficient $a(x,\\omega)$. \nHere, different sampling methods can be applied, including the Monte Carlo (MC) method and the quasi-Monte Carlo (qMC) method. The sparse-grid based stochastic collocation method \\cite{Griebel:04,Xiu:05,Webster:08} also works when the dimension of the random variables in $a(x,\\omega)$ is moderate. Alternatively, the data may come from field measurements directly.\nThen the low-dimensional structure and the corresponding basis are extracted using model reduction methods, such as the proper orthogonal decomposition (POD) \\cite{HolmesLumleyPOD:1993,Sirovich:1987,Willcox2015PODsurvey}, a.k.a. principal component analysis (PCA). The basis functions are data driven and problem specific. \nThe key point is that once the dimension reduction is achieved, the online stage of computing the solution corresponding to a new coefficient becomes finding a linear combination of the (few) basis functions to approximate the solution. However, the mapping from the input coefficients of the PDE to the expansion coefficients of the solution in terms of the data-driven basis is highly nonlinear. We propose a few possible online strategies (see Section \\ref{sec:DerivationNewMethod}). For example, if the coefficient is in parametric form, one can approximate the nonlinear map from the parameter domain to the expansion coefficients. Or one can apply the Galerkin method using the extracted basis to solve \\eqref{MsStoEllip_Eq}-\\eqref{MsStoEllip_BC} for a new coefficient. In practice, the random coefficient of the PDE may not be available, but sensors can be deployed to record the solution at certain locations. In this case, one can compute the expansion coefficients of a new solution by least square fitting those measurements at designed locations.\nWe also provide analysis and guidelines for sampling, dimension reduction, and other implementation aspects of our methods. \n\nThe rest of the paper is organized as follows. In Section 2, we introduce the high separability of the Green's function of deterministic elliptic PDEs and present its extension to elliptic problems with random coefficients. In Section 3, we describe our new data-driven method and its detailed implementation. In Section 4, we present numerical results to demonstrate the efficiency of our method. Concluding remarks are made in Section 5.\n\n\n\\section{Low-dimensional structures in the solution space} \\label{sec:LowDimStructures}\n\\subsection{High separability of the Green's function of deterministic elliptic operators}\n\\noindent\nLet $\\mathcal{L}(x): V \\to V' $ be a uniformly elliptic operator in divergence form\n\\begin{align}\n\\mathcal{L}(x)u(x) \\equiv -\\nabla\\cdot(a(x)\\nabla u(x))\\label{DeterministicEllipticPDE}\n\\end{align}\nin a bounded Lipschitz domain $D \\subset \\mathbb{R}^d$, where $V = H_0^1(D)$. The uniform ellipticity assumption means that there exist $a_{\\min}, a_{\\max}>0$ such that $a_{\\min}\\le a(x)\\le a_{\\max}$ for almost every $x\\in D$; we denote the contrast by $\\kappa_a := a_{\\max}\/a_{\\min}$. A function $u\\in H^1(E)$ is called $\\mathcal{L}$-harmonic on a subdomain $E\\subset D$ with $diam(E)>0$ if it satisfies\n\\[\na(u, \\varphi) = \\int_{E} a(x)\\nabla u(x)\\cdot \\nabla \\varphi(x) dx =0 \\quad \\forall \\varphi \\in C_0^{\\infty} (E).\n\\]\nDenote the space of $\\mathcal{L}$-harmonic functions on $E$ by $X(E)$, which is closed in $L^2(E)$. The following key Lemma shows that the space of $\\mathcal{L}$-harmonic functions has an approximate low dimensional structure.
\n\n\\begin{lemma}[Lemma 2.6 of \\cite{BebendorfHackbusch:2003}]\\label{lemma1}\nLet $\\hat{E}\\subset E \\subset D$ in $R^d$ and assume that $\\hat{E}$ is convex such that \n\\[\ndist(\\hat{E}, \\partial E)\\ge \\rho~ diam(\\hat{E})>0, \\quad \\mbox{for some constant } \\rho >0.\n\\]\nThen for any $1>\\epsilon>0$, there is a subspace $W\\subset X(\\hat{E})$ so that for all $u\\in X(\\hat{E})$,\n\\[\ndist_{L^2(\\hat{E})}(u, W)\\le \\epsilon \\|u\\|_{L^2(E)}\n\\]\nand\n\\[\ndim(W)\\le c^d(\\kappa_a,\\rho) (|\\log \\epsilon |)^{d+1},\n\\]\nwhere $c(\\kappa_a,\\rho) >0 $ is a constant that depends on $\\rho$ and $\\kappa_a$.\n\\end{lemma}\nIn other words, the above Lemma says the Kolmogorov n-width of the space of $\\mathcal{L}$-harmonic function $X(\\hat{E})$ is less than $O(\\exp(-n^{\\frac{1}{d+1}}))$.\nThe key property of $\\mathcal{L}$-harmonic functions used to prove the above result is the Caccioppoli inequality, which provides the estimate $\\|\\nabla u\\|_{L^2(\\hat{\\E})} \\le C(\\kappa_a, \\rho)\\|u\\|_{L^2(E)}$. Moreover, the projection of the space of piecewise constant functions defined on a multi-resolution rectangular mesh onto $X(\\hat{E})$ can be constructed as a candidate for $W$ based on Prop. \\ref{FiniteDimensionalApprox}.\n\nIn particular, the Green's function $G(\\cdot,y)$ is $\\mathcal{L}$-harmonic on $E$ if $y\\notin E$. Moreover, given two disjoint domains in $D_1, D_2$ in $D$, the Green's function $G(x,y)$ with $x\\in D_1, y\\in D_2$ can be viewed as a family of $\\mathcal{L}$-harmonic functions on $D_1$ parametrized by $y\\in D_2$. From the above Lemma one can easily deduce the following result which shows the high separability of the Green's function for the elliptic operator \\eqref{DeterministicEllipticPDE}.\n\n\n\n\n\n\\begin{figure}[tbph] \n\t\\centering\n\t\\begin{tikzpicture}[scale=0.9]\n\n\t\\coordinate [label={[xshift=0.7cm, yshift=0.3cm]$D$}] (a1) at (0,0);\n\t\\coordinate (b1) at (0,4);\n\t\\coordinate (c1) at (8,4);\n\t\\coordinate (d1) at (8,0);\n\t\\draw(a1)--(b1)--(c1)--(d1)--cycle;\n\n\t\\coordinate (a2) at (1,0.8);\n\t\\coordinate (b2) at (1,3.2);\n\t\\coordinate (c2) at (3,3.2);\n\t\\coordinate (d2) at (3,0.8);\n\t\\draw(a2)--(b2)--(c2)--(d2)--cycle;\n\n\t\\coordinate (a3) at (5,0.8);\n\t\\coordinate (b3) at (5,3.2);\n\t\\coordinate (c3) at (7,3.2);\n\t\\coordinate (d3) at (7,0.8);\n\t\\draw(a3)--(b3)--(c3)--(d3)--cycle;\n\n\t\\tikzstyle{textnode} = [thick, fill=white, minimum size = 0.1cm]\n\t\\node[textnode] (D1) at (2,2) {$D_1$};\n\t\\node[textnode] (D2) at (6,2) {$D_2$};\n\t\\node[textnode] (Gf) at (4,3.3) {$G(x,y)$};\n\n\t\\path [->] (Gf) edge node {} (D1);\n\t\\path [->] (Gf) edge node {} (D2);\n\t\\end{tikzpicture}\n\t\\caption{Green's function $G(x,y)$ with dependence on $x\\in D_1$ and $y\\in D_2$.}\n\t\\label{fig:Greenfunction1}\n\\end{figure} \n\n\\begin{proposition}[Theorem 2.8 of \\cite{BebendorfHackbusch:2003}]\\label{GreenFuncSepaApp}\n\tLet $D_1, D_2 \\subset D$ be two subdomains and $D_1$ be convex (see Figure \\ref{fig:Greenfunction1}). Assume that there exists $\\rho>0$ such that \n\t\\begin{align}\n\t0 < \\text{ \\normalfont diam} (D_1) \\leq \\rho\\text{ \\normalfont dist} (D_1, D_2). 
\n\t\\label{AdmissiblePairs}\n\n\t\\end{align}\n\tThen for any $\\epsilon \\in (0,1)$ there is a separable approximation\n\t\\begin{align}\n\tG_k(x,y) = \\sum_{i=1}^k u_i(x) v_i(y) \\quad \\text{with } k \\leq \n\tc^d(\\kappa_a, \\rho) |\\log \\epsilon|^{d+1},\n\t\\label{GreenFuncSepaApp1}\n\t\\end{align}\n\tso that for all $y\\in D_2$\n\t\\begin{align}\n\t\\| G( \\cdot,y) - G_k(\\cdot,y) \\|_{L^2 (D_1)} \\leq \\epsilon \\| G(\\cdot,y) \\| _{L^2(\\hat{D}_1)},\n\t\\end{align}\n\twhere $\\hat{D}_1 := \\{ x \\in D : 2\\rho~\\text{\\normalfont dist} (x, D_1) \\leq \\text{\\normalfont diam} (D_1)\\}$.\n\\end{proposition}\n\\begin{remark}\nIn the recent work \\cite{EngquistZhao:2018}, it is shown that the Green's function for high frequency Helmholtz equation is not highly separable due to the highly oscillatory phase.\n\\end{remark}\n\n\\subsection{Extension to elliptic PDEs with random coefficients}\\label{sec:randomproblem}\n\\noindent\nLet's consider the following elliptic PDEs with random coefficients:\n\\begin{align}\n\\mathcal{L}(x,\\omega) u(x,\\omega) \n\\equiv -\\nabla\\cdot\\big(a(x,\\omega)\\nabla u(x,\\omega)\\big) &= f(x), \n\\quad x\\in D, \\quad \\omega\\in\\Omega, \\label{MsStoEllip_ModelEq}\\\\\nu(x,\\omega)&= 0, \\quad \\quad x\\in \\partial D, \\label{MsStoEllip_ModelBC}\n\\end{align}\nwhere $D \\in \\mathbb{R}^d$ is a bounded spatial domain and $\\Omega$ is a sample space. The forcing function $f(x)$ is assumed to be in $L^2(D)$. The above equation can be used to model the flow pressure in porous media such as water aquifer and oil reservoirs, where the permeability field $a(x,\\omega)$ is a random field whose exact values are infeasible to obtain in practice due to the low resolution of seismic data. We also assume that the problem is uniformly elliptic almost surely, namely, there exist $a_{\\min}, a_{\\max}>0$, such that\n\\begin{align}\nP\\big(\\omega\\in \\Omega: a(x, \\omega)\\in [a_{\\min},a_{\\max}], \\forall x \\in D\\big) = 1.\n\\label{asUniformlyElliptic1}\n\\end{align}\nNote that we do not make any assumption on the regularity of the coefficient $a(x,\\omega)$ in the physical space, which can be arbitrarily rough for each realization. \nFor the problem \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC}, the corresponding Green function is defined as\n\\begin{align}\n\\mathcal{L}(x,\\omega)G(x,y,\\omega) \\equiv-\\nabla_x\\cdot(a(x,\\omega)\\nabla_x G(x,y,\\omega)) &= \\delta(x,y), \\quad x\\in D,\\quad \\omega\\in\\Omega,\\\\\nG(x,y,\\omega) &= 0, \\quad \\quad x\\in \\partial D, \n\\end{align} \nwhere $y\\in D$ and $\\delta(x,y)$ is the Dirac delta function. \nA key observation for the proof of Lemma \\ref{lemma1} and Prop. \\ref{FiniteDimensionalApprox} is that the projection of the space of piecewise constant functions defined on a multi-resolution rectangular mesh, depending only on the geometry of $D_1, D_2$, $\\kappa_a$, and $\\rho$, onto the $\\mathcal{L}$-harmonic function provides a candidate for the finite dimensional subspace $W$. Based on this observation, one can easily extend the statement in Prop. \\ref{FiniteDimensionalApprox} to the family of Green's functions $G(x,y,\\omega) $ parametrized by $\\omega$\nunder the uniform ellipticity assumption \\eqref{asUniformlyElliptic1}.\n\\begin{theorem}\\label{ThmRandomGreenFuncSepaApp}\n\tLet $D_1, D_2 \\subset D$ be two subdomains and $D_1$ be convex. Assume that there is $\\rho>0$ such that $0 < \\text{ \\normalfont diam} (D_1) \\leq \\rho\\text{ \\normalfont dist} (D_1, D_2)$. 
\n\tThen for any $\\epsilon \\in (0,1)$ there is a separable approximation\n\t\\begin{align}\n\tG_k(x,y,\\omega) = \\sum_{i=1}^k u_i(x) v_i(y,\\omega) \\quad \\text{with } k \\leq \n\tc^d(\\kappa_a, \\rho) |\\log \\epsilon|^{d+1},\n\t\\label{RandomGreenFuncSepaApp}\n\t\\end{align}\n\tso that for all $y\\in D_2$\n\t\\begin{align}\n\t\\| G(\\cdot,y,\\omega) - G_k(\\cdot,y, \\omega) \\|_{L^2 (D_1)} \\leq \\epsilon \\| G(\\cdot,y, \\omega) \\| _{L^2(\\hat{D}_1)} \\quad \\text{a.s. in } \\Omega,\n\t\\end{align}\n\twhere $\\hat{D}_1 := \\{ x \\in D : 2\\rho\\text{\\normalfont dist} (x, D_1) \\leq \\text{\\normalfont diam} (D_1)\\}$.\n\\end{theorem}\n\nThe above Theorem shows that there exists a low dimensional linear subspace, e.g., spanned by $u_i(\\cdot)$, that can approximate the family of functions $G(\\cdot,y,\\omega)$ well in $L^2(D_1)$ uniformly with respect to $y\\in D_2$ and a.s. in $\\omega$. Moreover, if $\\mathrm{supp}(f)\\subset D_2$, one can approximate the solution to \\eqref{MsStoEllip_ModelBC} by the same space well in $L^2(D_1)$ uniformly with respect to $f$ and a.s. in $\\omega$. Let \n\\begin{equation}\nu_f(x,\\omega)=\\int_{D_2} G(x,y,\\omega)f(y) dy\n\\end{equation}\nand\n\\begin{equation}\nu^{\\epsilon}_f(x,\\omega)=\\int_{D_2} G_k(x,y,\\omega)f(y) dy=\\sum_{i=1}^k u_i(x)\\int_{D_2} v_i(y,\\omega) f(y) dy.\n\\end{equation}\nHence\n\\begin{equation}\n\\begin{array}{l}\n\\|u_f(\\cdot,\\omega)-u^{\\epsilon}_f(\\cdot,\\omega)\\|^2_{L^2(D_1)}=\\int_{D_1} \\left[\\int_{D_2} (G(x,y,\\omega)-G_k(x,y,\\omega))f(y) dy\\right]^2 dx \n\\\\ \\\\\n\\le \\|f\\|_{L^2(D_2)}^2 \\int_{D_2}\\| G(\\cdot,y,\\omega) - G_k(\\cdot,y, \\omega) \\|^2_{L^2 (D_1)} dy\\le C(D_1, D_2, \\kappa_a, d)\\epsilon^2\\|f\\|_{L^2(D_2)}^2,\n\\end{array}\n\\end{equation}\na.s. in $\\omega$ since $\\| G(\\cdot,y, \\omega) \\| _{L^2(\\hat{D}_1)}$ is bounded by a positive constant that depends on $D_1, D_2, \\kappa_a, d$ a.s. in $\\omega$ due to uniform ellipticity \\eqref{asUniformlyElliptic1}.\nAlthough, the proof of high separability of the Green's function requires $x\\in D_1, y\\in D_2$ for well separated $D_1$ and $D_2$, i.e., avoiding the singularity of the Green's function at $x=y$, the above approximation of the solution $u$ in a domain disjoint with the support of $f$ seems to be valid for $u$ in the whole domain even when $f$ is a globally supported smooth function as shown in our numerical tests. \n\n\n\\begin{remark}\\label{remark1}\nIt is important to note that both the linear subspace $W$ and the bound for its dimension are independent of the randomness. Moreover, it is often possible to find a problem specific and data driven subspace with a dimension much smaller than the theoretical upper bound for $W$ (as demonstrated by our experiments). This key observation motivates our data-driven approach which can achieve a significant dimension reduction in the solution space. 
\n\\end{remark}\n\\begin{remark}\nAlthough we present the problem and our data driven approach for the elliptic problem \\eqref{MsStoEllip_ModelBC} with scalar random coefficients $a(x,\\omega)$, all the statements can be directly extended when the random coefficient is replaced by a symmetric positive definite tensor $a_{i,j}(x,\\omega), i,j,=1, \\ldots, d$ with uniform ellipticity.\n\\end{remark}\n\n\\begin{remark}\nIn the recent work \\cite{BrysonZhaoZhong:2019}, it is shown that a random field can have a large intrinsic complexity if it is rough, i.e., $a(x_1,\\omega)$ and $a(x_2, \\omega)$ decorrelate quickly in terms of $\\|x_1-x_2\\|$. However, when a random field, as rough as it can be, is input as the coefficient of an elliptic PDE, the intrinsic complexity of the resulting solution space, which depends on the coefficient highly nonlinearly and nonlocally, is highly reduced. This phenomenon can also be used to explain the severe ill-posedness of the inverse problem in which one tries to recover the coefficient of an elliptic PDE from the boundary measurements such as electrical impedance tomography (EIT).\n\\end{remark}\n\n\nBefore we end this subsection, we give a short review of existing methods for solving problem \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} involving random coefficients. There are basically two types\nof methods. In intrusive methods, one represents the solution of \\eqref{MsStoEllip_ModelEq} by $u(x,\\omega)= \\sum_{\\alpha \\in J} u_{\\alpha}(x)H_{\\alpha}(\\omega)$, where $J$ is an index set, and $H_{\\alpha}(\\omega)$ are certain basis functions (e.g. orthogonal polynomials). Typical examples are the Wiener chaos expansion (WCE) and polynomial chaos expansion (PCE) method. Then, one uses Galerkin method to compute the expansion coefficients $u_{\\alpha}(x)$; see \\cite{Ghanem:91,Xiu:03,babuska:04,matthies:05,WuanHou:06,Najm:09} and reference therein. These methods have been successfully applied to many UQ problems, where the dimension of the random input is small. However, the number of basis functions increases exponentially with the dimension of random input, i.e., they suffer from the curse of dimensionality of both the input space and the output (solution) space.\n\nIn the non-intrusive methods, one can use the MC method or qMC method to solve \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC}. However, the convergence rate is slow and the method becomes more expensive when the coefficient $a(x,\\omega)$ contains multiscale features. Stochastic collocation methods explore the smoothness of the solutions in the random space and use certain quadrature points and weights to compute the solutions \\cite{Xiu:05,Babuska:07}. Exponential convergence can be achieved for smooth solutions, but the quadrature points grow exponentially as the number of random variables increases. Sparse grids \\cite{Griebel:04,Webster:08} can reduce the quadrature points to some extent \\cite{Griebel:04}. However, the sparse grid method still becomes very expensive when the dimension of randomness is modestly high.\n\nInstead of building random basis functions a priori or \nchoosing collocation quadrature points based on the random coefficient $a(x,\\omega)$ \n(see Eq.\\eqref{ParametrizeRandomCoefficient}), we extract the low dimensional structure and a set of basis functions in the solution space directly from the data (or sampled solutions). 
Notice that the dimension of the extracted low dimensional space mainly depends on $\\kappa_a$ (namely $a_{\\min}$ and $a_{\\max}$), and very mildly on the dimension of the random input in $a(x,\\omega)$. Therefore, the curse of dimensionality can be alleviated.\n\n\\section{Derivation of the new data-driven method} \\label{sec:DerivationNewMethod}\nIn many physical and engineering applications, one needs to obtain the solution of Eq.\\eqref{MsStoEllip_ModelEq} on a subdomain $\\hat{D}\\subseteq D$. For instance, in reservoir simulation one is interested in computing the pressure value $u(x,\\omega)$ on a specific subdomain $\\hat{D}$.\n Our method consists of offline and online stages. In the offline stage, we extract the low dimensional structure and a set of data-driven\nbasis functions from solution samples. For example, a set of solution samples $\\{u(x,\\omega_i)\\}_{i=1}^{N}$ can be obtained from measurements or generated by solving \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} with coefficient samples $\\{a(x,\\omega_i)\\}_{i=1}^{N}$. \n\nLet $V_l=\\{u|_{\\hat{D}}(x,\\omega_1),...,u|_{\\hat{D}}(x,\\omega_N)\\}$ denote the solution samples. We use POD \\cite{HolmesLumleyPOD:1993,Sirovich:1987,Willcox2015PODsurvey}, a.k.a. PCA, to find the optimal subspace and its orthonormal basis to approximate $V_l$ to a certain accuracy. Define the correlation matrix \n$\\sigma_{ij}=\\langle u(x,\\omega_i), u(x,\\omega_j)\\rangle_{\\hat{D}}, i, j= 1, \\ldots, N$. Let the eigenvalues and corresponding eigenfunctions of the correlation matrix be $\\lambda_1\\ge \\lambda_2 \\ge \\ldots \\ge \\lambda_N \\ge 0$ and $\\phi_{1}(x)$, $\\phi_{2}(x), \\ldots, \\phi_N(x)$, respectively. The space spanned by the leading $K$ eigenfunctions has the following approximation property to $V_l$.\n\\begin{proposition}\\label{POD_proposition}\n\\begin{align}\n\\frac{\\sum_{i=1}^{N}\\Big|\\Big|u(x,\\omega_{i})- \\sum_{j=1}^{K}\\langle u(x,\\omega_{i}),\\phi_j(x)\\rangle_{\\hat{D}}\\phi_j(x)\\Big|\\Big|_{L^2(\\hat{D})}^{2} }{\\sum_{i=1}^{N}\\Big|\\Big|u(x,\\omega_{i})\\Big|\\Big|_{L^2(\\hat{D})}^{2}}=\\frac{\\sum_{s=K+1}^{N} \\lambda_s}{\\sum_{s=1}^{N} \\lambda_s}.\n\\label{Prop_PODError}\n\\end{align}\n\\end{proposition} \nFirst, we expect a fast decay in $\\lambda_s$ so that a small $K\\ll N$ will be enough to approximate the solution samples well in the root mean square sense. \nSecondly, based on the existence of the low dimensional structure implied by Theorem \\ref{ThmRandomGreenFuncSepaApp}, we expect that the data-driven basis, $\\phi_{1}(x)$, $\\phi_{2}(x), \\ldots, \\phi_{K}(x)$, can almost surely approximate the solution $u|_{\\hat{D}}(x,\\omega)$ well too \nunder some sampling condition (see Section \\ref{sec:DetermineNumberOfSamples}) by\n\\begin{align}\nu|_{\\hat{D}}(x,\\omega) \\approx \\sum_{j=1}^{K}c_{j}(\\omega)\\phi_{j}(x), \\quad \\text{a.s. }\n\\omega \\in \\Omega, \\label{RB_expansion}\n\\end{align} \nwhere the data-driven basis functions $\\phi_{j}(x)$, $j=1,...,K$ are defined on $\\hat{D}$. \nProp.\\ref{POD_proposition} remains valid in the case $\\hat{D}=D$, where the data-driven basis $\\phi_{j}(x)$, $j=1,...,K$ can be used in the Galerkin approach to solve \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} on the whole domain $D$ (see Section \\ref{sec:GlobalProblem}).
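\nFor illustration, the offline construction of the data-driven basis by the method of snapshots can be sketched as follows. This is only a minimal sketch: the snapshot matrix \\texttt{U}, whose $i$-th column stores the nodal values of $u|_{\\hat{D}}(x,\\omega_i)$, the quadrature weights \\texttt{w} approximating the $L^2(\\hat{D})$ inner product, and the energy threshold are illustrative assumptions and not part of our formulation.\n\\begin{verbatim}\nimport numpy as np\n\ndef pod_basis(U, w, energy=0.999):\n    # correlation matrix sigma_ij = (u_i, u_j) in the weighted inner product on D_hat\n    C = U.T @ (w[:, None] * U)\n    lam, V = np.linalg.eigh(C)                   # eigenvalues in ascending order\n    lam, V = lam[::-1], V[:, ::-1]               # reorder so that lam_1 is the largest\n    ratio = np.cumsum(lam) \/ np.sum(lam)\n    K = int(np.searchsorted(ratio, energy)) + 1  # smallest K reaching the threshold\n    Phi = U @ V[:, :K] \/ np.sqrt(lam[:K])        # data-driven basis, orthonormal on D_hat\n    return Phi, lam, K\n\\end{verbatim}\nBy Prop.\\ref{POD_proposition}, this choice of $K$ keeps the relative mean square truncation error of the samples below one minus the threshold \\texttt{energy}.\n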
\nNow the problem is how to find $c_{j}(\\omega)$ through an efficient online process given a new realization of $a(x,\\omega)$. We prescribe several strategies in different setups.\n\n\\subsection{Parametrized randomness}\\label{sec:parametrized}\nIn many applications, $a(x,\\omega)$ is parameterized by $r$ independent random variables, i.e., \n\\begin{align}\na(x,\\omega) = a(x,\\xi_{1}(\\omega),...,\\xi_{r}(\\omega)).\n\\label{ParametrizeRandomCoefficient}\n\\end{align}\nThus, the solution can be represented as a function of these random variables as well, i.e., $u(x,\\omega) = u(x,\\xi_{1}(\\omega),...,\\xi_{r}(\\omega))$.\nLet $\\gvec{\\xi}(\\omega)=[\\xi_1(\\omega),\\cdots,\\xi_r(\\omega)]^T$ denote the \nrandom input vector and $\\textbf{c}(\\omega)=[c_{1}(\\omega),\\cdots,c_{K}(\\omega)]^T$ denote the vector of solution coefficients in \\eqref{RB_expansion}. Now, the problem can be viewed as constructing \n a map from $\\gvec{\\xi}(\\omega)$ to $\\textbf{c}(\\omega)$, denoted by $\\textbf{F}:\\gvec{\\xi}(\\omega)\\mapsto \\textbf{c}(\\omega)$, which is nonlinear. We approximate this nonlinear map through the sample solution set. \n Given a set of solution samples $\\{u(x,\\omega_i)\\}_{i=1}^{N}$ corresponding to $\\{\\gvec{\\xi}(\\omega_i)\\}_{i=1}^{N}$, e.g., by solving \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} with $a(x,\\xi_{1}(\\omega_i),...,\\xi_{r}(\\omega_i))$,\nfrom which the set of data-driven basis $\\phi_{j}(x), j=1,...,K$ is obtained using POD as described above, we can easily compute the projection coefficients $\\{\\textbf{c}(\\omega_i)\\}_{i=1}^{N}$ of $u|_{\\hat{D}}(x,\\omega_i)$ on $\\phi_{j}(x)$, $j=1,...,K$, i.e., $c_j(\\omega_i)=\\langle u(x,\\omega_i), \\phi_j(x)\\rangle_{\\hat{D}}$. From the data set,\n$\\textbf{F}(\\gvec{\\xi}(\\omega_i))= \\textbf{c}(\\omega_i)$, $i=1,...,N$, we construct the map $\\textbf{F}$. Note the significant dimension reduction achieved by reducing the map $\\gvec{\\xi}(\\omega)\\mapsto u(x,\\omega)$ to the map $\\gvec{\\xi}(\\omega)\\mapsto \\textbf{c}(\\omega)$.\nWe provide a few ways to construct $\\textbf{F}$. \n\\begin{itemize}\n\\item Interpolation. \n\\\\\nWhen the dimension of the random input $r$ is small or moderate, one can use interpolation. In particular, if the solution samples correspond to $\\gvec{\\xi}$ located on a (sparse) grid, standard polynomial interpolation can be used to approximate the coefficient $c_j$ at a new point of $\\gvec{\\xi}$. If the solution samples correspond to $\\gvec{\\xi}$ at scattered points, or the dimension of the random input $r$ is moderate or high, one can first find a few nearest neighbors of a new point efficiently using a $k-d$ tree \\cite{wald2006building} and then use a moving least square approximation centered at the new point (see the sketch below). \n\\item Neural network.\n\\\\\n When the dimension of the random input $r$ is high, the interpolation approach becomes expensive and less accurate; we show that a neural network seems to provide a satisfactory solution.\n\\end{itemize}\nMore implementation details will be explained in Section \\ref{sec:NumericalExperiments}, where the map $\\textbf{F}$ is plotted based on interpolation. \n\nIn the online stage, one can compute the solution $u(x,\\omega)$ to \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} using the constructed mapping $\\textbf{F}$. \nGiven a new realization of $a(x,\\xi_{1}(\\omega),...,\\xi_{r}(\\omega))$, we plug $\\gvec{\\xi}(\\omega)$ into the constructed map $\\textbf{F}$ and directly obtain $\\textbf{c}(\\omega)=\\textbf{F}(\\gvec{\\xi}(\\omega))$, which are the projection coefficients of the solution on the data-driven basis.
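\nAs a concrete instance of the interpolation strategy above, the nearest-neighbor moving least square evaluation of $\\textbf{F}$ can be sketched as follows. This is only a minimal sketch under illustrative assumptions: the array \\texttt{Xi} stores the sampled random inputs $\\gvec{\\xi}(\\omega_i)$ row by row, \\texttt{C} stores the corresponding projection coefficients $\\textbf{c}(\\omega_i)$, and the number of neighbors is a tunable choice.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.spatial import cKDTree\n\ndef build_map(Xi):\n    # offline: index the N-by-r array of sampled random inputs\n    return cKDTree(Xi)\n\ndef eval_map(tree, Xi, C, xi_new, n_nbrs=20):\n    # online: local first-order least square fit around xi_new (moving least squares)\n    _, idx = tree.query(xi_new, k=n_nbrs)\n    A = np.hstack([np.ones((n_nbrs, 1)), Xi[idx] - xi_new])  # rows [1, xi^m - xi_new]\n    coef, *_ = np.linalg.lstsq(A, C[idx], rcond=None)        # constant term and gradient\n    return coef[0]                                           # approximation of F(xi_new)\n\\end{verbatim}\n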
So we can quickly obtain the new solution $u|_{\\hat{D}}(x,\\omega)$ using Eq.\\eqref{RB_expansion}, where the computational time is negligible. Once we obtain the numerical solutions, we can use them to compute statistical quantities of interest, such as mean, variance, and joint probability distributions. \n\\begin{remark}\nIn Prop.\\ref{POD_proposition} we construct the data-driven basis functions from eigen-decomposition of the correlation matrix associated with the solution samples. Alternatively we can subtract the mean from the solution samples, compute the covariance matrix, and construct the basis functions from eigen-decomposition of the covariance matrix. \n\\end{remark}\n\n\\subsection{Galerkin approach} \\label{sec:GlobalProblem}\n\\noindent\nIn the case $\\hat{D}=D$, we can solve \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} on \nthe whole domain $D$ by the standard Galerkin formulation using the data driven basis for a new realization of $a(x,\\omega)$.\n\nOnce the data driven basis $\\phi_{j}(x)$, $j=1,...,K$, which are defined on the domain $D$, are obtained from solution samples in the offline stage, \n given a new realization of the coefficient $a(x,\\omega)$, we approximate the corresponding solution as \n\\begin{align}\nu(x,\\omega) \\approx \\sum_{j=1}^{K}c_{j}(\\omega)\\phi_{j}(x), \\quad \\text{a.s. }\n\\omega \\in \\Omega, \\label{RB_expansion2}\n\\end{align} \nand use the Galerkin projection to determine the coefficients $c_{j}(\\omega)$, $j=1,...,K$ by solving the following linear system in the online stage,\n\\begin{align}\n\\sum_{j=1}^K \\int_{D}a(x,\\omega)c_{j}(\\omega)\\nabla\\phi_{j}(x)\\cdot\\nabla\\phi_{l}(x)dx = \\int_{D}f(x)\\phi_{l}(x)dx, \n \\quad l=1,...,K.\n \\label{GalerkinSystem}\n\\end{align}\n\n\\begin{remark}\nThe computational cost of solving the linear system \\eqref{GalerkinSystem} is small compared to using a Galerkin method, such as the finite element method, directly for $u(x,\\omega)$ because $K$ is much smaller than the degree of freedom needed to discretize $u(x,\\omega)$. \n\\end{remark}\n\nIf the coefficient $a(x,\\omega)$ has the affine parameter dependence property \\cite{RozzaPatera:2007}, \ni.e., $ a(x,\\omega) = \\sum_{n=1}^{r} a_{n}(x)\\xi_{n}(\\omega) $, we compute the terms that do not depend on randomness, including $\\int_{D}a_{n}(x)\\nabla\\phi_{j}(x)\\cdot\\nabla\\phi_{l}(x)dx$,\n$\\int_{D}f(x)\\phi_{l}(x)dx$, $j,l=1,...,K$ and save them in the offline stage. This leads to considerable savings in assembling the stiffness matrix for each new realization of the coefficient $a(x,\\omega)$ in the online stage.\nOf course, the affine form is automatically parametrized. Hence, one can also construct the map $\\textbf{F}:\\gvec{\\xi}(\\omega)\\mapsto \\textbf{c}(\\omega)$ as described in the previous Section \\ref{sec:parametrized}. \nIf the coefficient $a(x,\\omega)$ does not admit an affine form, we can apply the empirical interpolation method (EIM) \\cite{PateraMaday:2004} to convert $a(x,\\omega)$ into an affine form. \n\n\\subsection{Least square fitting from direct measurements at selected locations}\\label{sec:LS}\nIn many applications, only samples (data) or measurements of $u(x,\\omega)$ is available while the model of $a(x,\\omega)$ or its realization is not known. In this case, we propose to compute the coefficients $\\textbf{c}$ by least square fitting the measurements (values) of $u(x,\\omega)$ at appropriately selected locations. 
First, as before, from a set of solutions samples, $u(x_j, \\omega_i)$, measured on a mesh $x_j \\in \\hat{D}, j=1, \\ldots, J$, one finds a set of data driven basis $\\phi_1(x_j), \\ldots, \\phi_K(x_j)$, e.g. using POD. For a new solution $u(x,\\omega)$ measured at $x_1, x_2, \\ldots, x_M$, one can set up the following least square problem to find $\\vec{c}=[c_1, \\ldots, c_K]^T$ such that $u(x,\\omega)\\approx \\sum_{k=1}^K c_k\\phi_k(x)$:\n\\begin{equation}\n\\label{eq:LS}\nB \\vec{c}=\\vec{y}, \\quad \\vec{y}=[u(x_1,\\omega), \\ldots, u(x_M,\\omega)]^T, B=[\\boldsymbol{\\phi}^M_1, \\ldots, \\boldsymbol{\\phi}^M_K]\\in R^{M\\times K},\n\\end{equation}\nwhere $\\boldsymbol{\\phi}^M_k=[\\phi_k(x_1), \\ldots, \\phi_k(x_M)]^T$. The key issue in practice is the conditioning of the least square problem \\eqref{eq:LS}. One way is to select the measurement (sensor) locations $x_1, \\ldots x_M$ such that rows of $B$ are as decorrelated as possible. We adopt the approach proposed in \\cite{Kutz2017Sensor} in which a QR factorization with pivoting for the matrix of data driven basis is used to determine the measurement locations. More specifically, let $\\Phi=[\\boldsymbol{\\phi}_1, \\ldots, \\boldsymbol{\\phi}_K]\\in R^{J\\times K}$, $\\boldsymbol{\\phi}_k=[\\phi_k(x_1), \\ldots, \\phi_k(x_J)]^T$. If $M=K$, QR factorization with column pivoting is performed on $\\Phi^T$. If $M>K$, QR factorization with pivoting is performed on $\\Phi\\Phi^T$. The first $M$ pivoting indices provide the measurement locations. More details can be found in \\cite{Kutz2017Sensor} and Section \\ref{sec:NumericalExperiments}.\n\n\n\\subsection{Extension to problems with parameterized force functions} \\label{sec:ExtensionTOManyFx}\n\\noindent\nIn many applications, we are interested in solving multiscale elliptic PDEs with random coefficients in the multiquery setting. A model problem is given as follows, \n\\begin{align}\n-\\nabla\\cdot\\big(a(x,\\omega)\\nabla u(x,\\omega)\\big) &= f(x,\\theta), \n\\quad x\\in D, \\quad \\omega\\in\\Omega, \\quad \\theta \\in \\Theta, \\label{MsStoEllipMultiquery_Eq}\\\\\nu(x,\\omega)&= 0, \\quad \\quad x\\in \\partial D, \\label{MsStoEllipMultiquery_BC}\n\\end{align}\nwhere the setting of the coefficient $a(x,\\omega)$ is the same as \\eqref{ParametrizeRandomCoefficient}. \nNotice that the force function $f(x,\\theta)$ is parameterized by $\\theta\\in \\Theta$ and $\\Theta$ is a \nparameter set. In practice, we often need to solve the problem \\eqref{MsStoEllipMultiquery_Eq}-\\eqref{MsStoEllipMultiquery_BC} with multiple force functions $f(x,\\theta)$, which is known as the multiquery problem. It is computationally expensive to solve this kind of problem using traditional methods. \n\nSome attempts have been made in \\cite{ZhangCiHouMMS:15,hou2019model}, where a data-driven stochastic method has been proposed to solve PDEs with random and multiscale coefficients. When the number of random variables in the coefficient $a(x,\\omega)$ is small, say less than 10, the methods developed in \\cite{ZhangCiHouMMS:15,hou2019model} can provide considerable savings in solving multiquery problems. However, they suffer from the curse of dimensionality of both the input space and the output (solution) space. \nOur method using data driven basis, which is based on extracting a low dimensional structure in the output space, can be directly adopted to this situation. 
Numerical experiments are presented in Section \\ref{sec:NumericalExperiments}.\n\n\n\n \n\\subsection{Determine a set of good learning samples} \\label{sec:DetermineNumberOfSamples}\n\\noindent\nA set of good solution samples is important for the construction of data-driven basis in the offline stage. \nHere we provide an error analysis which is based on the finite element formulation. However, the results extend to general Galerkin formulation.\nFirst, we make a few assumptions. \n\\begin{assumption} \\label{assumption2}\n\tSuppose $a(x,\\omega)$ has the following property: given $ \\delta_1 > 0$, there exists an integer $N_{\\delta_1}$ and a choice of snapshots $\\{a(x,\\omega_i)\\}$, $i=1,...,N_{\\delta_1}$ such that\n\t\\begin{align} \n\t\\mathds{E}\\left[\\inf_{1\\le i\\le N_{\\delta_1}} \\big|\\big|a(x,\\omega) - a(x,\\omega_i)\\big|\\big|_{L^\\infty(D)}\\right] \\le \\delta_1. \\label{asd}\n\t\\end{align}\n\\end{assumption}\nLet $\\{a(x,\\omega_i)\\}_{i=1}^{N_{\\delta_1}}$ denote the samples of the random coefficient. When the coefficient has an affine form, we can verify Asm. \\ref{assumption2} and provide a constructive way to sample snapshots \n$\\{a(x,\\omega_i)\\}_{i=1}^{N_{\\delta_1}}$ if we know the distribution of the random variables $\\xi_{i}(\\omega)$, $i=1,...,r$.\n\nLet $V_h\\subset H_{0}^{1}(D)$ denote a finite element space that is spanned by nodal basis functions on a mesh with size $h$ and $\\tilde{V}_h \\subset V_h$ denote the space spanned by the data-driven basis $\\{\\phi_{j}(x)\\}_{j=1}^{K}$. We assume the mesh size is fine enough so that the finite element space can approximate the solutions to the underlying PDEs well. For each $a(x,\\omega_i)$, let $u_h(x,\\omega_i)\\in V_h$ denote the FEM solution and $\\tilde{u}_h(x,\\omega_i)\\in \\tilde{V}_h$ denote the projection on the data-driven basis $\\{\\phi_{j}(x)\\}_{j=1}^{K}$. \n\\begin{assumption} \\label{assumption3}\n\tGiven $\\delta_2 > 0$, we can find a set of data-driven basis, $\\phi_1, \\ldots, \\phi_{K_{\\delta_2}}$ such that \n\t\\begin{align}\n\t||u_h(x,\\omega_i)-\\tilde{u}_h(x,\\omega_i)||_{L^2(D)} \\le \\delta_2,\\ \\forall 1\\le i \\le K_{\\delta_2}, \\label{equation_asumption2}\n\t\\end{align}\n\twhere $\\tilde{u}_h(x,\\omega_i)$ is the $L^2$ projection of $u_h(x,\\omega_i)$ onto the space spanned by $\\phi_1, \\ldots, \\phi_{K_{\\delta_2}}$.\n\\end{assumption}\nAsm.\\ref{assumption3} can be verified by setting the threshold in the POD method; see Prop.\\ref{POD_proposition}. Now we present the following error estimate. \n\n\\begin{theorem} \\label{error_theorem1}\n\tUnder Assumptions \\ref{assumption2}-\\ref{assumption3}, for any $\\delta_i > 0$, $i=1,2$, we can choose the samples of the random coefficient $\\{a(x,\\omega_i)\\}_{i=1}^{N_{\\delta_1}}$ and the threshold in constructing the data-driven basis accordingly, such that \n\t\\begin{align}\n\t\\mathds{E}\\left[\\big|\\big|u_h(x,\\omega) - \\tilde{u}_h(x,\\omega)\\big|\\big|_{L^2(D)}\\right] \n\t\\leq C\\delta_1 + \\delta_2, \\label{error_theorem}\n\t\\end{align}\n\twhere $C$ depends on $a_{\\min}$, $f(x)$ and the domain $D$.\n\\end{theorem}\n\\begin{proof}\n\tGiven a coefficient $a(x,\\omega)$, let $u_h(x,\\omega)$ and $\\tilde{u}_h(x,\\omega)$ be the corresponding FEM solution and data-driven solution, respectively. 
We have\n\t\\begin{align} \\label{proof_basis_error}\n\t&\\big|\\big|u_h(x,\\omega) - \\tilde{u}_h(x,\\omega)\\big|\\big|_ {L^2(D)} \\nonumber\\\\\n\t\\le &\\big|\\big|u_h(x,\\omega) - u_h(x,\\omega_i)\\big|\\big|_{L^2(D)} + \n\t\\big|\\big|u_h(x,\\omega_i) - \\tilde{u}_h(x,\\omega_i)\\big|\\big|_{L^2(D)} + \\big|\\big|\\tilde{u}_h(x,\\omega_i) - \\tilde{u}_h(x,\\omega)\\big|\\big|_{L^2(D)}, \\nonumber\\\\\n\t:=& I_1 + I_2+ I_3,\n\t\\end{align}\n\twhere $u_h(x,\\omega_i)$ is the solution corresponding to the coefficient $a(x,\\omega_i)$ and $\\tilde{u}_h(x,\\omega_i)$ is its projection. Now we estimate the error term $I_1$ first. In the sense of weak form, we have\n\t\\begin{align}\n\t\\int_{D}a(x,\\omega)\\nabla u_h(x,\\omega)\\cdot \\nabla v_h(x)dx=\\int_{D}f(x)v_h(x), \\quad \\text{for all} \\quad v_h(x)\\in V_h,\n\t\\label{FEMsolutionWeakForm1} \n\t\\end{align}\n\tand\n\t\\begin{align} \n\t\\int_{D}a(x,\\omega_i)\\nabla u_h(x,\\omega_i)\\cdot\\nabla v_h(x)dx=\\int_{D}f(x)v_h(x), \\quad \\text{for all} \\quad v_h(x)\\in V_h.\n\t\\label{FEMsolutionWeakForm2} \n\t\\end{align}\n\tSubtracting the variational formulations \\eqref{FEMsolutionWeakForm1}-\\eqref{FEMsolutionWeakForm2} for \n\t$u_h(x,\\omega)$ and $u_h(x,\\omega_i)$, we find that for all $v_h(x)\\in V_h$, \n\t\\begin{align}\n\t\\int_{D}a(x,\\omega)\\nabla (u_h(x,\\omega)-u_h(x,\\omega_i))\\cdot\\nabla v_h(x)dx\n\t=-\\int_{D}(a(x,\\omega)-a(x,\\omega_i))\\nabla u_h(x,\\omega_i)\\cdot\\nabla v_h(x). \n\t\\label{FEMsolutionWeakForm3} \n\t\\end{align}\n\tLet $w_h(x)=u_h(x,\\omega)-u_h(x,\\omega_i)$ and $L(v_h)=-\\int_{D}(a(x,\\omega)-a(x,\\omega_i))\\nabla u_h(x,\\omega_i)\\cdot\\nabla v_h(x)$ denote the linear form. Eq.\\eqref{FEMsolutionWeakForm3} means that \n\t$w_h(x,\\omega)$ is the solution of the weak form $\\int_{D}a(x,\\omega)\\nabla w_h\\cdot\\nabla v_h(x)dx=L(v_h)$. Therefore, we have \n\t\\begin{align}\n\t\\big|\\big|w_h(x)\\big|\\big|_ {H^1(D)}\\leq \\frac{||L||_{H^1(D)}}{a_{\\min}}.\n\t\\label{EstimateError}\n\t\\end{align}\n\tNotice that \n\t\\begin{align}\n\t||L||_{H^1(D)} =\\max_{||v_h||_{H^1(D)}=1}|L(v_h)|&\\leq ||a(x,\\omega)-a(x,\\omega_i)||_{L^\\infty(D)}\n\t||u_h(x,\\omega_i)||_{H^1(D)},\\nonumber \\\\\n\t&\\leq ||a(x,\\omega)-a(x,\\omega_i)||_{L^\\infty(D)}\\frac{||f(x)||_{H^1(D)}}{a_{\\min}}.\n\t\\label{EstimateError2}\n\t\\end{align}\n\tSince $w_h(x)=0$ on $\\partial D$, combining Eqns.\\eqref{EstimateError}-\\eqref{EstimateError2} and using the Poincar\\'e inequality on $w_h(x)$, we obtain an estimate for the term $I_1$ as \n\t\\begin{align}\n\t\\big|\\big|u_h(x,\\omega) - u_h(x,\\omega_i)\\big|\\big|_{L^2(D)} &\\leq C_1\\big|\\big|u_h(x,\\omega) - u_h(x,\\omega_i)\\big|\\big|_{H^1(D)} \\nonumber \\\\\n\t&\\leq C_1||a(x,\\omega)-a(x,\\omega_i)||_{L^\\infty(D)}\\frac{||f(x)||_{H^1(D)}}{a_{\\min}^2},\n\t\\label{EstimateError3}\n\t\\end{align}\n\twhere $C_1$ only depends on the domain $D$. For the term $I_3$ in Eq.\\eqref{proof_basis_error}, we can similarly get \n\t\\begin{align}\n\t\\big|\\big|\\tilde{u}_h(x,\\omega_i) - \\tilde{u}_h(x,\\omega)\\big|\\big|_{L^2(D)} \n\t\\leq C_1||a(x,\\omega)-a(x,\\omega_i)||_{L^\\infty(D)}\\frac{||f(x)||_{H^1(D)}}{a_{\\min}^2}.\n\t\\label{I3}\n\t\\end{align}\n\tThe term $I_2$ in Eq.\\eqref{proof_basis_error} can be controlled according to the Asm.\\ref{assumption3}. \n\tCombining the estimates for terms $I_1$, $I_2$ and $I_3$ and integrating \n\tover the random space, we prove the theorem. 
\n\n\\end{proof} \n \nTheorem \\ref{error_theorem1} indicates that the error between $u_h(x,\\omega)$ and its approximation $\\tilde{u}_h(x,\\omega)$ using the data driven basis consists of two parts. The first part depends on how well the random coefficient is sampled. While the second part depends on the truncation threshold in constructing the data-driven basis from the solution samples. In practice, a balance of these two factors and the discretization error (of the numerical method used to solve the PDEs) gives us the guidance on how to choose solution samples and truncation threshold in the POD method to achieve optimal accuracy. Again, the key advantage for our data driven approach for this form of elliptic PDEs is the low dimensional structure in the solution space which provides a significant dimension reduction. \n\n\n\\section{Numerical experiments} \\label{sec:NumericalExperiments}\n\\noindent\nIn this section we will present various numerical experiments to demonstrate the accuracy and efficiency of our proposed data-driven method. \n\\subsection{An example with five random variables}\\label{sec:Example1}\n\\noindent\nWe consider a multiscale elliptic PDE with a random coefficient that is defined on a square domain $D=[0,1]\\times[0,1]$,\n\\begin{align}\\label{randommultiscaleelliptic}\n\\begin{split}\n-\\nabla\\cdot(a(x,y,\\omega)\\nabla u(x,y,\\omega)) &= f(x,y), \\quad (x,y)\\in D, \\omega\\in\\Omega,\\\\\nu(x,y,\\omega)&=0, \\quad \\quad (x,y)\\in\\partial D.\n\\end{split}\n\\end{align}\nIn this example, the coefficient $a(x,y,\\omega)$ is defined as \n\\begin{align} \na(x,y,\\omega) =& 0.1 + \\frac{2+p_1\\sin(\\frac{2\\pi x}{\\epsilon_1})}{2-p_1\\cos(\\frac{2\\pi y}{\\epsilon_1})} \\xi_1(\\omega)\n+ \\frac{2+p_2\\sin(\\frac{2\\pi (x+y)}{\\sqrt{2}\\epsilon_2})}{2-p_2\\sin(\\frac{2\\pi (x-y)}{\\sqrt{2}\\epsilon_2})}\\xi_2(\\omega)\n+ \\frac{2+p_3\\cos(\\frac{2\\pi (x-0.5)}{\\epsilon_3})}{2-p_3\\cos(\\frac{2\\pi (y-0.5)}{\\epsilon_3})}\\xi_3(\\omega) \\nonumber \\\\\n&+ \\frac{2+p_4\\cos(\\frac{2\\pi (x-y)}{\\sqrt{2}\\epsilon_4})}{2-p_4\\sin(\\frac{2\\pi (x+y)}{\\sqrt{2}\\epsilon_4})}\\xi_4(\\omega)\n+ \\frac{2+p_5\\cos(\\frac{2\\pi (2x-y)}{\\sqrt{5}\\epsilon_5})}{2-p_5\\sin(\\frac{2\\pi (x+2y)}{\\sqrt{5}\\epsilon_5})}\\xi_5(\\omega), \\label{coefficientofexample1}\n\\end{align}\nwhere $[\\epsilon_1,\\epsilon_2,\\epsilon_3,\\epsilon_4,\\epsilon_5]=[\\frac{1}{47},\\frac{1}{29},\\frac{1}{53},\\frac{1}{37},\\frac{1}{41}]$, $[p_1,p_2,p_3,p_4,p_5]=[1.98,1.96,1.94,1.92,1.9]$, and $\\xi_i(\\omega)$, $i=1,...,5$ are i.i.d. uniform random variables in $[0,1]$. The contrast ratio in the coefficient \\eqref{coefficientofexample1} is $\\kappa_a\\approx 4.5\\times 10^3$. The force function is $f(x,y) = \\sin(2\\pi x)\\cos(2\\pi y)\\cdot I_{D_2}(x,y)$, where $I_{D_2}$ is an indicator function defined on $D_2=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{1}{16},\\frac{5}{16}]$. \nThe coefficient \\eqref{coefficientofexample1} is highly oscillatory in the physical space. \nTherefore, one needs a fine discretization to resolve the small-scale variations in the problem. 
\nWe shall show results for the solution to \\eqref{randommultiscaleelliptic} with coefficient \\eqref{coefficientofexample1} in: (1) a restricted subdomain $D_1=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{11}{16},\\frac{15}{16}]$ away from the support $D_2$ of the source term $f(x,y)$; and (2) the full domain $D$.\n\nIn all of our numerical experiments, we use the same uniform triangulation to implement the standard FEM and choose mesh size $h=\\frac{1}{512}$ in order to resolve the multiscale information. We use $N=2000$ samples in the offline stage to construct the data-driven basis and determine the number of basis $K$ according to the decay rate of the eigenvalues of the correlation matrix of the solution samples, i.e., $\\sigma_{ij}=\\langle u(x,y,\\omega_i), u(x,y,\\omega_j)\\rangle, i,j=1, \\dots, N$.\n\nIn Figure \\ref{fig:Example1localeigenvalues}, we show the decay property of the eigenvalues. Specifically, we show the magnitude of the eigenvalues in Figure \\ref{fig:Example1localeigenvalues1a} and the ratio of the accumulated sum of the leading eigenvalues over the total sum in Figure \\ref{fig:Example1localeigenvalues1b}. These results and Prop.\\ref{POD_proposition} imply that a few leading eigenvectors will provide a set of data-driven basis that can approximate all solution samples well. \n\\begin{figure}[tbph]\n\t\\centering \n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ Decay of eigenvalues.}\n\t\t\\label{fig:Example1localeigenvalues1a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_acc_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{$1-\\sqrt{\\sum_{j=n+1}^{N}\\lambda_{j}\/\\sum_{j=1}^{N}\\lambda_{j}}$, $n=1,2,...$.} \n\t\t\\label{fig:Example1localeigenvalues1b}\n\t\\end{subfigure}\n\t\\caption{The decay properties of the eigenvalues in the local problem of Sec.\\ref{sec:Example1}.}\n\t\\label{fig:Example1localeigenvalues}\n\\end{figure} \n\nAfter we construct the data-driven basis, we use spline interpolation to approximate the mapping $\\textbf{F}:\\gvec{\\xi} \\mapsto \\textbf{c}(\\gvec{\\xi})$. Notice that the coefficient \\eqref{coefficientofexample1}\nis parameterized by five i.i.d. random variables. We can partition the random space $[\\xi_1(\\omega),\\xi_2(\\omega),\\cdots,\\xi_5(\\omega)]^T\\in [0,1]^5$ into a set of uniform grids in order to \nconstruct the mapping $\\textbf{F}$. Here we choose $N_1=9^5$ samples. We remark that we can choose other sampling strategies, such as sparse-grid points and Latin hypercube points. In Figure \\ref{fig:Example1localbasismapping}, we show the profiles of the first two data-driven basis functions $\\phi_{1}$ and $\\phi_{2}$ and the plots of the mappings $c_1(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ and $c_2(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ with fixed $[\\xi_3,\\xi_4,\\xi_5]^T=[0.25, 0.5, 0.75]^T$. One can see that the data-driven basis functions contain multiscale features and the mappings $c_1(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ and $c_2(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ are smooth with respect to $\\xi_i$, $i=1,2$.
The behaviors of other data-driven basis functions and the mappings are similar (not shown here).\n\n\n\n\\begin{figure}[tbph]\n\t\\centering\t\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_basis_zeta1-eps-converted-to.pdf}\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_mapping_c1-eps-converted-to.pdf}\\\\\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_basis_zeta2-eps-converted-to.pdf}\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_mapping_c2-eps-converted-to.pdf}\\\\\n\t\\end{subfigure}\n\t\\caption{Plots of data-driven basis $\\phi_{1}$ and $\\phi_{2}$ and mappings $c_1(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ and $c_2(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ with fixed $[\\xi_3,\\xi_4,\\xi_5]^T=[0.25, 0.5, 0.75]^T$.}\n\t\\label{fig:Example1localbasismapping}\n\\end{figure} \n\nOnce we get the mapping $\\textbf{F}$, the solution corresponding to a new realization $a(x,\\gvec{\\xi}(\\omega))$ can be constructed easily by finding $ \\textbf{c}(\\gvec{\\xi})$ and plugging in the approximation \\eqref{RB_expansion}. In Figure \\ref{fig:Example1locall2err}, we show the mean relative $L^2$ and $H^1$ errors of the testing error and projection error. The testing error is the error between the numerical solution obtained by our mapping method and the reference solution obtained by the FEM on the same fine mesh used to compute the sample solutions. The projection error is the error between the FEM solution and its projection on the space spanned by data-driven basis, i.e. the best possible approximation error. For the experiment, only four data-driven basis are needed to achieve a relative error less than $1\\%$ in $L^2$ norm and less than $2\\%$ in $H^1$ norm. Moreover, the numerical solution obtained by our mapping method is close to the projection solution, which is the best approximation of the reference solution by the data-driven basis. This is due to the smoothness of the mapping. Notice that the computational time of the mapping method is almost negligible. In practice, when the number of basis is 10, it takes about $0.0022s$ to get a new solution by the mapping method, whereas the standard FEM takes $0.73s$. \n\\begin{figure}[tbph]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\n\t\\includegraphics[width=1.0\\linewidth]{ex1_local_L2err-eps-converted-to.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\\includegraphics[width=1.0\\linewidth]{ex1_local_H1err-eps-converted-to.pdf}\n\t\\end{subfigure}\n\t\\caption{ Relative $L^2$ and $H^1$ error with increasing number of basis for the local problem of Sec.\\ref{sec:Example1}.}\n\t\\label{fig:Example1locall2err}\n\\end{figure} \n\nIn Figure \\ref{fig:Example1localdiffN}, we show the accuracy of the proposed method when we use different number of samples $N$ in constructing the data-driven basis. Although the numerical error decreases when the sampling number $N$ is increased in general, the difference is very mild. 
\n\\begin{figure}[tbph]\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{ex1_meanL2err_diffN.pdf}\n\t\\caption{Testing errors in $L^2$ norm.}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{ex1_meanL2proj_diffN.pdf}\n\t\\caption{Projection errors in $L^2$ norm.}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{ex1_meanH1err_diffN.pdf}\n\t\\caption{Testing errors in $H^1$ norm.}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{ex1_meanH1proj_diffN.pdf}\n\t\\caption{Projection errors in $H^1$ norm.}\n\t\\end{subfigure}\n\t\\caption{ The relative testing\/projection errors in $L^2$ and $H^1$ norms with different number of samples (i.e. $N$) for the local problem of Sec.\\ref{sec:Example1}.}\n\t\\label{fig:Example1localdiffN}\n\\end{figure} \n\nNext, we test our method on the whole computation domain for \\eqref{randommultiscaleelliptic} with coefficient \\eqref{coefficientofexample1}. Figure \\ref{fig:Example1globaleigenvalues} shows the decay property of eigenvalues. Similarly, we show magnitudes of the leading eigenvalues in Figure \\ref{fig:Example1globaleigenvalues3a} and the ratio of the accumulated sum of the eigenvalues over the total sum in Figure \\ref{fig:Example1globaleigenvalues3b}. We observe similar behaviors as before. Since we approximate the solution in the whole computational domain, we take the Galerkin approach described in Section \\ref{sec:GlobalProblem} using the data-driven basis. \nIn Figure \\ref{fig:Example1globall2engerr}, we show the mean relative error between our numerical solution and the reference solution in $L^2$ norm and $H^1$ norm, respectively. In practice, when the number of basis is 15, it takes about $0.084s$ to compute a new solution by our method, whereas the standard FEM method costs about $0.82s$ for one solution. 
\n\n\\begin{figure}[tbph] \n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_global_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ Decay of the eigenvalues.}\n\t\t\\label{fig:Example1globaleigenvalues3a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_global_acc_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ $1-\\sqrt{\\sum_{j=n+1}^{N}\\lambda_{j}\/\\sum_{j=1}^{N}\\lambda_{j}}$, $n=1,2,...$.} \n\t\t\\label{fig:Example1globaleigenvalues3b}\n\t\\end{subfigure}\n\t\\caption{The decay properties of the eigenvalues for the global problem of Sec.\\ref{sec:Example1}.}\n\t\\label{fig:Example1globaleigenvalues}\n\\end{figure} \n\n \n\n\n\n\n\n\\begin{figure}[tbph] \n\t\\centering\t\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_global_L2err-eps-converted-to.pdf} \n\t\t\\caption{ Relative error in $L^2$ norm.}\n\t\t\\label{fig:Example1globall2err}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_global_H1err-eps-converted-to.pdf} \n\t\t\\caption{ Relative error in $H^1$ norm.} \n\t\t\\label{fig:Example1globalengerr}\n\t\\end{subfigure}\n\t\\caption{The relative errors with increasing number of basis for the global problem of Sec.\\ref{sec:Example1}.}\n\t\\label{fig:Example1globall2engerr}\n\\end{figure} \n\n\n\\subsection{An example with an exponential type coefficient}\\label{sec:Example2}\n\\noindent\nWe now solve the problem \\eqref{randommultiscaleelliptic} with an exponential type coefficient. \nThe coefficient is parameterized by eight random variables, which has the following form \n\\begin{align}\na(x,y,\\omega) =&\\exp\\Big( \\sum_{i=1}^8 \\sin(\\frac{2\\pi (9-i)x}{9\\epsilon_i})\\cos(\\frac{2\\pi iy}{9\\epsilon_i})\\xi_i(\\omega) \\Big),\n\\label{coefficientofexample2}\n\\end{align}\nwhere the multiscale parameters $[\\epsilon_1,\\epsilon_2,\\cdots,\\epsilon_{8}] =[\\frac{1}{43},\\frac{1}{41},\\frac{1}{47},\\frac{1}{29},\\frac{1}{37},\\frac{1}{31},\\frac{1}{53},\\frac{1}{35}]$ and $\\xi_i(\\omega)$, $i=1,...,8$ are i.i.d. uniform random variables in $[-\\frac{1}{2},\\frac{1}{2}]$. Hence the contrast ratio is $\\kappa_a\\approx 3.0\\times 10^3$ in the coefficient \\eqref{coefficientofexample2}. The force function is $f(x,y) = \\cos(2\\pi x)\\sin(2\\pi y)\\cdot I_{D_2}(x,y)$, where $I_{D_2}$ is an indicator function defined on $D_2=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{1}{16},\\frac{5}{16}]$. \nIn the local problem, the subdomain of interest is $D_1=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{11}{16},\\frac{15}{16}]$. \n\nIn Figure \\ref{fig:Example2eigenvalues}, we show the decay property of eigenvalues. Specifically, in Figure \\ref{fig:Example2eigenvalues-a} we show the magnitude of leading eigenvalues and in Figure \\ref{fig:Example2eigenvalues-b} we show the ratio of the accumulated sum of the eigenvalues over the total sum. These results imply that the solution space has a low-dimensional structure, which can be approximated by the data-driven basis functions. 
\n\n\\begin{figure}[tbph] \n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex2_local_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ Decay of eigenvalues.}\n\t\t\\label{fig:Example2eigenvalues-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex2_local_acc_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ $1-\\sqrt{\\sum_{j=n+1}^{N}\\lambda_{j}\/\\sum_{j=1}^{N}\\lambda_{j}}$, $n=1,2,...$.} \n\t\t\\label{fig:Example2eigenvalues-b}\n\t\\end{subfigure}\n\t\\caption{The decay properties of the eigenvalues in the problem of Sec.\\ref{sec:Example2}.}\n\t\\label{fig:Example2eigenvalues}\n\\end{figure} \nSince the coefficient $a(x,y,\\omega)$ is parameterized by eight random variables, it is expensive to construct the mapping $\\textbf{F}:\\gvec{\\xi}(\\omega)\\mapsto \\textbf{c}(\\omega)$ using the interpolation method with uniform grids. Instead, we use a sparse grid polynomial interpolation approach to approximate the mapping $\\textbf{F}$. Specifically, we use Legendre polynomials with total order less than or equal 4 to approximate the mapping, where the total number of nodes is $N_1=2177$; see \\cite{Griebel:04}.\n\nFigure \\ref{fig:Example2errors-a} shows the relative errors of the testing error and projection error in $L^2$ norm. Figure \\ref{fig:Example2errors-b} shows the corresponding relative errors in $H^1$ norm. The sparse grid polynomial interpolation approach gives a comparable error as the best approximation error. We observe similar convergence results in solving the global problem \\eqref{randommultiscaleelliptic} with the coefficient \\eqref{coefficientofexample2} (not shown here). Therefore, we can use sparse grid method to construct mappings for problems of moderate number of random variables. \n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex2_local_L2err-eps-converted-to.pdf} \n\t\t\\caption{ Relative error in $L^2$ norm.}\n\t\t\\label{fig:Example2errors-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex2_local_H1err-eps-converted-to.pdf} \n\t\t\\caption{ Relative error in $H^1$ norm.} \n\t\t\\label{fig:Example2errors-b}\n\t\\end{subfigure}\n\t\\caption{ The relative errors with increasing number of basis in the problem of Sec.\\ref{sec:Example2}.}\n\t\\label{fig:Example2errors}\n\\end{figure} \n\n\\subsection{An example with a discontinuous coefficient}\\label{sec:ExampleInterface}\n\\noindent \nWe solve the problem \\eqref{randommultiscaleelliptic} with a discontinuous coefficient, \nwhich is an interface problem. The coefficient is parameterized by twelve random variables and has the following form \n\t\\begin{align}\na(x,y,\\omega) =& \\exp\\Big(\\sum_{i=1}^{6} \\sin(2\\pi \\frac{x\\sin(\\frac{i\\pi}{6}) +y\\cos(\\frac{i\\pi}{6}) }{\\epsilon_i} )\\xi_i(\\omega) \\Big)\\cdot I_{D\\setminus D_3}(x,y)\\nonumber\\\\\n&+\\exp\\Big(\\sum_{i=1}^{6} \\sin(2\\pi \\frac{x\\sin(\\frac{(i+0.5)\\pi}{6}) +y\\cos(\\frac{(i+0.5)\\pi}{6}) }{\\epsilon_{i+6}} )\\xi_{i+6}(\\omega) \\Big)\\cdot I_{D_3}(x,y),\n\\label{coefficientofexampleInterface}\n\\end{align}\nwhere $\\epsilon_i=\\frac{1+i}{100}$ for $i=1,\\cdots,6$, $\\epsilon_{i}=\\frac{i+13}{100}$ for $i=7,\\cdots,12$, $\\xi_i(\\omega)$, $i=1,\\cdots,12$ are i.i.d. 
uniform random variables in $[-\\frac{2}{3},\\frac{2}{3}]$, and $I_{D_3}$ and $I_{D\\setminus D_3}$ are indicator functions. \nThe subdomain $D_3$ consists of three small rectangles whose edges are parallel to the edges of domain $D$ with width $10h$ and height $0.8$. And the lower left vertices are located \nat $(0.3,0.1),(0.5,0.1),(0.7,0.1)$ respectively. The contrast ratio in the coefficient \\eqref{coefficientofexampleInterface} is $\\kappa_a\\approx 3\\times 10^3$. In Figure \\ref{fig:ExampleInterfaceRealizations} we show two realizations of the coefficient \\eqref{coefficientofexampleInterface}. \t\n\n\t\\begin{figure}[htbp]\n\t\t\\centering\n\t\t\\includegraphics[width=0.49\\linewidth]{Ex5_DiffCoef_realization1.pdf}\n\t\t\\includegraphics[width=0.49\\linewidth]{Ex5_DiffCoef_realization5.pdf} \n\t\t\\caption{Two realizations of the coefficient \\eqref{coefficientofexampleInterface} in the interface problem.} \n\t\t\\label{fig:ExampleInterfaceRealizations}\n\t\\end{figure}\n \n\tWe now solve the local problem of \\eqref{randommultiscaleelliptic} with the coefficient \\eqref{coefficientofexampleInterface}, where the domain of interest is $D_1=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{11}{16},\\frac{15}{16}]$. The force function is $f(x,y) = \\cos(2\\pi x)\\sin(2\\pi y)\\cdot I_{D_2}(x,y)$, where $D_2=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{1}{16},\\frac{5}{16}]$. In Figure \\ref{fig:ExampleInterfaceeigenvalues-a} and Figure \\ref{fig:ExampleInterfaceeigenvalues-b} we show the magnitude of dominant eigenvalues and approximate accuracy. These results show that only a few data-driven basis functions are enough to approximate all solution samples well. \n\t\\begin{figure}[tbph] \n\t\t\\centering\n\t\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\t\\includegraphics[width=1.0\\linewidth]{Ex5channel_local_eigenvalues.pdf} \n\t\t\t\\caption{ Decay of eigenvalues.}\n\t\t\n\t\t\t\\label{fig:ExampleInterfaceeigenvalues-a}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\t\\includegraphics[width=1.0\\linewidth]{Ex5channel_local_acc_eigenvalues.pdf} \n\t\t\t\\caption{$1-\\sqrt{\\sum_{j=n+1}^{N}\\lambda_{j}\/\\sum_{j=1}^{N}\\lambda_{j}}$, $n=1,2,...$.} \n\t\t\n\t\t\t\\label{fig:ExampleInterfaceeigenvalues-b}\n\t\t\\end{subfigure}\n\t\t\\caption{The decay properties of the eigenvalues in the problem of Sec.\\ref{sec:ExampleInterface}.}\n\t\t\\label{fig:ExampleInterfaceeigenvalues}\n\t\\end{figure} \n\n Since the coefficient \\eqref{coefficientofexampleInterface} is parameterized by twelve random variables, constructing the mapping $\\textbf{F}:\\gvec{\\xi}(\\omega)\\mapsto \\textbf{c}(\\omega)$ using the sparse grid polynomial interpolation becomes very expensive too. Here we use the least square method combined with the $k-d$ tree algorithm for searching nearest neighbors to approximate the mapping $\\textbf{F}$. \n \n \n In our method, we first generate $N_1=5000$ data pairs $\\{(\\gvec{\\xi}^n(\\omega),\\textbf{c}^n(\\omega)\\}_{n=1}^{N_1}$ that will be used as training data. \n Then, we use $N_2=200$ samples for testing in the online stage. For each new testing data point $\\gvec{\\xi}(\\omega)=[\\xi_1(\\omega),\\cdots,\\xi_r(\\omega)]^T$ (here $r=12$), we run the $k-d$ tree algorithm to find its $n$ nearest neighbors in the training data set and apply the least square method to compute the corresponding mapped value $\\vec{c}(\\omega)=[c_1(\\omega), \\ldots, c_K(\\omega)]^T$. The complexity of constructing a $k-d$ tree is $O(N_1\\log N_1)$. 
Given the $k-d$ tree, for each testing point the complexity of finding its $n$ nearest neighbors is $O(n\\log N_1)$ \\cite{wald2006building}. Since the $n$ training data points are close to the testing data point $\\gvec{\\xi}(\\omega)$, for each training data $(\\gvec{\\xi}^m(\\omega),\\textbf{c}^m(\\omega)$, $m=1,....n$, we compute the first-order Taylor expansion of each component $c^m_j(\\omega)$ at $\\gvec{\\xi}(\\omega)$ as \n \\begin{align}\n c^m_j(\\omega)\\approx c_j(\\omega)+\\sum_{i=1}^{r=12}(\\xi^m_i-\\xi_i)\\frac{\\partial c_j}{\\partial \\xi_i}(\\omega),\\quad j=1,2,\\cdots,K, \n \\label{least-square-system}\n \\end{align}\n where $\\xi^m_i$, $i=1,...,r$, $c^m_j(\\omega)$, $j=1,...,K$ are given training data, $c_j(\\omega)$ and \n $\\frac{\\partial c_j}{\\partial \\xi_i}(\\omega)$, $j=1,...,K$ are unknowns associated with the testing data point $\\gvec{\\xi}(\\omega)$. In the $k-d$ tree algorithm, we choose $n=20$, which is slightly greater than $r+1=13$. By solving \\eqref{least-square-system} using the least square method, we get the mapped value $\\vec{c}(\\omega)=[c_1(\\omega), \\ldots, c_K(\\omega)]^T$. Finally, we use the formula \\eqref{RB_expansion} to get the numerical solution of Eq.\\eqref{randommultiscaleelliptic} with the coefficient \\eqref{coefficientofexampleInterface}. \n \n Because of the discontinuity and high-dimensional random variables in the coefficient \\eqref{coefficientofexampleInterface}, the problem \\eqref{randommultiscaleelliptic} is more challenging. The nearest neighbors based least square method provides an efficient way to construct mappings and achieves relative errors less than $3\\%$ in both $L^2$ norm and $H^1$ norm;\n see Figure \\ref{fig:ExampleInterfacelocalerrors}. Alternatively, one can use the neural network method to construct mappings for this type of challenging problems; see Section \\ref{sec:Example3}.\n \n \n \n\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{Ex5channel_local_L2err.pdf}\n\t\n\t\t\\label{fig:ExampleInterfaceerrors-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{Ex5channel_local_H1err.pdf}\n\t\n\t\t\\label{fig:ExampleInterfaceerrors-b}\n\t\\end{subfigure}\n\t\\caption{ The relative errors with increasing number of basis in the local problem of Sec.\\ref{sec:ExampleInterface} .}\n\t\\label{fig:ExampleInterfacelocalerrors}\n\\end{figure}\n\n\\subsection{An example with high-dimensional random coefficient and force function}\\label{sec:Example3}\n\\noindent\nWe solve the problem \\eqref{randommultiscaleelliptic} with an exponential type coefficient and random force function,\nwhere the total number of random variables is twenty. Specifically, the coefficient is parameterized by eighteen i.i.d. random variables, i.e.\n\\begin{align}\na(x,y,\\omega) = \\exp\\Big(\\sum_{i=1}^{18} \\sin(2\\pi \\frac{x\\sin(\\frac{i\\pi}{18}) +y\\cos(\\frac{i\\pi}{18}) }{\\epsilon_i} )\\xi_i(\\omega) \\Big),\n\\label{coefficientofexample3}\n\\end{align}\nwhere $\\epsilon_i=\\frac{1}{2i+9}$, $i=1,2,\\cdots,18$ and $\\xi_i(\\omega)$, $i=1,...,18$ are i.i.d. uniform random variables in $[-\\frac{1}{5},\\frac{1}{5}]$. 
The force function is a Gaussian density function $f(x,y) = \\frac{1}{2\\pi\\sigma^2}\\exp(-\\frac{(x-\\theta_1)^2+(y-\\theta_2)^2}{2\\sigma^2})$ with a random center $(\\theta_1,\\theta_2)$ that is a random point uniformly distributed in the subdomain $D_2=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{1}{16},\\frac{5}{16}]$ and $\\sigma=0.01$. When $\\sigma$ is small, the \nGaussian density function $f(x,y)$ can be used to approximate the Dirac-$\\delta$ function, such as modeling wells in reservoir simulations.\n\nWe first solve the local problem of \\eqref{randommultiscaleelliptic} with the coefficient \\eqref{coefficientofexample3}, where the subdomain of interest is $D_1=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{11}{16},\\frac{15}{16}]$. In Figures \\ref{fig:Example3eigenvalues-a} and \\ref{fig:Example3eigenvalues-b}, we show the magnitude of leading eigenvalues and the ratio of the accumulated sum of the eigenvalue over the total sum, respectively. We observe similar exponential decay properties of eigenvalues even if the force function contains randomness. These results show that we can still build a set of data-driven basis functions to solve problem \\eqref{randommultiscaleelliptic} with coefficient \\eqref{coefficientofexample3}.\n\n\\begin{figure}[tbph] \n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_eigenvalues.pdf} \n\t\t\\caption{ Decay of eigenvalues.}\n\t\t\\label{fig:Example3eigenvalues-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_acc_eigenvalues.pdf} \n\t\t\\caption{ $1-\\sqrt{\\sum_{j=n+1}^{N}\\lambda_{j}\/\\sum_{j=1}^{N}\\lambda_{j}}$, $n=1,2,...$.} \n\t\t\\label{fig:Example3eigenvalues-b}\n\t\\end{subfigure}\n\t\\caption{The decay properties of the eigenvalues in the problem of Sec.\\ref{sec:Example3}.}\n\t\\label{fig:Example3eigenvalues}\n\\end{figure} \n \nNotice that both the coefficient and force contain randomness here. We put the random variables $\\gvec{\\xi}(\\omega)$ in the coefficient and the random variables $\\gvec{\\theta}(\\omega)$ in the force together when we construct the mapping $\\textbf{F}$. Moreover, the dimension of randomness, 18+2=20, is too large even for sparse grids. Here we construct the mapping $\\textbf{F}:(\\gvec{\\xi}(\\omega),\\gvec{\\theta}(\\omega))\\mapsto \\textbf{c}(\\omega)$ using the neural network as depicted in Figure \\ref{fig:DNNstructure2}. The neural network has 4 hidden layers and each layer has 50 units. Naturally, the number of the input units is 20 and the number of the output units is $K$. The layer between input units and first layer of hidden units is an affine transform. So is the layer between output units and last layer of hidden units. Each two layers of hidden units are connected by an affine transform, a tanh (hyperbolic tangent) activation and a residual connection, i.e. $\\textbf{h}_{l+1}=\\tanh(\\textbf{A}_l \\textbf{h}_l+\\textbf{b}_l)+\\textbf{h}_l$, $l=1,2,3$, where $\\textbf{h}_l$ is $l$-th layer of hidden units, $\\textbf{A}_l$ is a 50-by-50 matrix and $\\textbf{b}_l$ is a 50-by-1 vector. Under the same setting of neural network, if the rectified linear unit (ReLU), which is piecewise linear, is used\nas the activation function, we observe a much bigger error. Therefore we choose the hyperbolic tangent activation function and implement the residual neural network (ResNet) here \\cite{he2016deep}. 
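\n\nA minimal sketch of this network in Python (using PyTorch) is given below; the layer sizes follow the description above, while the class and variable names are illustrative rather than the exact implementation used for our experiments.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass ResNetMap(nn.Module):\n    # maps (xi, theta) in R^20 to the expansion coefficients c in R^K\n    def __init__(self, dim_in=20, dim_hidden=50, dim_out=10, n_res=3):\n        super().__init__()\n        self.lift = nn.Linear(dim_in, dim_hidden)   # input -> first hidden layer\n        self.res = nn.ModuleList(\n            [nn.Linear(dim_hidden, dim_hidden) for _ in range(n_res)])\n        self.proj = nn.Linear(dim_hidden, dim_out)  # last hidden layer -> output\n\n    def forward(self, x):\n        h = self.lift(x)\n        for layer in self.res:\n            h = torch.tanh(layer(h)) + h            # affine + tanh + residual\n        return self.proj(h)\n\nmodel = ResNetMap(dim_out=20)   # e.g. K = 20 data-driven basis functions\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # one possible choice\n\\end{verbatim}\nThe network is then trained on the sample pairs by minimizing the mean-squared loss defined below with a standard stochastic gradient type optimizer.\n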
\n\n\\begin{figure}[tbph]\n\\tikzset{global scale\/.style={\n scale=#1,\n every node\/.append style={scale=#1}\n }\n}\n\t\\centering\n\t\\begin{tikzpicture}[global scale=0.6]\n\t\\tikzstyle{inputvariables} = [circle, very thick, fill=yellow,draw=black, minimum height=0.1cm, text width = 0.2cm]\n\t\\tikzstyle{hiddenvariables} = [circle, thick, draw =black, fill=blue,minimum height=0.1cm, text width = 0.2cm]\n\t\\tikzstyle{outputvariables} = [circle, very thick, draw=black, fill=red,minimum height=0.1cm, text width = 0.2cm]\n\t\\tikzstyle{dottedvariables} = [thick, fill=white, minimum size = 0.2cm]\n\t\\tikzstyle{textnode} = [thick, fill=white, minimum size=0.1cm ]\n\n\t\\node[inputvariables] (x1) at (0,0) {$\\xi_1$};\n\t\\node[inputvariables, below=0.4cm of x1] (x2) {$\\xi_2$};\n\t\\node[dottedvariables, below=0.1cm of x2] (x3) {$\\vdots$};\n\t\\node[inputvariables, below=0.1cm of x3] (x4) {$\\xi_{r_1}$};\n\t\\node[inputvariables, below=0.4cm of x4] (x5) {$\\theta_{1}$};\n\t\\node[inputvariables, below=0.1cm of x5] (x6) {$\\theta_{2}$};\n\t\\node[dottedvariables, below=0.1cm of x6] (x7) {$\\vdots$};\n\t\\node[inputvariables, below=0.4cm of x7] (x8) {$\\theta_{r_2}$};\n\t\\draw[-,thick,decorate, decoration={brace, raise=0.3cm}] (x4.south west)--(x1.north west);\n\t\\node[textnode, above left of=x3,left=0.2cm] {$\\gvec{\\xi}(\\omega)$};\n\t\\draw[-,thick,decorate, decoration={brace, raise=0.3cm}] (x8.south west)--(x5.north west);\n\t\\node[textnode, above left of=x7,left=0.2cm] {$\\gvec{\\theta}(\\omega)$};\n\n\t\\node[hiddenvariables] (h1) at (3,-1) {};\n\t\\node[hiddenvariables, below=0.4cm of h1] (h2) {};\n\t\\node[dottedvariables, below=0.1cm of h2] (h3) {$\\vdots$};\n\t\\node[dottedvariables, below=0.1cm of h3] (h4) {$\\vdots$};\n\t\\node[hiddenvariables, below=0.4cm of h4] (h5) {};\n\t\\node[hiddenvariables, below=0.4cm of h5] (h6) {};\n\t\n\t\\node[dottedvariables] (h31) at (5.5,-1) {$\\cdots$};\n\t\\node[dottedvariables, below=0.5cm of h31] (h32) {$\\cdots$};\n\t\\node[dottedvariables, below=0.5cm of h32] (h33) {$\\cdots$};\n\t\\node[dottedvariables, below=0.5cm of h33] (h34) {$\\dots$};\n\t\\node[dottedvariables, below=0.5cm of h34] (h35) {$\\cdots$};\n\t\\node[dottedvariables, below=0.5cm of h35] (h36) {$\\cdots$};\n\t\n\t\\node[hiddenvariables] (h21) at (8,-1) {};\n\t\\node[hiddenvariables, below=0.4cm of h21] (h22) {};\n\t\\node[dottedvariables, below=0.1cm of h22] (h23) {$\\vdots$};\n\t\\node[dottedvariables, below=0.1cm of h23] (h24) {$\\vdots$};\n\t\\node[hiddenvariables, below=0.4cm of h24] (h25) {};\n\t\\node[hiddenvariables, below=0.4cm of h25] (h26) {};\n\n\t\\node[outputvariables] (y1) at (11,0) {$c_1$};\n\t\\node[outputvariables, below=0.4cm of y1] (y2) {$c_2$};\n\t\\node[outputvariables, below=0.4cm of y2] (y3) {$c_3$};\n\t\\node[dottedvariables, below=0.2cm of y3] (y4) {$\\vdots$};\n\t\\node[dottedvariables, below=0.2cm of y4] (y5) {$\\vdots$};\n\t\\node[dottedvariables, below=0.2cm of y5] (y6) {$\\vdots$};\n\t\\node[dottedvariables, below=0.2cm of y6] (y7) {$\\vdots$};\n\t\\node[outputvariables, below=0.2cm of y7] (y8) {$c_k$};\n\t\\draw[-,thick,decorate, decoration={brace, raise=0.3cm}] (y1.north east)--(y8.south east);\n\t\\node[textnode, above right of=y5, right=0.2cm] {$\\mathbf{c}(\\omega)$};\n\n\t\\path [-] (x1) edge node {} (h1);\n\t\\path [-] (x2) edge node {} (h1);\n\t\\path [-] (x4) edge node {} (h1);\n\t\\path [-] (x5) edge node {} (h1);\n\t\\path [-] (x6) edge node {} (h1);\n\t\\path [-] (x8) edge node {} (h1);\n\t\\path [-] (x1) edge node {} 
(h2);\n\t\\path [-] (x2) edge node {} (h2);\n\t\\path [-] (x4) edge node {} (h2);\n\t\\path [-] (x5) edge node {} (h2);\n\t\\path [-] (x6) edge node {} (h2);\n\t\\path [-] (x8) edge node {} (h2);\n\t\\path [-] (x1) edge node {} (h5);\n\t\\path [-] (x2) edge node {} (h5);\n\t\\path [-] (x4) edge node {} (h5);\n\t\\path [-] (x5) edge node {} (h5);\n\t\\path [-] (x6) edge node {} (h5);\n\t\\path [-] (x8) edge node {} (h5);\n\t\\path [-] (x1) edge node {} (h6);\n\t\\path [-] (x2) edge node {} (h6);\n\t\\path [-] (x4) edge node {} (h6);\n\t\\path [-] (x5) edge node {} (h6);\n\t\\path [-] (x6) edge node {} (h6);\n\t\\path [-] (x8) edge node {} (h6);\n\n\t\\path [-] (h1) edge node {} (h31);\n\t\\path [-] (h2) edge node {} (h31);\n\t\\path [-] (h5) edge node {} (h31);\n\t\\path [-] (h6) edge node {} (h31);\n\t\\path [-] (h1) edge node {} (h32);\n\t\\path [-] (h2) edge node {} (h32);\n\t\\path [-] (h5) edge node {} (h32);\n\t\\path [-] (h6) edge node {} (h32);\n\t\\path [-] (h1) edge node {} (h33);\n\t\\path [-] (h2) edge node {} (h33);\n\t\\path [-] (h5) edge node {} (h33);\n\t\\path [-] (h6) edge node {} (h33);\n\t\\path [-] (h1) edge node {} (h34);\n\t\\path [-] (h2) edge node {} (h34);\n\t\\path [-] (h5) edge node {} (h34);\n\t\\path [-] (h6) edge node {} (h34);\n\t\\path [-] (h1) edge node {} (h35);\n\t\\path [-] (h2) edge node {} (h35);\n\t\\path [-] (h5) edge node {} (h35);\n\t\\path [-] (h6) edge node {} (h35);\n\t\\path [-] (h1) edge node {} (h36);\n\t\\path [-] (h2) edge node {} (h36);\n\t\\path [-] (h5) edge node {} (h36);\n\t\\path [-] (h6) edge node {} (h36);\n\t\n\t\\path [-] (h21) edge node {} (h31);\n\t\\path [-] (h22) edge node {} (h31);\n\t\\path [-] (h25) edge node {} (h31);\n\t\\path [-] (h26) edge node {} (h31);\n\t\\path [-] (h21) edge node {} (h32);\n\t\\path [-] (h22) edge node {} (h32);\n\t\\path [-] (h25) edge node {} (h32);\n\t\\path [-] (h26) edge node {} (h32);\n\t\\path [-] (h21) edge node {} (h33);\n\t\\path [-] (h22) edge node {} (h33);\n\t\\path [-] (h25) edge node {} (h33);\n\t\\path [-] (h26) edge node {} (h33);\n\t\\path [-] (h21) edge node {} (h34);\n\t\\path [-] (h22) edge node {} (h34);\n\t\\path [-] (h25) edge node {} (h34);\n\t\\path [-] (h26) edge node {} (h34);\n\t\\path [-] (h21) edge node {} (h35);\n\t\\path [-] (h22) edge node {} (h35);\n\t\\path [-] (h25) edge node {} (h35);\n\t\\path [-] (h26) edge node {} (h35);\n\t\\path [-] (h21) edge node {} (h36);\n\t\\path [-] (h22) edge node {} (h36);\n\t\\path [-] (h25) edge node {} (h36);\n\t\\path [-] (h26) edge node {} (h36);\n\n\t\\path [-] (h21) edge node {} (y1);\n\t\\path [-] (h22) edge node {} (y1);\n\t\\path [-] (h25) edge node {} (y1);\n\t\\path [-] (h26) edge node {} (y1);\n\t\\path [-] (h21) edge node {} (y2);\n\t\\path [-] (h22) edge node {} (y2);\n\t\\path [-] (h25) edge node {} (y2);\n\t\\path [-] (h26) edge node {} (y2);\n\t\\path [-] (h21) edge node {} (y3);\n\t\\path [-] (h22) edge node {} (y3);\n\t\\path [-] (h25) edge node {} (y3);\n\t\\path [-] (h26) edge node {} (y3);\n\t\\path [-] (h21) edge node {} (y8);\n\t\\path [-] (h22) edge node {} (y8);\n\t\\path [-] (h25) edge node {} (y8);\n\t\\path [-] (h26) edge node {} (y8);\n\t\n\t\\node[textnode, font=\\fontsize{15}{6}\\selectfont, above=1.0cm of h31] (Text1){Hidden units};\n\t\\node[textnode, font=\\fontsize{15}{6}\\selectfont, left =1.8cm of Text1] (Text2) {Input units};\n\t\\node[textnode, font=\\fontsize{15}{6}\\selectfont, right = 1.8cm of Text1] {Output units};\n\t\\end{tikzpicture}\n\t\\caption{Structure of neural network, where 
$r_1=18$ and $r_2=2$.}\n\t\\label{fig:DNNstructure2}\n\\end{figure} \nWe use $N_1=5000$ samples for network training in the offline stage and use $N_2=200$ samples for testing in the online stage. The sample data pairs for training are $\\{(\\gvec{\\xi}^n(\\omega),\\gvec{\\theta}^n(\\omega)),\\textbf{c}^n(\\omega)\\}_{n=1}^{N_1}$, where \n$\\gvec{\\xi}^n(\\omega)\\in [-\\frac{1}{5},\\frac{1}{5}]^{18}$, $\\gvec{\\theta}^n(\\omega))\\in [\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{1}{16},\\frac{5}{16}]$, and $\\textbf{c}^n(\\omega)\\in R^{K}$. We define the loss function of network training as \n\\begin{align}\nloss\\big(\\{\\textbf{c}^n\\},\\{\\textbf{\\^{c}}^n\\}\\big) = \\frac{1}{N_1}\\sum_{n=1}^{N_1}\\frac{1}{K}|\\textbf{c}^{n}-\\textbf{\\^{c}}^{n}|^2, \n\\end{align}\nwhere $\\textbf{c}^{n}$ are the training data and $\\textbf{\\^{c}}^n$ are the output of the neural network.\n\n \nFigure \\ref{fig:Example3locall2err-a} shows the value of loss function during training procedure. Figure \\ref{fig:Example3locall2err-b} shows the corresponding mean relative error of the testing samples in $L^2$ norm. \nEventually the relative error of the neural network reaches about $1.5\\times 10^{-2}$. \nFigure \\ref{fig:Example3locall2err-c} shows the corresponding mean relative error of the testing samples in $H^1$ norm. We remark that many existing methods become extremely expensive or infeasible when the problem is parameterized by high-dimensional random variables like this one. \n \n\\begin{figure}[htbp]\n\t\\centering\n\n\t\\begin{subfigure}[b]{0.32\\textwidth}\n\t\t$K=5$\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingloss_5.pdf} \\\\\n\t\t$K=10$\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingloss_10.pdf} \\\\\n\t\t$K=20$\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingloss_20.pdf} \n\t\t\\caption{ Loss.}\n\t\t\\label{fig:Example3locall2err-a}\n\t\\end{subfigure}\n\n\n\t\\begin{subfigure}[b]{0.32\\textwidth}\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingL2err_5.pdf} \\\\\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingL2err_10.pdf} \\\\\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingL2err_20.pdf} \n\t\t\\caption{ Relative $L^2$ error.} \n\t\t\\label{fig:Example3locall2err-b}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}[b]{0.32\\textwidth}\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingH1err_5.pdf}\\\\ \n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingH1err_10.pdf}\\\\\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingH1err_20.pdf}\n\t\t\\caption{ Relative $H^1$ error.} \n\t\t\\label{fig:Example3locall2err-c}\n\t\\end{subfigure}\n\t\\caption{ First column: the value of loss function during training procedure. Second column and third column: the mean relative errors of the testing set during training procedure in $L^2$ and $H^1$ norm respectively.}\n\t\\label{fig:Example3locall2err}\n\\end{figure}\n\n\n\n\\subsection{An example with unknown random coefficient and source function}\\label{sec:Example4}\n\\noindent\nHere we present an example where the models of the random coefficient and source are unknown. Only a set of sample solutions are provided as well as a few censors can be placed at certain locations for solution measurements. This kind of scenario appears often in practice. We take the least square fitting method as described in Section \\ref{sec:LS}. 
Our numerical experiment is still based on \\eqref{randommultiscaleelliptic}, which is used to generate solution samples (instead of experiments or measurements in real practice). But once the data are generated, we do not assume any knowledge of the coefficient or the source when computing a new solution. \n\nTo be specific, the coefficient takes the form\n\\begin{align}\na(x,y,\\omega) = \\exp\\Big(\\sum_{i=1}^{24} \\sin(2\\pi \\frac{x\\sin(\\frac{i\\pi}{24}) +y\\cos(\\frac{i\\pi}{24}) }{\\epsilon_i} )\\xi_i(\\omega) \\Big)\n\\label{coefficientofexample4}\n\\end{align}\nwhere $\\epsilon_i=\\frac{1+i}{100}$, $i=1,2,\\cdots,24$ and $\\xi_i(\\omega)$, $i=1,...,24$ are i.i.d. uniform random variables in $[-\\frac{1}{6},\\frac{1}{6}]$. The force function is a random function $f(x,y) = \\sin(\\pi(\\theta_1x+2\\theta_2))\\cos(\\pi(\\theta_3y+2\\theta_4))\\cdot I_{D_2}(x,y)$ with i.i.d. uniform random variables $\\theta_1,\\theta_2,\\theta_3,\\theta_4$ in $[0,2]$. We first generate $N=2000$ solution samples (using a standard FEM) $u(x_j, \\omega_i), i=1, \\ldots, N, j=1, \\ldots, J$, where $x_j$ are the points where the solution samples are measured. Then a set of $K$ data-driven basis functions $\\phi_k(x_j), j=1, \\ldots, J, k=1, \\dots, K$ is extracted from the solution samples as before. \n\nNext we determine $M$ good sensing locations from the data-driven basis so that the least square problem \\eqref{eq:LS} is not ill-conditioned. We follow the method proposed in \\cite{Kutz2017Sensor}. Define $\\Phi=[\\boldsymbol{\\phi}_1, \\ldots, \\boldsymbol{\\phi}_K]\\in R^{J\\times K}$, where $\\boldsymbol{\\phi}_k=[\\phi_k(x_1), \\ldots, \\phi_k(x_J)]^T$. If $M=K$, QR factorization with column pivoting is performed on $\\Phi^T$. If $M>K$, QR factorization with pivoting is performed on $\\Phi\\Phi^T$. The first $M$ pivoting indices provide the measurement locations. Once a new solution is measured at these $M$ selected locations, the least square problem \\eqref{eq:LS} is solved to determine the coefficients $c_1, c_2, \\ldots, c_K$, and the new solution is approximated by $u(x_j,\\omega)=\\sum_{k=1}^K c_k\\phi_k(x_j)$.\n\nIn Figure \\ref{fig:Example4localerrors} and Figure \\ref{fig:Example4globalerrors}, we show the results of the local problem and the global problem, respectively. In these numerical results, we compare the error between the reconstructed solutions and the reference solutions. We find that our proposed method works well for problem \\eqref{randommultiscaleelliptic} with a non-parametric coefficient or source as well.
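\n\nThe sensor selection and reconstruction procedure described above admits a compact implementation. A minimal sketch in Python is given below; it assumes that the basis matrix $\\Phi$ has already been extracted from the snapshots, and the function names are illustrative rather than part of the method itself.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import qr\n\ndef select_sensors(Phi, M):\n    # Phi: J-by-K matrix of data-driven basis vectors at the mesh points\n    K = Phi.shape[1]\n    A = Phi.T if M == K else Phi @ Phi.T\n    _, _, piv = qr(A, pivoting=True)   # QR factorization with column pivoting\n    return piv[:M]                     # indices of the M sensing locations\n\ndef reconstruct(Phi, sensors, measurements):\n    # least square fit of c_1,...,c_K from the measured values\n    c, *_ = np.linalg.lstsq(Phi[sensors, :], measurements, rcond=None)\n    return Phi @ c                     # reconstructed solution on the whole mesh\n\\end{verbatim}\n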
\n \n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex4_local_L2err.pdf}\n\t\t\\label{fig:Example4errors-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex4_local_H1err.pdf}\n\t\t\\label{fig:Example4errors-b}\n\t\\end{subfigure}\n\t\\caption{ The relative errors with increasing number of basis in the local problem of Sec.\\ref{sec:Example4} .}\n\t\\label{fig:Example4localerrors}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex4_global_L2err.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex4_global_H1err.pdf}\n\t\\end{subfigure}\n\t\\caption{ The relative errors with increasing number of basis in the global problem of Sec.\\ref{sec:Example4}.}\n\t\\label{fig:Example4globalerrors}\n\\end{figure}\n \n\\section{Conclusion} \\label{sec:Conclusion}\n\\noindent \nIn this paper, we propose a data-driven approach to solve elliptic PDEs with multiscale and random coefficient which arise in various applications, such as heterogeneous porous media flow problems in water aquifer and oil reservoir simulations. The key idea for our method, which is motivated by the high separable approximation of the underlying Green's function, is to extract a problem specific low dimensional structure in the solution space and construct its basis from the data. Once the data-driven basis is available, depending on different setups, we design several ways to compute a new solution efficiently.\n\nError analysis based on sampling error of the coefficients and the projection error of the data-driven basis is presented to provide some guidance in the implementation of our method. Numerical examples show that the proposed method is very efficient especially when the problem has relative high dimensional random input.\n\n \n \n\n\n\n\n\\section*{Acknowledgements}\n\\noindent\nThe research of S. Li is partially supported by the Doris Chen Postgraduate Scholarship. \nThe research of Z. Zhang is supported by the Hong Kong RGC General Research Funds (Projects 27300616, 17300817, and 17300318), National Natural Science Foundation of China (Project 11601457), Seed Funding Programme for Basic Research (HKU), and Basic Research Programme (JCYJ20180307151603959) of The Science, Technology and Innovation Commission of Shenzhen Municipality. The research of H. Zhao is partially supported by NSF grant DMS-1622490 and DMS-1821010. This research is made possible by a donation to the Big Data Project Fund, HKU, from Dr Patrick Poon whose generosity is gratefully acknowledged.\n\n\n\n\n\\bibliographystyle{siam}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe projects of the $e^+e^-$ Linear Collider (LC) -- ILC, CLIC, ... -- contain one essential element that is not present in other colliders.\nHere each $e^-$ (or $e^+$) bunch will be used only\nonce, and physical collisions retain two very dense and\nstrongly collimated beams of high energy electrons with the\nprecisely known time structure. 
We consider, for definiteness, the electron beam parameters of the ILC project \\cite{ILC}:\n \\bear{l}\n\\mbox{\\it particle energy}\\;\\; E_e=250\\div 500\\;{\\rm GeV}, \\\\\n\\mbox{\\it number of electrons per second}\\;\\; N_e\\sim 10^{14}\/{\\rm s},\\\\\n\\mbox{\\it transverse size and angular spread are negligible};\\\\\n\\mbox{\\it time structure is complex and precisely known}.\n \\eear{beampar}\n\nThe problem of dealing with this powerful beam dump is under extensive discussion; see, e.g.,~\\cite{ILC}.\n\nAbout 10 years ago we suggested utilizing such spent beams of the TESLA project to initiate operation of a subcritical fission reactor and to construct a neutrino factory ($\\nu$F)~\\cite{LCWS05,reactor}. With the new studies of the ILC and CLIC, these proposals deserve to be renewed.\\\\\n\n\\bu Neutrino factories promise to solve many problems. The existing projects (see, e.g.,~\\cite{nufact1}-\\cite{nufact3}) are very expensive, and their physical potential is limited by the attainable neutrino energy and the productivity of the neutrino source.\n\nThe proposed $\\nu$F based on the LC is much less expensive than those discussed nowadays, since there are no additional costs for the construction of a high intensity and high energy particle source. The combination of the large number of particles in the beam and the high particle energy with the precisely known time structure \\eqref{beampar} provides very favorable properties of such a $\\nu$F. The initial beam will be prepared in the LC independently of the $\\nu$F construction. The construction demands no special electronics except that of the detectors. The initial beam is very well collimated; therefore, no additional effort for beam cooling is necessary. Use of the IceCube detector in Antarctica or the Lake Baikal detector, or of a specially prepared detector not so far from the LC, as a far distance detector (FDD) allows one to study $\\nu_\\mu-\\nu_\\tau$ oscillations in detail and to observe possible oscillations $\\nu_\\mu\\to \\nu_{sterile}$ (in the latter case, via a measurement of the deficit of $\\nu_\\mu N\\to \\mu X$ events).\n\nThe neutrino beam will have a very well known discrete time structure that repeats the time structure of the LC. This fact allows one to separate cosmic and similar backgrounds during operation with high precision. The very simple structure of the neutrino generator allows one to calculate the energy spectrum and the content of the main neutrino beam with high accuracy. These predictions must be verified with high precision in a nearby detector (NBD).\n\nIn this project the incident neutrino beam will contain mainly $\\nu_\\mu$ and $\\bar{\\nu}_\\mu$ with a small admixture of $\\nu_e$ and $\\bar{\\nu}_e$ and a tiny admixture of $\\nu_\\tau$ and $\\bar{\\nu}_\\tau$ (the latter can be calculated only with low precision). For the electron beam energy of 250~GeV, the neutrino energies extend up to about 80~GeV with a mean energy of about 30~GeV, providing reliable observation of the $\\tau$ produced by $\\nu_\\tau$ from $\\nu_\\mu-\\nu_\\tau$ oscillations. In the physical program of the discussed $\\nu$F we consider here only the problem of $\\nu_\\mu-\\nu_\\tau$ and\/or $\\nu_\\mu-\\nu_{sterile}$ oscillations. The potential of this $\\nu$F for solving other problems of $\\nu$ physics should be studied after a detailed consideration of the project; see also~\\cite{nufact3}.\\vspace{-3mm}\n\n\n\\section{Elements of neutrino factory. 
Scheme}\n\n\n\nThe proposed scheme deals with the electron beam used in LC\nand contains the following parts, Fig.~1:\\\\\n\\bu Beam bending magnet (BM).\\\\\n\\bu Pion producer (PP), \\, \\\\ \\bu Neutrino\ntransformer (NT),\\\\\n \\bu Nearby detector (NBD), \\\\ \\ \\bu Far distance detector (FDD).\n\\\\\n\\begin{figure}[hbt]\n\\centering \\includegraphics[width=0.75\\textwidth, height=2.5cm]{Scem2.eps}\n\\caption{Main parts of the neutrino factory after BM.}\n \\end{figure}\n\n\n\n\\section{Beam bending magnet}\n\nThe system should start with a bending magnet situated after\nthe detector of the basic collider. It\nturns the used beam at an angle necessary to reach FDD by\nsacrificing monochromaticity but without essential growth of the angular\nspread. The vertical component of the turning angle $\\alpha_V$\nis determined by Earth curvature. Let us denote the\ndistance from LC to FDD at the Earth surface by $L_F$. To reach FDD, the initial\nbeam (and therefore NT) should be turned before PP at the\nangle $\\alpha_V =L_F\/(2R_E)$ below horizon (here\n$R_E$ is the Earth radius).\n\n\nThe horizontal component of turning angle can be minimized\nby a suitable choice of the proper LC orientation\n(orientation of an incident beam near the LC collision point).\n\n\n\\section{Pion producer (PP)}\n\nThe PP can be, for example, a 20~cm long water cylinder\n({\\it one radiation length}). Water in the cylinder can rotate for cooling.\nIn this PP almost each electron will produce a bremsstrahlung\nphoton with energy $E_\\gamma=100-200$~GeV. The angular\nspread of these photons is roughly the same as that\nof the initial beam ($\\sim 0.1$~mrad). Bremsstrahlung\nphotons have an additional angular spread of about\n$1\/\\gamma\\approx 2\\cdot 10^{-6}$. These two spreads are\nnegligible for our problem.\n\nThen these photons collide with nuclei and produce pions,\n \\be\n\\gamma N\\to N'+\\pi\\;'s,\\quad \\sigma\\approx 110\\,\\mu b.\n \\ee\nThis process gives about $10^{-3}$ $\\gamma N$ collisions\nper one electron that corresponds to about $\\sim 10^{11}$ $\\gamma N$\ncollisions per second. On average, each of these collisions\nproduces a single pion with high energy $E_\\pi>E_\\gamma\/2$ (for\nestimates $\\la E_\\pi^h\\ra=70$~GeV) and at least 2-3 pions\nwith lower energy (for estimates, $\\la E_\\pi^\\ell\\ra\\approx\n20$~GeV).\n\nMean transverse momentum of these pions is 350-500 MeV. The\nangular spread of high energy pions with the energy $\\la\nE_\\pi^h\\ra$ is within 7 mrad. Increase of the angular\nspread of pions with decrease of energy is compensated by\ngrowth of the number of produced pions. Therefore, for\nestimates we accept that the pion flux within an angular\ninterval of 7~mrad contains $\\sim 10^{11}$~pions with\n$E_\\pi=\\la E_\\pi^h\\ra$ and the same number of pions with\n$E_\\pi=\\la E_\\pi^\\ell\\ra$ per second. Let us denote the\nenergy distribution of pions flying almost forward by\n$f(E)$.\n\n$\\circ$ Certainly, more refined calculations should also consider\nproduction and decay of $K$ mesons etc.\n\n\n\\bu The production of $\\nu_\\tau$ in the reaction mentioned in ref.~\\cite{Telnov}\n \\begin{subequations}\\label{nutsorce}\n \\be\n\\gamma N\\to D_s^\\pm X\\to \\nu_\\tau \\bar{\\tau} X\\,.\n \\ee\nplays the most essential role for our estimates. 
Its cross\nsection rapidly increases with energy growth and\n \\be\n\\sigma(\\gamma N\\to \\tau+...)\\approx 2\\cdot 10^{-33}\\, {\\rm cm}^2\\;\\;\n{\\rm at} \\;\\;\nE_\\gamma\\approx 50 \\mbox{ GeV}\\,.\n \\ee\n \\end{subequations}\n\n\n\\section{Neutrino transformer (NT). Neutrino beams}\n\nFor the neutrino transformer (NT), we suggest a high vacuum\nbeam pipe of length $L_{NT}\\approx 1$~km and radius\n$r_{NT}\\approx 2$~m.\nHere muon neutrino $\\nu_\\mu$ and\n$\\bar{\\nu}_\\mu$ are created from $\\pi\\to\\mu\\nu$ decay. This\nlength $L_{NT}$ allows that more than one quarter of pions with\n$E_\\pi\\le\\la E_\\pi^h\\ra$ decay. The pipe with a radius\n$r_{NT}$ gives an angular coverage of 2 mrad, which cuts\nout 1\/12 of the total flux of low and medium energy\nneutrinos. With the growth of pion energy, two factors act\nin opposite directions. First, the initial angular\nspread of pions decreases with this growth, therefore, the fraction\nof the flux selected by the pipe will increase. Second, the\nnumber of pion decays within a relatively short pipe\ndecreases with this growth. These two tendencies compensate each\nother in the resulting flux.\n\nThe energy distribution of neutrinos obtained from the decay of a pion\nwith the energy $E$ is uniform in the interval $(aE,\\,0)$\nwith $a=1-(m_\\mu \/m_\\pi)^2$. Therefore, the\ndistribution $F(\\vep)$ of the neutrino energy $\\vep$ can be obtained from\nthe energy distribution of pions near the forward direction $f(E)$\nas\n \\be\nF(\\vep)=\\int\\limits_{\\vep\/a}^{E_e} f(E)dE\/(aE)\\,,\\qquad\na=1-\\fr{m_\\mu^2}{m_\\pi^2}\\approx 0.43\\,.\\label{spectr}\n \\ee\nThe increase of the angular spread in the decay is negligible\nin the rough approximation. Finally, at the end of NT we\nexpect to have the neutrino flux within the angle of 2~mrad\n \\bear{c}\n2\\cdot 10^{9} \\nu\/{\\rm s} \\;\\; {\\rm with}\\;\\; E_\\nu=\\la\nE_\\nu^h\\ra\\approx 30~{\\rm GeV},\\\\ \\mbox{ and }\\;\\; 2\\cdot\n10^{9} \\nu\/{\\rm s} \\;\\; {\\rm with}\\;\\; E_\\nu=\\la E_\\nu^\\ell\\ra\\approx\n9~{\\rm GeV}.\n \\eear{nucount}\nWe denote below neutrinos with $\\la E_\\nu\\ra=30$~GeV and\n$9$~GeV as {\\it high energy neutrinos} and {\\it low\nenergy neutrinos}, respectively.\n\n\n$\\circ$ Other sources of $\\nu_{\\mu}$ and $\\nu_e$ change these\nnumbers only slightly.\n\n\\bu {\\bf The background $\\pmb{\\nu_\\tau}$ beam}.\n\nThe $\\tau$ neutrino are produced in PP. Two mechanisms\nwere discussed in this respect, the Bethe-Heitler process\n$\\gamma N\\to \\tau\\bar{\\tau}+X$~\\cite{Skrinsky} and the process\n\\eqref{nutsorce} which is dominant~\\cite{Telnov}. The\ncross section \\eqref{nutsorce} is five orders smaller than\n$\\sigma(\\gamma N\\to X)$. The mean transverse momentum $\\la p_t\\ra$ of $\\nu_\\tau$ is given by $m_\\tau$, it is more than three times\nhigher than $\\la p_t\\ra$ for $\\nu_\\mu$. Along with, e.g.,\n$\\bar{\\nu}_\\tau$ produced in this process, in NT each\n$\\tau$ decays to $\\nu_\\tau$ plus other particles.\nTherefore, each reaction of such a type is a source of a\na $\\nu_\\tau+\\bar{\\nu}_\\tau$ pair. 
Finally, for the flux density\nwe have\n \\be\nN_{\\nu_\\tau}\\sim 3\\cdot 10^3\\nu_\\tau\/({\\rm s}\\cdot {\\rm mrad}^2)\\lesssim\n8\\cdot 10^{-6} N_{\\nu_\\mu}\\,.\\label{nutflux}\n \\ee\nThe $\\nu_\\tau$ (or $\\bar{\\nu}_\\tau$) energy is typically\nhigher than that of $\\nu_\\mu$ by a factor of $2\\div 2.5$.\n\nBesides, $\\nu_\\tau$ will be produced by non-decayed pions\nwithin the protecting wall behind NP in the process like\n$\\pi N\\to D_sX\\to \\tau\\nu_\\tau X$. The cross section of\nthis process increases rapidly with the energy growth and\nequals $0.13\\mu$b at $E_\\pi=200$~GeV \\cite{tel1}. A rough\nestimate shows that the number of additional $\\nu_\\tau$\npropagating in the same angular interval is close to the\nestimate \\eqref{nutflux}. In the numerical\nestimates below we consider, for definiteness, the first\ncontribution only. A measurement of $\\nu_\\tau$ flux in\nthe NBD is a necessary component for the study of $\\nu_\\mu-\\nu_\\tau$\noscillations in FDD.\n\n\n\n\n\n\\section{ Nearby detector (NBD)}\n\n\nThe main goal of the nearby detector (NBD) is to measure\nthe energy and angular distribution of neutrinos within\nthe beam as well as $N_{\\nu_e}\/N_{\\nu_\\mu}$ and\n$N_{\\nu_\\tau}\/N_{\\nu_\\mu}$.\n\nWe propose to place the NBD at the reasonable distance behind NT\nand a concrete\nwall (to eliminate pions and other particles from the initial\nbeam). For estimates, we consider the body of NBD in a form of the\nwater cylinder with a radius about 2-3~m (roughly the same as\nNT) and length $\\ell_{NBD}\\approx 100$~m. The detailed construction\nof the detector should be considered separately.\n\nFor $E_\\nu=30$ GeV, the cross section for $\\nu$ absorption\nis\n \\bear{c}\n \\sigma(\\bar{\\nu} N\\to \\mu^+h)=0.1 \\pi\\alpha^2\\fr{m_pE_{\\bar{\\nu}}}\n{M_W^4\\sin^4\\theta_W}\n \\approx\n10^{-37} {\\rm cm}^2,\\\\ \\sigma(\\nu N\\to\n\\mu^-h)=0.22 \\pi\\alpha^2\\fr{m_pE_\\nu}{M_W^4\\sin^4\\theta_W}\\approx 2\\cdot 10^{-37}\n{\\rm cm}^2.\n \\eear{}\nTaking into account these numbers, the free path length in water is\n $\\lambda_{\\bar{\\nu}}=10^{13}$~{\\rm cm} and $\\lambda_\\nu= 0.45\\cdot\n10^{13}$~{\\rm cm}. That gives\n \\bear{c}\n(1\\div 2)\\cdot 10^7\\;\\; \\mu\/{ {\\rm year}}\\;\\;\n ({\\rm with}\\;\\;\\la E_\\mu\\ra\\sim 30\\;{\\rm GeV});\\\\\n150\\div 250\\;\\; \\tau\/{ {\\rm year}}\\;\\; ({\\rm with}\\;\\;\\la\nE_\\tau\\ra\\sim 50\\;{\\rm GeV})\\,.\n \\eear{numberNBD}\n(here 1 year =$10^7$~s, that is LC working time). These numbers look sufficient for\ndetailed measurements of muon neutrino spectra and for\nverification of the calculated direct $\\nu_\\tau$ background.\n\n\n\\section{Far Distance Detector (FDD)}\n\nHere we consider how the FDD can be used for a solution of the single problem:\n$\\nu_\\mu-\\nu_\\tau$ and (or) $\\nu_\\mu-\\nu_{sterile}$ oscillations.\nOther possible applications should be considered elsewhere.\nWe discuss here two possible position of FDD -- at relatively small distance from LC -- FDD I (with special detector) and very far from LC -- FDD II (with using big detectors, constructed for another goals).\n\nFor the\nlength of oscillations we use estimate \\cite{Vysot}\n \\be\n L_{osc}\\approx\nE_\\nu\/(50~{\\rm GeV})\\cdot 10^5~{\\rm km}\\,.\\label{Losc}\n \\ee\n\n\n\\subsection{FDD I}\n\nWe discuss first the opportunity to construct special relatively compact detector with not too expensive excavation work at the distance of a few hundred kilometers from LC (for definiteness, 200 km). 
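\n\nAs a quick cross-check of the geometry, the bending-magnet relation $\\alpha_V =L_F\/(2R_E)$ introduced above, with an assumed Earth radius $R_E\\approx 6.4\\cdot 10^3$~km, gives for this distance\n \\be\n\\alpha_V\\approx\\fr{200\\;{\\rm km}}{2\\cdot 6.4\\cdot 10^3\\;{\\rm km}}\\approx 16\\;{\\rm mrad}.\n \\ee\n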
For this distance the NT should be\nturned at 16~mrad angle below horizon. This angle can be reduced by\n3~mrad (one half of angular spread of initial pion beam).\n\nWe consider the body of this FDD in the form of water channel of length 1~km with radius $R_F\\approx 40$~m. The transverse size is\nlimited by water transparency.\n\nThe fraction of neutrino's reaching this FDD is given by\nratio $k=(R_F\/L_F)^2\/[(r_{NT}\/L_{NT})^2]$. In\nour case $k\\approx 0.01$. Main effect under interest here is\n$\\nu_\\mu\\to \\nu_\\tau$ oscillation. They add\n$(L_F\/L_{osc})^2N_{\\nu_\\mu}$ to initial $N_{\\nu_\\tau}$.\n\nIn FDD of chosen sizes we expect the counting rate to be\njust 10 times lower than that in NBD \\eqref{numberNBD} for\n$\\nu N\\to \\mu X$ reactions with high energy neutrino. We\nalso expect the rate of $\\nu_\\tau N\\to \\tau X$ events to be\nanother $10^5$ times lower (that is about 10 times higher\nthan the background given by initial $\\nu_\\tau$ flux),\n \\be\n\\begin{array}{c}\n N(\\nu_\\mu N\\to \\mu X)\\approx (1\\div 2)\\cdot\n 10^6\/year,\\\\\nN(\\nu_\\tau N\\to \\tau X)\\approx (10\\div\n20)\/year\\end{array}\\;\\; in\\;\\; FDDI.\\label{FDDInumb}\n \\ee\nFor neutrino of lower energies effect increases. Indeed,\n$\\sigma(\\nu N\\to\\tau X)\\propto E_\\nu$ while $L_{osc}\\propto\nE_\\nu$. Therefore, observed number of $\\tau$ from\noscillations increases $\\propto 1\/E_\\nu$ at $E_\\nu\\ge\n10$~GeV. The additional counting rate for $\\nu_\\tau N\\to\n\\tau X$ reaction with low energy neutrino (with $\\la\nE_\\nu\\ra=9$~GeV) cannot be estimated so simply, but rough\nestimates give numbers similar to \\eqref{FDDInumb}.\n\nThese numbers look sufficient for separation of\n$\\nu_\\mu-\\nu_\\tau$ oscillations and rough measurement of\n$s_{23}$.\n\nNote that at considered FDD I size the counting rate of $\\nu_\\tau\nN\\to \\tau X$ reaction is independent on FDD distance from\nLC, $L_F$. The growth of $L_F$ improves the signal to\nbackground ratio for oscillations. The value of signal\nnaturally increases with growth of volume of\nFDD I.\n\n\n\n\\subsection{FDD II}\n\nNow we consider very attractive opportunity to use for FDD existent neutrino telescope with volume of 1 km$^3$ situated at Lake Baikal or in Antarctica (IceCube detector) with the\ndistance {\\it basic LC --- FDD II} $L_F\\approx 10^4$~km. 
This\nopportunity requires an excavation work\nfor NT and NBD at the angle about $50^\\circ$ under horizon.\n\n\nAt this distance according to \\eqref{Losc} for $\\nu$ with energy about 30~GeV, we expect\nthe conversion of $(L_F\/L_{osc})^2\\approx 1\/36$ for\n$\\nu_\\mu\\to \\nu_\\tau$ or $\\nu_\\mu\\to \\nu_{sterile}$.\n\nThe number of expected events $\\nu_\\mu\\to \\mu\nX$ with high energy neutrinos will be about 0.01 of that in\nNBD,\n \\be\n \\begin{array}{c}\n N(\\nu_\\mu N\\to \\mu X)\\approx (1\\div 2)\\cdot\n 10^5\/{\\rm year},\\\\\nN(\\nu_\\tau N\\to \\tau X)\\approx 3\\cdot 10^3\/{\\rm year}\\,.\\end{array} \\label{FDDIInumb}\n \\ee\nThe contribution of low energy neutrinos increases both\nthese counting rates.\n\nTherefore, one can hope that a few years of experimentation\nwith a reasonable $\\tau$ detection efficiency will allow one to\nmeasure $s_{23}$ with percent accuracy, and a similar period\nof observation of $\\mu$ production will allow to observe\nthe loss of $\\nu_\\mu$ due to transition of this neutrino to\n $\\nu_{sterile}$.\n\n\n\n\n\\section{Discussion }\n\nHere we suggest to construct Neutrino Factory with great physical potential using beam of Linear Collider after main collision there.\n\nAll technical details of the proposed scheme including\nsizes of all elements, construction, and materials of\ndetectors can be modified in the forthcoming simulations\nand optimization of parameters. The numbers obtained above\nrepresent the first careful estimates. In particular, rate of neutrino can be enhanced with increasing of length of PP, this rate can appear significantly higher after more accurate calculation of pion production there, the length of NT can be reduced due to economical reasons, etc.\n\nAfter first stage of Linear Collider its energy can be increased. For these stages proposed scheme can be used without changes (except magnetic field in BM and taking into account new time structure of neutrino beam).\n\nWe did not discuss here methods of $\\mu$ and $\\tau$ detection\nand their efficiency. Next, a large fraction of residual\nelectrons, photons and pions leaving the PP will reach the\nwalls of the NT pipe. The heat sink and radiation\nprotection of this pipe must be taken into account.\n\n\nA more detailed physical program of this $\\nu$F\nwill include many features of that in other projects\n(see, e.g.,~\\cite{nufact1}-\\cite{nufact3}).\n\n\\section{ Other possible applications of some parts of $\\pmb\\nu$F}\n\n\n\\bu {\\bf PP for the fixed\ntarget experiment}. The PP can be treated as an $eN\/\\gamma N$\ncollider with luminosity $3\\cdot 10^{39}$~cm$^{-2}$s$^{-1}$ with a\nc.m.s. energy of about 23~GeV. Therefore, if one adds some standard\ndetector equipment behind PP, it can be also used for a fixed target\n$eN\/\\gamma N$ experiment. Here one can\nstudy rare processes in $\\gamma N$ collisions, $B$\nphysics, etc.\n\n\n\n\\bu {\\bf Additional opportunity for using NBD.} The high rate\nof $\\nu_\\mu N\\to \\mu X$ processes\nexpected in NBD allows one to study new problems of high\nenergy physics. 
The simplest example is the opportunity to\nstudy charged and axial current induced structure functions and\ndiffraction ($\\nu N\\to \\mu +hadrons$, $\\nu\nN\\to\\mu N' \\rho^\\pm $, $\\nu N\\to\\mu N' b_1^\\pm$,...)\nwith high precision.\\\\\n\n\n\n\n\nI am thankful to N.~Budnev, S.~Eidelman, D.~Naumov, L.~Okun, V.~Saveliev,\nA.~Sessler, V.~Serbo, A.~Skrinsky, O.~Smirnov, V.~Telnov, M.~Vysotsky, M.~Zolotorev for comments.\nThe paper is supported in part by Russian grants RFBR, NSh,\nof Presidium RAS and\nPolish Ministry of Science and Higher\nEducation Grant N202 230337\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIt many situations in modeling and analysis, it is helpful to use a coordinate system other than the standard Cartesian system. While a Cartesian system has many desirable properties, it is sometimes more beneficial to have a coordinate system which is better tuned to the character of the problem. If a problem has an identifiable symmetry to it, such as cylindrical, or spherical, or otherwise, then it is often possible to simplify or even eliminate one or more dimensions of the problem. Such reductions in the complexity of problems ease analysis and accelerate the acquisition of numerical solutions. This is the precise motivation for the works in \\cite{ConEuler} and \\cite{ConMHD} which eliminate the radial dimension of supersonic flow problems governed by the Euler and Ideal Magnetohydrodynamic (MHD) equations respectively to acquire the conical versions of those equations. The Conical Euler equations:\n\n\\begin{subequations}\\label{EulerCon}\n\\begin{gather}\n\\left(\\rho V^\\beta\\right)_{|\\beta} = 0 \\label{mass} \\\\\n\\left(\\rho V^i V^\\beta + G^{i\\beta}P\\right)_{|\\beta} = 0 \\label{mom} \\\\\n\\left( \\left[\\rho E+P\\right] V^\\beta \\right)_{|\\beta} =0 \\label{energy}\n\\end{gather}\n\\end{subequations}\n\n\nand the Conical MHD Equations:\n\n\n\\begin{subequations}\\label{MHDCon}\n\\begin{gather}\n \\left(\\rho V^\\beta\\right)_{|\\beta} = 0 \\label{mass_mhd} \\\\\n \\left(\\rho V^iV^\\beta - \\frac{1}{\\mu}B^iB^\\beta + G^{i\\beta}\\left(P + \\frac{|\\pmb{B}|^2}{2\\mu}\\right)\\right)_{|\\beta} = -\\frac{1}{\\mu}B^iB^\\beta_{|\\beta} \\label{mom_mhd}\n \\\\\n \\left( \\left(\\rho E+P+\\frac{|\\pmb{B}|^2}{\\mu}\\right)V^\\beta - \\frac{1}{\\mu}(\\pmb{V}\\cdot\\pmb{B})B^\\beta \\right)_{|\\beta} = -\\frac{1}{\\mu}(\\pmb{V}\\cdot\\pmb{B})B^\\beta_{|\\beta} \\label{energy_mhd}\n \\\\\n (V^\\beta B^i-V^iB^\\beta)_{|\\beta} = -V^iB^\\beta_{|\\beta} \\label{mag_mhd}\n\\end{gather}\n\\end{subequations}\n\n\nare defined on the surface of a sphere which is two dimensional, but curved. The unique character of the systems made them incompatible with basic numerical methods. Numerical methods have been developed for the conical Euler equations subject to the assumption of irrotational flow in \\cite{Guan} and \\cite{SriFAMethod} with success. These however did not so easily extend to the more general conical equations. Furthermore there are no known efforts to solve the conical MHD equations numerically. Therefore it was fitting to develop a new method designed to handle the challenge of solving a fluid flow problem on a curved manifold.\n\nThe curvature of the surface was accounted for using tensor calculus which provides tools that can systematically transform equations between coordinate systems. 
In fact, it allows equations to be written in a general form that does not reference any particular coordinate system and can therefore be adapted to any coordinate system. As an example, in numerical simulations of gas flows past bodies, it is convenient for the coordinate system to conform to the contour of the object in the flow. With a coordinate-free formulation of the governing equations, the problem is abstractly the same regardless of the exact shape of the object around which the gas is flowing.\n\nTo use such formulations in practice, though, appropriate numerical methods must be developed which accurately accommodate the non-uniformity of the coordinate lines. A key component of ensuring a numerical method does this is deriving discrete source terms analogous to the Christoffel symbols which show up in expressions involving derivatives with respect to curved coordinate lines. It is important in numerics for source terms not only to be consistent in the limit of zero mesh spacing, but also to have a behavior which is consistent with the continuous case even at a finite mesh spacing \\cite{balbas,BalancedNT,jin_2001,kurganov2007}. Work has been done on fluid flow problems on manifolds, such as in \\cite{Man_Lev}, in which the geometric terms were consistent in the limit of zero mesh spacing but did not truly capture the tensorial nature of the problems, and thus did not perfectly capture steady state solutions. Work has also been done to develop appropriate source terms in applications such as shallow water and chemically reacting flows, which capture the correct behavior and steady solutions in addition to being consistent in value. However, so far work has not been done which both addresses a fluid flow problem on a general curved manifold \\textit{and} derives the geometric source terms in such a way as to preserve the tensorial nature of the problem and thus accurately capture steady state solutions. In this work we demonstrate how to derive such source terms for a large class of discrete differential operators and manifolds. We then develop a numerical method involving these source terms to solve the conical Euler and MHD equations on the surface of a sphere.\n\nIn section \\ref{sec:CD} we introduce the covariant derivative in a curved coordinate system. In the following section we develop a discrete analog of the covariant derivative by deriving source terms which correctly account for the curvature of the coordinate system. An example of how these can be applied to a modern central scheme is presented in section \\ref{sec:CS}. The conical Euler and MHD equations are introduced in section \\ref{sec:CF}, and a numerical method to solve them is developed in sections \\ref{sec:Mesh}, \\ref{sec:BC}, \\ref{sec:Disc}, and \\ref{sec:SP}. Numerical results produced by this method are presented and discussed in section \\ref{sec:RD}.\n\n\n\\section{Covariant Derivative}\\label{sec:CD}\n\nWe restrict ourselves to the case of a Riemannian manifold. This restriction allows us to define a real vector basis on the manifold which refers back to a Cartesian coordinate system. This vector basis is the Jacobian matrix of the coordinate transformation between the Cartesian system and the system in which the problem is formulated. While such a restriction is not universally applicable, it does apply to a wide variety of current research areas. It breaks down mainly in relativistic applications.
Furthermore, this treatment of tensor calculus is simpler and highlights the use of tools from calculus and linear algebra.\n\nIn a curved coordinate system, the basis for vectors and tensors is no longer uniform. Thus it is possible for the components of a vector to change, but for the vector to remain the same, and conversely for the vector to change, but the components to remain the same. The covariant derivative (denoted $(\\cdot)_{|i}$ for differentiation in the $i^{th}$ coordinate direction) accounts for this. If a vector does not change in a given direction, then the covariant derivative in that direction will be zero, even if the components are changing.\n\nThe covariant derivative is the foundation of different kinds of derivatives which are seen in practice such as the gradient, divergence, curl, and Laplacian. It is thus the case that if an appropriate discrete form of the covariant derivative can be derived, then expressions for a wide variety of operators will naturally follow. In order to derive a discrete form, the mathematical character of the covariant derivative must be understood.\n\n\nConsider a $d$-dimensional Euclidean space spanned by two coordinate systems, a Cartesian system ($\\tilde{X}$) with coordinates $\\tilde{x}^i$ for $i\\in \\{1,2,3,...,d\\}$, and another curved system ($X$) with coordinates $x^i$ for $i\\in \\{1,2,3,...,d\\}$. The Jacobian matrix is defined at every point, given by $\\JJ{i}{j}$ and provides the basis for vectors and tensors. Thus:\n\n\\begin{equation}\n \\forall u^j \\in X \\text{, we have } \\JJ{i}{j}u^j=\\tilde{u}^i\\in\\tilde{X} \n\\end{equation}\n\nand\n\n\\begin{equation}\n \\forall w^{jk} \\in X \\text{, we have } \\JJ{i}{j}\\JJ{h}{k}w^{jk}=\\tilde{w}^{ih}\\in\\tilde{X} \n\\end{equation}\n\nand so on. \n\nLikewise, there is at every point a dual basis, $\\JD{i}{j}$, which acts as a basis for derivatives. That is:\n\n\\begin{equation}\n \\JD{i}{j}\\frac{\\partial }{\\partial x^i} = \\frac{\\partial }{\\partial \\tilde{x}^j}\n\\end{equation}\n\n\n\nWe would like for derivatives of tensors to transform in the same manner. However in general:\n\n\\begin{equation}\n \\JD{i}{j}\\JJ{k}{l}\\frac{\\partial u^l}{\\partial x^i}\\neq\\frac{\\partial \\tilde{u}^k}{\\partial \\tilde{x}^j}\n\\end{equation}\n\nSo instead the covariant derivative must be used. Examples of covariant derivatives of tensors of various orders are given here:\n\n\\begin{equation}\n (f)_{|i} = \\frac{\\partial f}{\\partial x^i}\n\\end{equation}\n\n\\begin{equation}\n (u^j)_{|i} = \\frac{\\partial u^j}{\\partial x^i} + \\Gamma\\indices{_i^j_k}u^k\n\\end{equation}\n\n\\begin{equation}\n (w^{jk})_{|i} = \\frac{\\partial w^{jk}}{\\partial x^i} + \\Gamma\\indices{_i^j_l}w^{lk} + \\Gamma\\indices{_i^k_l}w^{jl}\n\\end{equation}\n\n\nThese satisfy the transformation relationships:\n\n\\begin{equation}\\label{eq:CDtransr1}\n \\JD{i}{l}\\JJ{m}{j}\\left[\\frac{\\partial u^j}{\\partial x^i} + \\Gamma\\indices{_i^j_k}u^k\\right]=\\frac{\\partial \\tilde{u}^m}{\\partial \\tilde{x}^l} + \\tilde{\\Gamma}\\indices{_l^m_k}\\tilde{u}^k\n\\end{equation}\n\nand\n\n\\begin{multline}\\label{eq:CDtransr2}\n \\JD{i}{p}\\JJ{m}{j}\\JJ{n}{k}\\left[\\frac{\\partial w^{jk}}{\\partial x^i} + \\Gamma\\indices{_i^j_l}w^{lk} + \\Gamma\\indices{_i^k_l}w^{jl}\\right] \\\\ =\\frac{\\partial \\tilde{w}^{mn}}{\\partial \\tilde{x}^p} + \\tilde{\\Gamma}\\indices{_p^m_l}\\tilde{w}^{ln} + \\tilde{\\Gamma}\\indices{_p^n_l}\\tilde{w}^{ml}\n\\end{multline}\n\nand so on. 
The Christoffel symbol, $\\Gamma$, is defined by the metric tensor:\n\n\\begin{equation}\\label{ChrisDef}\n \\Gamma\\indices{_k^j_i} = \\Gamma\\indices{_i^j_k} = \\FullChris{i}{j}{k}\n\\end{equation}\n\nThe metric tensor, which defines length and angle in the curved system is given by:\n\n\\begin{equation}\\label{metricDef}\n G_{ij} = \\JJ{h}{i}\\JJ{h}{j}\n\\end{equation}\n\nand its inverse by:\n\n\\begin{equation}\\label{metricInvDef}\n G^{ki} = \\JJ{k}{l}^{-1}\\JJ{i}{l}^{-1}\n\\end{equation}\n\n\n\n\n\nPlugging equations \\ref{metricDef} and \\ref{metricInvDef} into \\ref{ChrisDef} gives:\n\n\\begin{multline}\n \\Gamma\\indices{_k^j_i} = \\frac{1}{2}\\JJ{j}{n}^{-1}\\JJ{l}{n}^{-1}\\bigg[ \\frac{\\partial }{\\partial x^k}\\left(\\JJ{h}{l}\\JJ{h}{i}\\right) \\\\ + \\frac{\\partial }{\\partial x^i}\\left(\\JJ{h}{l}\\JJ{h}{k}\\right) - \\frac{\\partial }{\\partial x^l}\\left(\\JJ{h}{i}\\JJ{h}{k}\\right) \\bigg]\n\\end{multline}\n\n\\begin{multline}\n = \\frac{1}{2}\\JJ{j}{n}^{-1}\\JJ{l}{n}^{-1}\\bigg[ \\JJ{h}{i}\\frac{\\partial }{\\partial x^k}\\JJ{h}{l} + \\JJ{h}{l}\\frac{\\partial }{\\partial x^k}\\JJ{h}{i} \\\\ + \\JJ{h}{k}\\frac{\\partial }{\\partial x^i}\\JJ{h}{l} + \\JJ{h}{l}\\frac{\\partial }{\\partial x^i}\\JJ{h}{k} \\\\ - \\JJ{h}{k}\\frac{\\partial }{\\partial x^l}\\JJ{h}{i} - \\JJ{h}{i}\\frac{\\partial }{\\partial x^l}\\JJ{h}{k} \\bigg]\n\\end{multline}\n\nby switching the order of some of the derivatives, then combining and canceling terms, we get:\n\n\\begin{equation}\n = \\frac{1}{2}\\JJ{j}{n}^{-1}\\JJ{l}{n}^{-1}\\bigg[ 2\\JJ{h}{l}\\frac{\\partial }{\\partial x^k}\\JJ{h}{i} \\bigg]\n\\end{equation}\n\n\\begin{equation}\n = \\JJ{j}{n}^{-1}\\delta^h_n\\frac{\\partial }{\\partial x^k}\\JJ{h}{i}\n\\end{equation}\n\n\\begin{equation}\n = \\JJ{j}{h}^{-1}\\frac{\\partial }{\\partial x^k}\\JJ{h}{i}\n\\end{equation}\n\nA discrete formulation of the covariant derivative will have to have source terms which are consistent with this expression in the limit as mesh spacing goes to zero. As well as preserving the transformation relationships \\eqref{eq:CDtransr1} and \\eqref{eq:CDtransr2}.\n\n\n\\section{Discrete Formulation}\\label{sec:DiscCD}\n\nThe discrete representation of the $d$-dimensional manifold would be a list of points, $\\{(x^1_{,i},x^2_{,i},x^3_{,i},...,x^d_{,i})\\}^N_{i=1}$. Where $N$ is the number of points in the mesh, and the mesh index is a subscript separated with a comma from indices for tensor components and indices referring to coordinate directions. It is also assumed that there are Jacobian matrices at each point in the mesh, $\\left\\{ \\JJ{j}{k}_{,i} \\right\\}_{i=1}^N$. Discrete differential operators acting on a function defined on the mesh are denoted $D_i$ for differentiation in the $i^{th}$ coordinate direction. It is not assumed that there is another, Cartesian mesh, thus it causes no conflicts to define: $\\tilde{D}_k\\equiv \\JD{j}{k}D_j$.\n\n\nDeriving a consistent, discrete covariant derivative associated with a given discrete differential operator relies on the following theorem.\n\n\\begin{theorem}\\label{sumOfDiffs}\n Let $\\{u_i\\}_1^N$ be collection of values. 
If a linear combination of those values, $\sum_{i=1}^N \phi_iu_i$, has the property that the coefficients $\{\phi_i\}_1^N$ sum to zero, then the linear combination can be written as a linear combination of differences of pairs of values in $\{u_i\}_1^N$.
\end{theorem}

\begin{proof}
If we have:

\begin{equation}
    \sum_{i=1}^N \phi_i = 0
\end{equation}

then we have:

\begin{equation}
    \phi_1 = - \sum_{i=2}^N \phi_i
\end{equation}

Therefore:

\begin{equation}
    \sum_{i=1}^N \phi_iu_i = \phi_1u_1 + \sum_{i=2}^N \phi_iu_i
\end{equation}

\begin{equation}
    = -\left(\sum_{i=2}^N \phi_i\right)u_1 + \sum_{i=2}^N \phi_iu_i
\end{equation}

\begin{equation}
    = \sum_{i=2}^N \phi_i(u_i-u_1)
\end{equation}

which is a linear combination of differences of pairs of values in $\{u_i\}_1^N$.
\end{proof}


Many discrete differential operators have the property that the coefficients sum to zero. In fact, it is a requirement for standard finite difference approximations of derivatives. The theorem thus applies to a broad class of differential operators and allows Christoffel-like source terms to be derived for them.



We now consider a discrete differential operator which we would like to use to build a discrete covariant derivative. We assume that the discrete operator is consistent with true differentiation in the limit as mesh size goes to zero; that is,

\begin{equation}
    \lim_{\Delta x\to 0}D_if = \frac{\partial f}{\partial x^i}
\end{equation}

We also assume that this operator has coefficients which sum to zero. Using Theorem \ref{sumOfDiffs}, the operator can then be written as a weighted sum of differences. It should be pointed out that the particular set of differences given in the proof of Theorem \ref{sumOfDiffs} is not necessarily the only way to write the operator as a sum of differences; it simply proves that there will always be at least one. Generally, for each coefficient, there will be an associated ``+'' index and ``-'' index.
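A small sketch of the construction used in the proof (our own illustration; the function name is ours, index $0$ plays the role of $u_1$, and zero-weight pairs would be dropped in practice) is:

\begin{verbatim}
# Sketch (ours) of the proof of Theorem [sumOfDiffs]: any zero-sum linear
# combination can be re-expressed as a weighted sum of pairwise differences.

def as_differences(phi):
    """Return (weight, plus_index, minus_index) triples with
    sum_i phi[i]*u[i] == sum of weight*(u[plus] - u[minus])."""
    assert abs(sum(phi)) < 1e-12, "coefficients must sum to zero"
    return [(phi[i], i, 0) for i in range(1, len(phi))]

# Example: the central difference stencil (-1/2, 0, 1/2) with unit spacing
phi = [-0.5, 0.0, 0.5]
u = [3.0, 7.0, 4.0]
direct = sum(p * v for p, v in zip(phi, u))
paired = sum(w * (u[ip] - u[im]) for w, ip, im in as_differences(phi))
assert abs(direct - paired) < 1e-12
\end{verbatim}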
The difference associated with that coefficient is given by the ``+'' variable minus the ``-'' variable.

To obtain a discrete covariant derivative, we need to construct an operator $CD_i$ such that:

\begin{equation}
    \tilde{D}_j\tilde{u}^k = \JJ{k}{l}\JD{i}{j}CD_iu^l
\end{equation}

We already have by definition that:

\begin{equation}
    \tilde{D}_j\tilde{u}^k = \JD{i}{j}D_i\tilde{u}^k
\end{equation}

thus we only need:

\begin{equation}
    D_i\tilde{u}^k = \JJ{k}{l}CD_iu^l
\end{equation}

We can write the derivative of the Cartesian components in the ``$s$'' direction at mesh index ``$c$'' as:

\begin{equation}
    D_s\tilde{u}^j_{,c} = \sum_{i=1}^N \phi_i(\tilde{u}^j_{,k^+_i} - \tilde{u}^j_{,k^-_i})
\end{equation}

\begin{equation}
    = \sum_{i=1}^N \phi_i(\tilde{u}^j_{,k^+_i} - \tilde{u}^j_{,c} + \tilde{u}^j_{,c} - \tilde{u}^j_{,k^-_i})
\end{equation}

\begin{equation}
    = \sum_{i=1}^N \phi_i\left( \JJ{j}{l}_{,k^+_i}u^l_{,k^+_i} - \JJ{j}{l}_{,c}u^l_{,c} + \JJ{j}{l}_{,c}u^l_{,c} - \JJ{j}{l}_{,k^-_i}u^l_{,k^-_i} \right)
\end{equation}

\begin{equation}
    = \sum_{i=1}^N \phi_i\left( \JJ{j}{l}_{,k^+_i}u^l_{,k^+_i} - \JJ{j}{l}_{,c}u^l_{,c} \right) + \sum_{i=1}^N \phi_i\left( \JJ{j}{l}_{,c}u^l_{,c} - \JJ{j}{l}_{,k^-_i}u^l_{,k^-_i} \right)
\end{equation}

\begin{multline}
    = \sum_{i=1}^N \phi_i\left( \left(\JJ{j}{l}_{,k^+_i} - \JJ{j}{l}_{,c}\right)u^l_{,k^+_i} + \JJ{j}{l}_{,c}\left(u^l_{,k^+_i} - u^l_{,c}\right) \right) \\
    + \sum_{i=1}^N \phi_i\left( \JJ{j}{l}_{,c}\left(u^l_{,c} - u^l_{,k^-_i}\right) + \left(\JJ{j}{l}_{,c} - \JJ{j}{l}_{,k^-_i}\right)u^l_{,k^-_i} \right)
\end{multline}

\begin{multline}
    = \JJ{j}{l}_{,c}\Bigg[ \sum_{i=1}^N \phi_i\left( \left(u^l_{,k^+_i} - u^l_{,c}\right) + \JJ{l}{m}_{,c}^{-1}\left(\JJ{m}{n}_{,k^+_i} - \JJ{m}{n}_{,c}\right)u^n_{,k^+_i} \right) \\
    + \sum_{i=1}^N \phi_i\left( \left(u^l_{,c} - u^l_{,k^-_i}\right) + \JJ{l}{m}_{,c}^{-1}\left(\JJ{m}{n}_{,c} - \JJ{m}{n}_{,k^-_i}\right)u^n_{,k^-_i} \right) \Bigg]
\end{multline}

\begin{multline}
    = \JJ{j}{l}_{,c} \sum_{i=1}^N \phi_i\Bigg[ \left(u^l_{,k^+_i} - u^l_{,k^-_i}\right) + \JJ{l}{m}_{,c}^{-1}\left(\JJ{m}{n}_{,k^+_i} - \JJ{m}{n}_{,c}\right)u^n_{,k^+_i} \\ + \JJ{l}{m}_{,c}^{-1}\left(\JJ{m}{n}_{,c} - \JJ{m}{n}_{,k^-_i}\right)u^n_{,k^-_i} \Bigg]
\end{multline}

This suggests that the discrete covariant derivative corresponding to the discrete derivative is:

\begin{multline}
    CD_su^l = \sum_{i=1}^N \phi_i\Bigg[ \left(u^l_{,k^+_i} - u^l_{,k^-_i}\right) + \JJ{l}{m}_{,c}^{-1}\left(\JJ{m}{n}_{,k^+_i} - \JJ{m}{n}_{,c}\right)u^n_{,k^+_i} \\ + \JJ{l}{m}_{,c}^{-1}\left(\JJ{m}{n}_{,c} - \JJ{m}{n}_{,k^-_i}\right)u^n_{,k^-_i} \Bigg]
\end{multline}

Furthermore, it can be shown that, for a sufficiently ``nice'' solution and manifold, this expression is consistent with the true covariant derivative.
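One possible realization of this rank 1 formula in code is sketched below (ours; the function name is our choice, the stencil is assumed to have already been written as weight/plus-index/minus-index triples, and a Jacobian matrix is assumed to be stored at every mesh point, as above):

\begin{verbatim}
import numpy as np

# Sketch (ours) of the rank 1 discrete covariant derivative above.
# `pairs` is one difference form of the stencil: (weight, k_plus, k_minus)
# triples; J[k] is the Jacobian at mesh point k, u[k] the curved components
# of the vector there, and c the point at which the stencil is centered.

def discrete_cov_deriv_rank1(pairs, J, u, c):
    Jc_inv = np.linalg.inv(J[c])
    out = np.zeros_like(u[c])
    for w, kp, km in pairs:
        out += w * ((u[kp] - u[km])
                    + Jc_inv @ (J[kp] - J[c]) @ u[kp]
                    + Jc_inv @ (J[c] - J[km]) @ u[km])
    return out
\end{verbatim}

For example, a central difference in a single index direction with unit mesh spacing could be encoded as the single triple $(1/2,\, c+1,\, c-1)$. The consistency claim made above can be verified directly.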
Indeed:\n\n\\begin{multline*}\n \\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\left(u^l_{,k^+_i} - u^l_{,k^-_i}\\right) + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,c}\\right)u^n_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c} - \\JJ{m}{n}_{,k^-_i}\\right)u^n_{,k^-_i} \\Bigg] \n\\end{multline*}\n\n\n\\begin{multline}\n = \\frac{\\partial u^l}{\\partial x^s} + \\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,c}\\right)u^n_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c} - \\JJ{m}{n}_{,k^-_i}\\right)u^n_{,k^-_i} \\Bigg] \n\\end{multline}\n\nAssuming $u$ and the manifold are sufficiently bounded and smooth, then:\n\n\\begin{multline}\n = \\frac{\\partial u^l}{\\partial x^s} + \\JJ{l}{m}^{-1}u^n\\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\left(\\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,c}\\right) \\\\ + \\left(\\JJ{m}{n}_{,c} - \\JJ{m}{n}_{,k^-_i}\\right) \\Bigg] \n\\end{multline}\n\n\\begin{equation}\n = \\frac{\\partial u^l}{\\partial x^s} + \\JJ{l}{m}^{-1}u^n\\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,k^-_i} \\Bigg] \n\\end{equation}\n\n\\begin{equation}\n = \\frac{\\partial u^l}{\\partial x^s} + \\JJ{l}{m}^{-1}u^n \\frac{\\partial }{\\partial x^s}\\JJ{m}{n}\n\\end{equation}\n\n\\begin{equation}\n = \\frac{\\partial u^l}{\\partial x^s} + \\Gamma\\indices{_s^l_n}u^n \n\\end{equation}\n\nWhich is the covariant derivative of $u$.\n\nFor a rank 2 tensor, the derivation proceeds similarly.\n\n\n\\begin{equation}\n D_s\\tilde{w}^{jh}_{,c} = \\sum_{i=1}^N \\phi_i(\\tilde{w}^{jh}_{,k^+_i} - \\tilde{w}^{jh}_{,k^-_i})\n\\end{equation}\n\n\n\\begin{equation}\n = \\sum_{i=1}^N \\phi_i(\\tilde{w}^{jh}_{,k^+_i} - \\tilde{w}^{jh}_{,c} + \\tilde{w}^{jh}_{,c} - \\tilde{w}^{jh}_{,k^-_i})\n\\end{equation}\n\n\n\\begin{multline}\n = \\sum_{i=1}^N \\phi_i\\Bigg( \\JJ{j}{l}_{,k^+_i}\\JJ{h}{p}_{,k^+_i}w^{lp}_{,k^+_i} - \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}w^{lp}_{,c} \\\\ + \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}w^{lp}_{,c} - \\JJ{j}{l}_{,k^-_i}\\JJ{h}{p}_{,k^-_i}w^{lp}_{,k^-_i} \\Bigg)\n\\end{multline}\n\n\\begin{multline}\n = \\sum_{i=1}^N \\phi_i\\Bigg( \\JJ{j}{l}_{,k^+_i}\\JJ{h}{p}_{,k^+_i}w^{lp}_{,k^+_i} - \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}w^{lp}_{,c} \\Bigg) \\\\ + \\sum_{i=1}^N \\phi_i\\Bigg(\\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}w^{lp}_{,c} - \\JJ{j}{l}_{,k^-_i}\\JJ{h}{p}_{,k^-_i}w^{lp}_{,k^-_i} \\Bigg)\n\\end{multline}\n\n\n\\begin{multline}\n = \\sum_{i=1}^N \\phi_i\\Bigg( \\left(\\JJ{j}{l}_{,k^+_i}\\JJ{h}{p}_{,k^+_i} - \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}\\right)w^{lp}_{,k^+_i} \\\\ + \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}\\left(w^{lp}_{,k^+_i} - w^{lp}_{,c}\\right) \\Bigg) \\\\ + \\sum_{i=1}^N \\phi_i\\Bigg(\\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}\\left(w^{lp}_{,c} - w^{lp}_{,k^-_i}\\right) \\\\ + \\left(\\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c} - \\JJ{j}{l}_{,k^-_i}\\JJ{h}{p}_{,k^-_i}\\right)w^{lp}_{,k^-_i} \\Bigg)\n\\end{multline}\n\n\n\\begin{multline}\n = \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}\\sum_{i=1}^N \\phi_i\\Bigg[\\left(w^{lp}_{,k^+_i} - w^{lp}_{,c}\\right) + \\left(w^{lp}_{,c} - w^{lp}_{,k^-_i}\\right) \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right)w^{nr}_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right)w^{nr}_{,k^-_i} \\Bigg]\n\\end{multline}\n\n\\begin{multline}\n = \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}\\sum_{i=1}^N 
\\phi_i\\Bigg[\\left(w^{lp}_{,k^+_i} - w^{lp}_{,k^-_i}\\right) \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right)w^{nr}_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right)w^{nr}_{,k^-_i} \\Bigg]\n\\end{multline}\n\nThis suggests that the discrete covariant derivative corresponding to the discrete derivative is:\n\n\\begin{multline}\n CD_sw^{lp} = \\sum_{i=1}^N \\phi_i\\Bigg[\\left(w^{lp}_{,k^+_i} - w^{lp}_{,k^-_i}\\right) \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right)w^{nr}_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right)w^{nr}_{,k^-_i} \\Bigg]\n\\end{multline}\n\nThis too can be shown to be consistent in the limit of mesh spacing going to zero.\n\n\\begin{multline*}\n \\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[\\left(w^{lp}_{,k^+_i} - w^{lp}_{,k^-_i}\\right) \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right)w^{nr}_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right)w^{nr}_{,k^-_i} \\Bigg]\n\\end{multline*}\n\n\\begin{multline}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\\\ \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right)w^{nr}_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right)w^{nr}_{,k^-_i} \\Bigg]\n\\end{multline}\n\n\n\\begin{multline}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\JJ{l}{m}^{-1}\\JJ{p}{q}^{-1}w^{nr}\\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\\\ \\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right) \\\\ + \\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right) \\Bigg]\n\\end{multline}\n\n\\begin{multline}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\JJ{l}{m}^{-1}\\JJ{p}{q}^{-1}w^{nr}\\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\\\ \\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i} \\Bigg]\n\\end{multline}\n\n\\begin{equation}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\JJ{l}{m}^{-1}\\JJ{p}{q}^{-1}w^{nr}\\frac{\\partial}{\\partial x^s}\\Bigg( \\JJ{m}{n}\\JJ{q}{r}\\Bigg)\n\\end{equation}\n\n\\begin{equation}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\JJ{l}{m}^{-1}\\JJ{p}{q}^{-1}w^{nr}\\Bigg( \\JJ{m}{n}\\frac{\\partial}{\\partial x^s}\\JJ{q}{r} + \\JJ{q}{r}\\frac{\\partial}{\\partial x^s}\\JJ{m}{n}\\Bigg)\n\\end{equation}\n\n\\begin{equation}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + w^{nr}\\Bigg( \\delta^l_n\\JJ{p}{q}^{-1}\\frac{\\partial}{\\partial x^s}\\JJ{q}{r} + \\delta^p_r\\JJ{l}{m}^{-1}\\frac{\\partial}{\\partial x^s}\\JJ{m}{n}\\Bigg)\n\\end{equation}\n\n\n\\begin{equation}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + w^{lr} \\JJ{p}{q}^{-1}\\frac{\\partial}{\\partial x^s}\\JJ{q}{r} + w^{np}\\JJ{l}{m}^{-1}\\frac{\\partial}{\\partial x^s}\\JJ{m}{n}\n\\end{equation}\n\n\n\\begin{equation}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\Gamma\\indices{_s^p_r} w^{lr} + \\Gamma\\indices{_s^l_n} 
w^{np}\n\\end{equation}\n\nWhich is the covariant derivative of a rank 2 tensor.\n\n\nFollowing the same process we can derive the discrete covariant derivative for a rank $n$ tensor corresponding to a discrete derivative:\n\n\\begin{multline}\n CD_sw^{i_1i_2...i_n} = \\sum_{i=1}^N \\phi_i\\Bigg[\\left(w^{i_1i_2...i_n}_{,k^+_i} - w^{i_1i_2...i_n}_{,k^-_i}\\right) \\\\ + \\prod_{l=1}^n\\JJ{i_l}{j_l}_{,c}^{-1}\\left(\\prod_{l=1}^n\\JJ{j_l}{m_l}_{,k^+_i} - \\prod_{l=1}^n\\JJ{j_l}{m_l}_{,c}\\right)w^{m_1m_2...m_n}_{,k^+_i} \\\\ + \\prod_{l=1}^n\\JJ{i_l}{j_l}_{,c}^{-1}\\left(\\prod_{l=1}^n\\JJ{j_l}{m_l}_{,c} - \\prod_{l=1}^n\\JJ{j_l}{m_l}_{,k^-_i}\\right)w^{m_1m_2...m_n}_{,k^-_i} \\Bigg]\n\\end{multline}\n\nThese expressions can be used to derive the discrete analog of any operator which is based on the covariant derivative. Not only are these expressions consistent in the limit of zero mesh spacing, but they also preserve the tensorial nature of the true covariant derivative. As a consequence of the latter property, we have the additional property:\n\n\\begin{theorem}\\label{PreserveZeroThm}\n Let $D$ be a discrete differential operator with coefficients that sum to zero. And let $CD$ be the associated discrete covariant derivative. Then for any rank $n$ tensor, $w$, we have the property:\n \\begin{multline}\n D_s\\tilde{w}^{j_1j_2...j_n} = 0 \\quad\\forall j_1j_2...j_n\\in\\{1,2,...,d\\}^n \\\\ \\Leftrightarrow CD_sw^{l_1l_2...l_n} = 0\\quad\\forall l_1l_2...l_n\\in\\{1,2,...,d\\}^n\n \\end{multline}\n where $d$ is the dimensionality of the manifold.\n\\end{theorem}\n\n\\begin{proof}\n If $D_s\\tilde{w}^{j_1j_2...j_n} = 0 \\quad\\forall j_1j_2...j_n\\in\\{1,2,...,d\\}^n$, then:\n \n \\begin{multline*}\n CD_sw^{l_1l_2...l_n} = \\left(\\prod_{k=1}^n\\JJ{l_k}{j_k}^{-1}\\right)D_s\\tilde{w}^{j_1j_2...j_n} = \\left(\\prod_{k=1}^n\\JJ{l_k}{j_k}^{-1}\\right)0 \\\\ = 0 \\quad\\forall l_1l_2...l_n\\in\\{1,2,...,d\\}^n\n \\end{multline*}\n \n Likewise, if $CD_sw^{l_1l_2...l_n} = 0\\quad\\forall l_1l_2...l_n\\in\\{1,2,...,d\\}^n$, then:\n \n \\begin{multline*}\n D_s\\tilde{w}^{j_1j_2...j_n} = \\left(\\prod_{k=1}^n\\JJ{j_k}{l_k}\\right)CD_sw^{l_1l_2...l_n} = \\left(\\prod_{k=1}^n\\JJ{j_k}{l_k}\\right)0 \\\\ = 0 \\quad\\forall j_1j_2...j_n\\in\\{1,2,...,d\\}^n\n \\end{multline*}\n \n\\end{proof}\n\nThis means that a tensor field which is uniform with respect to the Cartesian basis will be treated as exactly uniform with respect to the curved basis. This is an important property in ensuring that certain steady states of fluid flow problems are appropriately captured by a numerical method.\n\nAs a final point, it is worth noting that these expressions are still linear operators which do not depend on the function they are acting on. They depend solely on the mesh and stencil chosen, so as long as those things remain the same, the operators do not have to be recomputed. In practice then, applying the covariant derivative operator is only marginally more expensive computationally than applying a standard derivative operator.\n\n\n\\section{Application to central scheme for conservation laws}\\label{sec:CS}\n\nTo illustrate how the source terms we derived can be put into practice, we consider a central scheme developed by Kurganov and Tadmor \\cite{Kurg}. Central Schemes are a type of finite volume numerical method often applied to conservation laws. These have the advantage over other finite volume methods of not relying on solutions to the Riemann problem. 
The simplicity of such methods makes them easier to implement, and faster to run. 

For a one-dimensional problem with uniform mesh spacing, the method has the semi-discrete form:

\begin{multline}\label{CartesianCS}
    \frac{d}{dt}u_{,i} = -\frac{f^+_{,i+1/2}+f^-_{,i+1/2}}{2\Delta x} + \frac{\lambda_{M,i+1/2}}{2\Delta x}[u^+_{,i+1/2}-u^-_{,i+1/2}] \\
    +\frac{f^+_{,i-1/2}+f^-_{,i-1/2}}{2\Delta x} - \frac{\lambda_{M,i-1/2}}{2\Delta x}[u^+_{,i-1/2}-u^-_{,i-1/2}]
\end{multline}

In this expression, $\{u_{,i}\}_{i=1}^N$ is the discrete representation of the quantity being conserved, $f$ is the flux function for that quantity, and $\lambda_M$ is the maximum wave speed at the specified cell boundary. The index notation $i\pm1/2$ refers to the plus and minus boundaries of the $i$th cell, and a superscript $+$ or $-$ refers to a value defined on the plus or minus side of that cell boundary. These are calculated as:

\begin{equation}
    u^{\mp}_{,i\pm1/2} = u_{,i} \pm \frac{\Delta x}{2}u_{x,i}
\end{equation}

\begin{equation}
    u^{\pm}_{,i\pm1/2} = u_{,i\pm1} \mp \frac{\Delta x}{2}u_{x,i\pm1} = u^{\pm}_{,(i\pm1)\mp1/2}
\end{equation}

\begin{equation}
    f^{\mp}_{,i\pm1/2} = f\left(u^{\mp}_{,i\pm1/2}\right)
\end{equation}

\begin{equation}
    f^{\pm}_{,i\pm1/2} = f\left(u^{\pm}_{,i\pm1/2}\right) = f\left(u^{\pm}_{,(i\pm1)\mp1/2}\right)
\end{equation}

\begin{equation}
    \lambda_{M,i\pm1/2} = \text{max}\left( \lambda\left( \frac{\partial f}{\partial u}\left(u^{\mp}_{,i\pm1/2}\right) \right), \lambda\left( \frac{\partial f}{\partial u}\left(u^{\pm}_{,i\pm1/2}\right) \right) \right)
\end{equation}

Expressions for $u_{x,i}$ are derived based on the values of $u$ in neighboring cells. In order to improve stability, numerical methods for conservation laws use TVD slope approximations which prevent spurious oscillations from occurring around shock waves. This is addressed in the next subsection.

The expression for $\frac{d}{dt}u_{,i}$ in equation \eqref{CartesianCS} can be computed according to Algorithm \ref{CartesianCSUt}. This can then be integrated in time using the ODE solver of one's choice.

\begin{algorithm}
\caption{Compute Time Derivative}\label{CartesianCSUt}
\begin{algorithmic}[1]
\Procedure{$U_t$}{$u$}
\State $\text{compute }u_{x,i}\quad\forall i$ using a TVD scheme
\State $u^{\mp}_{,i\pm1/2} \gets u_{,i} \pm \frac{\Delta x}{2}u_{x,i}$
\State $f^{\mp}_{,i\pm1/2} \gets f\left(u^{\mp}_{,i\pm1/2}\right)$
\State $\lambda_{M,i\pm1/2} \gets \text{max}\left[ \rho\left( \frac{\partial f}{\partial u}\left(u^{\mp}_{,i\pm1/2}\right) \right), \rho\left( \frac{\partial f}{\partial u}\left(u^{\pm}_{,(i\pm1)\mp1/2}\right) \right) \right]$
\State $u_{t,i} \gets $\eqref{CartesianCS}
\EndProcedure
\end{algorithmic}
\end{algorithm}


This method performs well on problems set in a Cartesian coordinate system, but it is not suited, in its current form, to be used on a curved manifold. Both the slope approximations and the time derivative formula \eqref{CartesianCS} must be modified using the discrete source terms that have been derived.


\subsection{Slope limiting}


It is a known problem that numerical methods for fluid flow problems can cause non-physical oscillations to occur near the steep gradients of shock waves.
In some cases, these oscillations can even cause the solution to destabilize and blow up. To prevent this, TVD slope approximations, or ``slope limiters,'' are used to calculate discrete derivatives. The most common slope limiter is probably the minmod limiter where minmod is defined by:\n\n\\begin{equation}\n \\text{minmod}(x,y) = \\frac{1}{2}(\\text{sign}(x)+\\text{sign}(y))\\min(|x|,|y|)\n\\end{equation}\n\nand the derivative of the solution in each mesh cell is given by:\n\n\\begin{equation}\\label{ux}\n u_{x,i} = \\text{minmod}\\left( \\frac{u_{,i}-u_{,i-1}}{\\Delta x}, \\frac{u_{,i+1}-u_{,i}}{\\Delta x}\\right)\n\\end{equation}\n\nTo apply this process to tensorial quantities, the covariant derivative must replace all derivatives. That is:\n\n\\begin{equation}\n (u)_{|x,i} = \\text{minmod}\\left( CD^B_xu_{,i}, CD^F_xu_{,i}\\right)\n\\end{equation}\n\nwhere $CD^B_x$ and $CD^F_x$ are respectively the discrete covariant derivative operators derived from the backward and forward derivative operators in equation \\eqref{ux}. By considering the tensor basis to be constant inside a mesh cell, we have the relationship:\n\n\\begin{equation}\n u_{x,i} = (u)_{|x,i}\n\\end{equation}\n\nThus we can compute the values of $u$ throughout the cell as:\n\n\\begin{equation}\n \\tilde{u}_{,i}(x) = u_{,i} + (u)_{|x,i}(x-x_{,i})\n\\end{equation}\n\nwhich provides a way to compute the values at the cell boundaries.\n\n\\subsection{Parallel transport}\n\nBefore addressing the changes to the time derivative formula \\eqref{CartesianCS}, we must first introduce a new concept to overcome an issue with finite volume methods on manifolds. Finite volume methods are based on integration rather than differentiation. The derivation for equation \\eqref{CartesianCS} presented in \\cite{Kurg} is based entirely on integration. This poses a unique challenge on a manifold with a non-uniform tensor basis. In such a setting, integrating the components of a tensor field results in a meaningless quantity. As an example, consider the integral:\n\n\\begin{equation}\n \\int_0^{2\\pi}\\int_0^1 \\cos\\theta\\hat{r} - \\sin\\theta\\hat{\\theta} \\quad rdrd\\theta\n\\end{equation}\n\nwhich is the integral of the Cartesian vector $\\hat{x}$ over the unit circle. A simple calculation will show that the integral comes out to be zero even though the true vector field is nonzero everywhere. This clearly creates a problem for a numerical method which is based on integration. In order to apply finite volume methods, tensors must be shifted to uniform bases before they can be integrated. In order to carry out these shifts without changing the tensors, we use the process of parallel transport. Parallel transport is the process of moving a tensorial quantity from one basis to another without changing its true value \\cite{Man_Lev,Lovelock}. Definition \\ref{PTdef} states this more formally.\n\n\\begin{definition}\\label{PTdef}\n Let $s$ be a curve along a manifold. A tensor, $w$ is said to be \\emph{parallel transported} along $s$ if the covariant derivative of $w$ along $s$ is identically zero. 
That is:
 
 \begin{equation}\label{PT}
     (w)_{|i}\frac{\partial x^i}{\partial s}=0
 \end{equation}
 
\end{definition}

If we have a tensor defined at a point $x^i_1$ on a manifold, and are interested in finding out what that tensor's components would be at another point $x^i_2$, we can solve equation \eqref{PT} with $s$ being a curve which connects $x^i_1$ and $x^i_2$.

A discrete analog of this process can be developed using the discrete covariant derivative. Consider two neighboring mesh cells with indices $i-1$ and $i$, and centers $x_{,i-1}$ and $x_{,i}$. Say there is a tensor defined at $x_{,i-1}$ which we would like to transport to $x_{,i}$. Using the two point difference operator, the parallel transport condition can be posed as:

\begin{equation*}
    \frac{1}{\Delta x}\left[u^l_{,i} - u^l_{,i-1} + \JJ{l}{m}_{,i}^{-1}\left(\JJ{m}{n}_{,i} - \JJ{m}{n}_{,i-1}\right)u^n_{,i-1}\right] = 0
\end{equation*}

\begin{equation}
    \Rightarrow u^l_{,i} = u^l_{,i-1} - \JJ{l}{m}_{,i}^{-1}\left(\JJ{m}{n}_{,i} - \JJ{m}{n}_{,i-1}\right)u^n_{,i-1}
\end{equation}

for a rank 1 tensor, and:

\begin{multline*}
    \frac{1}{\Delta x}\bigg[w^{lp}_{,i} - w^{lp}_{,i-1} \\ + \JJ{l}{m}_{,i}^{-1}\JJ{p}{q}_{,i}^{-1}\left(\JJ{m}{n}_{,i}\JJ{q}{r}_{,i} - \JJ{m}{n}_{,i-1}\JJ{q}{r}_{,i-1}\right)w^{nr}_{,i-1}\bigg] = 0
\end{multline*}


\begin{multline}
    \Rightarrow w^{lp}_{,i} = w^{lp}_{,i-1} \\ - \JJ{l}{m}_{,i}^{-1}\JJ{p}{q}_{,i}^{-1}\left(\JJ{m}{n}_{,i}\JJ{q}{r}_{,i} - \JJ{m}{n}_{,i-1}\JJ{q}{r}_{,i-1}\right)w^{nr}_{,i-1}
\end{multline}

for a rank 2 tensor, and so on. These expressions conveniently provide a straightforward way to compute the discretely parallel transported form of tensors in neighboring mesh cells. The notation $PT_{,i}(w^l_{,j})$ will be used to refer to a tensor which has been parallel transported from mesh cell $j$ to mesh cell $i$.

In \cite{Man_Lev}, parallel transport was used to adapt a finite volume method to a curved manifold by, in short, transporting neighboring cells to a common basis and then applying the Cartesian form of the finite volume method to the transported components. The same will be done to the present central scheme, but using the discrete parallel transport expression which preserves tensorial transformations.


\subsection{Modified central scheme}


A slightly modified process for computing the time derivative which accounts for the non-uniform basis can now be devised. First, the solution is reconstructed by calculating slopes using the minmod limiter on the backward and forward covariant derivatives. These are used to compute the values of $u$ and $f$ at the cell boundaries. These values are then parallel transported to the neighboring cells which depend on them. Once all the tensors share a basis, their components can be integrated to acquire meaningful quantities. The derivation of \eqref{CartesianCS} presented in \cite{Kurg} then proceeds identically, but applied to the transported quantities.
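A minimal sketch of the discrete transport step itself is given here (our own illustration; the function name is ours, the rank 1 case is shown, and the rank 2 case follows the same pattern using products of Jacobians):

\begin{verbatim}
import numpy as np

# Sketch (ours): discretely parallel transport the components u_from, defined
# in the basis J_from of one cell, into the basis J_to of a neighboring cell,
# following the rank 1 expression above.

def parallel_transport_rank1(u_from, J_from, J_to):
    return u_from - np.linalg.inv(J_to) @ (J_to - J_from) @ u_from
\end{verbatim}

This is precisely the operation denoted $PT_{,i}(\cdot)$ above, and it is what the modified scheme applies to the boundary values and fluxes contributed by neighboring cells.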
The resulting expression is given here:\n\n\\begin{multline}\\label{ManifoldCS}\n \\frac{d}{dt}u_{,i} = -\\frac{PT_{,i}(f^+_{,i+1\/2})+f^-_{,i+1\/2}}{2\\Delta x} + \\frac{\\lambda_{M,i+1\/2}}{2\\Delta x}[PT_{,i}(u^+_{,i+1\/2})-u^-_{,i+1\/2}] \\\\\n +\\frac{f^+_{,i-1\/2}+PT_{,i}(f^-_{,i-1\/2})}{2\\Delta x} - \\frac{\\lambda_{M,i-1\/2}}{2\\Delta x}[u^+_{,i-1\/2}-PT_{,i}(u^-_{,i-1\/2})]\n\\end{multline}\n\n\n\nThe expression is similar to \\eqref{CartesianCS}, except that $u^\\pm_{,i\\pm1\/2}$ and $f^\\pm_{,i\\pm1\/2}$ are replaced by $PT_{,i}(u^\\pm_{,i\\pm1\/2})$ and $PT_{,i}(f^\\pm_{,i\\pm1\/2})$ respectively. In addition, the maximum wave speeds, $\\lambda_M$, have to be computed based on parallel transported values. The modified procedure for computing the time derivative is given in algorithm \\ref{ManifoldCSUt}. \n\n\\begin{algorithm}\n\\caption{Compute Time Derivative - Manifold}\\label{ManifoldCSUt}\n\\begin{algorithmic}[1]\n\\Procedure{$U_t$}{$u$}\n\\State $\\text{compute }u_{|x,i} = \\text{minmod}\\left( CD^B_xu_{,i}, CD^F_xu_{,i}\\right)\\quad\\forall i$\n\\State $u^{\\mp}_{,i\\pm1\/2} \\gets u_{,i} \\pm \\frac{\\Delta x}{2}u_{|x,i}$\n\\State $f^{\\mp}_{,i\\pm1\/2} \\gets f\\left(u^{\\mp}_{,i\\pm1\/2}\\right)$\n\\State$\\lambda_{M,i\\pm1\/2} \\gets \\text{max}\\left[ \\rho\\left( \\frac{\\partial f}{\\partial u}\\left(u^{\\mp}_{,i\\pm1\/2}\\right) \\right), \\rho\\left( PT_{,i}\\left(\\frac{\\partial f}{\\partial u}\\left(u^{\\pm}_{,(i\\pm1)\\mp1\/2}\\right)\\right) \\right) \\right]$\n\\State $u_{t,i} \\gets $\\eqref{ManifoldCS}\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{remark}\n In some applications there will be the relationships $PT_{,i}(f(U_{,j}))=f(PT_{,i}(U_{,j}))$ and $PT_{,i}(\\frac{\\partial f}{\\partial U}(U_{,j}))=\\frac{\\partial f}{\\partial U}(PT_{,i}(U_{,j}))$. This would allow one to skip the step in which the flux functions and their Jacobians are parallel transported by instead using the parallel transported solution variables to compute the neighboring flux functions and wave speeds in the local basis. These relationships will not hold however, if $f$ depends on a spatial variable.\n\\end{remark}\n\nFor higher dimension domains, \\eqref{ManifoldCS} can be naturally extended. For two dimensions the expression is:\n\n\\begin{multline}\\label{2DManifoldCS}\n \\frac{d}{dt}u_{,i,j} = -\\frac{PT_{,i,j}(f^{1+}_{,i+1\/2,j})+f^{1-}_{,i+1\/2,j}}{2\\Delta x^1} + \\frac{\\lambda_{M,i+1\/2,j}}{2\\Delta x^1}[PT_{,i,j}(u^+_{,i+1\/2,j})-u^-_{,i+1\/2,j}] \\\\\n +\\frac{f^{1+}_{,i-1\/2,j}+PT_{,i,j}(f^{1-}_{,i-1\/2,j})}{2\\Delta x^1} - \\frac{\\lambda_{M,i-1\/2,j}}{2\\Delta x^1}[u^+_{,i-1\/2,j}-PT_{,i,j}(u^-_{,i-1\/2,j})] \\\\\n -\\frac{PT_{,i,j}(f^{2+}_{,i,j+1\/2})+f^{2-}_{,i,j+1\/2}}{2\\Delta x^2} + \\frac{\\lambda_{M,i,j+1\/2}}{2\\Delta x^2}[PT_{,i,j}(u^+_{,i,j+1\/2})-u^-_{,i,j+1\/2}] \\\\\n +\\frac{f^{2+}_{,i,j-1\/2}+PT_{,i,j}(f^{2-}_{,i,j-1\/2})}{2\\Delta x^2} - \\frac{\\lambda_{M,i,j-1\/2}}{2\\Delta x^2}[u^+_{,i,j-1\/2}-PT_{,i,j}(u^-_{,i,j-1\/2})]\n\\end{multline}\n\nand so on for arbitrary dimensions. All of these can be integrated in time using whichever ODE solver that one prefers. \n\n\n\n\n\\section{Conical Flow}\\label{sec:CF}\n\nThe conical Euler and MHD equations, which govern flow past an infinite cone of arbitrary cross section, are derived and analyzed in \\cite{ConEuler} and \\cite{ConMHD} respectively. 
These equations result from setting the corresponding system in a 3D Euclidean space covered by coordinates $(\\xi^1,\\xi^2,r)$, where $\\xi^\\beta$ are defined on the surface of the sphere and $r$ is the radial coordinate, and then setting the covariant derivative in the $r$ direction equal to zero. Doing so completely removes any dependence on $r$, leaving a system defined entirely on the surface of a sphere. The origin of the space (and center of the sphere) is taken to be the tip of the cone whose cross section, by definition, does not depend on $r$ either.\n\n\n\nThe conical equations, \\ref{EulerCon} and \\ref{MHDCon}, involve the contracted covariant derivative (denoted $(\\cdot)_{|\\beta}$) where the contraction is only performed over the components corresponding to the surface of the sphere ($\\beta\\in\\{1,2\\}$). These forms of the equations are slightly different than the forms presented in \\cite{ConEuler} and \\cite{ConMHD}, but they are still consistent, differing only by a factor of $\\sqrt{g}$. The forms considered for the analysis of these equations rely on the relationship $\\overset{(G)}{\\Gamma}\\indices{_i^j_j} = \\frac{1}{\\sqrt{G}}\\frac{\\partial \\sqrt{G}}{\\partial x^i}$ which will not in general be valid in the discrete case. The forms given here only rely on the tensorial transformation properties of the covariant derivative and thus can be discretized using the source terms already presented.\n\nOne option for solving the conical equations is to convert them to time dependent problems as described in \\ref{EulerCon} and \\ref{MHDCon} and apply the central scheme \\eqref{2DManifoldCS} and then march in time until a steady state is achieved. Initial tests of this procedure were conducted and it was determined that it was too time consuming and did not achieve good long term results due to waves reflecting off the surface of the cone. Instead, a finite difference\/area method was used to discretize the conical equations, and an iterative scheme based on Newton's method was used to solve the system of nonlinear equations. Solutions were achieved much faster and were void of residual transient modes. This is the method which is described throughout the following sections.\n\n\n\n\n\n\\section{Mesh}\\label{sec:Mesh}\n\nA computational domain must be created in order to solve the equations numerically. For the problem of conical flow, the domain is on the surface of a unit sphere. Because most meshing utilities assume Cartesian coordinate systems are being used, it is simplest to create a 2D mesh which represents the spherical slice of the 3D domain having been projected onto the XY plane, and then compute what the curvature would be in the solver. Spherical curvature is simple enough to compute since most all relations between Cartesian and spherical coordinate systems have analytical expressions.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.55]{NumberedMesh.png}\n \\caption{Example mesh for flow past an elliptic cone. The mesh is abstractly rectangular, with width 20 and height 5 and cell indices going from left to right, bottom to top. This mesh was created using GMSH.}\n \\label{numMesh}\n\\end{figure}\n\nFollowing this procedure, a mesh will be generated such as the one given in Figure \\ref{numMesh}. The center of the mesh is the origin $(0,0)$ and $x^2 + y^2 \\leq 1$ for all points in the mesh. Though this mesh is curved, it is abstractly rectangular, with a height and width and predictable ordering. 
The mesh cells can be identified by a single index, say $i$, and except for at the far left and far right boundaries of the mesh will have left and right neighbors $i-1$ and $i+1$ respectively, and top and bottom neighbors $i+W$ and $i-W$ respectively where $W$ is the width of the mesh. At the left boundary, the left neighbor has index $i+W-1$, and at the right boundary, the right neighbor has index $i-W+1$.\n\n\nEach computational cell has four vertices which have Cartesian coordinates $\\{(x_i,y_i)\\}^4_{i=1}$ reported by the mesh generating software. The spherical coordinates $(\\theta,\\phi)$ on the sphere ($\\theta$ is the azimuthal angle and $\\phi$ is the zenith angle) associated with each $(x,y)$ can then be computed as follows:\n\n\\begin{equation}\n\\theta=\\begin{cases}\n\t\\arctan(x\/y) & x\\geq0 \\\\ \n \\arctan(x\/y) - \\pi & x<0 \\\\ \n\\end{cases}\n\\end{equation}\n\n\\begin{equation}\n \\phi = \\arcsin(\\sqrt{x^2 + y^2})\n\\end{equation}\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.55]{MeshCoordsNumbered.png}\n \\caption{Coordinates defined by the mesh lines}\n \\label{MeshCoords}\n\\end{figure}\n\nThis gives for each cell four new sets of coordinates $\\{(\\theta_i,\\phi_i)\\}^4_{i=1}$. These coordinates will not in general be aligned with the mesh. In order to simplify the formulation of the discrete problem and some of the calculations involved, it is convenient to do one more coordinate transformation to a coordinate system whose coordinate lines are defined to be the mesh lines. These coordinates are $(\\xi^1,\\xi^2)$, with $\\xi^1$ going left to right, and $\\xi^2$ going bottom to top as shown in Figure \\ref{MeshCoords}. The exact value of each of these coordinates in each cell is not important, so it can be freely assumed that $\\xi^1,\\xi^2\\in[0,1]$ in each cell. The relationship between the spherical coordinates and the mesh coordinates can be computed as described in \\cite{SriFAMethod}. Basis functions are defined inside the cell which are given by:\n\n\\begin{subequations}\\label{CellBasis}\n\\begin{gather}\n b^1 = \\xi^1(1-\\xi^2) \\\\\n b^2 = \\xi^1\\xi^2 \\\\\n b^3 = (1-\\xi^1)\\xi^2 \\\\\n b^4 = (1-\\xi^1)(1-\\xi^2)\n\\end{gather}\n\\end{subequations}\n\n\nThere is one basis function corresponding to each corner of the cell. That function has unit value at that corner and zero at every other corner. Figure \\ref{b2plot} gives a plot of such a function. 
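A small sketch of these shape functions, and of how they might be used to evaluate the surface Jacobian at the cell center, is given below (ours; it anticipates the interpolation and Jacobian assembly described in the remainder of this section, the function names are our choice, and the corner values are assumed to be ordered so that corner $k$ is the one where $b^k=1$):

\begin{verbatim}
import numpy as np

# Sketch (ours): bilinear shape functions of equation (CellBasis) and their
# xi-derivatives, used to interpolate the corner angles inside one cell.

def shape_functions(xi1, xi2):
    return np.array([xi1 * (1 - xi2),         # b^1
                     xi1 * xi2,               # b^2
                     (1 - xi1) * xi2,         # b^3
                     (1 - xi1) * (1 - xi2)])  # b^4

def shape_gradients(xi1, xi2):
    # rows: b^1..b^4, columns: d/dxi^1, d/dxi^2
    return np.array([[ 1 - xi2,    -xi1],
                     [ xi2,         xi1],
                     [-xi2,         1 - xi1],
                     [-(1 - xi2), -(1 - xi1)]])

def cell_jacobian(theta_c, phi_c, xi1=0.5, xi2=0.5):
    # Surface Jacobian J_{theta->xi} at the cell center from the four corner
    # values theta_c, phi_c (length-4 arrays).
    dB = shape_gradients(xi1, xi2)    # 4 x 2
    return np.array([theta_c @ dB,    # [theta_xi1, theta_xi2]
                     phi_c @ dB])     # [phi_xi1,   phi_xi2]
\end{verbatim}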
Using these functions, $\\theta$ and $\\phi$ can be given as functions of $\\xi^1$ and $\\xi^2$ inside each cell:\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.5]{BasisFunction.png}\n \\caption{Plot of $b^2$ basis function}\n \\label{b2plot}\n\\end{figure}\n\n\n\\begin{equation}\n \\theta(\\xi^1,\\xi^2) = \\sum_1^4\\theta_ib^i(\\xi^1,\\xi^2)\n\\end{equation}\n\n\\begin{equation}\n \\phi(\\xi^1,\\xi^2) = \\sum_1^4\\phi_ib^i(\\xi^1,\\xi^2)\n\\end{equation}\n\nwith derivatives:\n\n\\begin{equation}\n \\theta_{\\xi^1}(\\xi^1,\\xi^2) = \\sum_1^4\\theta_ib_{\\xi^1}^i(\\xi^1,\\xi^2)\n\\end{equation}\n\n\\begin{equation}\n \\theta_{\\xi^2}(\\xi^1,\\xi^2) = \\sum_1^4\\theta_ib_{\\xi^2}^i(\\xi^1,\\xi^2)\n\\end{equation}\n\netc.\n\nThe Jacobian matrix of the transformation from spherical coordinates to mesh coordinates is given by:\n\n\\begin{equation}\n J_{\\theta\\rightarrow\\xi} = \\left[ \\begin{smallmatrix} \\theta_{\\xi^1} & \\theta_{\\xi^2} \\\\ \\phi_{\\xi^1} & \\phi_{\\xi^2} \\end{smallmatrix} \\right]\n\\end{equation}\n\nfor just the surface coordinates or by:\n\n\\begin{equation}\n J_{\\theta\\rightarrow\\xi} = \\left[ \\begin{smallmatrix} \\theta_{\\xi^1} & \\theta_{\\xi^2} & 0 \\\\ \\phi_{\\xi^1} & \\phi_{\\xi^2} & 0 \\\\ 0 & 0 & 1 \\end{smallmatrix} \\right]\n\\end{equation}\n\nif the radial coordinate is included. \n\n\\begin{remark}\n It is important to keep in mind that $\\theta=\\frac{\\pi}{2}$ is the same as $\\theta=-\\frac{3\\pi}{2}$ in the cells at the far right side of the mesh. Otherwise the Jacobian matrices could have some erroneous entries.\n\\end{remark}\n\nThe Jacobian matrix of the transformation from Cartesian coordinates to spherical coordinates (on a unit sphere) is given by:\n\n\\begin{equation}\n J_{x\\rightarrow\\theta} = \\left[ \\begin{smallmatrix} x_\\theta & x_\\phi & x_r \\\\ \ty_\\theta & y_\\phi & y_r\\\\ z_\\theta & z_\\phi & z_r \\end{smallmatrix} \\right] = \\left[ \\begin{smallmatrix} -\\sin\\theta \\sin\\phi & \\cos\\theta \\cos\\phi & \\cos\\theta \\sin\\phi\\\\ \t\\cos\\theta \\sin\\phi & \\sin\\theta \\cos\\phi & \\sin\\theta \\sin\\phi\\\\ 0 & -\\sin\\phi & \\cos\\phi \\end{smallmatrix} \\right]\n\\end{equation}\n\nThe total Jacobian matrix from Cartesian coordinates to mesh coordinates is computed easily by the matrix product:\n\n\\begin{equation}\n J_{x\\rightarrow\\xi} = J_{x\\rightarrow\\theta}J_{\\theta\\rightarrow\\xi}\n\\end{equation}\n\nA Jacobian matrix can now be computed at the center of each computational cell $(\\xi^1,\\xi^2)=(0.5,0.5)$ which serves as the basis for tensors in that cell. Having achieved this, discrete Christoffel symbols can be derived according to the process outlined above.\n\n\n\\begin{remark}\n It is not entirely necessary that $\\xi^1,\\xi^2\\in[0,1]$. These variables can be treated as having different ranges within the cells in order to have better conditioned Jacobian matrices or less change in tensor components from one mesh cell to the next. If the mesh cells become very oblong, or their sizes change dramatically over a small region of the mesh, these issues could affect the stability of the numerical method. It is important however that all the $\\xi^1$ ranges in a mesh column, or $\\xi^2$ ranges in a mesh row are the same.\n\\end{remark}\n\n\n\n\\section{Boundary conditions}\\label{sec:BC}\n\nFree stream conditions for all solution variables are established in the outermost mesh cells, that is the row of cells along the top of the abstractly rectangular mesh, as Dirichlet boundary conditions. 
Higher order stencils used for discrete derivatives had to be backward biased in cells near the outer boundary since they would otherwise require values from cells which do not exist.

At the body of the cone, which is the row of cells along the bottom of the abstractly rectangular mesh, forward biased differencing must be used to avoid relying on nonexistent cells. The velocity component in the $\xi^2$ direction is set to zero as a Dirichlet boundary condition here. This is the no penetration condition, which requires that the velocity at a wall be parallel to the wall. 

For the MHD case, the cone is assumed to be a perfect conductor. A conductor which is in steady state will have a charge arrangement such that the electric field is perpendicular to the surface. In the perfectly conducting fluid, there is the relationship from Ohm's law \cite{MHDFlowPastBodies}:

\begin{equation}
    -\pmb{E} = \pmb{V}\times\pmb{B}
\end{equation}

If we let $\pmb{n}$ be the normal at the surface of the cone then we have:

\begin{equation}
    -\pmb{n}\times\pmb{E} = \pmb{n}\times\left(\pmb{V}\times\pmb{B}\right)
\end{equation}

Since the electric field at the conducting surface is parallel to $\pmb{n}$, the left hand side vanishes, which leads to:

\begin{equation}
    0 = \pmb{V}\left(\pmb{n}\cdot\pmb{B}\right) - \pmb{B}\left(\pmb{n}\cdot\pmb{V}\right)
\end{equation}

and because of the no penetration condition, we have:

\begin{equation}
    0 = \pmb{V}\left(\pmb{n}\cdot\pmb{B}\right) \Rightarrow \pmb{n}\cdot\pmb{B} = 0
\end{equation}


which says that the magnetic field must also be parallel to the wall. All the other variables at the wall are free. 

The last thing that must be accounted for is that the mesh is periodic in the $\xi^1$ direction. The far right column of cells is also to the left of the far left column of cells.



\section{Discretization}\label{sec:Disc}

For each mesh cell, there is one variable corresponding to each unknown in the conical equations. For the conical Euler equations, that is:

\begin{equation*}
    \left\{ \left[ \begin{smallmatrix} \rho_{,i} \\ \pmb{V}_{,i} \\ e_{,i} \end{smallmatrix} \right]\right\}_{i=1}^N = \left\{ \left[ \begin{smallmatrix} \rho_{,i} \\ v^1_{,i} \\ v^2_{,i} \\ V^3_{,i} \\ e_{,i} \end{smallmatrix} \right]\right\}_{i=1}^N 
\end{equation*}

and for the conical MHD equations:

\begin{equation*}
    \left\{ \left[ \begin{smallmatrix} \rho_{,i} \\ \pmb{V}_{,i} \\ e_{,i} \\ \pmb{B}_{,i} \end{smallmatrix} \right]\right\}_{i=1}^N = \left\{ \left[ \begin{smallmatrix} \rho_{,i} \\ v^1_{,i} \\ v^2_{,i} \\ V^3_{,i} \\ e_{,i} \\ b^1_{,i} \\ b^2_{,i} \\ B^3_{,i} \end{smallmatrix} \right]\right\}_{i=1}^N 
\end{equation*}


For each variable in each cell there is an associated flux function. These are the functions on the LHS of equations \eqref{EulerCon} and \eqref{MHDCon} of which the contracted covariant derivative is being taken. For $\rho$ and $e$, the flux is a rank 1 tensor, while for $\pmb{V}$ and $\pmb{B}$ the flux is a rank 2 tensor.
These flux functions are computed in each cell using the variables and inverse metric tensor in that cell giving: \n\n\n\\begin{equation*}\n \\left\\{ \\left[ \\begin{matrix} f^j_{\\rho,i} \\\\ f^{kj}_{\\pmb{V},i} \\\\ f^j_{e,i} \\end{matrix} \\right]\\right\\}_{i=1}^N \\text{or } \\left\\{ \\left[ \\begin{matrix} f^j_{\\rho,i} \\\\ f^{kj}_{\\pmb{V},i} \\\\ f^j_{e,i} \\\\ f^{kj}_{\\pmb{B},i} \\end{matrix} \\right]\\right\\}_{i=1}^N \n\\end{equation*}\n\n\nWith these defined in every cell in the mesh, equations \\eqref{EulerCon} and \\eqref{MHDCon} can be discretized with the covariant derivatives derived earlier using any stencil that one desires. \n\nTo improve stability, a viscous-like dissipation term is added to discrete fluid dynamics equations even if it isn't present in the original equation. Since such terms often closely resemble second derivatives it is possible for discrete source terms to be added to the expression which allow them to appropriately transform between coordinate systems. When everything is put together, the result is a system of 5 or 8 times $N$ nonlinear equations:\n\n\n\\begin{equation}\\label{EulerDiscrete}\n \\left\\{ \\left[ \\begin{matrix} CD_\\beta f^\\beta_{\\rho} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{V}} \\\\ CD_\\beta f^\\beta_{e} \\end{matrix} \\right]_{,i} + Visc\\left(\\left[ \\begin{matrix} \\rho \\\\ V^k \\\\ e \\end{matrix} \\right]_{,i}\\right)= \\pmb{0}\\right\\}_{i=1}^N \n\\end{equation}\n\nor\n\n\\begin{equation}\\label{MHDDiscrete}\n \\left\\{ \\left[ \\begin{matrix} CD_\\beta f^\\beta_{\\rho} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{V}} \\\\ CD_\\beta f^\\beta_{e} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{B}} \\end{matrix} \\right]_{,i} + \\left[ \\begin{matrix} 0 \\\\ \\frac{1}{\\mu}B^k(CD_\\beta B^\\beta) \\\\ \\frac{1}{\\mu}(\\pmb{V}\\cdot\\pmb{B})(CD_\\beta B^\\beta) \\\\ V^k(CD_\\beta B^\\beta) \\end{matrix} \\right]_{,i} + Visc\\left(\\left[ \\begin{matrix} \\rho \\\\ V^k \\\\ e \\\\ B^k \\end{matrix} \\right]_{,i}\\right) = \\pmb{0} \\right\\}_{i=1}^N \n\\end{equation}\n\n\n\nIn this project a five point central stencil was used to derive the discrete differential operator. This stencil is high order and symmetric, and avoids the problem of odd-even decoupling. The coefficients for this operator are the standard finite difference coefficients given here:\n\n\\begin{equation}\n D_1f_{,i} = \\frac{1}{12}f_{,i-2} + \\frac{-2}{3}f_{,i-1} + \\frac{2}{3}f_{,i+1} + \\frac{-1}{12}f_{,i+2}\n\\end{equation}\n\nand\n\n\\begin{equation}\n D_2f_{,i} = \\frac{1}{12}f_{,i-2W} + \\frac{-2}{3}f_{,i-W} + \\frac{2}{3}f_{,i+W} + \\frac{-1}{12}f_{,i+2W}\n\\end{equation}\n\n\nWe have suppressed the $\\frac{1}{\\Delta\\xi^i}$ for clarity, and because we will generally be assuming that $\\Delta\\xi^i=1$.\n\nTo come up with a viscous operator, we consider the viscous part of equation \\eqref{ManifoldCS}. To simplify this expression, a zero order slope approximation is used, and the maximum wave speeds are replaced with a constant, tunable viscous parameter $C_{\\text{visc}}$. 
The resulting operators before accounting for curvature are: \n\n\\begin{equation}\n Visc_{1}(u_{,i}) = C_{\\text{visc}}\\left[ -u_{,i-1} + 2u_{,i} -u_{,i+1} \\right]\n\\end{equation}\n\nand\n\n\\begin{equation}\n Visc_{2}(u_{,i})= C_{\\text{visc}}\\left[ -u_{,i-W} + 2u_{,i} -u_{,i+W} \\right]\n\\end{equation}\n\nThe total viscous term would then be the sum of the operators in each direction:\n\n\\begin{equation}\n Visc(u_{,i}) = Visc_{1}(u_{,i}) + Visc_{2}(u_{,i})\n\\end{equation}\n\n\nA covariant version of this operator can be derived the same as if it were a derivative operator. We point out that this viscous term, like the viscous term in equation \\eqref{ManifoldCS}, will not be exactly like a Laplacian operator. It is more accurately an averaging operator. Furthermore, since the coefficients which define the operator sum to zero, theorem \\ref{PreserveZeroThm} applies, meaning that for any tensor whose components are uniform in a Cartesian coordinate system, the covariant operator will evaluate to zero on the components in the curved system. \n\nPutting these stencils together admits the conservation form:\n\n\\begin{multline}\\label{FullConservationForm}\n (\\Delta_1 F)_{,i} = \\left(\\frac{-1}{12}f_{,i-1} + \\frac{7}{12}f_{,i} + \\frac{7}{12}f_{,i+1} + \\frac{-1}{12}f_{,i+2}\\right) + C_{\\text{visc}}(u_{,i}-u_{,i+1})\\\\ - \\left[\\left(\\frac{-1}{12}f_{,i-2} + \\frac{7}{12}f_{,i-1} + \\frac{7}{12}f_{,i} + \\frac{-1}{12}f_{,i+1}\\right) + C_{\\text{visc}}(u_{,i-1}-u_{,i})\\right] \\\\ = F^+-F^-\n\\end{multline}\n\nand likewise for $(\\Delta_2F)_{,i}$. After inserting the source terms to account for curvature, the method will no longer be conservative in the strictest sense, but it will capture the appropriate behavior of the equations.\n\nThe left and right boundaries of the mesh are periodic, and thus there will always be enough neighboring cells to complete the stencil. At the top and bottom boundary however, some cells will be missing. In the second-from-top and second-from-bottom rows of the mesh, the $+2W$ and $-2W$ cells respectively are missing, and in the bottom row of the mesh the $-W$ and $-2W$ cells are missing. The top row of the mesh has fixed values and therefore does not depend on neighboring cells. In the bottom row of the mesh a three point forward difference stencil and a two point average are used. In the second-from-top and second-from-bottom rows a four point difference stencil with a backward and forward bias respectively are used, and the same viscous averaging operator can be used since all the necessary cells exist. These operators are given here:\n\nSecond from the top:\n\n\\begin{equation}\n D_2f_{,i} = \\frac{1}{6}f_{,i-2W} + (-1)f_{,i-W} + \\frac{1}{2}f_{,i} + \\frac{1}{3}f_{,i+W}\n\\end{equation}\n\nSecond from the bottom boundary:\n\n\\begin{equation}\n D_2f_{,i} = \\frac{-1}{3}f_{,i-W} + \\frac{-1}{2}f_{,i} + f_{,i+W} + \\frac{-1}{6}f_{,i+2W}\n\\end{equation}\n\nAt the bottom boundary:\n\n\\begin{equation}\n D_2f_{,i} = \\frac{-3}{2}f_{,i} + 2f_{,i+W} + \\frac{-1}{2}f_{,i+2W}\n\\end{equation}\n\nand\n\n\\begin{equation}\n Visc_{2}(u_{,i}) = C_{\\text{visc}}\\left[ u_{,i} -u_{,i+W} \\right]\n\\end{equation}\n\n\nIn some situations it will be possible to pick values in ghost cells outside the boundaries of the mesh such that the expressions resulting from these operators can be considered to be in the same form as \\eqref{FullConservationForm}. It is however difficult to guarantee that this will always be possible. 
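For concreteness, a sketch of how the interior stencil and these one-sided rows might be assembled into a single sparse $\xi^2$ difference operator on the abstractly rectangular mesh is given below (ours; $0$-based indices, cells ordered left to right and bottom to top as in Figure \ref{numMesh}, and the fixed top row simply left as zero rows since its values do not depend on neighbors):

\begin{verbatim}
import scipy.sparse as sp

# Sketch (ours): assemble D_2 on a W x H mesh using the interior five-point
# stencil and the biased stencils near the top and bottom rows.

def build_D2(W, H):
    N = W * H
    D = sp.lil_matrix((N, N))
    for row in range(H):
        for col in range(W):
            i = row * W + col                     # bottom-to-top ordering
            if row == 0:                          # bottom boundary (cone body)
                D[i, [i, i + W, i + 2 * W]] = [-3/2, 2.0, -1/2]
            elif row == 1:                        # second from bottom
                D[i, [i - W, i, i + W, i + 2 * W]] = [-1/3, -1/2, 1.0, -1/6]
            elif row == H - 1:                    # top row: fixed free stream
                pass
            elif row == H - 2:                    # second from top
                D[i, [i - 2 * W, i - W, i, i + W]] = [1/6, -1.0, 1/2, 1/3]
            else:                                 # interior five-point stencil
                D[i, [i - 2 * W, i - W, i + W, i + 2 * W]] = [1/12, -2/3, 2/3, -1/12]
    return D.tocsr()
\end{verbatim}

The $\xi^1$ operator would be assembled analogously, with column indices wrapped periodically at the left and right boundaries.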
Fortunately, the regions in which these operators are used are well clear of the main bow shock wave, and though body shocks occur, they will mostly follow the $\\xi^2$ coordinate lines and so will not affect differencing in the $\\xi^2$ direction \\cite{Guan,sriThesis,ShockFreeCrossFlow}. It is therefore acceptable for these operators to be non-conservative.\n\n\n\n\\subsection{Preservation of steady state}\n\nThe goal of these conical flow problems is to solve for a steady flow that satisfies the equations. The goal of the discrete equations is to capture the steady state numerically. Though we cannot describe all steady state solutions, we do know a subset of them and can verify that the discretization is capable of accurately capturing them. In the case where there are no walls or boundaries, it is known that uniform values of all variables satisfies the equations. By theorem \\ref{PreserveZeroThm}, it is easily shown that this discretization of the conical equations exactly captures these solutions - that is any set of uniform density, uniform energy, uniform velocity, and (in the case of MHD) uniform magnetic field satisfy the discrete equations to machine precision.\n\n\n\n\n\\section{Solution procedure}\\label{sec:SP}\n\nAn algorithm based on Newton's method was developed to solve the system of nonlinear equations. This method iteratively solves a linearized form of the nonlinear system of equations set equal to zero. After each iteration, the equations are closer to being solved assuming certain conditions are met, involving smoothness and closeness to the solution.\n\nLet $F$ be a vector of nonlinear functions such as the residual of the discretization of our differential equations, and let $U$ be a vector of all the variables on which $F$ depends. By a Taylor expansion, we get\n\n\\begin{equation}\\label{expansion}\n F\\approx F(U) + \\frac{\\partial F}{\\partial U}\\Delta U\n\\end{equation}\n\nwhen $\\Delta U$ is small. Since we are interested in $F=0$ we have:\n\n\\begin{equation}\\label{NewtonExpression}\n \\frac{\\partial F}{\\partial U}\\Delta U = -F(U)\n\\end{equation}\n\nwhich can be solved for $\\Delta U$. If $F(U)$ is close enough to $F=0$, then $F(U+\\Delta U)$ should be even closer. This process can be repeated until a value of $U$ is achieved such that $F$ is satisfactorily close to zero.\n\nBy applying this process to the residual of the discretized system of equations, we can iterate to a solution, starting from an initial guess. Since the discretization given is known to be satisfied by a uniform solution, it is convenient to take such as the initial guess. In particular, the whole domain is set to the free stream values. The residual is thus zero everywhere, but the boundary conditions at the wall are not satisfied. To keep the residual close to zero, the algorithm slowly increments the boundary variables towards their specified values. After each incrementation, Newton's method is employed to relax the residual back down to within a desired tolerance of zero. Thus the residual can be kept close to zero always, and the solution will be achieved once the boundary conditions are satisfied and a last round of Newton's method has relaxed the residual back to zero. \n\nA pseudocode of applying this algorithm to the conical Euler equations is given in algorithm \\ref{EulerAlgo}. For the Euler equations, the boundary conditions at the wall are that the $v^2$ component of the velocity is zero. 
Algorithm \ref{EulerAlgo} uses a linear incrementation of $v^2$, but other non-uniform increments could also be used. 


\begin{algorithm}
\caption{Solve Conical Euler Equations}\label{EulerAlgo}
\begin{algorithmic}[1]
\State $U \gets U_\infty$
\For{$\textit{inc}=1 \text{ to } \textit{numIncrements}$}
    \State $v^2_{,1:W} = \frac{\textit{numIncrements} - \textit{inc}}{\textit{numIncrements}}v^2_\infty$
    \For{$\textit{it}=1 \text{ to }\textit{maxIt}$}
        \State $\text{compute }\textit{Res}$
        \If{$|\textit{Res}|<\textit{tol}$}
            \State break
        \Else
            \State $\text{compute }\frac{\partial \textit{Res}}{\partial U}$
            \State $\text{solve }\frac{\partial \textit{Res}}{\partial U}\Delta U = -\textit{Res}$
            \State $U \gets U + \Delta U$
        \EndIf
    \EndFor
\EndFor
\end{algorithmic}
\end{algorithm}

The residual for the conical Euler equations follows from equation \eqref{EulerDiscrete}:


\begin{equation}
    Res_{,i} = \left[ \begin{matrix} CD_\beta f^\beta_{\rho} \\ CD_\beta f^{k\beta}_{\pmb{V}} \\ CD_\beta f^\beta_{e} \end{matrix} \right]_{,i} + Visc\left(\left[ \begin{matrix} \rho \\ V^k \\ e \end{matrix} \right]_{,i}\right) 
\end{equation}

Since the difference and averaging operators are all linear, the Jacobian of the residual can be computed as:

\begin{equation}
    \frac{\partial \textit{Res}}{\partial U}_{,i} = \left[ \begin{matrix} CD_\beta \frac{\partial f^\beta_{\rho}}{\partial U} \\ CD_\beta \frac{\partial f^{k\beta}_{\pmb{V}}}{\partial U} \\ CD_\beta \frac{\partial f^\beta_{e}}{\partial U} \end{matrix} \right]_{,i} + Visc\left(I\right)
\end{equation}

where $I$ is the identity matrix.

Solving the MHD equations can be done in a similar way, but with a few modifications. The first is simple: the $\xi^2$ component of the magnetic field is incremented to zero at the wall in the same way as that component of the velocity. The second modification is more significant.

Accompanying equation \eqref{MHDCon} is the requirement that the divergence of the magnetic field is identically zero. This constraint is not, however, explicitly enforced, allowing for the possibility that there exists a solution involving a magnetic field which is not divergence free. Numerical tests have demonstrated that for time dependent Ideal MHD with Powell source terms, the numerical divergence is kept small if initial data is divergence free. Since Newton's method does not march in the time-like direction, these observations unfortunately do not apply. No prior work exists on the conical Ideal MHD equations, and so a strategy to impose this constraint had to be developed from scratch. The most straightforward approach to ensure that the magnetic field remained divergence-less throughout the iteration process was to convert the linear solve portion of Newton's method into a constrained minimization problem.

To do this, first expression \eqref{NewtonExpression} is changed to:

\begin{equation}\label{NewtonAltExpression}
    \frac{\partial F}{\partial U}\Delta U = -F \Leftrightarrow \frac{\partial F}{\partial U} U_{\text{next}} = \frac{\partial F}{\partial U} U - F
\end{equation}

which is an equivalent expression, but can be solved directly for $U_{\text{next}}$. 
Instead of solving this expression exactly, we minimize:

\begin{equation}
    \frac{1}{2}\Bigg\lVert\frac{\partial F}{\partial U} U_{\text{next}} - \left(\frac{\partial F}{\partial U} U - F\right)\Bigg\rVert_2^2
\end{equation}

subject to the constraint that the magnetic field be divergence-less. This constraint can be stated mathematically as a linear equation:

\begin{equation}\label{ConstraintHomo}
    (\text{div}B) U = \pmb{0}
\end{equation}

where $(\text{div}B)$ is a matrix which applies the contracted covariant derivative operator to the magnetic field variables. With this, the ``linear solve'' step in the algorithm is replaced with the linearly constrained least squares problem:

\begin{equation}
    \min_{U_{\text{next}}}\frac{1}{2}\Bigg\lVert\frac{\partial F}{\partial U} U_{\text{next}} - \left(\frac{\partial F}{\partial U} U - F\right)\Bigg\rVert_2^2 \text{ s.t. } (\text{div}B) U_{\text{next}} = \pmb{0}
\end{equation}


Fortunately, this problem is straightforward to solve. Using the method of Lagrange multipliers, it is easily shown that the solution is achieved by solving the system:

\begin{equation}
    \left[ \begin{matrix} 2\frac{\partial F}{\partial U}^T\frac{\partial F}{\partial U} & (\text{div}B)^T \\ (\text{div}B) & 0 \end{matrix} \right]\left[ \begin{matrix} U_{\text{next}} \\ \lambda \end{matrix} \right] = \left[ \begin{matrix} 2\frac{\partial F}{\partial U}^T\left(\frac{\partial F}{\partial U} U - F\right) \\ \pmb{0} \end{matrix} \right]
\end{equation}

where $\lambda$ is a vector of Lagrange multipliers, the value of which is irrelevant. There are other ways to solve the constrained minimization problem based on the QR or SVD factorizations, but this one was satisfactory in practice. Algorithm \ref{MHDAlgo} provides a pseudocode of the modified Newton's method for the MHD equations.

\begin{algorithm}
\caption{Solve Conical MHD Equations}\label{MHDAlgo}
\begin{algorithmic}[1]
\State $U \gets U_\infty$
\For{$\textit{inc}=1 \text{ to } \textit{numIncrements}$}
    \State $v^2_{,1:W} = \frac{\textit{numIncrements} - \textit{inc}}{\textit{numIncrements}}v^2_\infty$
    \State $b^2_{,1:W} = \frac{\textit{numIncrements} - \textit{inc}}{\textit{numIncrements}}b^2_\infty$
    \For{$\textit{it}=1 \text{ to }\textit{maxIt}$}
        \State $\text{compute }\textit{Res}$
        \If{$|\textit{Res}|<\textit{tol}$}
            \State break
        \Else
            \State $\text{compute }\frac{\partial \textit{Res}}{\partial U}$
            \State $\text{solve }\min\frac{1}{2}||\frac{\partial \textit{Res}}{\partial U} U_{\text{next}} - \left(\frac{\partial \textit{Res}}{\partial U} U -\textit{Res}\right)||_2^2\text{ s.t. 
}(\\text{div}B) U_{\\text{next}}=\\pmb{0}$\n \\State $U \\gets U_{\\text{next}}$\n \\EndIf\n \\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\nSince the divergence-less constraint on the magnetic field is always satisfied, the residual for the conical MHD equations from equation \\eqref{MHDDiscrete} is given by:\n\n\\begin{equation}\n Res_{,i} = \\left[ \\begin{matrix} CD_\\beta f^\\beta_{\\rho} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{V}} \\\\ CD_\\beta f^\\beta_{e} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{B}} \\end{matrix} \\right]_{,i} + Visc\\left(\\left[ \\begin{matrix} \\rho \\\\ V^k \\\\ e \\\\ B^k \\end{matrix} \\right]_{,i}\\right)\n\\end{equation}\n\nand the Jacobian is:\n\n\\begin{equation}\n \\frac{\\partial \\textit{Res}}{\\partial U}_{,i} = \\left[ \\begin{matrix} CD_\\beta \\frac{\\partial f^\\beta_{\\rho}}{\\partial U} \\\\ CD_\\beta \\frac{\\partial f^{k\\beta}_{\\pmb{V}}}{\\partial U} \\\\ CD_\\beta \\frac{\\partial f^\\beta_{e}}{\\partial U} \\\\ CD_\\beta \\frac{\\partial f^{k\\beta}_{\\pmb{B}}}{\\partial U} \\end{matrix} \\right]_{,i} + Visc\\left(I\\right)\n\\end{equation}\n\n\\begin{remark}\n In the case of solving the conical MHD equations with the magnetic field set identically to zero, then algorithm \\ref{MHDAlgo} reduces to algorithm \\ref{EulerAlgo}.\n\\end{remark}\n\n\\begin{remark}\n Depending on how one chooses to enforce boundary conditions, it may be necessary to add them as constraints in the constrained minimization problem. For example, if the boundary variables are stored in $U$ along with all the other variables, and their values are forced by inserting the equations $IU_{,i}=U_{\\text{boundary},i}$ (where $I$ is the identity matrix and cell $i$ is a boundary cell) into the linear solves, then it is possible that the solution to the minimization problem will have values other than those desired at the boundary. Augmenting the constraint equation \\eqref{ConstraintHomo} to $(\\text{div}B) U_{\\text{next}}=Z$ which includes $IU_{,i}=U_{\\text{boundary},i}$ guarantees that the boundary values will be what they are meant to be. To solve this modified formulation, one solves the system:\n \\begin{equation}\n \\left[ \\begin{matrix} 2\\frac{\\partial F}{\\partial U}^T\\frac{\\partial F}{\\partial U} & (\\text{div}B)^T \\\\ (\\text{div}B) & 0 \\end{matrix} \\right]\\left[ \\begin{matrix} U_{\\text{next}} \\\\ \\lambda \\end{matrix} \\right] = \\left[ \\begin{matrix} 2\\frac{\\partial F}{\\partial U}^T\\left(\\frac{\\partial F}{\\partial U} U - F\\right) \\\\ Z \\end{matrix} \\right]\n \\end{equation}\n\\end{remark}\n\n\n\\begin{remark}\n A trade-off had to be made in enforcing the discrete divergence-free constraint. The flux-divergence form, or ``conservation form'' of the MHD equations is only valid if certain terms proportional to or involving the divergence of the magnetic field are equal to zero. These terms result from using vector calculus identities to manipulate Maxwell's equations. Discrete forms of these terms which are consistent with the MHD equations, and Maxwell's equations, and the vector calculus identities would not be linear expressions. To force these to be zero would require nonlinear constraint equations which are more difficult to satisfy. 
Instead of doing that, it was decided to use the simpler linear constraints \\eqref{ConstraintHomo} which are consistent in the limit of zero mesh spacing, but result in some numerical inconsistency.\n\\end{remark}\n\n\n\n\n\\section{Results and discussion}\\label{sec:RD}\n\nThe numerical method so far described, involving the discrete Christoffel symbols and the Newton's method was coded in Octave and was run on a variety of test cases. We present here some examples of solutions which it produced. Those included are designed to highlight the capabilities of the method more than to apply to any particular aerospace application.\n\n\\subsection{Gas properties}\n\nSo far, no assumptions have been made about the thermodynamic properties of the fluid being governed by the equations. We are therefore free to apply any valid pressure and temperature models without creating conflicts in the governing equations or numerical method. It was chosen however to assume the gas was perfect in the coming examples in order to avoid unnecessary complexity that might obfuscate characteristics of the method. The pressure of the gas is thus computed by the ideal gas law:\n \n \\begin{equation}\n P=(\\gamma-1)\\rho e\n \\end{equation}\n \n where $\\gamma$ is the ratio of specific heats, $c_p$ and $c_v$, at constant pressure and volume respectively (the value of $\\gamma$ is 1.4 for regular air). The specific heats are assumed constant, which results in the relationship:\n \n \\begin{equation}\n e = c_vT\n \\end{equation}\n\nwhere $T$ is the temperature of the gas. Furthermore, the gas constant $R=c_p-c_v$ can be defined.\n\n\\subsection{Non-dimensionalization}\n\nIt is generally preferable in fluid dynamics to solve non-dimensionalized versions of the governing equations. To this end, we introduce the non-dimensional variables:\n\n\\begin{equation}\n \\rho_* = \\rho\/\\rho_\\infty\n\\end{equation}\n\\begin{equation}\n V^i_* = V^i\/|\\pmb{V}_\\infty|\n\\end{equation}\n\\begin{equation}\n e_* = e\/|\\pmb{V}_\\infty|^2\n\\end{equation}\n\\begin{equation}\n B^i_* = B^i\/\\left(\\sqrt{\\rho_\\infty\\mu}|\\pmb{V}_\\infty|\\right)\n\\end{equation}\n\n\\begin{equation}\n P_* = P\/\\left(\\rho_\\infty|\\pmb{V}_\\infty|^2\\right)\n\\end{equation}\n\nwhere the subscript $\\infty$ refers to the free stream value. 
Additionally we have:\n\n\\begin{equation}\n E_* = E\/|\\pmb{V}_\\infty|^2 = e_* + \\frac{1}{2}|\\pmb{V}_*|^2\n\\end{equation}\n\nand for an ideal gas, there is the relationship:\n\n\\begin{equation}\n P_* = P(\\rho_*,e_*)\n\\end{equation}\n\nThe nondimensionalized version of equation \\eqref{EulerCon} is:\n\n\\begin{subequations}\\label{EulerConNonDim}\n\\begin{gather}\n\\left(\\rho_* V_*^\\beta\\right)_{|\\beta} = 0 \\\\\n\\left(\\rho_* V_*^i V_*^\\beta + G^{i\\beta}P_*\\right)_{|\\beta} = 0\\\\\n\\left( \\left[\\rho_* E_*+P_*\\right] V_*^\\beta \\right)_{|\\beta} =0 \n\\end{gather}\n\\end{subequations}\n\n\nAnd of equation \\eqref{MHDCon}:\n\n\\begin{subequations}\\label{MHDConNonDim}\n\\begin{gather}\n \\left(\\rho_* V_*^\\beta\\right)_{|\\beta} = 0 \\\\\n \\left(\\rho_* V_*^iV_*^\\beta - B_*^iB_*^\\beta + G^{i\\beta}\\left(P_* + \\frac{|\\pmb{B}_*|^2}{2}\\right)\\right)_{|\\beta} = -B_*^iB^\\beta_{*|\\beta} \n \\\\\n \\left( \\left(\\rho_* E_*+P_*+|\\pmb{B}_*|^2\\right)V_*^\\beta - (\\pmb{V}_*\\cdot\\pmb{B}_*)B_*^\\beta \\right)_{|\\beta} = -(\\pmb{V}_*\\cdot\\pmb{B}_*)B^\\beta_{*|\\beta} \n \\\\\n (V_*^\\beta B_*^i-V_*^iB_*^\\beta)_{|\\beta} = -V_*^iB^\\beta_{*|\\beta}\n\\end{gather}\n\\end{subequations}\n\n\n\\subsection{Free stream conditions}\n\nThe outermost row of mesh cells is used to impose the free stream conditions on the solution. These are imposed via Dirichlet conditions on the non-dimensional variables. For this project, free stream conditions were assumed to be uniform and constant. The values of the variables were set according to the desired angle of attack, angle of roll, and Mach number of the cone, and the relationship between the air stream and the magnetic field.\n\nThe definition of $\\rho_*$ requires that it always has a value of one in the free stream. Likewise, the magnitude of the vector $\\pmb{V}_*$ is always equal to one in the free stream. The direction of $\\pmb{V}_*$ is determined by the angle of attack and roll of the cone. Since the cone is assumed to be aligned with the $z$ axis, the Cartesian representation of the dimensionless free stream velocity is given by:\n\n\\begin{equation}\n \\pmb{\\tilde{V}}_{*\\infty} = \\left[ \\begin{smallmatrix} \\cos Roll & -\\sin Roll & 0 \\\\ \\sin Roll & \\cos Roll & 0 \\\\ 0 & 0 & 1 \\end{smallmatrix} \\right]\\left[ \\begin{smallmatrix} \\\\ 1 & 0 & 0 \\\\ 0 & \\cos AoA & \\sin AoA \\\\ 0 & -\\sin AoA & \\cos AoA \\end{smallmatrix} \\right]\\left[ \\begin{smallmatrix} 0 \\\\ 0 \\\\ 1 \\end{smallmatrix} \\right] = \\left[ \\begin{smallmatrix} -\\sin Roll \\sin AoA \\\\ \\cos Roll \\sin AoA \\\\ \\cos AoA \\end{smallmatrix} \\right]\n\\end{equation}\n\nThis vector is then transformed onto the local basis of the mesh. For a perfect gas, the value of $e_*$ is determined by the Mach number and gas constant by the expression:\n\n\\begin{equation}\n e_{*\\infty} = \\frac{1}{\\gamma(\\gamma-1)M_\\infty^2}\n\\end{equation}\n\nwhich comes from:\n\n\\begin{equation}\n e_{*\\infty} = \\frac{c_vT_\\infty}{|\\pmb{V}_\\infty|} = \\frac{c_vT_\\infty}{c_\\infty^2M_\\infty^2} = \\frac{c_vT_\\infty}{\\gamma RT_\\infty M_\\infty^2} = \\frac{c_v}{\\gamma (c_p-c_v)M_\\infty^2} = \\frac{1}{\\gamma(\\gamma-1)M_\\infty^2}\n\\end{equation}\n\n\nThe direction of $\\pmb{B}_{*\\infty}$ can be set somewhat arbitrarily. The magnitude however, should be small enough that the magnitude of the free stream velocity remains greater than the fastest magneto acoustic speed. 
Otherwise, information would be able to easily propagate upstream which would invalidate the conical assumption. The fast magneto acoustic speed is given in non-dimensional form by:\n\n\\begin{equation}\n c_{f*}^2 = \\frac{1}{2}\\left[ \\left(\\frac{1}{M^2} + |\\pmb{B}_{*}|^2\\right) + \\sqrt{\\left(\\frac{1}{M^2} + |\\pmb{B}_{*}|^2\\right)^2 - 4\\frac{1}{M^2}\\left(\\pmb{B}_{*}\\cdot\\pmb{w}\\right)^2} \\right]\n\\end{equation}\n\n\nwhere $\\pmb{w}$ is a unit vector which specifies the direction of the propagating wave. This speed should be less than the magnitude of the non-dimensional free stream velocity which is equal to one. A sufficient condition to ensure this is:\n\n\\begin{equation}\n |\\pmb{B}_{*\\infty}|^2 < 1-\\frac{1}{M_{\\infty}^2}\n\\end{equation}\n\nConsequently, the additional constraints are imposed that the magnitude of the non-dimensional magnetic field must be less than that of the non-dimensional velocity, and that the free stream Mach number must be greater than one.\n\n\n\n\\subsection{Right circular cone validation}\n\nTo demonstrate the reliability of the numerical method, a series of solutions were computed for circular cones at zero angle of attack. This scenario has been thoroughly studied and properties of the solutions can be checked against tables provided by NASA \\cite{NASA_Tables}. \n\nCones with half angles 5, 10, and 15 degrees were modeled at speeds of Mach 1.5, 2, 3, 4, and 5. The 10 degree mesh had 80 elements in the $\\xi^1$ direction whereas the 5 and 15 degree meshes only had 60. The setting of the problem is uniform in the $\\xi^1$ direction so resolution in this direction was not too important. All the meshes had 100 elements in the $\\xi^2$ direction. This meant that up to 40,000 variables were solved for in these experiments. Good convergence was achieved, with the $L_2$ norm of the residual being less than $10^{-9}$.\n\nThe shock wave angle, the surface to free stream density and pressure ratios and the surface Mach number were all computed based on the solutions and compared to NASA values. The results of this comparison are presented in tables \\ref{ShockAngle}, \\ref{DensityRatio}, \\ref{PressureRatio}, and \\ref{SurfaceMach}, and an example of a full solution is shown in Figure \\ref{fig:ValidationExample}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.2]{10deg_AoA0_M2_ShockAngle.png}\n \\caption{Example solution from validation testing. This is a 10 degree half angle cone at zero angle of attack and Mach 2. Pressure field is shown along with the distance in the XY plane to the shock wave. The angle of the shock wave is $\\theta_s=\\arcsin .52 = .547$. 
This image was rendered in ParaView.}\n \\label{fig:ValidationExample}\n\\end{figure}\n\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c||c|c|c|}\n\\hline\n Solver & Half Angle = 5 & 10 & 15\\\\\n \\hline\\hline\n $M_\\infty$ = 1.5 & 0.734 & 0.744 & 0.789 \\\\ \n\\hline\n2 & 0.524 & 0.547 & 0.600 \\\\ \n\\hline\n3 & 0.347 & 0.379 & 0.444 \\\\ \n\\hline\n4 & 0.268 & 0.309 & 0.384 \\\\ \n\\hline\n5 & n\/a & 0.273 & n\/a \\\\\n \\hline\n \\hline\\hline\n NASA & Half Angle = 5 & 10 & 15 \\\\\n \\hline\\hline\n$M_\\infty$ = 1.5 & 0.731 & 0.745 & 0.786 \\\\ \n\\hline\n2 & 0.525 & 0.545 & 0.592 \\\\ \n\\hline\n3 & 0.344 & 0.379 & 0.441 \\\\ \n\\hline\n4 & 0.261 & 0.309 & 0.380 \\\\ \n\\hline\n5 & & 0.272 & \\\\ \n \\hline\n \\hline\\hline\nAbsolute \\% error & Half Angle = 5 & 10 & 15 \\\\\n\\hline\\hline\n$M_\\infty$ = 1.5 & 0.469 & 0.132 & 0.455 \\\\ \n\\hline\n2 & 0.314 & 0.432 & 1.438 \\\\ \n\\hline\n3 & 0.819 & 0.002 & 0.826 \\\\ \n\\hline\n4 & 2.744 & 0.059 & 1.072 \\\\ \n\\hline\n5 & & 0.358 & \\\\ \n\\hline\n\\end{tabular}\n\\caption{Shock wave angle prediction. Angles are presented in radians}\\label{ShockAngle}\n\\end{table}\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c||c|c|c|}\n\\hline\n Solver & Half Angle = 5 & 10 & 15\\\\\n \\hline\\hline\n $M_\\infty$ = 1.5 & 1.047 & 1.137 & 1.261 \\\\ \n\\hline\n2 & 1.071 & 1.203 & 1.382 \\\\ \n\\hline\n3 & 1.132 & 1.370 & 1.687 \\\\ \n\\hline\n4 & 1.207 & 1.576 & 2.054 \\\\ \n\\hline\n5 & n\/a & 1.805 & n\/a \\\\\n \\hline\n \\hline\\hline\n NASA & Half Angle = 5 & 10 & 15 \\\\\n \\hline\\hline\n$M_\\infty$ = 1.5 & 1.044 & 1.136 & 1.257 \\\\ \n\\hline\n2 & 1.067 & 1.201 & 1.377 \\\\ \n\\hline\n3 & 1.124 & 1.368 & 1.685 \\\\ \n\\hline\n4 & 1.193 & 1.571 & 2.047 \\\\ \n\\hline\n5 & & 1.802 & \\\\\n \\hline\n \\hline\\hline\nAbsolute \\% error & Half Angle = 5 & 10 & 15 \\\\\n\\hline\\hline\n$M_\\infty$ = 1.5 & 0.265 & 0.116 & 0.294 \\\\ \n\\hline\n2 & 0.374 & 0.158 & 0.351 \\\\ \n\\hline\n3 & 0.705 & 0.167 & 0.122 \\\\ \n\\hline\n4 & 1.133 & 0.307 & 0.355 \\\\ \n\\hline\n5 & & 0.156 & \\\\\n\\hline\n\\end{tabular}\n\\caption{Ratio of surface density to free stream density}\\label{DensityRatio}\n\\end{table}\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c||c|c|c|}\n\\hline\n Solver & Half Angle = 5 & 10 & 15\\\\\n \\hline\\hline\n $M_\\infty$ = 1.5 & 1.067 & 1.197 & 1.386 \\\\ \n\\hline\n2 & 1.102 & 1.296 & 1.587 \\\\ \n\\hline\n3 & 1.190 & 1.558 & 2.111 \\\\ \n\\hline\n4 & 1.299 & 1.916 & 2.847 \\\\ \n\\hline\n5 & n\/a & 2.387 & n\/a \\\\\n \\hline\n \\hline\\hline\n NASA & Half Angle = 5 & 10 & 15 \\\\\n \\hline\\hline\n$M_\\infty$ = 1.5 & 1.062 & 1.195 & 1.378 \\\\ \n\\hline\n2 & 1.095 & 1.292 & 1.566 \\\\ \n\\hline\n3 & 1.178 & 1.551 & 2.091 \\\\ \n\\hline\n4 & 1.281 & 1.889 & 2.801 \\\\ \n\\hline\n5 & & 2.309 & \\\\ \n \\hline\n \\hline\\hline\nAbsolute \\% error & Half Angle = 5 & 10 & 15 \\\\\n\\hline\\hline\n$M_\\infty$ = 1.5 & 0.435 & 0.209 & 0.542 \\\\ \n\\hline\n2 & 0.626 & 0.244 & 1.348 \\\\ \n\\hline\n3 & 1.047 & 0.448 & 0.961 \\\\ \n\\hline\n4 & 1.404 & 1.419 & 1.661 \\\\ \n\\hline\n5 & & 3.348 & \\\\\n\\hline\n\\end{tabular}\n\\caption{Ratio of surface pressure to free stream pressure}\\label{PressureRatio}\n\\end{table}\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c||c|c|c|}\n\\hline\n Solver & Half Angle = 5 & 10 & 15\\\\\n \\hline\\hline\n $M_\\infty$ = 1.5 & 1.486 & 1.462 & 1.431 \\\\ \n\\hline\n2 & 1.972 & 1.927 & 1.872 \\\\ \n\\hline\n3 & 2.925 & 2.813 & 2.683 \\\\ \n\\hline\n4 & 3.856 & 3.642 & 3.410 
\\\\ \n\\hline\n5 & n\/a & 4.406 & n\/a \\\\\n \\hline\n \\hline\\hline\n NASA & Half Angle = 5 & 10 & 15 \\\\\n \\hline\\hline\n$M_\\infty$ = 1.5 & 1.458 & 1.375 & 1.271 \\\\ \n\\hline\n2 & 1.942 & 1.834 & 1.707 \\\\ \n\\hline\n3 & 2.891 & 2.710 & 2.507 \\\\ \n\\hline\n4 & 3.816 & 3.531 & 3.217 \\\\ \n\\hline\n5 & & 4.292 & \\\\\n \\hline\n \\hline\\hline\nAbsolute \\% error & Half Angle = 5 & 10 & 15 \\\\\n\\hline\\hline\n$M_\\infty$ = 1.5 & 1.925 & 6.338 & 12.612 \\\\ \n\\hline\n2 & 1.570 & 5.068 & 9.674 \\\\ \n\\hline\n3 & 1.163 & 3.795 & 7.031 \\\\ \n\\hline\n4 & 1.036 & 3.156 & 6.009 \\\\ \n\\hline\n5 & & 2.652 & \\\\ \n\\hline\n\\end{tabular}\n\\caption{Surface Mach number}\\label{SurfaceMach}\n\\end{table}\n\nResults at Mach 5 were not able to be achieved for the 5 and 15 degree cones. As the Newton's method was iterating to a solution, spurious oscillations began to arise which eventually destabilized the solution to the point that it blew up. Attempts were made to suppress these oscillations by increasing $C_{\\text{visc}}$, however when enough viscosity was added to achieve stability, the solutions were overly damped and non-physical. The stable capture of shock waves without sacrificing resolution is a difficult problem in fluid dynamics for which many difficult numerical methods have been devised. Evaluation and implementation of these was however beyond the scope of this project. \n\nThe stability of the solution was also observed to depend to some degree on the quality of the mesh. It is thus possible that if a more sophisticated mesh were designed, either up front or via an adaptive mesh method, that the steeper gradients could be better handled. \n\nWhen solutions were achieved they provided results which matched well with the values from the NASA tables. The surface Mach number consistently had the highest error, with a maximum of about 12\\%. Such consistency demonstrates the validity of the derivation of the sources terms which model the curvature of the discrete manifold.\n\n\\subsection{Additional Validation}\n\nOther cases of conical Euler flow were solved to compare to previous work on the subject not limited to right circular cones. Sritharan \\cite{sriThesis} provided plots of the pressure coefficient from a 10 degree half angle cone at 10 degrees angle of attack and Mach 2. The same case was run in the solver developed here and comparison plots were made. The mesh used was the same in the previous section.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.45]{CircleSurfaceComparison.png}\n \\caption{10 degree half angle cone at 10 degrees angle of attack and Mach 2. Pressure coefficient plotted around the surface of the cone.}\n \\label{fig:CircleSurfaceComparison}\n\\end{figure}\n\n\nFigure \\ref{fig:CircleSurfaceComparison} shows the pressure coefficient around the surface of the cone, and Figure \\ref{fig:CirclePhiComparison} shows the pressure coefficient plotted along 3 different curves in the $\\phi$ direction outward from the surface of the cone. The graphs show very good agreement in value. The shock waves are not as sharply resolved by this method, particularly when the shock is weaker. The position of the shock waves is consistent though, and so is the jump across them.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.45]{CirclePhiComparison.png}\n \\caption{10 degree half angle cone at 10 degrees angle of attack and Mach 2. 
Pressure coefficient plotted outward from the surface of the cone.}\n \\label{fig:CirclePhiComparison}\n\\end{figure}\n\n\nIn \\cite{Siclari}, Siclari provides a plot of the surface pressure coefficient for an elliptic cone with a sweep angle of 71.61 degrees and 6 to 1 aspect ratio. This cone was set at an angle of attack of 10 degrees and Mach 1.97. A comparison plot of the present method is given in Figure \\ref{fig:6to1Comparison}. Results were acquired using a mesh with 320 cells in the $\\xi^1$ direction and 50 in the $\\xi^2$ direction.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.55]{6to1EllipseComparison.png}\n \\caption{6:1 elliptic cone at 10 degrees angle of attack and Mach 2. Pressure coefficient plotted along the surface of the cone. The $x$-axis is scaled by the wingspan.}\n \\label{fig:6to1Comparison}\n\\end{figure}\n\nThe technique Siclari used to compute the solution was a shock fitting method and so had a very sharply resolved body shock. The shock is still captured well by the present method, and the plots show good agreement everywhere else as well.\n\n\n\\subsection{Other Euler results}\n\nWe now present some more examples of solutions produced by the described method. Meshes used in this section had between 30,000 and 40,000 total variables to be solved for, and in every case good convergence was still achieved with the $L_2$ norm of the residual being less than $10^{-9}$.\n\nFirst we consider the velocity field around a circular cone at an angle of attack. Figure \\ref{fig:M1p5_AoA5} shows the flow around a 10 degree cone at 5 degrees angle of attack and Mach 1.5. The mesh used was the same 80 by 100 mesh used for the 10 degree cone validation tests. There is clearly higher pressure on the windward surface of the cone than on the leeward side. In addition, the crossflow stream lines wrap around the body and converge on the top surface of the cone as predicted by \\cite{sriThesis,Siclari,NASA_con}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.3]{10deg_AoA5_M1p5_PressureCrossFlow.png}\n \\caption{10 degree half angle cone at 5 degrees angle of attack and Mach 1.5. Pressure field is shown along with the crossflow velocity.}\n \\label{fig:M1p5_AoA5}\n\\end{figure}\n\nIn Figure \\ref{fig:M2_AoA20}, the angle of attack has been increased to 20 degrees and the free stream Mach number has been increased to 2. In this case, the increase in pressure on the windward side is even greater compared to the free stream pressure, and the convergence point of the crossflow stream lines has been lifted off the surface of the cone. Furthermore, we see in Figure \\ref{fig:M2_AoA20_Mc} that supersonic crossflow bubbles have formed on either side of the surface of the cone. These are related to the change of type of the governing PDE model, and are consistent with the theory of this problem.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.225]{10deg_AoA20_M2_PressureCrossFlow.png}\n \\caption{10 degree half angle cone at 20 degrees angle of attack and Mach 2. Pressure field is shown along with the crossflow velocity.}\n \\label{fig:M2_AoA20}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.3]{10deg_AoA20_M2_Mc.png}\n \\caption{10 degree half angle cone at 20 degrees angle of attack and Mach 2. crossflow Mach number is displayed.}\n \\label{fig:M2_AoA20_Mc}\n\\end{figure}\n\n\nA natural extension is to consider an elliptic cone in place of a circular one. 
These results are shown in Figures \\ref{fig:Ellipse} and \\ref{fig:Ellipse_Mc}. The behavior is qualitatively similar to the case of the cone, but with more accentuated features. \n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.225]{EllipsePressureCrossFlow.png}\n \\caption{Elliptic cone at 20 degrees angle of attack and Mach 2. Pressure field is shown along with the crossflow velocity. Mesh is 160 elements in the $\\xi^1$ direction and 50 elements in the $\\xi^2$ direction.}\n \\label{fig:Ellipse}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.275]{EllipseMc.png}\n \\caption{Elliptic cone at 20 degrees angle of attack and Mach 2. crossflow Mach number is displayed.}\n \\label{fig:Ellipse_Mc}\n\\end{figure}\n\nSo far, all these results are consistent with the expected behavior of this flow problem based on previous work of Ferri, Sritharan, and others. There is naturally motivation to consider more irregular shapes. To this end, we consider the case of Figure \\ref{fig:Fighter} which shows the flow field around a rough outline of the cross section of a fighter jet. This demonstrates the method's ability to handle more complex geometries and flow solutions.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.225]{FighterRoll20_PressureCrossFlow.png}\n \\caption{Rough outline of aircraft at 20 degrees of roll, 10 degrees angle of attack, and Mach 1.5. Pressure field is shown along with the crossflow velocity. Mesh is 120 elements in the $\\xi^1$ direction and 50 elements in the $\\xi^2$ direction.}\n \\label{fig:Fighter}\n\\end{figure}\n\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.225]{FighterRoll20_PressureXYFlow.png}\n \\caption{Rough outline of aircraft at 20 degrees of roll, 10 degrees angle of attack, and Mach 1.5. Pressure field is shown along with the velocity projected onto the XY plane instead of onto the surface of the sphere.}\n \\label{fig:FighterXY}\n\\end{figure}\n\n\n\nAny function of the solution variables can be computed and displayed. This includes different views of the velocity field. It is most natural to view the velocity field projected onto the surface of the sphere, but it may be insightful to view the components from a different perspective. In Figure \\ref{fig:FighterXY}, the velocity field has been projected onto the XY plane and highlights behaviors of the solution which maybe were not apparent in Figure \\ref{fig:Fighter}.\n\n\n\\subsection{MHD results}\n\n\nWe now consider the case of a free stream containing a magnetic field. Errors for the examples in this section were higher than for the Euler case, with the $L_2$ norm of the residual being of order $1$ and $L_\\infty$ norm of the residual being of order $10^{-1}$. These errors are probably too high to give good qualitative results, and a few erroneous artifacts can be seen in the following examples. The increase in error is likely related to the divergenceless constraints applied to the solution which are known to only be truly consistent in the limit of zero mesh spacing. The solutions were however qualitatively consistent with theory and demonstrate true behaviors of the system. Further investigation is required to develop a discrete expression which can be better satisfied.\n\n\n\nWe expect to see some identifiable, qualitative differences in the MHD solutions compared to the non-conducting counterparts. 
The Lorentz force naturally opposes the motion of a conductor across magnetic field lines, and this force is proportional to the velocity of the conductor. As a result, MHD flows tend to have flattened velocity gradients compared to equivalent non-conducting flows. This behavior results in greater shock wave angles and redistribution of pressure and temperature fields \\cite{NumSimMHD,ExpResults}. The effects are also directional since the Lorentz force acts perpendicular to the magnetic field. In the case of ideal magnetohydrodynamic flows, there is the ``frozen-in'' property which states that the fluid cannot cross magnetic field lines, but is free to move along them \\cite{MHDFlowPastBodies}. All of these behaviors can be observed in the following figures.\n\n\n\n\nFigures \\ref{fig:MHD_Ref}, \\ref{fig:MHD_UP}, and \\ref{fig:MHD_SA} demonstrate an increase in shock wave angle with the addition of a magnetic field. Figure \\ref{fig:MHD_Ref} shows the same 10 degree half angle cone at 20 degrees angle of attack and Mach 2 presented above with no magnetic field present to serve as a reference. The mesh used had dimensions 40 cells in the $\\xi^1$ direction and 80 in the $\\xi^2$ direction for a total of 3200 elements and thus 25,600 variables. Two different orientations of magnetic fields were imposed both with magnitudes of $0.4$. In Figure \\ref{fig:MHD_UP} the magnetic field is imposed in the ``upward perpendicular'' direction which means that the magnetic field was perpendicular to the incoming flow stream in the upward direction. The Cartesian components of the magnetic field are given by:\n\n\\begin{equation}\n \\pmb{B}_{*\\infty} = \\left[ \\begin{smallmatrix} 0 \\\\ \\cos AoA \\\\ -\\sin AoA \\end{smallmatrix} \\right]\n\\end{equation}\n\nwhich mostly points in the $\\hat{y}$ direction but is kept perpendicular to the free stream as the angle of attack is increased. In Figure \\ref{fig:MHD_SA}, the magnetic field was stream-aligned which means that it was imposed in the same direction as the free stream velocity.\n\n\nIn both cases involving electromagnetic interaction, the shock wave angle can be seen to increase all around the circumference of the cone. It is clear in Figure \\ref{fig:MHD_UP} that the angle of the shock wave increases more around the top of the cone than around the bottom . This is likely due to the velocity having greater magnitude around the top and sides than near the crossflow stagnation region, and so the effect of the Lorentz force is greater.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.275]{MHD_Ref_10deg_AoA20_M2_Pressure.png}\n \\caption{10 degree half angle cone at 20 degrees angle of attack and Mach 2. No magnetic field is present to provide a reference of the shock wave angle and strength. Results were achieved using the MHD solver with the free stream magnetic field set to zero. The final $L_2$ norm of the residual was less than $10^{-9}$.}\n \\label{fig:MHD_Ref}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.275]{MHD_UpPerp_10deg_AoA20_M2_Pressure.png}\n \\caption{Magnetic field was imposed upward perpendicular to the incoming flow stream with a magnitude of 0.4. 
Unevenness of the pressure field outside the shock wave is likely due to numerical error.}\n \\label{fig:MHD_UP}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.275]{MHD_StrmAlign_10deg_AoA20_M2_Pressure.png}\n \\caption{Magnetic field was aligned with the incoming flow stream and given a magnitude of 0.4.}\n \\label{fig:MHD_SA}\n\\end{figure}\n\n\n\nTo illustrate the ``frozen-in'' property of Ideal MHD flows, we also considered the case of an asymmetric magnetic field. The following examples involve the same 10 degree cone at 20 degrees angle of attack and Mach 2. The magnetic field was imposed at 30 degrees counter-clockwise from the $y$ axis at varying angles from the cone ($z$) axis with a magnitude of $0.1$ as depicted in Figure \\ref{fig:BinfDiagram}. A mesh with 64 elements in each coordinate direction was used. As the angle off the cone axis increased, it can be seen in Figure \\ref{fig:MHD64SurfPress} that the maximum pressure region on the surface of the cone is rotated couner-clockwise.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.35]{BinfDiagram.png}\n \\caption{Orientation of free stream magnetic field.}\n \\label{fig:BinfDiagram}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.35]{MHD64SurfPress30deg.png}\n \\caption{Cone surface pressure coefficient 10 degree cone, 20 degrees angle of attack and Mach 2. Magnetic field is imposed 30 degrees off of the $y$ axis at the angle specified from the cone's axis.}\n \\label{fig:MHD64SurfPress}\n\\end{figure}\n\n\nThe pressure and velocity fields for the 90 degree case is shown in Figure \\ref{fig:90degVcP}. The maximum pressure region has clearly been rotated, as has the convergence point of the crossflow stream lines. This is consistent with the idea that the velocity is allowed to flow along the magnetic field lines, but is resisted in flowing across them. Likewise, we expect to see the magnetic field distorted by the flow of the flow of the fluid which is shown in the next two figures.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.3]{64_roll30_pitch90_P_vc.png}\n \\caption{10 degree cone, 20 degrees angle of attack and Mach 2. Magnetic field is imposed 30 degrees off of the $y$ axis and 90 degrees from the cone's axis. Cross flow velocity and pressure are shown.}\n \\label{fig:90degVcP}\n\\end{figure}\n\n\nFigure \\ref{fig:90degBcP} shows the magnetic field projected onto the surface of the sphere along with the pressure field for the case of the magnetic field being 30 degrees off the $y$ axis and 90 degrees off the cone's axis, and Figure \\ref{fig:ConeAlignBcP} shows the the same for the case of the magnetic field being aligned with the cone's axis (0 degrees off of its axis). It is particularly visible in Figure \\ref{fig:ConeAlignBcP}, the case of cone-aligned magnetic field, that the magnetic field is constricted when the gas is compressed. The cross flow magnetic field near the cone's surface points from low density regions to higher density regions.\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.3]{64_roll30_pitch90_P_bc.png}\n \\caption{10 degree cone, 20 degrees angle of attack and Mach 2. Magnetic field is imposed 30 degrees off of the $y$ axis and 90 degrees from the cone's axis. 
Cross flow magnetic field and pressure are shown.}\n \\label{fig:90degBcP}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.3]{64_ConAlign_P_bc.png}\n \\caption{10 degree cone, 20 degrees angle of attack and Mach 2. Magnetic field is imposed along the cone's axis. Cross flow magnetic field and pressure are shown.}\n \\label{fig:ConeAlignBcP}\n\\end{figure}\n\n\n\nThough the behaviors demonstrated so far are consistent with past work on the subject, there are some artifacts which likely do not belong. Clearly visible in Figure \\ref{fig:MHD_UP} is some unevenness of the pressure field outside the bow shock. This is visible in Figures \\ref{fig:90degVcP} and \\ref{fig:90degBcP} as well though not as prevalent. This is believed to be an artifact of the inconsistency of the divergenceless constraint preventing desirable convergence from being achieved. This unevenness tended to occur the more perpendicular the magnetic field was to the free stream velocity, which is when the Lorentz force effects would be stronger. Despite this, the solutions produced did still exhibit behaviors consistent with MHD theory which demonstrates the validity of the overall method, that is the discrete covariant derivatives and the solution algorithm.\n\n \n\\section{Conclusion}\n\nA numerical scheme has been developed which solves the conical Euler and MHD equations. This method relies heavily on the discrete Christoffel symbols which account for the curvature of the manifold in differential expressions. The discretization of the conical equations transforms in the appropriate tensorial manner and is exactly satisfied by a broad class of known solutions. A standard Newton's method was used to solve the system of nonlinear equations and produced results that were consistent with theory and prior numerical and experimental work. Better convergence is desired for the MHD case, but will likely require a more involved discretization in order to be achieved.\n\n\n\n\n\n\\section*{Acknowledgement}\n\nThis research was supported in part by an appointment to the Student Research Participation Program at the U.S. Air Force Institute of Technology administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and USAFIT.\n\n\n\n\\section*{References}\n\n\\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{ Introduction } \n\nIn their seminal paper \\cite{HL} Harvey and Lawson defined the notion of \ncalibrations. Let $M$ be a Riemannian manifold and $\\varphi$ be a closed\nk-form. We say that $\\varphi$ is a calibration if for any k-dimensional\nplane $\\kappa$ in the tangent bundle of $M$, we have \n\\( \\varphi|_{\\kappa} \\leq {vol({\\kappa})} \\).\nWe call a k-dimensional submanifold $L$ a calibrated submanifold if \n\\( \\varphi|_{L} = vol(L) \\). \nIt is easy to see that calibrated submanifolds minimize volume in their\nhomology class and thus provide examples of minimal varieties. \nFor a thorough discussion and numerous examples we refer the reader to\n\\cite{Hv},\\cite{HL},\\cite{Mc}.\n\nIn this paper we study the geometry of Calibrated submanifolds and investigate\nrelations between their moduli-space and geometry of the ambient manifold. 
The\npaper is organized as follows:\n\nIn section 2 we prove several comparison theorems for the volume of small balls in a Calibrated submanifold of a Riemannian manifold $M$, whose\nsectional curvature is bounded from above by some $K$. Let $L$ be a minimal submanifold in $M$, $p \\in L$ a point and $B(p,r)$ is a ball of radius $r$ around $p$\nin $M$. Then there are a number of results on comparison between the volume of\n$ L \\bigcap B(p,r)$ and a volume of a ball of radius $r$ in a space form of \nconstant curvature $K$ (see \\cite{And}, \\cite{Kim}, \\cite{MY}).\nOur main result in section 2 is Theorem\n2.0.2, which states that if $L$ is calibrated then the volume of a ball of \nradius $r$ in the induced metric on $L$ (which is smaller than $L \\bigcap \nB(p,r)$)\nis greater than the volume of a\nball of the same radius in a space form of constant curvature $K$ for $r \\leq r_0$ with $r_0$ depending only on the ambient manifold $M$.\nAs a corollary we deduce that there is an upper bound on a \ndiameter of a calibrated submanifold in a given homology class.\n\nIn section 3 we investigate Special Lagrangian Geometry on a Calabi-Yau\nmanifold. In section 3.1 we define Special Lagrangian submanifolds for any\nchoice of Kahler metric on a Calabi-Yau manifold\nand give basic facts pertaining to Special Lagrangian (SLag) Geometry. We will also prove a result about finite group actions on Calabi-Yau manifolds\nand construct several new examples of SLag submanifolds.\n\nIn section 3.2 we study connections between moduli-space of Special Lagrangian submanifolds and global geometry of the ambient Calabi-Yau manifold. \nWe will be \ninterested in submanifolds, which satisfy condition $\\star$ on their cohomology ring (defined in section 3.2). In particular tori satisfy this condition.\nWe state 2\nconditions on an ambient manifold for each of those the moduli-space is not\ncompact. \nThese conditions hold in many examples, and so we got a non-compactness theorem for the moduli-space.\n\nIn section 3.3 we use results of 2 previous sections to investigate\na Borcea-Voisin threefold in detail. We find a \nKahler metric on it for which we can completely characterize singular \nSpecial Lagrangian submanifolds \n(they will be a product of a circle with a cusp curve).\nMoreover SLag submanifolds don't intersect and the compactified moduli-space \nfills the whole Calabi-Yau manifold, i.e. Borcea-Voisin 3-fold fibers with \ngeneric fiber being a Special Lagrangian torus. We also construct a mirror\nto this fibration. \nThus the SYZ conjecture (see \\cite{SYZ}) holds in this example. \n\nIn section 3.4 we will examine holomorphic functions on a Calabi-Yau\nmanifold in a neighbourhood of a Special Lagrangian submanifold. An \nimmediate consequence of the fact that SLag submanifolds \nare 'Special'\nis Theorem 3.4.1, which states that the integral of a\nholomorphic function over SLag submanifolds is a constant\nfunction on their moduli-space. This will give a restriction on how a\nfamily of SLag submanifolds might approach a singularity (Corollary\n3.4.1) and also will give a restriction on SLag submanifolds asymptotic\nto a cone in $\\mathbb{C}^n$ (Theorem 3.4.2).\n\nIn section 4 we study coassociative submanifolds on $G_2$ manifolds. We extend\na coassociative condition for any choice of a closed (but not necessarily\nco-closed) $G_2$ form. Deformation of coassociative submanifolds will still be\nunobstructed and the moduli-space is smooth of dimension $b_2^+(L)$, there $L$\nis a coassociative submanifold. 
We will show that one example of a $G_2$\nmanifold constructed by Joyce in \\cite{Joy} is a fibration with generic fiber\nbeing a coassociative 4-torus. We also construct a mirror to this fibration.\n\nThere are a number of natural questions that arise from this paper. One is to\ngive a systematic way to construct fibrations on resolutions of torus quotients\n(both for SLag and coassociative geometry). Another point is that we produced\nthose fibrations for certain special choices of structures on the ambient manifold (a certain choice\nof Kahler metric or a certain closed $G_2$ form). We would like to get\nfibrations for any other isotopic structure. If we have a 1-parameter family of\nstructures then we obtain a 1-parameter family of moduli-spaces $\\Phi_t$. \nSuppose that $\\Phi_0$ compactifies to a fibration of the ambient manifold. We\nconjecture that so does each $\\Phi_t$ (both for SLag and coassociative\ngeometries). This in particular would imply the existence of SLag fibration for\nthe Calabi-Yau metric on a Borcea-Voisin threefold and coassociative fibration for a parallel $G_2$ structure.\nWe hope to address those issues in a future paper.\n\n{\\bf Acknowledgments} : This paper is written towards author's Ph.D. at\nthe\nMassachusetts Institute of Technology. The author wants to thank his\nadvisor, Tom Mrowka, for initiating him into the subject and for\nconstant encouragement. He is also grateful to Gang Tian for a\nnumber of useful conversations. Special thanks go to Grisha Mikhalkin for\nexplaining the Viro construction in Real Algebraic Geometry, which was\nused to construct examples of real quintics.\n\n\\section {Volume Comparison for Calibrated Submanifolds } \n If a Riemannian manifold $M$ has an upper bound $K$ on it's sectional\ncurvature then the volume of a sufficiently small ball in $M$ is greater\nthen the volume of a ball of the same radius in a space form of curvature\n$K$. It turns out that this holds more generally for calibrated\nsubmanifolds of $M$. Namely we have a following theorem :\n\\begin{thm}\n: Let $\\varphi$ be a calibrating k-form on an ambient manifold $M$ and $L$ be\na \ncalibrated submanifold. Let the sectional curvature of $M$ be bounded\nfrom above by $K$ . Let \n\\(r \\leq { min(injrad(M),\\frac{\\pi}{\\sqrt{K}}) } \\). \nLet $p \\in L$ and $B(p,r)$ be a ball of radius r around $p$ in $M$ and \\(B^K(r)\\) be a\nball of radius $r$ in a k-dimensional space of constant sectional \ncurvature $K$. Then \n\\[ vol(L\\bigcap{B(p,r)} \\geq vol(B^K(r)) \\]\n\\end{thm}\nRemark : If $\\varphi$ is a volume form on $M$, then this is Gunther's\nvolume comparison theorem.\\\\\n{\\bf Proof of Theorem 2.0.1}: The proof is based on a\nfollowing Lemma, which is\na counterpart to Rauch comparison theorem :\n\\begin{lem}\n: Let $M$ be a (complete) Riemannian manifold whose sectional\ncurvature is bounded from above by $K$ and $\\gamma$ : \n\\( [0,t] \\mapsto M \\) be a unit speed geodesic. Let $Y$ be a Jacobi\nfield along \\( \\gamma \\) which vanishes at 0, orthogonal to \n\\( \\gamma ' \\) and \\( t \\leq \\frac {\\pi}{\\sqrt{K}} \\). \nThen it's length \\( |Y(\\theta)| \\) satisfies the following differential \ninequality\n\\( {|Y|''} + K|Y| \\geq 0\\). 
\\\\\nMoreover if a function \\( \\Psi \\) is a solution to \n\\( \\Psi '' + K \\cdot \\Psi = 0 \\) , \\( \\Psi(0) = 0 \\)\nand \\( \\Psi(t) = |Y(t)| \\) then \\[ \\Psi (\\theta) \\geq |Y(\\theta )| \\]\nfor \\( 0 \\leq \\theta \\leq t \\) \n\\end{lem}\n{\\bf Proof }:\\\\\nFirst a condition on $t$ means that $Y$ doesn't vanish on \\( (0,t] \\) by Rauch\nComparison theorem.\nWe have \n\\(|Y| = \\sqrt{\\langle {Y,Y} \\rangle } \\), \\(|Y|' = \\frac{\\langle\n\\nabla_{t} Y,Y \\rangle }{|Y|} \\), \n\\[|Y|''= \\frac{| \\nabla_{t} Y|^{2} - \\langle Y,R(\\gamma ' ,Y) \\gamma '\n\\rangle } {|Y|} - \\frac{\\langle \\nabla_{t} Y, Y \\rangle ^{2} }{|Y|^3} \n\\geq \\frac{ | \\nabla_{t} Y|^{2} |Y|^{2} - \\langle \\nabla_{t} Y ,Y\n\\rangle ^{2} }{|Y|^{3} } - K|Y| \n\\geq - K|Y|\\]\nby Cauchy-Schwartz inequality. Here $R$ is a curvature operator, \n\\( \\gamma ' \\) is a (unit length) tangent field to geodesic \\( \\gamma \\). \nSince $Y$ is orthogonal to \\( \\gamma ' \\) then \n\\( \\frac {\\langle R( \\gamma ' , Y) \\gamma ', Y \\rangle} {|Y|^{2} } \\) is \nthe sectional curvature of a plane through $Y$ and\n\\( \\gamma ' \\), which is less then $K$.\\\\\nFor the second claim consider \\( F = \\frac{|Y|}{\\Psi} \\). \n$\\Psi$ is\npositive on the interval \\( (o,t] \\) and hence $F$ is well defined on that\ninterval.\nAlso \\( F ' = \\frac{|Y| ' \\Psi - \\Psi ' |Y| }{\\Psi ^{2} } \\).\\\\\nConsider \\( G= |Y| ' \\Psi - \\Psi ' |Y| \\). \\( G(0) = 0 \\) , \n\\( G ' = |Y| '' \\Psi - \\Psi '' |Y| \\geq 0 \\).\\\\ \nSo \\( G \\geq 0 \\) , i.e. \\( F ' \\geq 0 \\) . Now \\( F( t) = 1 \\) , so \n\\( F \\leq 1\\) i.e. \\( |Y| \\leq F \\) Q.E.D.\\\\\n\n\\noindent\nNow we can prove {\\bf Theorem 2.0.1}:\\\\\nLet $d_p$ be a distance function to $p$ on $M$. Then for an open dense\nset of full measure of values $t$, $t$ is a regular value of $d_p$\nrestricted to $L$. Let now\n\\[ f(t)= vol(L \\bigcap B(p,t) ) \\hbox{ and } g(t) = \\int_{L \\bigcap B(p,t)}\n|\\nabla_{L} d_{p}| \\] \nWe also consider an analogous situation on \\( \\overline{L} \\) - a space\nform of constant curvature $K$. Then \\( \\overline{f} = \\overline{g} \\)\nbecause \\( |\\nabla \\overline{d_p} | = 1 \\) on \\( \\overline{L} \\).\\\\\nFor $t$ a regular value as above we have by the co-area formula :\n\\begin{equation} \n f ' (t) \\geq g ' (t) ~,~\n g ' (t) = vol(S_{t})\n\\end{equation}\nand \n\\begin{equation} \n\\overline{f} ' = vol( \\overline{S_t}),\n\\end{equation}\nhere \\( S_{t} = d_{p}^{-1}(t) \\bigcap L \\).\\\\\nConsider now a map $\\xi$ : \\( S_{t} \\times [0,t] \\mapsto M \\),\n \\( \\xi ( a, \\theta ) = exp( \\frac{\\theta}{t} exp^{-1}(a) ) \\), here\n\\( a \\in S_{t} \\) , \\(\\theta \\in [0,t] \\).\nThen \\( vol(\\xi(S_{t} \\times [0,t] )) \\geq f(t) \\).\\\\ \nIndeed let $\\rho$ be a $k-1$ form on \\( B(p,r) \\) s.t. \\( d \\rho = \\varphi\n\\) (such $\\rho$ exists by Poincare Lemma).\nThen by the calibrating condition \\[ vol(\\xi(S_{t} \\times [0,t] )) \\geq \\int_{S_{t} \\times [0,t]} \n\\xi^{\\ast} \\varphi = \\int_{S_{t}} \\rho = \\int_{B(p,t) \\bigcap L} \\varphi = \nf(t) \\]\nAlso on \\( \\overline{L} \\) we have \\( \\overline{f(t)} =\nvol(\\xi(\\overline{S_{t}} \\times [0,t] )) \\).\\\\\nWe need to estimate \\(h(t)= vol(\\xi(S_{t} \\times [0,t] ) \\). Let $g '$ be\nthe product metric on \\( S_{t} \\times [0,t] \\). Then \\( h(t) = \\int_{S_{t} \n\\times [0,t]} Jac(d \\xi) dg ' \\).\\\\\nTo estimate \\( Jac(d \\xi) \\) at point \\( (a , \\theta ) \\) we take an o.n.\nbasis \\( v_{1} \\ldots v_{k-1} \\) to $S_t$ at $a$. 
Then \\( d \\xi (v_{i}) \\)\nis a value of a Jacobi field along a unit speed geodesic \\( (exp(s \\cdot\n\\frac{exp^{-1}(a)}{t}) | s \\in [0,\\infty)) \\) at \\( s= \\theta \\) which is\northogonal to this geodesic, vanishes at $0$ and those length is $1$ at\n\\( s=t \\).\\\\\nLet $F_{t}(\\theta)$ solve \\( F_{t} '' + K \\cdot F_{t} = 0 \\), \\(\nF_{t}(0)=0 \\), \\(F_{t}(t)=1\\).\\\\\nBy Lemma 2.0.1 we have \\(|d \\xi(v_{i})| \\leq F_{t}(\\theta) \\), so \n\\begin{equation}\nJac(d \\xi) \\leq (F_{t}(\\theta))^{k-1} \n\\end{equation}\nWe can consider an analogous situation on \\( \\overline{L} \\) and in that\ncase we have an equality \\( Jac(d \\overline{\\xi}) = (F_{t}(\\theta))^{k-1}\\).\nSo \n\\[\\overline{f(t)} = \\int_{\\overline{S_t} \\times [0,t]}\n(F_{t}(\\theta))^{k-1} = vol(\\overline{S_t}) \\cdot \\int_{[0,t]}\n(F_{t}(\\theta))^{k-1} d \\theta = \n(by (2)) = \\overline{f} '\n(t) \\cdot \\alpha (t) \\]\n(here \\( \\alpha (t)= \\int_{[0,t]} (F_{t}(\\theta))^{k-1} d \\theta \\)).\\\\\nReturning now to our calibrated submanifold we deduce from (3)\nand (1) that\n\\(f(t) \\leq f ' (t) \\cdot \\alpha (t) \\).\\\\\nSo \\( \\frac{f ' (t)}{f(t)} \\geq \\frac{\\overline{f} ' (t)}{\\overline{f} \n(t)} \\) , i.e. \\( ln(f) ' \\geq (ln( \\overline{f}) - \\epsilon) ' \\) for\nany \n\\( \\epsilon > 0 \\). Having $\\epsilon$ fixed we can choose $t_0$ small\nenough s.t. \\( lnf(t_{0}) \\geq ln \\overline{f} (t_{0}) - \\epsilon \\).\\\\\nNow \\(lnf(\\theta ) \\) is defined for a.e. $\\theta$ and is an increasing \nfunction on $\\theta$, so\n\\[lnf(t) \\geq lnf(t_{0}) + \\int_{[t_{0},t]} lnf ' \\geq ln \\overline{f}\n(t_{0})- \\epsilon + \\int_{[t_{0},t]} (ln \\overline{f}) ' = ln\n\\overline{f} (t) - \\epsilon \\]\nNow $\\epsilon$ was arbitrary, hence \\( lnf(t) \\geq ln \\overline{f} (t) \\)\ni.e. \\( f(t) \\geq \\overline{f} (t) \\) Q.E.D .\\\\\n\n\\noindent \nWe wish to discuss the compactification of some moduli-space of\ncalibrated submanifolds in a given homology class.\nIf we have a moduli-space $\\Phi$ we can look on it as a subspace in\nthe space of rectifiable currents. k-dimensional currents have a mass\nnorm ${\\bf M}$ and and a flat norm ${\\cal F}$ (see \\cite{Mor}, p.42) \n\\[{\\bf M}(L)= sup(\\int_{L}\\eta | \\eta \\hbox{ a k-form}, \n\\forall p \\in M :|\\eta(p)| \\leq 1 ) \\]\n\\[{\\cal F}(A) = inf({\\bf M}(A) + {\\bf M}(B) | L=A+ \\partial B) \\] \nSince all the\nsubmanifolds in $\\Phi$ are closed and have the same volume, then by the\nFundamental compactness theorem (theorem 5.5 in \\cite{Mor}) we have that the\nclosure \\( \\overline {\\Phi} \\) of $\\Phi$ in the flat topology is compact.\\\\\nAlso for compact subsets of $M$ there is a Gromov-Hausdorff distance\nfunction\n\\(d^{GH}\\), there \\[ d^{GH}(K,N)= sup_{p \\in K } inf_{q \\in N } d(p,q)\n\\]\nUsing Theorem 2.0.1 we get \n\\begin{cor}\n: There is a constant \\(C=C(M,{\\varphi}) \\) s.t. for \n\\( K,N \\in {\\Phi} \\) \nwe have \\(d^{GH}(K,N) \\leq { C \\cdot ({\\cal F}(K-N))^{\\frac{1}{k+1} } } \\).\n\\end{cor}\n{\\bf Proof } : Suppose \\( d^{GH}(K,N) = r \\). Then we have a \npoint \\(p \\in K \\) s.t. \\(d(p,N) =r \\).\\\\\nIt is easy to construct a nonnegative function $f$ supported in a ball \n$B(p,r)$ which is equal to $1$ on a ball $B(p,r\/2)$ and s.t. \\( |\\nabla (f)| \n\\leq \\frac{const}{r} \\).\\\\\nSuppose \\( K-N = A+ \\partial B \\). 
Obviously \\(K-N(f \\varphi) \\geq \nvol(B(p,r\/2) \\bigcap K) \\)\\\\\n\\(\\geq const \\cdot r^{k} \\) by theorem 2.0.1\\\\\nAlso \\(K-N(f \\varphi) = A(f \\varphi) + B(df \\wedge \\varphi) \\leq {\\bf M}(A)+\n\\frac{const \\cdot {\\bf M}(B)}{r} \\leq \\frac{const \\cdot ({\\bf M}(A)+{\\bf M}\n(B))}{r}\\).\\\\\nSo taking the infimum we get \\[ const \\cdot r^{k} \\leq \\frac{{\\cal F}(K-N)}\n{r} \\]\nwhich is the statement of the Corollary. Q.E.D. \n\\\\\n\\noindent\nFrom that we get an immediate corollary \n\\begin{cor}\n: If a sequence of submanifolds $L_i$ in $\\Phi$ converges to\na current $L$, then it converges to the support of $L$ in Gromov-Hausdorff\ntopology .\n\\end{cor}\nWe now come to the main result of this section.\nWe wish to strengthen Theorem 2.0.1 by an analogous result for volume of\nballs of radius $r$ in the induced metric on calibrated submanifolds \n(which are smaller then the balls we considered before). We have the\nfollowing\n\\begin{thm}\n: Let $M$,$\\varphi$,$p$,$L$,$K$ and \\(B^{K}(r)\\) as\nin Theorem 2.0.1\nand let \\(d_L\\) be \na distance function to $p$ on $L$ in the induced metric on $L$.\nThen for \\( r \\leq min(injrad(M),R(K)) \\) we have :\n\\[vol ( x \\in{L}|d_{L}(x) \\leq{r}) \\geq vol(B_{K}(r)) \\] and \\(R(K)=\\pi\/ \n\\sqrt{K} \\) for $K$ positive.\n\\end{thm} \n\\begin{cor}:\nLet $M, \\varphi$ as before. Then there is an a priori bound on a diameter of calibrated submanifolds in a given homology class $\\eta$.\n\\end{cor}\n{\\bf Proof of the Corollary}: Choose some $r$ satisfying conditions of theorem\n2.0.2. Let $L$ be some\ncalibrated submanifold\nin a homology class $\\eta$. Let $\\Gamma$ be a maximal covering of $L$ by\ndisjoint balls of radius $r$. Since by theorem 2.0.2 each such ball has a\na volume at least $\\epsilon$ and the volume of $L$ is \\( v= [ \\varphi ] \n(\\eta) \\), then such covering exists and the number of elements in\n$\\Gamma$ is at most \\( N = \\frac{v}{\\epsilon} \\). Now every point in $L$\nis contained in one of the balls of radius $2r$ with the same centers as \nballs in $\\Gamma$.\\\\\nSo it is easy to deduce that the diameter of $L$ is at most \\( 4rN \\).\n Q.E.D. \\\\\n\n\\noindent\n{\\bf Proof of Theorem 2.0.2}: We wish to use the same argument as in \nthe proof of\nTheorem 2.0.1 for the distance function $d_L$. The problem is that $d_L$ is\nnot a smooth function in the $r$-neighbourhood of $p$. But we can\nstill smoothen it using the following technical Lemma:\n\\begin{lem}:\nLet $L$ be a submanifold, \\( p \\in L \\) and $d_L$\nas before. \nWe can pick \\( \\rho > 0 \\) and a \\( (C^{\\infty}) \\) function\n\\( 0 \\leq \\nu \\leq 1 \\) on \\( [0, \\infty ) \\) which is $0$ on \\( (0, \\rho]\n\\), $1$ on \\( [2 \\rho, \\infty ) \\) and nondecreasing s.t.\nfor any positive $\\epsilon$ there is a function \\( \\lambda _{\\epsilon}\n\\)on $L$ which satisfies :\n\n1) \\( \\lambda_{\\epsilon} \\) is \\( C^{\\infty} \\) outside of p \n\n2) \\( d_{L} \\leq \\lambda_{\\epsilon} \\leq d_{L}(1+ \\epsilon ) \\)\n\n3) \\( | \\nabla \\lambda_{\\epsilon}| \\leq 1+ \\nu(d_{L}) \\epsilon \\) \n\\end{lem}\n{ \\bf Proof}: Pick a positive \\( \\rho << injrad(L) \\). Choose a function\n$\\kappa$ on $M$ s.t. \\( \\kappa=1\\) on \\( B(p, 2 \\rho ) \\) and \\( \\kappa =\n0 \\) outside of \\(B(p,3 \\rho) \\).\nChoose a nonnegative radially symmetric function $\\sigma$ on $\\mathbb{R}^k$ \nwith\nsupport in the unit ball which integrates to 1 and let \\( \\sigma_{n}(x)=\nn^{k} \\cdot \\sigma(nx) \\). 
Then \\( \\sigma_{n} \\) also integrates to 1.\\\\\nChoose a nonnegative function \\( \\eta \\leq 1 \\) , \\( \\eta = 0 \\) on \\(B(p,\n\\frac{5 \\rho}{4} ) \\) and \\( \\eta = 1 \\) outside of \\( B(p, \\frac{3 \\rho}\n{2} ) \\).\n\nDefine now \\( \\mu^{n} : L \\mapsto R \\) , \\[ \\mu^{n}(q)= \\int_{T_{q}L}\nd_{L}(exp(\\theta))\\sigma^{n}(\\theta) d \\theta \\]\nHere \\(T_{q}L \\) is the tangent bundle to $q$ at $L$. Since \\( \\sigma^n\n\\) was radially symmetric function and \\( T_{q}L\\) has a metric, the\nexpression \\( \\sigma^{n}(\\theta) \\) is well defined and also integration\ntakes part only on a ball of radius \\( \\frac{1}{n} \\subset T_{q}L \\).\\\\ \nAlso it is clear that \\[\\mu^{n}= d_{L} + o(\\frac{1}{n}) \\] \nThe point is that for large $n$, \\(\\mu^n\\) is a smooth function on $L$.\nIndeed let us denote by \\( J(a,b) \\) the Jacobian of exponential map from\n$a$ that hits $b$ for $a,b$ points in $L$ that are close enough. Then\n\\(J(a,b) \\) is a smooth function on \\( (a,b) \\) and we can rewrite\n\\[ \\mu^{n}(q) = \\int_{L} J(q,b)^{-1} \\cdot d_{L}(b) \\cdot \\sigma\n(exp_{b}^{-1}(q)) db \\] \nand it is clear from this definition that\n\\( \\mu^n \\) is a smooth function of $q$ for $n$ large enough.\\\\\nAlso one can easily prove that \\( |\\mu^{n}(q_{1}) - \\mu^{n}(q_{2}) | \\leq \nd(q_{1},q_{2}) \\cdot (1 + o(\\frac{1}{n})) \\), hence \\[|\\nabla \\mu^{n} |\n\\leq 1 + o(\\frac{1}{n}) \\]\nNow pick \\( \\epsilon > 0 \\). Define \n\\( \\lambda_{\\epsilon}^{n} = (1 + \\eta \\epsilon)(\\kappa \\cdot d_{L} + (1-\n\\kappa) \\cdot \\mu^{n}) \\).\\\\\nThen \\( \\lambda_{\\epsilon}^{n} = d_L\\) on \\(B(p,\\frac{3 \\rho}{2}) \\) and\nit is smooth outside of $p$. \\\\\nOne can also directly verify that we can choose a constant C s.t. for\nsufficiently large $n$, the function \\( \\lambda_{\\epsilon} =\n\\lambda_{\\frac{\\epsilon}{C}}^{n} \\)\nsatisfies properties 2) and 3) as desired. Q.E.D.\\\\\n\n\\noindent \nNow we can prove {\\bf Theorem 2.0.2}: We will use the fact that the function \\(\\alpha(t) \\), defined in the proof of Theorem 2.0.1, is an increasing\nfunction of t for \\( 0 \\leq t \\leq \\frac{\\pi}{\\sqrt{K}} \\) for $K$ positive and\nfor \\( 0 \\leq t \\leq R(K) \\) for $K$ negative.\n\nPick $\\rho$ as in Lemma 2.0.2. \nLet \\( \\epsilon >0 \\). We will follow the lines of proof of Theorem 2.0.1\nfor the function \\( \\lambda_{\\epsilon} \\) instead of the distance\nfunction.\nWe denote by \\[ f(t)= vol(\\lambda_{\\epsilon}^{-1}([0,t])~,~\n S_{t} = \\lambda_{\\epsilon}^{-1}(t) \\] \nThen conditions on \\( \\lambda_{\\epsilon} \\) and the co-area formula imply\nthat for a regular value $t$ we have \\( f'(t) \\geq \\frac{vol(S_{t})}\n{1+ {\\epsilon} {\\nu(t)}} \\).\\\\\nAlso we can consider \\(A_{t} = ((a, \\theta)| a \\in S_{t} , 0 \\leq \\theta\n\\leq d_{p}(a) ) \\) (here $d_p$ is the distance to $p$ in the ambient\nmanifold).\nWe have \\( \\xi : A_{t} \\mapsto M \\), \\(\\xi(a, \\theta) =\nexp_{M}(\\frac{\\theta \\cdot exp^{-1}(a) }{d_{p}(a)}) \\).\\\\\nAs before we will have \\( f(t) \\leq vol(\\xi(A_{t})) \\) and \n\\( Jac(d \\xi) \\leq (F_{d_{p}(a)}(\\theta))^{k-1} \\) (see (3), we have the same\nnotations as in Theorem 2.0.1). The estimate for Jacobian is true for the following reason: Let $v_1, \\ldots, v_{k-1}$ be an o.n. basis to $S_t$ at $a$. Then only the normal component of $d\\xi(v_i)$ to the geodesic contributes to \n$Jac(d\\xi)$. 
The normal component can be estimated by Lemma 2.0.1.\n \nSo we will have \\( vol(\\xi(A_{t})) \\leq \\int_{S_t} \\alpha(d_{p}(a)) da\n\\leq vol(S_{t}) \\cdot \\alpha(t) \\) (here we used\nthe fact that $\\alpha$ is an increasing function and \\( d_{p}(a) \\leq\nd_{L}(a) \\leq \\lambda_{\\epsilon}(a) = t \\)).\\\\ \nCombining all this we get \\[(lnf)'(t) \\geq \\frac{(ln \\overline{f})'(t)}{1+\n\\epsilon \\nu(t) } = [(\\frac{ln \\overline{f}}{1+ \\epsilon \\nu })' +\n\\epsilon \\nu '\/(1+\\epsilon \\nu)^{2} \\cdot ln(\\overline{f})](t)\\]\nNow \\( \\nu(t) = 0 \\) for \\(t \\leq \\rho \\) and \\( \\nu'(t) = 0 \\) for \\( t\n\\geq 2 \\rho \\) and \\( ln(\\overline{f}) \\geq -C \\) for \\( 2\\rho \\geq t \\geq \\rho\n \\).\nSo \\[ (lnf)' \\geq (\\frac{ln \\overline{f}}{1+ \\epsilon \\nu})' - \\epsilon C' \\]\ni.e. \\( (lnf+ \\epsilon C' t)' \\geq (\\frac{ln \\overline{f}}{1+ \\epsilon\n\\nu})' \\) \\\\\nand for $\\theta$ small we have \n \\( ln(f(\\theta) + \\epsilon C' \\theta) \\geq ln(\\overline{f}(\\theta)) =\n\\frac{ln \\overline{f}(\\theta)}{1+ \\epsilon \\nu(\\theta)}) \\).\\\\\nSo \\(ln(f + \\epsilon C't) \\geq ln \\overline{f}\/(1+ \\epsilon \\nu) \\).\\\\\nHere $\\epsilon$ was arbitrary and we are done. Q.E.D. \n\n\\section{Special Lagrangian geometry on a Calabi-Yau manifold}\n\\subsection{ Basic properties and examples}\nLet $M^{2n}$ be a Calabi-Yau manifold, $\\varphi$ a holomorphic volume\nform and $\\omega$ a Kahler 2-form. \nIf $\\omega$ is a Calabi-Yau form then \\(Re({\\varphi}) \\) \nis a calibration (see \\cite {Mc})\nand calibrated submanifold $L$ can be characterized by\nalternative conditions : \\( \\omega |_{L} = 0 \\) and \\( Im({\\varphi})|_{L}\n= 0 \\).\nFor arbitrary Kahler form $\\omega$ we can define special Lagrangian (SLag) submanifolds by\nthose 2 conditions. The form $\\varphi$ has length $f$ with respect to the metric $\\omega$ (here $f$ is a positive function on $M$).\nWe can conformally change the metric so that the form\n\\( \\varphi \\) will have length $\\sqrt{2}^n$ with respect to the new metric\n$g'$. \nThen SLag submanifolds will be Calibrated by \\( Re \\varphi \\) with respect to \n$g'$. \n\\begin{lem}\nLet $L^n$ be a compact connected $n$-dimensional manifold. Then the moduli-space of SLag embeddings of $L$ into $M$ is a smooth manifold of dimension $b_1(L)$. \n\\end{lem}\n{\\bf Proof:} The proof is a slight modification of McLean's proof for a Calabi-Yau metric (see \\cite{Mc}).\n\nLet $i:L \\mapsto M$ be a (smooth) SLag embedding of $L$ into $M$. Locally the moduli-space $\\Gamma$ of $C^{2,\\alpha}$-embeddings of $L$ into $M$ (modulo the diffeomorphisms of $L$) can be identified with the $C^{2,\\alpha}$ sections of the normal bundle of $i(L)$ to $M$ via the exponential map. Also the normal bundle is naturally isomorphic to the cotangent bundle of $L$ via the map $v \\mapsto i_v \\omega$. Hence the tangent bundle to $\\Gamma$ can be identified with $C^{2,\\alpha}$ 1-forms on $L$. Let $V_k$ be the vector space of exact $C^{1,\\alpha}$ $k$-forms on $L$ and let $V=V_2 \\oplus V_n$. There is locally a map $\\sigma: \\Gamma \\mapsto V$, given at an embedding $j(L) \\in \\Phi$ by $(j^{\\ast}(\\omega),j^{\\ast}(Im\\varphi))$. The moduli-space $\\Phi$ of SLag embeddings is just the zero set of $\\sigma$. Now the differential of $\\sigma$ at $i(L)$ in the direction of $\\alpha$ (there $\\alpha$ is a $C^{2,\\alpha}$ 1-form on L as above) is\n\\[ (d\\alpha,d (f \\ast \\alpha)) \\]\nthere $f$ is the length of $\\varphi$ in the metric defined by $\\omega$. 
For\n$\\omega$ a Calabi-Yau metric $f$ is constant. We claim that the differential is surjective and the tangent space to $\\Phi$ is naturally isomorphic to the\nfirst cohomology $H^1(L,\\mathbb{R})$. To prove this consider first an operator $P$ from the space of $C^{3,\\alpha}$ functions on $L$ to $C^{1,\\alpha}$ $n$-forms on $L$, $P(h)=d (f\\ast dh)$. We claim that $P$ is surjective onto the space of exact $n$-forms and the kernel of $P$ is the space of constant functions on $L$. Since $f$ is non-vanishing $P$ is elliptic. So to prove the surjectivity it is enough to show that the co-kernel of $P$ consists of constant multiples of the volume form on $L$. Let $\\mu$ be in the co-kernel of $P$. Let $h=\\ast \\mu$. One easily computes that \\[\\int_{L}Ph \\cdot \\mu= \\pm \\int_{L} f|d^{\\ast}(\\mu)|^2 \\]\nSo $d^{\\ast}(\\mu)=0$, hence $\\mu$ is a constant multiple of the volume form on $M$. Let now $h$ be in the kernel of $P$. Then arguing as before we get that $\\mu=\\ast h$ is a constant multiple of the volume form on $M$, i.e. $h$ is a constant. \n\nNow we can prove the lemma . First we prove that $d\\sigma$ is surjective. Let $\\alpha$ be an exact 2-form on $L$, and $\\beta$ be an exact $n$-form on $L$. We need to find a 1-form $\\gamma$ on $L$ s.t. \\[d\\gamma=\\alpha ~ , ~ d(f \\ast \\gamma)=\\beta \\]\nSince $\\alpha$ is exact there is a 1-form $\\gamma'$ s.t. $d\\gamma'=\\alpha$. We are looking for $\\gamma$ of the form $\\gamma=\\gamma'+dh$ for a function $h$. Since the operator $P$ was surjective onto the space of exact $n$-forms on $L$, we get that such $h$ exists, so $d\\sigma$ is surjective, hence $\\Phi$ is smooth. Next we prove that $dim(\\Phi)=b_1(L)$. Let $W=ker(d\\sigma)$. $W$ is the tangent space to $\\Phi$ at $i(L)$. Since $W$ is represented by closed 1-forms on $L$, there is a natural map $\\xi:W \\mapsto H^1(L,\\mathbb{R})$. We claim that this map is an isomorphism. Indeed let $a \\in H^1(L,\\mathbb{R})$ and let $\\gamma'$ be a closed 1-form on $L$ representing the class $a$. From the properties of operator $P$ it is clear that there is a unique exact 1-form $\\gamma''=dh$ s.t. $\\gamma=\\gamma'+\\gamma''$ is in the kernel of $\\sigma$. Hence $\\xi$ is an isomorphism Q.E.D.\n\nRemark: A more general setup of deformations of SLag submanifolds in a symplectic manifold was considered by S. Salur in \\cite{Sem}.\n\nIn all subsequent discussions\nthe moduli-space will be connected (i.e. we take a connected component of the moduli-space of SLag embeddings of a given manifold $L$).\n\nWe can also define\n$\\Phi'$ - a moduli-space as special Lagrangian\nembeddings of a given manifold $L$ into $M$ over Diff' \n(diffeomorphisms of $L$ which induce identity map on the homology of $L$).\nThen \\( \\Phi' \\) is a covering space of $\\Phi$.\nNow any element $\\alpha$ in the first homology of $L$ induces a 1-form\n\\( h^{\\alpha} \\)on \\(\\Phi' \\) in the following way : Let \\(\\xi \\in \\Phi '\n\\) and let \\( L_{\\xi} \\) be it's support in $M$. If $v$ is a tangent\nvector to \\( \\Phi ' \\) at \\( \\xi \\) then we can view $v$ as a closed\n1-form on \\( L_{\\xi} \\). From definition of \\( \\Phi ' \\) it is clear\nthat the element \\( \\alpha \\) induces a well defined element in \\(\nH_{1}(L_{\\xi}) \\), which we will also call \\( \\alpha \\). So we\ndefine \\( h^{\\alpha}(v) = [v](\\alpha) \\). Hitchin\neffectively proved in \\cite{Hit} that \\( h^{\\alpha} \\) is a closed form on \\(\n\\Phi ' \\) (his notations are somewhat different from ours). 
Thus if we\npick \\( \\alpha_{1} \\ldots \\alpha_{k} \\) a basis for the first homology of $L$ then\nwe have a frame of closed forms \\(h^{1} \\ldots h^{k} \\) and\ncorrespondingly a dual frame of commuting vector fields \\( h_{1} \\ldots\nh_{k} \\) on \\( \\Phi ' \\). Hence any compact connected component $\\Gamma$ of $\\Phi$ must be a torus. Indeed the flow by commuting vector fields $h_i$ induces a\ntransitive $\\mathbb{R}^k$ action on $\\Gamma$ with stabilizer being a discrete\nsubgroup $A$, hence $\\Gamma$ is diffeomorphic to $\\mathbb{R}^k\/A$ - a k-torus. \n\nNext we investigate finite group actions on Calabi-Yau manifolds. Suppose\nthat a group $G$ acts by structure preserving diffeomorphisms on $M$. We have the following\n\\begin{lem} \n: Suppose a SLag submanifold $L$ is invariant under the $G$ action and $G$ acts\ntrivially on the first cohomology of $L$. Then $G$ leaves invariant every\nelement in the moduli-space $\\Phi$ through $L$.\\\\\nMoreover, suppose that $g \\in G$ and $x \\in M-L$ in an isolated fixed point of\n$g$. Then $x$ cannot be contained in any element of $\\Phi$\n\\end{lem}\n{\\bf Proof} : Since $G$ is structure preserving, it sends SLag submanifolds\nto SLag submanifolds. Since it leaves $L$ invariant, it preserves $\\Phi$\n(which is a connected component of $L$ in the moduli-space of SLag submanifolds).\nFrom the identification of the tangent space of $\\Phi$ at $L$\nwith $H^1(L,\\mathbb{R})$ and the fact that $G$ acts trivially on\n$H^1(L,\\mathbb{R})$ we deduce that $G$ acts trivially on the tangent space\nto $\\Phi$ at $L$. Hence $G$ acts trivially on $\\Phi$, i.e. it leaves each\nelement of $\\Phi$ invariant.\n\nTo prove the second statement, consider a set $S$ of those elements in $\\Phi$\nwhich contain $x$. Obviously $S$ is closed and doesn't contain $L$.\nWe prove that $S$ is open and then it will be empty.\n\nLet $L' \\in S$. Any element $L''$ close to $L'$ can be viewed uniquely\nas an image $exp(v)$, there $v$ is a normal vector field to $L'$. Suppose\n$v(x) \\neq 0$. Since $L''$ is $g$-invariant then $exp(g_{\\ast}v(x))$ is\nalso in $L''$, there $g_{\\ast}$ is a differential of $g$ at $x$. Since $L'$\nis $g$-invariant then $g_{\\ast}$ preserves the tangent space to $L'$ at\n$x$, hence it preserves the normal space. Also since $x$ is isolated then\n$g_{\\ast}$ has no nonzero invariant vectors. Hence $v(x) \\neq g_{\\ast}v(x)$ in the normal bundle. \n\nSince exponential map is a diffeomorphism from a small neighbourhood of the \nnormal bundle of $L'$ to $M$ we see that $exp(g_{\\ast}v(x))$ is not in $L''$-\na contradiction. So $v(x)=0$ i.e. $L'' \\in S$ Q.E.D.\n\nAs for examples of special Lagrangian submanifolds, many come\nfrom the following setup : Let $M$ be a Calabi-Yau manifold and $\\sigma$\nan antiholomorphic involution. Suppose $\\sigma$ reverses $\\omega$.\nThen the fixed-point set of $\\sigma$ is a special Lagrangian submanifold.\nFor a Calabi-Yau metric $\\omega$ the condition $\\sigma$ reverses $\\omega$ is\nequivalent to $\\sigma$ reversing the cohomology class $[\\omega]$, which often \ncan be easily verified.\nIndeed suppose $\\sigma$ reverses $[\\omega]$. Then \\( -\\sigma^{\\ast}(\\omega)\\)\nis easily seen to define a Kahler form, which lies in the same cohomology class\nas $\\omega$ and the metric it induces is obviously equal to \\(\\sigma^{\\ast}(g)\n\\), i.e. it is Ricci-flat. 
Hence by Yau's fundamental result (see \\cite{Yau}) \nwe have \\(-\\sigma^{\\ast}(\\omega)= \\omega \\).\n\nWe wish to discuss 2 collections of such examples. In both cases $M$ is a\nprojective manifold defined as a zero\nset of a collection of real polynomials.\nThen the conjugation of the projective space induces an anti-holomorphic\ninvolution which reverses the Fubini-Study Kahler form, hence it also reverses the Calabi-Yau form in the same cohomology class.\nThe fixed point set is a submanifold of a real projective space .\\\\\n\n\\noindent\n Our first example with be a complete intersection of hypersurfaces of\ndegree 4 and 2 in \\( \\mathbb {C}P^5 \\).\\\\\nFirst we note that a 2-torus can be represented as surface of degree 4 in\n$\\mathbb{R}^3$. Indeed a torus can be viewed as a circle bundle over a circle\n\\( ((x,y,0)|x^{2} + y^{2} = 1 ) \\) in $\\mathbb{R}^3$, there a fiber over a \npoint\n\\( a=(x,y,0) \\) is a circle of radius \\( \\frac{1}{2} \\) centered at $a$\nand passing in a plane through \\( a , (0,0,1) \\) and the origin . \nIf \\( (x,y,z) \\) is a point on our torus then it's distance to a point\n\\( (\\frac{x}{\\sqrt{x^{2} + y^{2}}}, \\frac{y}{\\sqrt{x^{2}+y^{2}}},0) \\) is\n\\( \\frac{1}{2} \\).\\\\\nIf we compute we get \\( 1+x^{2} + y^{2}+z^{2} -2 \\sqrt{x^{2}+y^{2}} =\n\\frac{1}{4} \\) , i.e. \n\\( p(x,y,z) = ( \\frac{3}{4} + x^{2} + y^{2}+z^{2})^{2} - 4(x^{2}+y^{2}) =\n0 \\).\n\nSo in inhomogeneous coordinates \\( x_{1} \\ldots x_{5} \\) on \\(\\mathbb{R}P^5\\) \nthe zero locus of 2 polynomials \\(p(x_{1},x_{2},x_{3}) \\) and\n\\(q(x_{4},x_{5}) \\) (there \\( q(x,y)=x^{2}+y^{2}-1 \\) ) is a 3-torus.\\\\\nIf we consider the corresponding homogeneous polynomials on \\(\\mathbb{R}P^5\\),\nthen it is easy to see that there is no solution for \\(x_{6}=0 \\).\nSo the zero locus of those polynomials in \\(\\mathbb{R}P^5\\) is a 3-torus.\nIf we perturb them slightly so that the corresponding complex 3-fold in\n\\(\\mathbb{C}P^5 \\) will be smooth then we obtain the desired example.\\\\\n\n\\noindent\n Other examples are quintics with real coefficients in \\(\\mathbb{C}P^4\\). In \nthat case real quintics would be special Lagrangian submanifolds. R.Bryant\nconstructed in \\cite{Br} a real quintic, which is a 3-torus .\\\\\nWe will construct, using Viro`s technique in real algebraic geometry \n(see \\cite{Vir}), real\nquintics $L_k$ which are diffeomorphic to projective space\n\\(\\mathbb{R}P^3\\) with k 1-handles attached for \\( k = 0, 1 , 2 , 3\\). If\n$k=3$ then \\(b_{1}(L_{3})=3 \\) and the cup product in the first cohomology\nof $L_3$ is $0$.\n\nThe construction goes as follows: First in inhomogeneous coordinates\n\\( x_{1} , \\ldots ,x_{4} \\) on \\(\\mathbb{R}P^4\\) we consider a polynomial \n\\(p \\cdot\nq \\), there \\( p(x_{1}, \\ldots , x_{4} ) = x_{1}^{2} + \\ldots + x_{4}^{2}\n- 1 \\) and \\( q = x_{4} \\). In \\(\\mathbb{R}P^4\\) the zero locus of the \npolynomial will be\n\\( \\mathbb{R}P^3 \\bigcup S^3 \\), there $\\mathbb{R}P^3$ is a zero locus of $q$,\n$S^3$ is a zero locus of $p$ \nand $\\mathbb{R}P^3$ intersects $S^3$ along \na 2-sphere \\( S^{2} \\subset \\mathbb{R}^3 = (x_4=0)\\).\n \nNow we consider \\( f = pq -\\epsilon h \\), there $h$ is some polynomial\nof degree up to 5 and $\\epsilon>0$ is small enough.\nThe Hessian of $pq$ on $S^2$\nis nondegenerate along the normal bundle to $S^2$ and vanishes along the axes \nof the normal bundle (the axes are a normal bundle of $S^3$ to $S^2$ and of\n$\\mathbb{R}P^3$ to $S^2$). 
If we look on those axes locally as coordinate\naxes then $pq= xy$ in those coordinates.\n\nSuppose first $h$ is non-zero along $S^2$. We can assume that $h>0$ on $S^2$.\nThe zero locus of $f=pq - \\epsilon h$ will live in 2 quadrants in which\n$xy>0$. Thus in zero locus of $f$, \na part $A_1$ of $\\mathbb{R}P^3$ outside $S^2$ will \"connect\" with one \nhemisphere $S'$ of\n$S^3$, and the part $A_2$ inside $S^2$ will connect with\nthe other hemisphere $S''$ and the zero locus of $f$ is a disjoint union of \n$\\mathbb{R}P^3$ and $S^3$.\\\\\nSuppose the zero set of $h$ intersects our $S^2$ transversally along k \ncircles such that no circle will lie in the interior of the other. We can \nassume w.l.o.g. that on the exterior $V$ of those circles $h$ is positive.\nThen along $V$, $A_1$ connects to $S'$ and $A_2$ connects to $S''$ as before.\nAlong the interior of every circle, $A_1$ connects to $S''$ and $A_2$ to\n$S'$. So near interior of these circles we get 1-handles connecting \n$\\mathbb{R}P^3$ with $S^3$.\nSo the zero locus of $f$ will be $\\mathbb{R}P^3$ \nand $S^3$\nconnected by k 1-handles, i.e. it will be an $\\mathbb{R}P^3$ with $k-1$ \n1-handles attached.\\\\\nIt is not hard to find examples of such $h$ for small values of k.\nFor instance for $k=4$ (i.e. for $L_3$) we can take \\[h= ((x_1-1\/3)^2 +\n(x_2 -1\/3)^2 - 1\/16)((x_1+1\/3)^2 + (x_2 +1\/3)^2 - 1\/16) \\] and the zero\nlocus of $h$ intersects $S^2 \\subset \\mathbb{R}^3$ in 4 circles.\n\n\\subsection{ Non-compactness of the moduli-space}\nIn this section we will consider connections between the moduli-space of\nSLag submanifolds and global geometry of the ambient Calabi-Yau\nmanifold $M$.\\\\\nLet $\\Phi$ be the moduli-space of SLag submanifolds. We have a fiber\nbundle $F$ over $\\Phi$, \\( F \\subset M \\times \\Phi \\),\n\\( F =((a, L)|a\n\\in M , L \\in \\Phi \\) s.t. $a \\in L$) .\\\\\nWe have a natural projection map \\( pr : F \\mapsto \\Phi \\),\nwhose fiber is the support of the element in $\\Phi$ and the evaluation map\n\\( ev : F \\mapsto M \\), \\( ev(a, L)=a \\).\\\\\nAlso the tangent space to a point \\( (a , L) \\in F \\) naturally splits\nas\n\\( T_{a}L \\oplus T' \\), there \\(T_{a}L\\) is\nthe tangent space to $L$ at $a$ (a tangent space to the fiber) and \n\\(T'= ((v(a),v)|v \\) is a variation v. field to the moduli-space\nand \\(a(v) \\) is the value of $v$ at $a$).\n\nLet $L$ be a compact k-dimensional oriented manifold with \\(b_{1}(L)=\nk\\). We say that $L$ satisfies condition $\\star$ if for $\\alpha_1$ \\ldots\n$\\alpha_k$ a basis for \\( H^{1}(L) \\) we have \n\\( \\alpha_{1} \\cup \\) \\ldots \\( \\cup \\alpha_k \\neq 0 \\) . \nThis holds e.g. if $L$ is a torus. On the other hand the real quintic\nwith \\( b_{1} =3 \\) that we constructed in section 3.1 doesn't satisfy\ncondition $\\star$. \n\\begin{thm}\n: Let $L$ be a special Lagrangian submanifold with \\(b_{1}(L)\n= k\\).\\\\\nSuppose $L$ satisfies $\\star$ .\nSuppose some connected component of \\( \\Phi ' \\) is compact. Then the \nBetti numbers of $M$ satisfy : \\(b_{i}(M) \\leq b_{i}(L \\times T^{k}) \\)\n(here $T^k$ is a $k$-torus).\\\\\nSuppose we have $G,g,x$ satisfying conditions of Lemma 3.1.1. Then $\\Phi$ \nitself is not compact.\n\\end{thm}\n{\\bf Proof of Theorem 3.2.1}: Suppose $L$ satisfies $\\star$. First prove that \n$\\Phi$ is orientable (in fact it has a natural volume element $\\sigma$). \nLet \\( L' \\in \\Phi \\) and \\( v_{1} \\ldots v_{k} \\) be \nelements of the tangent space to $\\Phi$ at $L'$. 
So \\( v_{1} \\ldots\nv_{k} \\) are closed 1-forms on $L'$ and we define:\n\\[\\sigma( v_{1} \\ldots v_{k} ) = [v_{1}] \\cup \\ldots \\cup [v_{k}]\n(L')\\] \nSuppose that $\\Phi$ is compact, or a connected component $\\Gamma$ of $\\Phi'$\nis compact. In each case we have an evaluation map as before. We will prove that in both cases it has a positive degree. We will give a proof for $\\Phi$, the\nproof for $\\Gamma$ is analogous.\n\nFirst $\\Phi$ has a natural volume element $\\sigma$ described above.\nSo the $2k$-form \\( \\alpha = pr^{\\ast}(\\sigma) \\wedge ev^{\\ast}(Re\n\\varphi) \\) is the volume form on $F$.\\\\ \nLet \\( L_{\\phi} \\in \\Phi \\) and \\( \\alpha_1\n\\ldots \\alpha_k \\) be a basis for \\( H^{1}(L_{\\phi}) \\) s.t. \\(\n\\alpha_1 \\cup \\ldots \\cup \\alpha_k [L_{\\phi}] = 1 \\). Then we can consider\ncorresponding\nvector fields \\(v_{1} \\ldots v_{k} \\) along $L_{\\phi}$, which\nform a frame for the bundle $T'$ (described in the beginning of this\nsection) restricted to \\( L_{\\phi} \\). So $[i_{v_j}\\omega]= \\alpha_j$ and\n\\(pr^{\\ast}(\\sigma)(v_{1}, \\ldots , v_{k} ) =1 \\).\n\nLet now $\\eta$ be a Riemannian volume form on $M$. Then we have \\[deg(ev) = \n\\int_{F}ev^{\\ast}(\\eta) \/ vol(M) \\] Since $F$ is a fiber bundle we\ncan use integration over the fiber formula to compute:\n\\[ \\int_{F}ev^{\\ast}(\\eta) = \\int_{\\Phi}(\\int_{L_{\\phi}} i_{v_1} \\ldots i_{v_k}\nev^{\\ast}(\\eta) )d \\phi \\] ( of course we choose \\( \\alpha_1 \\ldots \\alpha_k\n\\) for each \\( L_{\\phi} \\) ). \\\\\nAlso \\( i_{v_1} \\ldots i_{v_k}\nev^{\\ast}(\\eta) \\) is easily seen to be equal to \\(i_{v_1} \\omega\n\\wedge \\ldots \\wedge i_{v_k} \\omega \\) (all restricted to the fiber\n\\(L_{\\xi} \\) ).\\\\\nSo \\( \\int_{L_{\\xi}} i_{v_1} \\ldots i_{v_k} ev^{\\ast}(\\eta) = \\alpha_1\n\\cup \\ldots \\cup \\alpha_k (L_{\\phi}) = 1 \\).\\\\\nSo \\( deg(ev) = \\int_{\\Phi} 1 \/vol(M) = vol(\\Phi)\/vol(M) > 0 \\).\n\nNow let $\\Gamma$ be compact. Let $F'$ be a corresponding fiber bundle over $\\Gamma$.\nFirst we claim that \\(b_{i}(F') \\geq\nb_{i}(M) \\).\nSuppose this is not true for some $0 0$ then $\\int_{\\alpha}Re \\eta \\geq 1\/2$.\n\nLet now $\\Sigma$ be as before. Then $\\Sigma$ represents a homology class $h$\nand $\\int_{h}Re(\\eta)=1$. Also the integral of $Re(\\eta)$ on every component \nof $\\Sigma$ is at least $1\/2$, so $\\Sigma$ has at most 2 components.\nLet $\\Sigma '$ be another cusp curve on the boundary of the moduli-space.\nSuppose $\\Sigma '$ intersects $\\Sigma$. Since $h \\cdot h = 0$ we see that \n$\\Sigma$ and $\\Sigma '$ must have a common component. Suppose $\\Sigma$ has a \ncomponent $P$ which is not in $\\Sigma'$. Then $0= [P] \\cdot h = [P] \\cdot\n[ \\Sigma '] > 0 $- a contradiction. So $\\Sigma$ and $\\Sigma '$ have same\ncomponents, and since their total number (counted with multiplicity) is at \nmost 2, then $\\Sigma = \\Sigma '$.\n\nFinally we prove that the number of exceptional spheres is finite.\nAs we have seen, there are 2 types of exceptional curves:\n\n1) A curve with 2 components $A_i$ and $B_i$. Then $0= [A_i] \\cdot h=\n[A_i] \\cdot ([A_i] + [B_i])$. Now $[A_i] \\cdot [B_i] > 0$, so $[A_i] \\cdot\n[A_i] < 0 $.\n\nIf $A_j,B_j$ is another curve like that, then we have seen that $A_i$\ndoesn't intersect it, so in particular $[A_i] \\cdot [A_j] = 0$.\nSo one easily sees that the numbers of such curves is at most\n$5= b_2(\\overline{X})$. \n\n2) A curve with 1 component (possibly with multiplicities). 
Let this\ncurve be $k \\cdot P_i$, there $P_i$ is a primitive rational curve and\n$k \\cdot [P_i]= h $. To \nstudy those $P_i$ we make following observations: There is a $\\mathbb{Z}_2\n\\oplus \\mathbb{Z}_2$ action on $T^4$ with generators\n\n$\\gamma_1: (z_1,z_2) \\mapsto (z_1 + i\/2, z_2)$\n\n$\\gamma_2: (z_1,z_2) \\mapsto (z_1, z_2 + i\/2)$ \n\nThis action commutes with $\\alpha'$ action, and hence induces an action on\n$K^3$. It also preserves $\\overline{X}$.\n\nNext we find elements in $K^3$, which do not have a full orbit under the\naction. A point $(z_1,z_2)$ doesn't have a full orbit if it is preserved\nunder one of $\\gamma_1, \\gamma_2, \\gamma_1 \\circ \\gamma_2$.\nNow the fixed points are:\n\n$Fix(\\gamma_1)= ((z_1,z_2): (z_1 +i\/2, z_2)= (-z_1 +1\/2, -z_2+1\/2))$. \nThese are 2 points, disjoint from exceptional spheres. A similar\nanalysis for $\\gamma_2$ and $\\gamma_1 \\circ \\gamma_2$ produces 2 points\nfor each.\n\nNow the actions of $\\gamma_i$ are structure preserving on $\\overline{X}$,\nso they send SLag tori to Slag tori. Moreover they preserve an open set\nof tori $Re(z_i)=const$ in our moduli-space. So by Lemma 3.1.1 they leave elements\nof the moduli-space invariant. Hence they preserve the\nlimiting curve $P_i$ (because the convergence is in particular a\nGromov-Hausdorff convergence).\n\nFor a limiting curve $P_i$,\nconsider $\\chi(P_i) = [P_i] \\cdot [P_i] - c_1(K^3)([P_i]) + 2 = 2$.\nThen by theorem 7.3 of \\cite{MW} we can count $\\chi(P_i)$ by\nadding contributions of singular points (which are double points or branch \npoints), and each singular point gives a positive contribution. So $P_i$ has\nsingular points and there are at most 2 of those.\n\nLet $x$ is a singular point.\nThen it's orbit under $\\mathbb{Z}_2 \\oplus \\mathbb{Z}_2$ action consists of \nsingular points. So it cannot have length 4, so $x$ is one of 6 points $D$ \nwith orbit of length 2. So $P_i$ contains at least 2 points of the set $D$.\n\nIf $P_j$ is another curve of type 2, then $[P_i] \\cdot [P_j] = 0$, so they\ndon't intersect. Also $P_j$ contains at least 2 points from the set $D$.\nSo it is clear that the number of $P_i$ is at most 3.\n \nFrom all that we deduce that our moduli-space can be compactified to a pseudo-cycle. Also any point $x$ outside of bad neighbourhoods has a unique preimage\nin the smooth part of the moduli-space, so we deduce that the degree of the\nevaluation map is 1, so the compactified moduli-space fills the whole manifold\n$M$. Also elements of the compactified moduli-space don't intersect, so\n$M$ fibers with generic fiber being a SLag torus. Also the fibration is \nsmooth over the smooth part of the moduli-space. To prove that we need to prove that the differential of the evaluation map is an isomorphism. This is clearly\ntrue outside our `bad' neighbourhoods. Inside a bad neighbourhood, it is\nenough to prove that variational vector fields to our pseudoholomorphic tori\ndo not vanish. But this follows from a standard argument that each zero of such\na vector field gives a positive contribution to the first Chern class of the \nnormal bundle, which is trivial.\n\nWe want to point out that this example, then we can use\nHyperKahler trick and local product structure to study limiting SLag\nsubmanifolds, is quite ad hoc and some new ideas are needed to study\nsingular SLag submanifolds in general. \n\n\nNext we wish to construct a mirror, i.e. 
to compactify the dual fibration.\nLet $ \\stackrel{M}{\\stackrel{\\downarrow }{\\overline{\\Phi}}}$ be a fibration\nover the compactified moduli-space and let $ \\stackrel{M_0}{\\stackrel{\\downarrow }{\\Phi}}$ be a restriction of this fibration over the (smooth)\nmoduli-space, \nthere $M_0 \\subset M$ is an open subset. Let $a \\in \\Phi$ and $L_a$ be a fiber.\nWe have a vector space $V_a=H_1(L_a,\\mathbb{R})$ and a lattice $\\Lambda_a=\nH_1(L_a, \\mathbb{Z})$ in it, and so we get a torus bundle $V_a\/\\Lambda_a$ \nover $\\Phi$. By dualizing each $V_a$ we get a dual bundle $V_a^{\\ast}\/\n\\Lambda_a^{\\ast}$. We will adopt the following definition of a topological mirror from M. Gross's paper (see \\cite{MG1})\n\n\\begin{dfn}\n:Let $\\stackrel{M'}{\\stackrel{\\downarrow }{\\overline{\\Phi}}}$ be another fibration with $M'$ smooth and a corresponding fibration\n$\\stackrel{M_0'}{\\stackrel{\\downarrow }{\\Phi}}$ is a smooth torus \nfibration. Let $V_a'\/\\Lambda_a'$ as before.\nWe say that $M'$ is a topological mirror to $M$ if there is a fiberwise linear \nisomorphism $\\rho : V_a^{\\ast}\/ \\Lambda_a^{\\ast} \\mapsto V_a'\/ \\Lambda_a'$\nover $\\Phi$.\n\\end{dfn}\nSuppose now $M$ is a symplectic manifold and $\\stackrel{M_0}{\\stackrel{\\downarrow}{\\Phi}}$ is a Lagrangian fibration. Then Duistermaat's theory of action-angle\ncoordinates (see \\cite{MG2}) implies that there is an action of the cotangent bundle $T^{\\ast}\\Phi$ on the fibers with a stabilizer lattice $\\Lambda_b$. This of course induces a natural isomorphism $\\xi: V_a\/\\Lambda_a \\mapsto T^{\\ast}\\Phi \/ \\Lambda_b$. There is also a dual isomorphism $\\xi^{\\ast}: T \\Phi \\mapsto V_a^{\\ast}$ given explicitly by $\\xi(v) = [i_v \\omega]$, here $\\omega$ is a symplectic structure and $v \\in T\\Phi$ is viewed as a normal vector field to an element of $\\Phi$. Also the natural symplectic structure on $T^{\\ast}\\Phi$ projects to a symplectic structure on $T^{\\ast}\\Phi \/ \\Lambda_b$ and hence on $V_a\/\\Lambda_a$.\n\nIf our fibration is a Special Lagrangian fibration then one can get a symplectic structure on the dual bundle $V_a^{\\ast}\/\\Lambda_a^{\\ast}$ as follows (this construction was done by Hitchin in \\cite{Hit} and in coordinate-free way by Gross in \\cite{MG2}):\n\nWe have a map $\\alpha: V_a^{\\ast} \\mapsto T^{\\ast}\\Phi$ defined by periods of the closed form $Im \\varphi$. Explicitly, let $u \\in V_a^{\\ast}$. We can view $u \\in H^1(L,\\mathbb{R})$. For $v \\in T\\Phi$ we define \n\\begin{equation}\\label{PD}\n\\alpha(u)(v) = [i_v Im\\varphi] \\cup u ([L])= [i_v Im\\varphi](PD(u)) \n\\end{equation}\nHere $v$ is viewed as a normal vector field to $L$ and $PD(u)$ is a Poincare dual to $u$. One shows that for $u$ a section of $\\Lambda_a^{\\ast}$ (the integral cohomology lattice), $\\alpha(u)$ is a closed 1-form on $\\Phi$ and thus $\\alpha$ induces a symplectic structure on $V_a^{\\ast}\/\\Lambda_a^{\\ast}$. This motivates the following definition \n\\begin{dfn}\nLet $\\stackrel{M}{\\stackrel{\\downarrow }{\\overline{\\Phi}}}$ and \n$\\stackrel{M'}{\\stackrel{\\downarrow }{\\overline{\\Phi}}}$ be 2 Special Lagrangian fibrations. Then $M'$ is a symplectic mirror to $M$ if the corresponding isomorphism $\\rho: V_a^{\\ast}\/\\Lambda_a^{\\ast} \\mapsto V_a'\/\\Lambda_a'$ over\n$\\Phi$ is a symplectomorphism.\n\\end{dfn}\n\nTo construct a topological mirror we make following observations: Let $F= \n\\stackrel{W_a\/ \\Lambda_a}{\\stackrel{\\downarrow}{U}}$ be some torus fibration.\nLet $a \\in U$. 
Then we have a monodromy representation $\\nu: \\pi_1(U,a) \n\\mapsto SL(W_a,\\Lambda_a)$ (see \\cite{MG1}). Moreover if $F' =\n\\stackrel{W_a' \/ \\Lambda_a '}{\\stackrel{\\downarrow}{U}}$ is another fibration\nand $K: W_a \\mapsto W_a' $ is an intertwining isomorphism for monodromy \nrepresentations, then $K$ induces\na natural fiberwise isomorphism between $F$ and $F'$. \n\nSo we can try to compactify a dual fibration by trying to find local \nisomorphisms between it and the original fibration. Let $U$ be a neighbourhood\nin $\\Phi$ and $a \\in U$.\nLet $e_1, \\ldots, e_n$ be\na basis for the lattice $\\Lambda_a$ and $e^1, \\ldots, e^n$ be a dual basis for the lattice $\\Lambda_a^{\\ast}$. Let $K: W_a\/ \\Lambda_a \\mapsto W_a^{\\ast}\/ \n\\Lambda_a^{\\ast}$ be some \nlinear map, which is given in terms of our bases by a matrix $K$. Let\n$\\alpha \\in \\pi_1(U,a)$ and $\\nu(\\alpha)$ be a monodromy map, which is given\nin terms of a basis $(e_i)$ by a matrix $A$. Then a dual representation\n$\\xi^{\\ast}(\\alpha)$ on $W_a^{\\ast}$ is given by a matrix $(A^T)^{-1}$ in a\nbasis $(e^i)$ (see \\cite{MG1}).\n\nSo we need $K \\cdot A = (A^T)^{-1} \\cdot K$ i.e. $K= A^T \\cdot K \\cdot A$.\n\nNow if $n=2$ there is a solution $K = \\left( \\begin{array}{clcr}\n0 & 1 \\\\\n-1 & 0 \\\\\n\\end{array} \\right) $.\n\nWe return now to $M$. On a 6-torus $T^6$ we have a natural \nisomorphism between integral homology and cohomology of SLag tori \n$T_{a,b,c}$. This isomorphism is invariant under $\\mathbb{Z}_2\n\\oplus \\mathbb{Z}_2$ action, hence it induces an isomorphism $\\rho$ between\n$\\Lambda_a$ and $\\Lambda_a^{\\ast}$ outside of bad neighbourhoods.\nTake a point $a$ on a boundary of a bad \nneighbourhood $Y$ in $\\Phi$. Then because of the product structure of our \nfibration over $Y$ we see that the monodromy matrices of $V\/ \\Lambda$ in $Y$ \nlook like $ \\left( \\begin{array}{clcr}\n\\ast & \\ast & 0 \\\\\n\\ast & \\ast & 0 \\\\\n0 & 0 & 1 \n\\end{array} \\right) $\n\nIt is clear from the above that the monodromy representation in Y is isomorphic to a dual representation.\n\nSo we can construct a topological mirror $M'$ as follows: Let $\\overline{W}$ be some\n'bad' neighbourhood in $M$ as before. Let $z_j=x_j+i \\cdot y_j$ be\ncoordinates on a 6-torus. Then a map $\\mu:T^6 \\mapsto T^6$, $\\mu(x_1,y_1,\nx_2, y_2 , x_3,y_3)= (x_1,-y_2,x_2,y_1,x_3,y_3)$ commutes with $\\mathbb{Z}_2^2$ action. Also $\\mu$ maps the boundary $\\partial \n\\overline{W}$ to\nitself. We take $\\overline{W}$ and glue it to $M - \\overline{W}$ by\n$\\mu$ and doing so for each 'bad' neighbourhood we obtain $M'$.\n\nNow $\\mu$ preserves SLag tori on $\\partial \\overline{W}$, and thus $M'$ naturally\nacquires a structure of a fiber bundle over $\\overline{\\Phi}$ with generic fiber being a torus. We claim that $M'$ is a topological mirror of $M$.\n\nIndeed we noted that outside of 'bad' neighbourhoods\nthere is a natural isomorphism $\\rho$ between bundles $V_a$ and $V_a^{\\ast}$\nas before, and of course the bundle $V_a'$ of $M'$ is isomorphic to the \nbundle $V_a$ outside of 'bad' neighbourhoods, so $\\rho$ can be viewed as an isomorphism between $\\Lambda_a^{\\ast}$ and $\\Lambda_a'$. We want to extend $\\rho$\ninside of $\\overline{W}$. First we need to check which isomorphism $\\rho$ induces on $\\partial \\overline{W}$ via the gluing map $\\mu$.\n\nLet $L= T_{a,b,c}$ be a SLag torus contained in $\\partial \\overline{W}$. Let \n$z_j=x_j + i \\cdot y_j$ be coordinates on $T^6$. 
Then $dy_1, \\ldots, dy_3$ is a\nbasis for $H^1(L,\\mathbb{Z})=\\Lambda_a^{\\ast}$ and $\\partial_{y_1},\\ldots, \\partial_{y_3}$ is a dual basis for $H_1(L,\\mathbb{Z})=\\Lambda_a'$. Then \n\\[\\rho: dy_1 \\mapsto -\\partial_{y_2} ~ , ~ dy_2 \\mapsto \\partial_{y_1} ~ , ~\ndy_3 \\mapsto \\partial_{y_3} \\] \nAs we saw, $\\rho$ is an intertwining operator between the\nmonodromy representations on $V_a^{\\ast}$ and $V_a'$. Hence $\\rho$ extends to an isomorphism inside $\\overline{W}$, and hence $M'$ is a topological mirror of $M$.\n\nSo far we viewed $M'$ just as a differential manifold (and one can easily show that $M'$ is diffeomorphic to $M$). We will see that\nis has additional interesting structures.\n\nLet $\\omega'$,$\\omega''$ and $\\eta = Re(\\eta)+ i \\cdot Im(\\eta)$ as before\n(see p.14). Then we easily see that near $\\partial \\overline{W}$ the gluing map $\\mu$ is an isometry. Also\n$\\mu^{\\ast}(\\omega')= Im \\eta, \\mu^{\\ast}(Im \\eta)= -\\omega',\\mu^{\\ast}(Re \\eta)\n= Re \\eta $.\nNow\n$Im(\\eta) + (dx_3 \\wedge dy_3)$ is a symplectic form on $\\overline{W}$. So we\nsee that we can glue it to our symplectic form $ \\omega' + (dx_3 \\wedge \ndy_3)$ outside of $\\overline{W}$ to get a symplectic form $\\omega^{\\ast}$ on $M'$.\nMoreover near $\\partial \\overline{W}$, $\\mu$ intertwines between $I$ and $K$ - almost \ncomplex structures defined by $\\omega'$ and $Im \\eta$. Thus we can glue\n$K$ inside $\\overline{W}$ to $I$ outside of $\\overline{W}$ to get an almost complex structure\n$I'$ on $M'$ compatible with $\\omega^{\\ast}$. We can glue a form $(Re \\eta + i Im \\eta) \\wedge idz_3 $ outside of $\\overline{W}$ to a form $(Re \\eta -i \\omega'') \\wedge idz_3$ inside $\\overline{W}$ to get a \ntrivialization $\\varphi'$ of a canonical bundle of $I'$.\n\nA submanifold $L \\in \\Phi$, then viewed as a submanifold of $M'$, is \nCalibrated by $Re \\varphi '$, which can be described by alternative \nconditions \n$\\omega^{\\ast} |_L=0, Im \\varphi '|_L= 0$. So we can view our moduli-space on $M'$\nas Special Lagrangian submanifolds, except for the fact that $I'$ is not\nan integrable a.c. structure and so $M'$ is only symplectic. If we were\nable to establish the fibration structure on $M$ by SLag submanifolds of the Calabi-Yau metric instead of $\\omega'$,\nwe would have obtained a Calabi-Yau structure on the mirror. \n\nFinally we prove that $ \\rho: V_a^{\\ast}\/\\Lambda_a^{\\ast} \\mapsto V_a'\/\\Lambda_a'$ is a symplectomorphism and thus $M'$ is a symplectic mirror to $M$ according to Definition 3.3.2. The symplectic structure on $V_a^{\\ast}\/\\Lambda_a^{\\ast}$ was obtained from the map $\\alpha: V_a^{\\ast} \\mapsto T^{\\ast}\\Phi$. Also the symplectic structure on $V_a'\/ \\Lambda_a' $ was obtained from a map $\\xi': V_a' \\mapsto T^{\\ast}\\Phi$. We will prove that\n\\[\\alpha = \\xi' \\circ \\rho \\] and then we are done. This is obviously true outside of bad neighbourhoods. Let now $L$ be in one of bad neighbourhoods, so $L$\nhas a form $T \\times S^1$. Let $\\beta^1,\\beta^2$ be (an oriented) basis for $H^1(T,\\mathbb{Z})$. Then $\\beta^1,\\beta^2, [dy_3]$ is a basis for $H^1(L,\\mathbb{Z})$, which we can view as a basis for $\\Lambda_a^{\\ast}$. Let $\\beta_1, \\beta_2, [S^1]$ be a corresponding dual basis for $H_1(L,\\mathbb{Z})$, which we can also view as a basis for $\\Lambda_a$ and $\\Lambda_a'$. 
Then $\\rho(\\beta^1) = -\\beta_2$, $\\rho(\\beta^2)= \\beta_1$ and $\\rho ([dy_3])= [S^1]$.\n\nLet $v^i= \\xi' (\\beta_i)$ and $\\gamma= \\xi'([S^1])$ in $T^{\\ast}\\Phi$ (because of the product structure $\\gamma$ can be viewed as the 1-form $dx_3$ on the moduli-space inside our bad neighbourhood).\nLet $v_1,v_2, \\partial_{x_3}$ be the dual basis of $T\\Phi$. So if we view the $v_i$\n as normal vector fields to the fiber then the $i_{v_i}\\omega^{\\ast}$ represent the cohomology classes $\\beta^i$. But in a bad neighbourhood\n $\\omega^{\\ast} = Im\\eta + dx_3 \\wedge dy_3$, so\n$[i_{v_i}Im\\eta]= \\beta^i$. \n\nNow by equation \\ref{PD} we have \\[ \\alpha(\\beta^i) (v_j)= [i_{v_j}Im\\varphi] \\cup \\beta^i([L])= [i_{v_j}Im\\eta \\wedge -dy_3] \\cup \\beta^i ([L])=\\beta^j \\cup -[dy_3]\\cup \\beta^i([L])= -\\beta^i\\cup \\beta^j([T]) \\] \nAlso one easily shows that $\\alpha(\\beta^i)(\\partial_{x_3})=0 $. So one deduces that\n$\\alpha(\\beta^1)=-v^2$, $\\alpha(\\beta^2)= v^1$ and also $\\alpha([dy_3])= \\gamma$. So $\\alpha= \\xi' \\circ \\rho $ and we are done. \n\nRemark: It is clear that applying these ideas we can get analogous results for\na Calabi-Yau 4-fold $N$ obtained from resolution of a quotient of an 8-torus by\n$\\mathbb{Z}_2^3$, where the generators of the $\\mathbb{Z}_2$ actions are\n\n$\\alpha: z_1, \\ldots , z_4 \\mapsto -z_1, -z_2, z_3, z_4 $\n\n$\\beta : z_1, \\ldots , z_4 \\mapsto z_1, z_2, -z_3, -z_4$ \n\nand $\\gamma: z_1, \\ldots , z_4 \\mapsto z_1, -z_2 + 1\/2, -z_3 + 1\/2 , z_4$\n\nIndeed the resolution of the quotient by $\\alpha$ and $\\beta$ is a product of\n2 $K^3$ surfaces with a product structure, where each $K^3$ has a metric \nwhich is Euclidean outside of bad neighbourhoods as before. \n\nFor the fixed point set of $\\gamma$ we introduce a bad neighbourhood $X$ in the\n$z_2,z_3$ coordinates and consider a neighbourhood $Z = T^2 \\times X \\times\nT^2$ in $T^8$. Then $\\alpha$ and $\\beta$ act freely on that neighbourhood.\nInside $Z$ we introduce structures as before and this way we get a Kahler\nmetric and a SLag torus fibration on $N$. \n\n\\subsection{ Holomorphic functions near SLag Submanifolds }\nIn this section we examine holomorphic functions in a neighbourhood of a\nspecial\nLagrangian submanifold. Such examples can be obtained for instance\nfrom a Calabi-Yau manifold $X$ in \\(\\mathbb{C}P^{n}\\) defined as a zero locus \nof real\npolynomials. In that case \\( L = X \\bigcap \\mathbb{R}P^{n} \\) is a Special\nLagrangian submanifold. Let $P$ be some real polynomial of degree\n$k$ without real roots. Then for any polynomial $Q$ of degree $k$ the\nfunction \\( \\frac {Q}{P} \\) is a holomorphic function on $X$ in a\nneighbourhood of $L$. More generally let $L$ be the fixed point set of an \nantiholomorphic involution $\\sigma$ and $h$ a meromorphic function on $M$.\nThen obviously \\( \\overline{h \\circ \\sigma} \\) is also a meromorphic function\non $M$ and so is \\(g = h \\cdot (\\overline{ h \\circ \\sigma})+ 1 \\). Also on $L$\n$g$ is real valued and $\\geq 1$. So $ f= 1\/g$ is a holomorphic function in a \nneighbourhood of $L$. \n\nAn immediate consequence of the fact that SLag submanifolds are `Special',\ni.e. $ Im \\varphi |_{L}= 0$, is the following \n\\begin{thm}\n: Let $L_0$ be a SLag submanifold and $f$ be a holomorphic\nfunction\nin a neighbourhood of $L_0$. Let $\\xi$ be a function on our moduli-space,\n\\( \\xi(L) = \\int_{L} f \\). Then $\\xi$ is a constant function.\n\\end{thm}\n{\\bf Proof}: Consider the following $(n,0)$ form \\( \\mu = f \\varphi\\). 
Then\n$\\mu$ is holomorphic, hence closed, and obviously $\\xi(L)= \\int_{L}\\mu$. \nQ.E.D.\\\\\n\n\\noindent \nThis yields the following corollary:\\\\\nFor \\( 0 < \\theta < \\pi \\) we denote by \\( A_{\\theta} \\) the open\ncone in the complex plane given by \\( ( z=re^{i \\rho } | r>0 , 0< \\rho <\n\\theta ) \\).\n\\begin{cor}\n: Let $M$ be a Calabi-Yau $n$-fold and $f$ a holomorphic\nfunction on some domain $U$ in $M$. Let $L(t)$ be a flow of SLag\nsubmanifolds contained in $U$ and \\( p \\in U \\) a point s.t. \\( f(p)=0\\).\nSuppose that the distance \\(d(p, L(t)) \\rightarrow 0 \\) as \\(t\n\\rightarrow \\infty \\). Then $L(t)$\ncannot be contained in the domain \\( f^{-1} (A_{\\theta}) \\) for \n\\( \\theta < \\frac{\\pi}{2n} \\). \n\\end{cor}\nRemark: This corollary gives a restriction on how singular SLag currents\nmight look.\\\\\n{\\bf Proof of Corollary 3.4.1}: Suppose the $L(t)$ are contained in \\(W= \nf^{-1}(A_{\\theta})\\) as above. We can find an \\( \\epsilon >0 \\) s.t. \n\\(g=f^{n+ \\epsilon }\\) is well defined and holomorphic on $W$ and \n\\( g(W) \\subset\nA_{\\frac{\\pi}{2}} \\). Then \\( h= \\frac{\\pi}{2g} \\) is holomorphic on $W$, \n\\( h(W) \\subset A_{\\frac{\\pi}{2}} \\) and for $z$ close to $p$ we have \\(|h(z)| \\geq const \\cdot \nd(z,p)^{-n- \\epsilon} \\).\n\nSince \\( \\int_{L(t)} h \\) is constant and \\( Re(h),Im(h) >0\\) on $L(t)$,\nthe integral \\( \\int_{L(t)} |h| \\) is bounded by a constant. \\\\\nTake now any \\( \\delta >0 \\) and pick $t$ and \\( p_{t} \\in L(t) \\) s.t.\n\\(d(p,p_{t}) \\leq \\delta \\). Consider \\(B=B(p_{t}, \\delta ) \\bigcap L(t)\n\\). By Theorem 2.0.1, \\( vol(B) \\geq const \\cdot \\delta^{n} \\) and on \n$B$ we have \\( |h| \\geq \\frac{const}{\\delta^{n+ \\epsilon}} \\). So \\( \\int_\n{L(t)} |h| \\geq \\int_{B} |h| \\geq const \\cdot \\delta^{- \\epsilon}\\).\\\\\nNow $\\delta$ was arbitrary - a contradiction. Q.E.D.\\\\\n\n\\noindent\nApplying these ideas we can also get a restriction on SLag submanifolds in\n$\\mathbb{C}^n$ which are asymptotic to a cone. We have the following theorem:\n\\begin{thm}\n: Let \\( L \\subset \\mathbb{C}^{n} \\) be a special\nLagrangian submanifold\nasymptotic to a cone $\\Lambda$ and let \\( z_{1} \\ldots z_{n} \\) be\ncoordinates on\n$\\mathbb{C}^n$. Then $L$ cannot be contained in the cone \n\\[ B_{\\theta}^{\\delta}= ( (z_1, \\ldots, z_n)| z_1 \\in A_\\theta ~,~ |z_1|>\n\\delta \\cdot |z_i| ) \\]\nfor $\\delta > 0 ~,~ \\theta < \\pi\/2n $.\n\\end{thm}\nRemark: The order to which $L$ is required to be asymptotic to a cone\nwill become clear from the proof.\\\\\n\n\\noindent \n{\\bf Proof of the theorem}: Consider a flow $L(t)$ of SLag submanifolds in the unit ball\nin\n$\\mathbb{C}^n$ with boundary in the unit sphere, \\( L(t) = (z|t \\cdot z \\in L , |z| \\leq 1 ) \\). We wish to prove that \\( \\int_{L(t)} |z_{1}|^{-n-\n\\epsilon } \\) is uniformly bounded in $t$ for some $\\epsilon > 0$, as in the proof of Corollary 3.4.1. This will lead us to a\ncontradiction as before because there are points in $L(t)$ which converge\nto the origin in $\\mathbb{C}^n$ as $t \\rightarrow \\infty$.\n\nLet $d$ be the distance function to the origin on $L$. Let \\(v =\n\\nabla d, w=\\frac{v}{||v||^{2} } \\). Then since $L$ is asymptotic to a\ncone, $w$ will be a well-defined vector field outside some ball $B$ in $L$\nand its length will converge to 1 at $\\infty$. We will also assume that the\nvector $w(x)$ is close to the line through $x$ and the origin, i.e. 
that there is a\nfunction \\( g : \\mathbb{R}_{+} \\mapsto \\mathbb{R}_{+} \\) s.t. the length of the orthogonal\ncomponent of $w(x)$ to this line is $\\leq$ \\( g(||x||) \\) and \\( \\int_{[1,\n\\infty ) } \\frac{g(t)}{t} dt < \\infty \\).\n\nWe extend $w$ inside\nof $B$ to be a \\( C^{\\infty} \\) vector field on $L$. Let \\( \\eta_{t} \\) be\nthe flow of $w$ in time $t$; then the derivative of $d$ along \\( \\eta_{t}\n\\) is 1 outside $B$.\\\\\nWe can consider the corresponding flow \\( \\sigma_t \\) on $L_s$,\n\\[ \\sigma_{t}(x) = \\frac{\\eta_{t}(sx)}{s+t} ~,~ \\sigma_{t} : L_{s}\n\\mapsto L_{s+t} \\]\nLet $v_s$ be the vector field on $L_s$ inducing the flow. One can\neasily show that on the boundary of $L_s$ we have \\( ||v_{s}|| \\leq const\n\\cdot \\frac{g(s)}{s} \\). \n\nPick $\\epsilon$ small enough so that for $f(z) = z_{1}^{-n- \\epsilon}$ both $Re f$ and $Im f$ are positive on $L_t$.\nLet $h(t)= \\int_{L_t}f=\\int_{L_t}f \\varphi$. We need to prove that $h(t)$ \nis a bounded function of $t$ and then we are done. \\\\\nLet $Q_t$ be the boundary of $L_t$. Then \\[h'(t)=\\int_{L_t}{\\cal\nL}_{v_t}f \\varphi=\\int_{L_t}d(i_{v_t}f \\varphi)=\\int_{Q_t}f \\cdot i_{v_t}\n\\varphi \\]\nThe conditions on $B_{\\theta}^{\\delta}$ imply that $|f|$ is uniformly \nbounded on $Q_t$. Also we know that \\(|v_{t}| \\leq const \\cdot \\frac{g(t)}{t}\\).\\\\\nSo \\( |h'(t)| \\leq const \\cdot vol(Q_{t}) \\frac{g(t)}{t} \\). Now $Q_t$\nconverges to the base of the cone, so its volume is bounded, so \\(\n|h'(t)| \\) is an integrable function of $t$ by our assumptions, so \n$h(t)$ is uniformly bounded in $t$. Q.E.D. \\\\\n\n\\noindent \nUsing those ideas for the 2-torus we get the following fact for analytic\nfunctions of one complex variable:\n\\begin{lem}\n: There is no holomorphic function $f$ from the open half disk \\\\\n\\(D = (re^{i \\theta } | 0 0 \\) on \\( L_{\\frac{1}{4}} \\). We look at the flow \n\\( L_{t} : t\\rightarrow 0 \\). We have 2 cases:\\\\\n1) We have a (first) value $t_0$ s.t. \\( ReP' (x, t_{0}) = 0 \\) or\n\\(ImP'(x,t_{0}) = 0 \\). W.l.o.g. we assume that the second case holds. Let\n\\(ReP'(x,t_{0}) = a \\). Consider \\( h = \\frac{\\pi}{2f(\\sqrt{P'-\na})} \\).\\\\\nThen as we approach $t_0$, $h$ remains a holomorphic function with \\( Reh ,\nImh > 0 \\) and \\( |h(z)| > const \\cdot |z-(x+it_{0})|^{-2} \\). So as in \nCorollary 3.4.1 we get a contradiction.\\\\\n2) If \\(ReP' , Im P' \\) remain positive as \\( t \\rightarrow 0 \\) then we\nget a contradiction by looking at \\( \\int_{L_{t}}|P'| \\). Q.E.D. \n\n\\section{Coassociative Geometry on a $G_2$ manifold}\nFor an oriented 7-manifold $M$, let $\\bigwedge ^3 T^{\\ast}M$ be the bundle of\n3-forms on it. This bundle has an open sub-bundle $\\bigwedge_+ ^3 M$ s.t.\n$\\varphi \\in \\bigwedge^3 T^{\\ast}_p M$ is in $\\bigwedge_+ ^3 M$ if there is a\nlinear isomorphism $\\sigma: T_p M \\mapsto \\mathbb{R}^7$ s.t. $\\sigma^{\\ast}\n\\varphi _0 = \\varphi$, where $\\varphi_0$ is the standard $G_2$ 3-form on\n$\\mathbb{R} ^7$ (see \\cite{Joy}, p. 294).\n\nA global section $\\varphi$ of $\\bigwedge_+ ^3 M$ defines a topological $G_2$\nstructure on $M$. In particular this defines a Riemannian metric on $M$.\nWe will be interested in a closed $\\varphi$. If $\\varphi$ is also co-closed then\nit is parallel and defines a metric with holonomy contained in $G_2$ (see\n\\cite{Joy}). In this case the form $\\ast \\varphi$ is a calibration and a\ncalibrated 4-submanifold $L$ is called a coassociative submanifold. This can\nalso be given by the alternative condition $\\varphi|_L = 0$. 
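For instance, in the flat model $\\mathbb{R}^7 = \\mathbb{R}^4 \\times \\mathbb{R}^3$ one can write (as we will do below for the torus examples)\n\\[ \\varphi_0 = \\omega_1 \\wedge \\delta_1 + \\omega_2 \\wedge \\delta_2 + \\omega_3 \\wedge \\delta_3 + \\delta_1 \\wedge \\delta_2 \\wedge \\delta_3 \\]\nwhere $\\omega_1, \\omega_2, \\omega_3$ is the standard HyperKahler package on $\\mathbb{R}^4$ and $\\delta_1, \\delta_2, \\delta_3$ are the coordinate 1-forms on $\\mathbb{R}^3$. Every term contains some $\\delta_i$, so $\\varphi_0$ vanishes on the 4-planes $\\mathbb{R}^4 \\times (pt)$; these are the model coassociative subspaces. 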
For a closed\n$\\varphi$ we can define coassociative submanifolds by this condition. They no\nlonger will be Calibrated (because $\\ast \\varphi$ is not closed).\nBut nevertheless they are quite interesting because they admit an unobstructed\ndeformation theory. In fact we can copy a proof of theorem 4.5 in \\cite{Mc} to show that their moduli-space $\\Phi$ is smooth of dimension $b_2 ^+ (L)$\n(proof of\nthat theorem in fact never used the fact that $\\ast \\varphi$ is closed).\nIf $L$ is coassociative as before and $p \\in L$ we can identify the normal\nbundle to $L$ at $p$ with self-dual 2-forms on $L$ by a map \\[v \\mapsto\ni_v \\varphi \\] for $v$ a normal vector to $L$. Thus a tangent space to $\\Phi$\nat $L$ can be identified with closed self-dual 2-forms on $L$.\n\nIn a similar way to SLag Geometry we have a following lemma for finite group\nactions on $M$ :\n\\begin{lem}\nSuppose a finite group $G$ acts on $M$ preserving $\\varphi$. Suppose $L$ is\na coassociative submanifold, $G$ leaves $L$ invariant and acts trivially on the second\ncohomology of $L$. Then $G$ leaves every element of the moduli-space $\\Phi$\nthrough $L$ invariant.\\\nMoreover, suppose $g \\in G$ and $x \\in M - L$ is an isolated fixed point of\n$g$. Then $x$ is not contained in any element in $\\Phi$.\n\\end{lem}\nThe proof of this lemma is completely analogous to proof of Lemma 3.1.1.\n\nNext we want to point out an example in which a $G_2$ manifold is a fibration\nwith generic fiber being a coassociative 4-torus. Our manifold $M$ is obtained from\nresolution of a quotient of a 7-torus by a finite group. We hope to give a\nsystematic way of producing such examples in a future paper.\n\nLet the group be $\\mathbb{Z}_2^3$ with generators \\[ \\alpha: (x_1, \\ldots, x_7)\n\\mapsto (-x_1,-x_2,-x_3,-x_4,x_5,x_6,x_7) \\]\n\\[\\beta : (x_1,\\ldots,x_7) \\mapsto (-x_1+1\/2,1\/2-x_2,x_3,x_4,-x_5,-x_6,x_7) \\]\n\\[\\gamma : (x_1,\\ldots,x_7) \\mapsto (-x_1, x_2, 1\/2-x_3, x_4, -x_5,x_6,-x_7)\\]\n\n(compare with \\cite{Joy}, p.302). We will follow Joyce's exposition of that\nexample.\n\nThe fixed point locus of each generator is a disjoint union of 3-tori. Their\nfixed loci don't intersect and their compositions have no fixed points. Around\neach fixed point the quotient looks like $V=T^3 \\times B\/\\pm1$, there $B$ is a\nball in $\\mathbb{R}^4$. We will show how to get a $G_2$ structure on resolution of singularities $\\overline{V}$. We will treat a fixed locus of, say, $\\alpha$. \n\nLet $x_1, \\ldots, x_4$ be coordinates on $\\mathbb{R}^4$. Let\n$\\omega_1,\\omega_2,\\omega_3$ be a standard HyperKahler package on\n$\\mathbb{R}^4$. \nFor coordinates $x_5,x_6,x_7$, let $\\delta_i$ be a dual 1-form to $x_{8-i}$. \nThen the $G_2$ 3-form on $\\mathbb{R}^7$ is \\[ \\varphi= \\omega_1 \\wedge \\delta_1 + \\omega_2 \\wedge \\delta_2 + \\omega_3 \\wedge \\delta_3 + \\delta_1 \\wedge \\delta_2 \\wedge \\delta_3 \\]\nWe can use either one of 3 complex structures on $\\mathbb{R}^4$ to identify it with $\\mathbb{C}^2$. That way for each singularity of the form $\\mathbb{R}^4\/\\pm1$ we get a resolution of a singularity $\\overline{U}$ and a (non-integrable) HyperKahler package on $\\overline{U}$ in a similar way to section 3.3.\nSuppose, for example, that we used a complex structure $I$ on $\\mathbb{R}^4$. Then $\\omega_2$ and $\\omega_3$ lift to $\\overline{U}$. 
Also $\\omega_1$ is replaced by a\nKahler form $\\omega_1'$ (as in Section 3.3).\n On $\\overline{V}= \\overline{U} \\times T^3$\nwe can consider \\[ \\varphi'= \\omega_1' \\wedge \\delta_1 + \\omega_2 \\wedge \\delta_2 +\\omega_3 \\wedge \\delta_3 + \\delta_1 \\wedge \\delta_2 \\wedge \\delta_3 \\]\nand $\\varphi'$ will be a closed $G_2$ form. Doing so for each fixed locus we get a $G_2$ structure on $M$. D. Joyce proved that this 3-form can be deformed to a parallel $G_2$ form. We will use the form $\\varphi'$ because we can construct a coassociative 4-torus fibration on $M$ for $\\varphi'$.\n\nOn $T^7$ we have a following coassociative 4-torus fibration \\[T_{a,b,c}=\n((x_1,\\ldots,x_7)| x_1=a, x_3=b, x_6=c) \\]\nNote that the 4 coordinates on each $T_{a,b,c}$ are chosen so that each\ngenerator of $\\mathbb{Z}_2^3$ acts non-trivially on exactly 2 of those coordinates. Those $T_{a,b,c}$ become coassociative tori on $M$ and fill a big \nopen neighbourhood of $M$. We would like to see what happens then\nthese tori enter a 'bad' neighbourhood $V$ above. For that we have to consider a bigger tubular neighbourhood $W=X \\times T^3$, there\n\\[ X= (v \\in T^4| v = w + (0,a,0,b), ~there~ w \\in U ~and~ a,b \\in \\mathbb{R}\/ \\mathbb{Z} ) \\]\nSo $W$ are exactly those points in $T^7$ which are contained on\n$T_{a,b,c}$ for a torus $T_{a,b,c}$, which intersects $V$. We have a resolution of singularities $\\overline{W}$, which can be viewed as a neighbourhood in $M$.\n\nWe would like to investigate coassociative tori in $\\overline{W}$. As we mentioned, we have 3 different ways to resolve a singularity using either one of the\nstructures $I,J,K$. We have 2 different cases :\n\n1) The structure we are using is either $I$ or $K$. We can assume that it is $I$. The $G_2$ form looks like \\[\\omega_1' \\wedge \\delta_1 + \\omega_2 \\wedge \\delta_2 + \\omega_3 \\wedge \\delta_3 + \\delta_1 \\wedge \\delta_2 \\wedge \\delta_3 \\]\nOur tori will look like \n\\[T_c \\times L \\]\nthere $T_c$ is a torus in $x_5,x_6,x_7$ coordinates defined by condition $x_6=c$ and $L$ is in $\\overline{U}$ defined by conditions: $\\omega_1'|_L=0, \\omega_3|_L=0 $, i.e it is a Special Lagrangian torus as in section 3.3. The results of\nsection 3.3 precisely apply to show that the compactified moduli-space fibers\nthe neighbourhood $\\overline{W}$. \n\n2) The structure we are using is $J$. Then we have a package $\\omega_1, \\omega_2',\\omega_3$ and the $G_2$ form looks correspondingly. Then our tori again look like $T_c \\times L$, there $L$ satisfies $\\omega_1|_L=0, \\omega_3|_L = 0$, i.e.\nthey are holomorphic tori with respect to a structure $J$. So what we get is\na neighbourhood in $K^3$ with a standard holomorphic fibration over a neighbourhood in $S^2$. So in any case our manifold $M$ fibers with generic fiber being a coassociative 4-torus.\n\nFinally we want to construct a topological mirror for this torus fibration. We will use definitions from section 3.3. It is clear that because of the local product structure the local monodromy representation is isomorphic to the dual representation.\nHence we can construct a dual fibration by performing a surgery for each 'bad'\nneighbourhood $\\overline{W}$ in a similar way to section 3.3. 
So for instance if $\\overline{W}$ is a neighbourhood of $\\alpha$ then we glue $\\overline{W}$ to $M - \\overline{W}$ along a boundary by a map \\[ \\eta: (x_1,\\ldots,x_7) \\mapsto (x_1,-x_4,x_3,x_2,x_5,x_6,x_7) \\]\nAlso near the boundary of $\\overline{W}$ we have $\\eta^{\\ast}(\\omega_1)= \\omega_3 ~,~ \\eta^{\\ast}(\\omega_3) = -\\omega_1 ~,~ \\eta^{\\ast}(\\omega_2)= \\omega_2 $.\nSo we can glue a closed $G_2$ form $\\varphi^{\\ast}= \\omega_3 \\wedge \\delta_1 + \\omega_2 \\wedge \\delta_2 - \\omega_1 \\wedge \\delta_3 + \\delta_1 \\wedge \\delta_2 \\wedge \\delta_3$ inside of $\\overline{W}$ to $\\varphi'$ outside of\n$\\overline{W}$ to get a closed $G_2$ form $\\varphi''$ on the mirror. Also the mirror fibration is a fibration by coassociative 4-tori with respect to $\\varphi''$. \n\nRemark: The original example in Joyce's paper \\cite{Joy} was a quotient by a slightly different $\\mathbb{Z}_2^3$ action with generators \\[ \\alpha: x_1, \\cdots , x_7 \\mapsto -x_1,-x_2,-x_3,-x_4,x_5,x_6,x_7 \\] \\[ \\beta : x_1 , \\cdots , x_7 \\mapsto \n-x_1,1\/2-x_2,x_3,x_4,-x_5,-x_6,x_7 \\] \\[ \\gamma: x_1, \\cdots , x_7 \\mapsto 1\/2 - x_1,x_2,1\/2-x_3,x_4,-x_5,x_6,-x_7 \\]\nWe get a closed $G_2$ form $\\varphi$ on the resolution of singularities similarly to previous example. Our manifold $M$ again will be fibered by coassociative 4-tori if we start from a family $T_{a,b,c} = ((x_1, \\ldots, x_7)|x_1=a,x_2=b,x_7=c)$. Indeed we consider a neighbourhood $U_i$ of a fixed component of one of\nthe generators and a bigger neighbourhood $X_i=(v \\in T^7|v= u + (0,0,a_1,a_2,a_3,a_4,0) ~ s.t.~ u \\in U ~ and ~ a_i \\in \\mathbb{R}\/\\mathbb{Z}) $. Then $X_i$ are disjoint, so we can use product structure on $X_i$ to get a coassociative torus fibration on $M$ as in the previous example.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn the standard approach to quantum mechanics the wave function provides a complete description of any system and is used to calculate probability distributions for observables associated with the system. In the pilot-wave theory pioneered by de Broglie \\cite{de0,de1} in the 1920's, particles have definite positions and velocities and are ``guided\" by the wave function, which satisfies Schrodinger's equation. A similar approach was developed by Bohm \\cite{Bohm1,Bohm1b} in the early 1950's (see \\cite{Bohm5} and \\cite{Hol2} for extensive reviews).\n\nFor a single particle in the non-relativistic theory the velocity of the particle is given by\n\\begin{equation}\n\\frac{d\\vec{X}}{dt}=\\vec{J}(\\vec{X},t)\\;,\n\\end{equation}\nwhere $\\vec{X}$ is the particle's position and\n\\begin{equation}\n\\vec{J}=\\frac{\\hbar}{2im|\\psi|^2}\\left[\\psi^*\\vec{\\nabla}\\psi-\\psi\\vec{\\nabla}\\psi^*\\right]\\;.\n\\end{equation}\nParticle trajectories are, therefore, integral curves to the vector field $\\vec{J}$.\n\nIn this paper I first consider the pilot-wave theory for a non-relativistic particle in a potential. I construct a Hamiltonian that gives Schrodinger's equation and the guidance equation for the particle. This involves imposing a constraint and the use of Dirac's method of dealing with constrained dynamical systems \\cite {Dir1, Dir2}. I then find the Hamiltonian for a relativistic particle in Dirac's theory and for a quantum scalar field.\n\nA Hamiltonian formulation of the pilot-wave theory has also been developed by Holland \\cite{Holland}. 
His approach is significantly different from the approach taken in this paper and will be discussed at the end of section 2.\n\\section{Non-Relativistic Pilot-Wave Theory}\nIn this section I construct a Hamiltonian that gives Schrodinger's equation and the guidance equation for a single particle and for a collection of particles.\n\nThe Lagrangian\n\\begin{equation}\nL=\\frac{1}{2}i\\hbar\\left[\\psi^*\\frac{\\partial\\psi}{\\partial t}-\\psi\\frac{\\partial\\psi^*}{\\partial t}\\right]-\\frac{\\hbar^2}{2m}\n\\vec{\\nabla}\\psi\\cdot\\vec{\\nabla}\\psi^*-V\\psi^*\\psi\n\\end{equation}\ngives Schrodinger's equation\n\\begin{equation}\ni\\hbar\\frac{\\partial \\psi}{\\partial t}=-\\frac{\\hbar^2}{2m}\\nabla^2\\psi+V\\psi\n\\end{equation}\nunder a variation with respect to $\\psi^*$ and its complex conjugate under a variation with respect to $\\psi$.\n\nThe canonical momenta $\\Pi_{\\psi}=\\partial L\/\\partial \\dot{\\psi}$ and $\\Pi_{\\psi^*}=\\partial L\/\\partial \\dot{\\psi}^*$ are given by\n\\begin{equation}\n\\Pi_{\\psi}=\\frac{1}{2}i\\hbar\\psi^*\\hskip 0.4in and \\hskip 0.4in \\Pi_{\\psi^*}=-\\frac{1}{2}i\\hbar\\psi\\;.\n\\end{equation}\nWe therefore have the primary constraints\n\\begin{equation}\n\\phi_1=\\Pi_{\\psi}-\\frac{1}{2}i\\hbar\\psi^*\\approx 0\\hskip 0.4in and \\hskip 0.4in \\phi_2=\\Pi_{\\psi^*}+\\frac{1}{2}i\\hbar\\psi\\approx 0\\;,\n\\label{Pi}\n\\end{equation}\nwhere $\\approx$ denotes a weak equality, which can only be imposed after the Poisson brackets have been evaluated. These constraints satisfy\n\\begin{equation}\n\\{\\phi_1(\\vec{x}),\\phi_1(\\vec{y})\\}=\\{\\phi_2(\\vec{x}),\\phi_2(\\vec{y})\\}=0\n\\end{equation}\nand\n\\begin{equation}\n\\{\\phi_1(\\vec{x}),\\phi_2(\\vec{y})\\}=-i\\hbar\\delta^{3}(\\vec{x}-\\vec{y})\\;.\n\\end{equation}\nThese constraints are, therefore, second class constraints.\n\nThe canonical Hamiltonian density $h_C=\\Pi_{\\psi}\\dot{\\psi}+\\Pi_{\\psi^*}\\dot{\\psi}^*-L$ is given by\n\\begin{equation}\nh_C=\\frac{\\hbar^2}{2m}\\vec{\\nabla}\\psi\\cdot\\vec{\\nabla}\\psi^*+V\\psi^*\\psi+\\phi_1\\dot{\\psi}+\\phi_2\\dot{\\psi}^*\n\\end{equation}\nand the total Hamiltonian density is given by\n\\begin{equation}\nh_T=\\frac{\\hbar^2}{2m}\\vec{\\nabla}\\psi\\cdot\\vec{\\nabla}\\psi^*+V\\psi^*\\psi+u_1\\phi_1+u_2\\phi_2\\;,\n\\label{HT}\n\\end{equation}\nwhere $u_1$ and $u_2$ are undetermined parameters.\n\nFor consistency we require that\n\\begin{equation}\n\\dot{\\phi_1}=\\{\\phi_1,H_T\\}\\approx 0\\hskip 0.4in and \\hskip 0.4in \\dot{\\phi}_2=\\{\\phi_2,H_T\\}\\approx 0\\;,\n\\end{equation}\nwhere $H_T=\\int h_Td^3x$ is the total Hamiltonian. These two equations give\n\\begin{equation}\nu_1=\\frac{i}{\\hbar}\\left[\\frac{\\hbar^2}{2m}\\nabla^2\\psi-V\\psi\\right]\n\\end{equation}\nand\n\\begin{equation}\nu_2=-\\frac{i}{\\hbar}\\left[\\frac{\\hbar^2}{2m}\\nabla^2\\psi^*-V\\psi^*\\right]\\;.\n\\end{equation}\nSubstituting these expressions for $u_1$ and $u_2$ into $H_T$ and integrating by parts gives\n\\begin{equation}\nH_T=\\frac{i}{\\hbar}\\int\\left[\\Pi_{\\psi}\\left(\\frac{\\hbar^2}{2m}\\nabla^2\\psi-V\\psi\\right)-\n\\Pi_{\\psi^*}\\left(\\frac{\\hbar^2}{2m}\\nabla^2\\psi^*-V\\psi^*\\right)\\right]d^3x\\;.\n\\end{equation}\nThe equation of motion for $\\psi$ is\n\\begin{equation}\n\\dot{\\psi}=\\{\\psi,H_T\\}=\\frac{i}{\\hbar}\\left(\\frac{\\hbar^2}{2m}\\nabla^2\\psi-V\\psi\\right)\\;,\n\\end{equation}\nwhich is Schrodinger's equation. 
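Explicitly, this is just the evaluation of the functional Poisson bracket using the canonical relation $\\{\\psi(\\vec{x}),\\Pi_{\\psi}(\\vec{y})\\}=\\delta^{3}(\\vec{x}-\\vec{y})$:\n\\begin{equation}\n\\dot{\\psi}(\\vec{x})=\\{\\psi(\\vec{x}),H_T\\}=\\frac{i}{\\hbar}\\int\\delta^{3}(\\vec{x}-\\vec{y})\\left[\\frac{\\hbar^2}{2m}\\nabla^2_y\\psi(\\vec{y})-V(\\vec{y})\\psi(\\vec{y})\\right]d^3y\\;,\n\\end{equation}\nand the $\\delta$-function integration gives the local equation above. 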
The equation of motion for $\\Pi_{\\psi}$ is\n\\begin{equation}\n\\dot{\\Pi}_{\\psi}=\\{\\Pi_{\\psi},H_T\\}=\\frac{i}{\\hbar}\\left(\\frac{\\hbar^2}{2m}\\nabla^2\\Pi_{\\psi}-V\\Pi_{\\psi}\\right)\\;.\n\\end{equation}\nFrom (\\ref{Pi}) we see that this is Schrodinger's equation for $\\psi^*$. Similar results hold for $\\dot{\\psi}^*$ and for\n$\\dot{\\Pi}_{\\psi^*}$. The constraints $\\phi_1$ and $\\phi_2$ can be promoted to strong equations if the Poisson bracket is replaced by the Dirac bracket.\n\nA similar Hamiltonian density $h=-\\frac{i\\hbar}{2m}\\vec{\\nabla}\\Pi\\cdot\\vec{\\nabla}\\psi -\\frac{i}{\\hbar}V\\Pi\\psi$ appears in Schiff\n\\cite{Sch1}, but he does not follow Dirac's procedure to obtain $h$. Gergely \\cite{Ger1} follows Dirac's approach and finds the Hamiltonian density $h=\\frac{\\hbar^2}{2m}\\vec{\\nabla}\\psi^*\\cdot\\vec{\\nabla}\\psi +V\\psi^*\\psi+\\dot{\\psi}\\phi_1+\\dot{\\psi}^*\\phi_2$. Schrodinger's equation and its complex conjugate then follow from the conditions $\\dot{\\phi}_1\\simeq 0$ and $\\dot{\\phi}_2\\simeq 0$\n(see \\cite{Str1} for a discussion on the Hamiltonian formulation of the DKP equation, which is also a first order equation, using Dirac's procedure).\n\nIn the pilot-wave theory the wave function is written as\n\\begin{equation}\n\\psi=Re^{iS\/\\hbar}\\;,\n\\end{equation}\nwhere $R$ and $S$ are real functions. The velocity of the particle is taken to be \\cite{de1,Bohm1}\n\\begin{equation}\n\\frac{d\\vec{X}}{dt}=\\frac{1}{m}\\vec{\\nabla}S(\\vec{X},t)\\;,\n\\label{eom}\n\\end{equation}\nwhere $\\vec{X}(t)$ is the particle's position. The velocity can also be written as\n\\begin{equation}\n\\frac{d\\vec{X}}{dt}=\\frac{\\vec{j}}{\\psi^*\\psi}=\\frac{\\hbar}{2im|\\psi|^2}\\left[\\psi^*\\vec{\\nabla}\\psi-\\psi\\vec{\\nabla}\\psi^*\\right]\\;,\n\\label{J}\n\\end{equation}\nwhere $\\vec{j}$ is the probability current density.\n\nThis equation of motion follows from the Hamiltonian\n\\begin{equation}\nH_p=\\frac{\\vec{p}}{m}\\cdot\\vec{\\nabla}S(\\vec{X},t)\\;.\n\\end{equation}\nTo see this consider the equations of motion\n\\begin{equation}\n\\dot{X}^k=\\{X^k,H_p\\}=\\frac{1}{m}\\left[\\frac{\\partial S}{\\partial x^k}\\right]_{\\vec{x}=\\vec{X}}\n\\label{x}\n\\end{equation}\nand\n\\begin{equation}\n\\dot{p}_k=\\{p_k,H_p\\}=-\\frac{p_l}{m}\\left[\\frac{\\partial^2 S}{\\partial x^k\\partial x^l}\\right]_{\\vec{x}=\\vec{X}}\\;.\n\\label{p}\n\\end{equation}\nEquation (\\ref{x}) is the correct equation of motion for the particle. Equation (\\ref{p}) gives the equation of motion for $p_k$, but $p_k$ is not related to the particle velocity in this theory. We can therefore solve for the particle's trajectory using (\\ref{x}) alone and ignore (\\ref{p}).\n\nNow consider the Hamiltonian for the field and particle\n\\begin{equation}\nH=H_T+H_P\\;.\n\\label{Ham1}\n\\end{equation}\nThe equations of motion $\\dot{\\psi}=\\{\\psi,H\\}$, $\\dot{X}^k=\\{X^k,H\\}$ and $\\dot{p}_k=\\{p_k,H\\}$ are the same as above. 
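In particular, since $H_T$ does not depend on $\\vec{X}$ or $\\vec{p}$, the particle equation is unchanged:\n\\begin{equation}\n\\dot{X}^k=\\{X^k,H\\}=\\{X^k,H_p\\}=\\frac{1}{m}\\left[\\frac{\\partial S}{\\partial x^k}\\right]_{\\vec{x}=\\vec{X}}\\;,\n\\end{equation}\nwhich is again the guidance equation (\\ref{eom}). 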
The equation of motion for $\\Pi_{\\psi}$, however, is altered by the addition of $H_p$ to $H_T$:\n\\begin{equation}\n\\dot{\\Pi}_{\\psi}=\\{\\Pi_{\\psi},H\\}=\\{\\Pi_{\\psi},H_T\\}+\\frac{p_k}{m}\\{\\Pi_{\\psi},\\partial_k S\\}\\;.\n\\label{Poisson}\n\\end{equation}\nTo evaluate $\\{\\Pi_{\\psi},\\partial_kS\\}$ it is convenient to write $\\partial_kS$ as\n\\begin{equation}\n\\partial_kS(\\vec{X},t)=\\frac{\\hbar}{2i}\\int\\left[\\frac{\\partial_k\\psi(\\vec{x},t)}{\\psi(\\vec{x},t)}-\n\\frac{\\partial_k\\psi^*(\\vec{x},t)}{\\psi^*(\\vec{x},t)}\\right]\\delta^3(\\vec{x}-\\vec{X})d^3x\\;.\n\\end{equation}\nThe last term in (\\ref{Poisson}) is given by\n\\begin{equation}\n\\frac{p_k}{m}\\{\\Pi_{\\psi}(\\vec{x}),\\partial_kS(\\vec{X},t)\\}=\\frac{p_k}{m\\psi(\\vec{x},t)}\\frac{\\partial}{\\partial x^k}\\delta^3(\\vec{x}-\\vec{X}).\n\\end{equation}\nThis term can be eliminated by imposing the constraint $p_k\\approx 0$.\nFor consistency we require that\n$\\dot{p}_k=\\{p_k,H\\}\\approx 0$. From equation (\\ref{p}) we see that this is satisfied, so that no new constraints arise.\nThe Hamiltonian $H$ plus the constraint $p_k\\approx 0$ therefore generates Schrodinger's equation and the equation of motion for the particle.\n\nThe generalization of (\\ref{Ham1}) to a system of $N$ particles is straightforward. The Hamiltonian is given by\n\\begin{equation}\nH=\\frac{i}{\\hbar}\\int\\left[\\Pi_{\\psi}\\left(\\sum_{k=1}^N\\frac{\\hbar^2}{2m_k}\\nabla^2_k\\psi-V\\psi\\right)-\n\\Pi_{\\psi^*}\\left(\\sum_{k=1}^N\\frac{\\hbar^2}{2m_k}\\nabla^2_k\\psi^*-V\\psi^*\\right)\\right]d^3x_1...d^3x_N\n+\\sum_{k=1}^N\\frac{\\vec{p}_k}{m_k}\\cdot\\vec{\\nabla}_kS\\;,\n\\end{equation}\nwhere $\\vec{\\nabla}_kS$ can also be written as\n\\begin{equation}\n\\vec{\\nabla}_kS=\\frac{\\hbar}{2i|\\psi|^2}\\left[\\psi^*\\vec{\\nabla}_k\\psi-\\psi\\vec{\\nabla}_k\\psi^*\\right]\\;,\n\\end{equation}\n$\\psi=\\psi(\\vec{x}_1,...,\\vec{x}_N)$ and $V=V(\\vec{x}_1,...,\\vec{x}_N,t)$. This Hamiltonian plus the constraints $\\vec{p}_k\\approx 0$ gives both the field and particle equations of motion.\n\nA Hamiltonian formulation of the pilot-wave theory has also been developed by Holland \\cite{Holland}. In his approach the wave function is written as $\\psi=\\sqrt{\\rho}e^{iS\/\\hbar}$ and the Hamiltonian is given by\n\\begin{equation}\nH_{tot}=H+\\int\\left\\{-\\pi_{\\rho}\\left[\\frac{1}{m}\\frac{\\partial}{\\partial q_i'}\\left(\\rho\\frac{\\partial S}{\\partial q_i'}\\right)\\right]-\\pi_{S}\\left[\\frac{1}{2m}\\left(\\frac{\\partial S}{\\partial q_i'}\\right)\\left(\\frac{\\partial S}{\\partial q_i'}\\right)+Q+V\\right]\\right\\}d^3q',\n\\end{equation}\nwhere\n\\begin{equation}\nH=\\sum_i\\frac{p_i^2}{2m}+V+Q\\;,\n\\end{equation}\n$p_i$ is the momentum of the $i^{th}$ particle, $V$ is the classical potential and $Q$ is the quantum potential, which is given by\n\\begin{equation}\nQ=-\\frac{\\hbar^2}{2m\\sqrt{\\rho}}\\frac{\\partial^2\\sqrt{\\rho}}{\\partial q_i^2}\\;.\n\\end{equation}\nHamilton's equations for $\\dot{S}$ and $\\dot{\\rho}$ give Schrodinger's equation, and Hamilton's equations for the particle degrees of freedom are given by\n\\begin{equation}\n\\dot{q}_i=p_i\/m \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; and \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \\dot{p}_i=-\\frac{\\partial}{\\partial q_i}[V+Q]\\;.\n\\end{equation}\nCombining these two equations gives\n\\begin{equation}\nm\\ddot{q}_i=-\\frac{\\partial}{\\partial q_i}[V+Q]\\;,\n\\end{equation}\nwhich is the second order form of the theory. 
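\n\nThe relation between Holland's second order equation and the first order theory can be sketched in the notation used earlier in this section. Substituting $\\psi=Re^{iS\/\\hbar}$ into Schrodinger's equation, the real part gives the quantum Hamilton-Jacobi equation\n\\begin{equation}\n\\frac{\\partial S}{\\partial t}+\\frac{(\\vec{\\nabla}S)^2}{2m}+V+Q=0\\;,\\hskip 0.4in Q=-\\frac{\\hbar^2}{2m}\\frac{\\nabla^2R}{R}\\;,\n\\end{equation}\nand differentiating the guidance equation (\\ref{eom}) along the trajectory then gives\n\\begin{equation}\nm\\frac{d^2\\vec{X}}{dt^2}=\\frac{d}{dt}\\left[\\vec{\\nabla}S\\right]_{\\vec{x}=\\vec{X}}=\\left[\\vec{\\nabla}\\left(\\frac{\\partial S}{\\partial t}+\\frac{(\\vec{\\nabla}S)^2}{2m}\\right)\\right]_{\\vec{x}=\\vec{X}}=-\\left[\\vec{\\nabla}(V+Q)\\right]_{\\vec{x}=\\vec{X}}\\;,\n\\end{equation}\nwhich is the equation above with $R=\\sqrt{\\rho}$.\n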
The second order equation can be derived from the first order guidance equation, but it is not equivalent to the guidance equation. Holland then considers Liouville's equation and shows, for pure states, that the momentum is constrained to satisfy the guidance equation. This approach differs significantly from the approach taken in this paper.\n\\section{Relativistic Pilot-Wave Theory}\nIn this section I construct a Hamiltonian that gives the Dirac equation plus the guidance equation.\n\nThe Lagrangian for the Dirac field coupled to the electromagnetic field is given by\n\\begin{equation}\nL=\\frac{i}{2}\\left[\\bar{\\psi}\\gamma^{\\mu}D_{\\mu}\\psi-(D^*_{\\mu}\\bar{\\psi})\\gamma^{\\mu}\\psi\\right]-m\\bar{\\psi}\\psi\\;,\n\\end{equation}\nwhere $D_{\\mu}=\\partial_{\\mu}+ieA_{\\mu}$ and I have taken $\\hbar=c=1$. There are two primary constraints\n\\begin{equation}\n\\phi_1=\\Pi_{\\psi}-\\frac{1}{2}i\\bar{\\psi}\\gamma^0\\approx 0\\hskip 0.4in and \\hskip 0.4in \\phi_2=\\Pi_{\\bar{\\psi}}+\\frac{1}{2}i\\gamma^0\\psi\\approx 0\n\\label{const}\n\\end{equation}\nthat satisfy\n\\begin{equation}\n\\{\\phi_{1_k}(\\vec{x}),\\phi_{1_{\\ell}}(\\vec{y})\\}=\\{\\phi_{2_k}(\\vec{x}),\\phi_{2_{\\ell}}(\\vec{y})\\}=0\n\\end{equation}\nand\n\\begin{equation}\n\\{\\phi_{1_k}(\\vec{x}),\\phi_{2_{\\ell}}(\\vec{y})\\}=-i(\\gamma^{0})_{\\ell k}\\delta^{3}(\\vec{x}-\\vec{y})\\;.\n\\end{equation}\nThe constraints are, therefore, second class. Following the same procedure that was used for Schrodinger's equation gives\n\\begin{equation}\nH_T=i\\int\\left\\{\\Pi_{\\psi}\\gamma^0\\left[i\\gamma^k\\partial_k\\psi-eA_{\\mu}\\gamma^{\\mu}\\psi-m\\psi\\right]+\n\\left[i(\\partial_k\\bar{\\psi})\\gamma^k+e\\bar{\\psi}A_{\\mu}\\gamma^{\\mu}+m\\bar{\\psi}\\right]\\gamma^0\\Pi_{\\bar{\\psi}}\\right\\}d^3x.\n\\end{equation}\nIn the pilot-wave theory the velocity of the particle is given by \\cite{Bohm2,Hol1}\n\\begin{equation}\n\\frac{dX^{\\mu}}{d\\tau}=J^{\\mu}\n\\end{equation}\nwhere\n\\begin{equation}\nJ^{\\mu}=\\frac{\\bar{\\psi}\\gamma^{\\mu}\\psi}{\\sqrt{a^2+b^2}}\\;,\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; J^{\\mu}J_{\\mu}=1\\;,\n\\end{equation}\n\\begin{equation}\na=\\bar{\\psi}\\psi\\;\\;\\;\\;\\;\\;\\;\\; and \\;\\;\\;\\;\\;\\;\\; b=i\\bar{\\psi}\\gamma^5\\psi\\;.\n\\end{equation}\nThe particle Hamiltonian is\n\\begin{equation}\nH_p=p_{\\mu}J^{\\mu}\n\\end{equation}\nand the Hamiltonian for the field and particle is given by\n\\begin{equation}\nH=H_T+H_p\\;.\n\\end{equation}\nAs in the non-relativistic approach we must impose $p_{\\mu}\\approx 0$ so that the addition of $H_p$ to $H_T$ does not alter the field equations for $\\Pi_{\\psi}$ and $\\Pi_{\\bar{\\psi}}$.\n\\section{Pilot-Wave Approach in Scalar Quantum Field Theory}\nIn this section I consider a massive scalar field and find a Hamiltonian that gives Schrodinger's equation plus the guidance equation for the field (see \\cite{Bohm4,Hol3} for discussions on the pilot-wave theory for scalar fields).\n\nConsider a real scalar field $\\phi(x^{\\mu})$ with the Lagrangian\n\\begin{equation}\nL=\\frac{1}{2}\\partial_{\\mu}\\phi\\partial^{\\mu}\\phi-\\frac{1}{2}m^2\\phi^2\\;.\n\\end{equation}\nThe canonical momentum is given by\n\\begin{equation}\n\\Pi=\\frac{\\partial\\phi}{\\partial t}\n\\end{equation}\nand the Hamiltonian is given by\n\\begin{equation}\nH=\\frac{1}{2}\\int\\left[\\Pi^2+(\\nabla\\phi)^2+m^2\\phi^2\\right]d^3x\\;.\n\\end{equation}\nSchrodinger's equation for this theory is given by\n\\begin{equation}\ni\\frac{\\partial\\Psi}{\\partial 
t}=\\frac{1}{2}\\int\\left[-\\frac{\\delta^2}{\\delta\\phi^2}+(\\nabla\\phi)^2+m^2\\phi^2\\right]d^3x\\;\\Psi\\;.\n\\end{equation}\nThe wave function can be written as\n\\begin{equation}\n\\Psi=Re^{iS}\n\\end{equation}\nand the equation of motion is taken to be\n\\begin{equation}\n\\Pi=\\frac{\\partial\\phi}{\\partial t}=\\frac{\\delta S}{\\delta\\phi}\\;.\n\\end{equation}\nConsider confining the field to a box of side $L$ (volume $V=L^3$) with periodic boundary conditions. The field can be written as\n\\begin{equation}\n\\phi(\\vec{x},t)=\\sqrt{\\frac{1}{V}}\\sum_{\\bf{k}}q_{\\bf{k}}(t)e^{i\\bf{k}\\cdot\\bf{x}}\\;,\n\\end{equation}\nwhere\n\\begin{equation}\n{\\bf{k}}=\\frac{2\\pi}{L}(n_x,n_y,n_z)\\;,\n\\end{equation}\n$n_x$, $n_y$ and $n_z$ are integers, $q_{\\bf{k}}^*=q_{-\\bf{k}}$ and $p_{\\bf{k}}^*=p_{-\\bf{k}}$, where $p_{\\bf{k}}$ is the momentum conjugate to $q_{\\bf{k}}$.\nSchrodinger's equation is given by\n\\begin{equation}\ni\\frac{\\partial\\Psi}{\\partial t}=\\sum_{{\\bf{k}}\/2}\\left[-\\frac{\\partial^2}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}\n+(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\right]\\Psi\\;,\n\\label{SE}\n\\end{equation}\nwhere $\\Psi=\\Psi(q_{\\bf{k}},q^*_{\\bf{k}},t)$ and ${\\bf{k}}\/2$ indicates that the sum is carried out over half of the values of ${\\bf{k}}$ (one member of each pair $({\\bf{k}},-{\\bf{k}})$).\n\nA Lagrangian that produces (\\ref{SE}) is\n\\begin{equation}\nL=\\frac{1}{2}i\\left[\\Psi^*\\frac{\\partial\\Psi}{\\partial t}-\\Psi\\frac{\\partial\\Psi^*}{\\partial t}\\right]-\n\\sum_{{\\bf{k}}\/2}\\left[\\frac{\\partial{\\Psi}}{{\\partial q_{\\bf{k}}}}\\frac{\\partial{\\Psi^*}}{{\\partial q^*_{\\bf{k}}}}+\n(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi\\Psi^*\\right]\\;.\n\\end{equation}\nFollowing the same procedure that was discussed in section 2 leads to the\nconstraints\n\\begin{equation}\n\\Pi_{\\Psi}\\approx\\frac{1}{2}i\\Psi^*\\hskip 0.4in and \\hskip 0.4in \\Pi_{\\Psi^*}\\approx-\\frac{1}{2}i\\Psi\n\\end{equation}\nand to the total Hamiltonian\n\\begin{equation}\nH_T=i\\sum_{{\\bf{k}}\/2}\\int\\left\\{\\Pi_{\\Psi}\\left[\\frac{\\partial^2\\Psi}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}-(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi\\right]-\n\\Pi_{\\Psi^*}\\left[\\frac{\\partial^2\\Psi^*}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}-(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi^*\\right]\\right\\}dqdq^*\\;,\n\\end{equation}\nwhere $dq=\\prod_{{\\bf{k}}\/2}dq_{{\\bf{k}}}$ and $dq^*=\\prod_{{\\bf{k}}\/2}dq^*_{{\\bf{k}}}$. 
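\n\nAs a check, evaluating the bracket for $\\Psi$ gives\n\\begin{equation}\n\\dot{\\Psi}=\\{\\Psi,H_T\\}=\\frac{\\delta H_T}{\\delta\\Pi_{\\Psi}}=i\\sum_{{\\bf{k}}\/2}\\left[\\frac{\\partial^2\\Psi}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}-(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi\\right]\\;,\n\\end{equation}\nwhich is equivalent to (\\ref{SE}).\n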
This Hamiltonian gives the correct equations of motion, assuming that Gauss' theorem (and the vanishing of surface terms) can be applied in infinite dimensional spaces.\nThe guidance equations are given by\n\\begin{equation}\n\\dot{q}_{\\bf{k}}=\\frac{\\partial S}{\\partial q^*_{\\bf{k}}}\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; and \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\n\\dot{q}^*_{\\bf{k}}=\\frac{\\partial S}{\\partial q_{\\bf{k}}}\\;.\n\\end{equation}\nThese equations of motion follow from the Hamiltonian\n\\begin{equation}\nH_p=\\sum_{{\\bf{k}}\/2}\\left[p_{\\bf{k}}\\frac{\\partial S}{\\partial q^*_{\\bf{k}}}\n+p^*_{\\bf{k}}\\frac{\\partial S}{\\partial q_{\\bf{k}}}\\right]\\;,\n\\end{equation}\nwhere\n\\begin{equation}\n\\{q_{\\bf{k}},p_{\\bf{l}}\\}=\\{q^*_{\\bf{k}},p^*_{\\bf{l}}\\}=\\delta_{\\bf{kl}}\n\\end{equation}\nand all other Poisson brackets vanish.\nThe equations of motion for the wave functional and for the field modes follow from the Hamiltonian\n\\begin{equation}\nH=H_T+H_p\n\\end{equation}\nplus the constraints $p_{\\bf{k}}\\approx 0$ and $p^*_{\\bf{k}}\\approx 0$.\n\\section{Conclusion}\nIn this paper I showed that the Hamiltonian\n\\begin{equation}\nH=\\frac{i}{\\hbar}\\int\\left[\\Pi_{\\psi}\\left(\\sum_{k=1}^N\\frac{\\hbar^2}{2m_k}\\nabla^2_k\\psi-V\\psi\\right)-\n\\Pi_{\\psi^*}\\left(\\sum_{k=1}^N\\frac{\\hbar^2}{2m_k}\\nabla^2_k\\psi^*-V\\psi^*\\right)\\right]d^3x_1...d^3x_N\n+\\sum_{k=1}^N\\frac{\\vec{p}_k}{m_k}\\cdot\\vec{\\nabla}_kS\n\\end{equation}\nplus the constraints $\\vec{p}_k\\approx 0$ gives Schrodinger's equation for a collection of non-relativistic particles in a potential and the guidance equation\n\\begin{equation}\n\\frac{d\\vec{X}_k}{dt}=\\frac{1}{m_k}\\vec{\\nabla}_kS(\\vec{X},t)\\;,\n\\end{equation}\nwhere $\\vec{X}_k$ is the position of the $k^{th}$ particle.\nIt is important to note that the canonical momenta $\\vec{p}_k$ are not related to the particle velocities $d\\vec{X}_k\/dt$, so it is consistent to impose the constraints\n$\\vec{p}_k\\approx 0$.\n\nI then showed that\n\\begin{equation}\nH=i\\int\\left\\{\\Pi_{\\psi}\\gamma^0\\left[i\\gamma^k\\partial_k\\psi-eA_{\\mu}\\gamma^{\\mu}\\psi-m\\psi\\right]+\n\\left[i(\\partial_k\\bar{\\psi})\\gamma^k+e\\bar{\\psi}A_{\\mu}\\gamma^{\\mu}+m\\bar{\\psi}\\right]\\gamma^0\\Pi_{\\bar{\\psi}}\\right\\}d^3x+p_{\\mu}J^{\\mu},\n\\end{equation}\nwhere\n\\begin{equation}\nJ^{\\mu}=\\frac{\\bar{\\psi}\\gamma^{\\mu}\\psi}{\\sqrt{a^2+b^2}}\\;,\n\\end{equation}\n\\begin{equation}\na=\\bar{\\psi}\\psi\\;\\;\\;\\;\\;\\;\\;\\; and \\;\\;\\;\\;\\;\\;\\; b=i\\bar{\\psi}\\gamma^5\\psi\n\\end{equation}\nplus the constraint $p_{\\mu}\\approx 0$ gives the Dirac equation and the guidance equation\n\\begin{equation}\n\\frac{dX^{\\mu}}{d\\tau}=J^{\\mu}\\;.\n\\end{equation}\n\nLastly I considered the massive Klein-Gordon equation, treated as a quantum field theory rather than as a single particle wave equation. 
After decomposing the field into normal modes\n\\begin{equation}\n\\phi(\\vec{x},t)=\\sqrt{\\frac{1}{V}}\\sum_{\\bf{k}}q_{\\bf{k}}(t)e^{i\\bf{k}\\cdot\\bf{x}}\n\\end{equation}\nI showed that the Hamiltonian\n\\begin{equation}\n\\begin{array}{cc}\n H=i\\sum_{{\\bf{k}}\/2}\\int\\left\\{\\Pi_{\\Psi}\\left[\\frac{\\partial^2\\Psi}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}-(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi\\right]-\n\\Pi_{\\Psi^*}\\left[\\frac{\\partial^2\\Psi^*}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}-(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi^*\\right]\\right\\}dqdq^*& \\\\\n & \\\\\n +\\sum_{{\\bf{k}}\/2}\\left[p_{\\bf{k}}\\frac{\\partial S}{\\partial q^*_{\\bf{k}}}\n +p^*_{\\bf{k}}\\frac{\\partial S}{\\partial q_{\\bf{k}}}\\right]\\;,\n \\end{array}\\hspace {-0.12in}\n\\end{equation}\nwhere $dq=\\prod_{{\\bf{k}}\/2}dq_{{\\bf{k}}}$ and $dq^*=\\prod_{{\\bf{k}}\/2}dq^*_{{\\bf{k}}}$,\nplus the constraints $p_{\\bf{k}}\\approx 0$ and $p^*_{\\bf{k}}\\approx 0$ gives Schrodinger's equation and the guidance equations\n\\begin{equation}\n\\dot{q}_{\\bf{k}}=\\frac{\\partial S}{\\partial q^*_{\\bf{k}}}\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; and \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\n\\dot{q}^*_{\\bf{k}}=\\frac{\\partial S}{\\partial q_{\\bf{k}}}\\;.\n\\end{equation}\n\\section*{Acknowledgements}\nThis research was supported by the Natural Sciences and Engineering Research\nCouncil of Canada.\n