diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznetn" "b/data_all_eng_slimpj/shuffled/split2/finalzznetn"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzznetn"
@@ -0,0 +1,5 @@
+{"text":"\\section{Derivation of Eq.~\\eqref{eq:Ea}}\n\\label{app:chambers}\n\nIn this section we derive~\\eqref{eq:Ea}, which states that the energy dependence of a single band of the HH model for rational flux $\\alpha = p\/q$ is given by\n\\begin{equation}\n    E_j(x_0,y_0) = E_{j}^* - 2 V_x' \\cos q x_0 - 2 V_y' \\cos q y_0 + O(W_j^2\/\\Delta_j).\n\\end{equation}\nIn the related AA model the quantities $(x_0,y_0)$ have a straightforward interpretation: $x_0$ acts as the phase of the potential, and $y_0$ is a crystal momentum, and \\emph{vice versa} in the dual model obtained by writing in the $y$-basis. The results of this section are obtained using well-known properties of the characteristic equation and spectrum of the HH model~\\cite{chambers1965linear,bellissard1982cantor,thouless1983bandwidths,thouless1990scaling,last1994zero,last1992sum}.\n\nOur first step is to obtain a useful form for the characteristic equation, the roots of which are the bands $E_j(x_0,y_0)$. 
We begin by noting that, per~\\eqref{eq:AA}, when written in the $x$-basis $\\hat{x} \\ket{x} = x \\ket{x}$ the HH Hamiltonian takes the form $H = \\int_0^{2\\pi\\alpha} d x_0 H_\\mathrm{AA}(x_0)$ with\n\\begin{equation}\n    \\begin{aligned}\n    H_\\mathrm{AA}(x_0) & = - \\sum_{n \\in \\mathbb{Z}} \\Big[ V_y \\left( \\ket{x_0 + 2 \\pi \\alpha (n + 1) }\\bra{x_0 + 2 \\pi \\alpha n} +\\mathrm{h.c.} \\right) + 2 V_x \\cos x_n \\ket{x_0 + 2 \\pi \\alpha n}\\bra{x_0 + 2 \\pi \\alpha n} \\Big],\n    \\end{aligned}\n\\end{equation}\nwhere $x_n := x_0 + 2 \\pi \\alpha n$. This Hamiltonian is manifestly periodic under a shift $x \\to x + 2 \\pi \\alpha q$, and so we may project onto a momentum sector, in which we obtain the Bloch Hamiltonian\n\\begin{equation}\n    \\begin{aligned}\n    H_\\mathrm{B}(x_0,y_0) & = - \\sum_{n \\in \\mathbb{Z}} \\Big[ V_y \\left( \\mathrm{e}^{ i y_0} \\ket{n+1 }\\bra{n} +\\mathrm{h.c.} \\right) + 2 V_x \\cos x_n \\ket{n}\\bra{n} \\Big]\n    \\end{aligned}\n\\end{equation}\nwhere we identify $\\ket{n} \\equiv \\ket{n+q}$. The spectrum of $H$ is thus made up of bands $E_j(x_0,y_0)$ determined by the solutions of the characteristic equation \n\\begin{equation}\n    C(E_j(x_0,y_0),x_0,y_0) = 0,\n\\end{equation}\nwhere $C$ is the characteristic polynomial given by\n\\begin{equation}\n    C(E,x_0,y_0) := \\det ( H_\\mathrm{B}(x_0,y_0) - E).\n\\end{equation}\nRemarkably, the characteristic polynomial has a very simple dependence on $x_0,y_0$:\n\\begin{equation}\n    C(E,x_0,y_0) = P(E) + C_0(x_0,y_0),\n\\end{equation}\nwhere $P(E)$ is a $q$th-order polynomial independent of $x_0$ and $y_0$, and $C_0$ is an energy-independent constant:\n\\begin{equation}\n    \\begin{aligned}\n    P(E) & = \\det ( H_\\mathrm{B}(\\pi\/2 q,\\pi \/ 2 q) - E)\n    \\\\\n    C_0(x_0,y_0) &= - 2 V_x^q \\cos q x_0 - 2 V_y^q \\cos q y_0.\n    \\end{aligned}\n    \\label{eq:PC}\n\\end{equation}\nHence the roots of the characteristic polynomial are the solutions to the equation\n\\begin{equation}\n    P(E) = - C_0(x_0,y_0).\n 
\\label{eq:PCroots}\n\\end{equation}\n\n\\begin{figure}[t!]\n    \\centering\n    \\includegraphics[width=0.95\\linewidth]{Fig_chambers.pdf}\n    \\caption{\n    \\emph{Graphical illustration of solutions to~\\eqref{eq:PCroots} for $V_x = V_y = V$ and $\\alpha = p\/q = 1\/7$}: $P(E)$ is shown in blue, the range of values swept out by $C_0$ is demarcated by the dashed green lines, and the corresponding range of values swept out by the roots of~\\eqref{eq:PCroots} is marked in red on the horizontal axis.\n    }\n    \\label{Fig:Chambers}\n\\end{figure}\n\n\nThe solutions to~\\eqref{eq:PCroots} are plotted in Fig.~\\ref{Fig:Chambers} for $V_x = V_y = V$ and $\\alpha = 1\/7$. In this figure $P(E)$ is shown in blue, the range of values swept out by $C_0$ is demarcated by the dashed green lines, and the corresponding range of values swept out by the solutions to~\\eqref{eq:PCroots} is marked in red on the horizontal axis. Each red interval corresponds to one band $E_j(x_0,y_0)$. Intuitively, inspection of Fig.~\\ref{Fig:Chambers} suggests that each band takes the form\n\\begin{equation}\n    E_j(x_0,y_0) \\approx E_j^* - \\frac{C_0(x_0,y_0)}{P'(E_j^*)},\n\\end{equation}\nwhich may be obtained by linearizing $P(E)$ about its roots $E_j^*$. We expect the corrections to this to be small if the variation of the gradient $P'(E)$ is small over the interval in which $E_j(x_0,y_0)$ varies, i.e., to leading order, if $W_j |P''(E_j^*)| \\ll |P'(E_j^*)|$, where $W_j$ is the bandwidth. There is reason to expect this leading-order analysis of the error to be accurate: $P''(E)$ varies only on the scale of the separation between successive roots, and thus we generically expect $P''(E_j^*)$ to provide a good order-of-magnitude estimate for $P''(E)$ for $E$ in the range $E_{j-1}^* \\leq E \\leq E_{j+1}^*$. 
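This decomposition of the characteristic polynomial is straightforward to verify numerically. The sketch below (plain NumPy; all parameter values are illustrative choices, not taken from the paper) builds the $q \times q$ Bloch Hamiltonian at flux $p/q$ and checks that the $(x_0,y_0)$ dependence of $\det(H_\mathrm{B}-E)$ enters only through $C_0(x_0,y_0)$:

```python
import numpy as np

def bloch_h(p, q, x0, y0, vx=1.0, vy=1.0):
    """q x q Bloch Hamiltonian H_B(x0, y0) of the HH model at flux alpha = p/q."""
    n = np.arange(q)
    h = np.diag(-2.0 * vx * np.cos(x0 + 2.0 * np.pi * p / q * n)).astype(complex)
    for i in range(q):  # nearest-neighbour hopping with Bloch phase e^{i y0}
        h[(i + 1) % q, i] += -vy * np.exp(1j * y0)
        h[i, (i + 1) % q] += -vy * np.exp(-1j * y0)
    return h

def char_poly(p, q, x0, y0, e):
    """C(E, x0, y0) = det(H_B(x0,y0) - E); real for Hermitian H_B and real E."""
    return np.linalg.det(bloch_h(p, q, x0, y0) - e * np.eye(q)).real

def c0(q, x0, y0, vx=1.0, vy=1.0):
    return -2.0 * vx**q * np.cos(q * x0) - 2.0 * vy**q * np.cos(q * y0)

# P(E) is the characteristic polynomial at the reference point (pi/2q, pi/2q),
# where C0 vanishes; the decomposition then reads C(E,x0,y0) = P(E) + C0(x0,y0).
def chambers_residual(p=2, q=5, e=0.3, x0=1.1, y0=2.4):
    pe = char_poly(p, q, np.pi / (2 * q), np.pi / (2 * q), e)
    return char_poly(p, q, x0, y0, e) - (pe + c0(q, x0, y0))
```

Diagonalizing the same matrix over a grid of $(x_0,y_0)$ yields the bands $E_j(x_0,y_0)$ directly.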
In the remainder of this section, we perform such a leading-order analysis to make this intuitive statement more precise.\n\nHaving obtained a form for the characteristic equation, we see that linearizing $P(E)$ about its roots $E_j^*$ yields\n\\begin{equation}\n    0 = P'(E_j^*) (E_j(x_0,y_0) - E_j^*) + C_0(x_0,y_0) + O(P''(E_j^*)(E_j(x_0,y_0) - E_j^*)^2).\n\\end{equation}\nThe solutions to this equation give the band structure up to an error which must be estimated:\n\\begin{equation}\n    \\begin{aligned}\n    E_j(x_0,y_0) & = E_j^* - \\frac{C_0(x_0,y_0)}{P'(E_j^*)} + O\\left( \\frac{C_0^2(x_0,y_0) P''(E_j^*)}{(P'(E_j^*))^3} \\right)\n    \\\\\n    &\n    = E_j^* - 2 V_x' \\cos q x_0 - 2 V_y' \\cos q y_0 + O\\left( \\frac{W_j^2 P''(E_j^*)}{P'(E_j^*)} \\right)\n    \\end{aligned}\n    \\label{eq:ejsupp}\n\\end{equation}\nwhere in the second line we have substituted~\\eqref{eq:PC} and defined $V_x' = -V_x^q \/ P'(E_j^*)$, $V_y' = -V_y^q \/ P'(E_j^*)$ and $W_j = 4 |V_x'| + 4 |V_y'|$.\n\nFinally, we note that the ratio $P''(E_j^*)\/P'(E_j^*)$ may be related to the band spacing. 
Specifically, expanding the characteristic polynomial to quadratic order about one of its roots $E_j^*$,\n\\begin{equation}\n    P(E) = P'(E_j^*) (E - E_j^*) + \\tfrac12 P''(E_j^*)(E - E_j^*)^2 + O(P'''(E_j^*)(E - E_j^*)^3),\n\\end{equation}\nwe find that the truncated polynomial has roots at\n\\begin{equation}\n    E = E_j^*, \\qquad \\text{and} \\qquad E = E_j^* - \\frac{2 P'(E_j^*)}{ P''(E_j^*) } + O \\left(\\frac{P'(E_j^*)^2 P'''(E_j^*)}{ P''(E_j^*)^3}\\right).\n\\end{equation}\nThis yields a distance between the roots of\n\\begin{equation}\n    \\Delta_j = \\left|\\frac{2 P'(E_j^*)}{ P''(E_j^*) }\\right| + O \\left(\\frac{P'(E_j^*)^2 P'''(E_j^*)}{ P''(E_j^*)^3}\\right),\n\\end{equation}\nwhich, combined with~\\eqref{eq:ejsupp}, yields~\\eqref{eq:Ea} in the main text.\n\n\n\n\n\\section{Numerical evidence of Eq.~\\eqref{eq:Wgap_size}}\n\\label{app:89}\n\n\\begin{figure}[t!]\n    \\centering\n    \\includegraphics[width=0.8\\linewidth]{Fig_supp.pdf}\n    \\caption{\n    \\emph{Limiting forms for the bandwidths $W_j$ and band gaps $\\Delta_j$}: in (a,b) the bandwidths and band gaps are analysed for $\\alpha = 1\/q$ in the limit $q \\to \\infty$: in (a) $q^{-1} \\log W_j$ is plotted versus $E_j^*$ for various values of $q$ (legend inset); in (b) $( \\Delta_j q)^{-1}$ is plotted versus $E_j^{(\\mathrm{gap})}$, with the inset showing the same data on a logarithmic horizontal scale. In (c,d) analogous plots are shown for $\\alpha = p\/q$ with $q = 2 p + 1$.\n    }\n    \\label{Fig:supp}\n\\end{figure}\n\nEq.~\\eqref{eq:Wgap_size} is obtained via analytic arguments by Wilkinson in Refs.~\\cite{wilkinson1984critical,wilkinson1987exact}. Nevertheless, here we provide supporting numerical evidence for this result.\n\nConsider the HH Hamiltonian~\\eqref{eq:H1} with $\\alpha = 1\/q$ tuned to the critical point $V_x = V_y = V$. Per the previous appendix, this Hamiltonian has $q$ bands $E_j(x_0,y_0)$ for $j = 1, \\ldots, q$. We denote the extrema of each band by $E_j^{( \\min )}$ and $E_j^{( \\max )}$. 
We further denote the band centers, bandwidths, and gap centers respectively by\n\\begin{equation}\n    E_j^* = \\tfrac12 \\left( E_j^{( \\max )} + E_j^{( \\min )} \\right),\n    \\qquad \n    \\Delta_j = \\rho_j^{-1} = E_{j+1}^{( \\min )} - E_j^{( \\max )} ,\n    \\qquad\n    E_j^{(\\mathrm{gap})} = \\tfrac12 \\left( E_{j+1}^{( \\min )} + E_j^{( \\max )} \\right)\n\\end{equation}\nwhere $\\rho_j = \\Delta_j^{-1}$ is the density of states at the gap center.\n\nIn Fig.~\\ref{Fig:supp}a we show that the bandwidths decay exponentially in $q$: specifically, we plot $q^{-1} \\log W_j$ versus $E_j^*$ for various values of $q$ (legend inset). The different $q$ series approach a limiting form at large $q$,\n\\begin{equation}\n    \\log W_j \\sim - q \\ell(E_j^* \/ V).\n\\end{equation}\nIn Fig.~\\ref{Fig:supp}b we plot $\\rho_j \/ q$ as a function of $E_j^{(\\mathrm{gap})}$, showing that in the same limit the density of states has the limiting form\n\\begin{equation}\n    \\rho_j = \\Delta_j^{-1} \\sim q \\varrho(E_j^{(\\mathrm{gap})} \/ V) \/ V.\n\\end{equation}\nIn the inset of Fig.~\\ref{Fig:supp}b the same data is shown on a log scale, showing that $\\varrho(z)$ has an integrable (specifically logarithmic) divergence at $z=0$. Figures~\\ref{Fig:supp}c,d show the equivalent plots in the limit of large $q$ with $q = 2 p+1$, illustrating that analogous limits occur for $\\alpha \\to 1\/2$. Indeed, similar limits apply for $\\alpha$ approaching any rational. \n\n\n\\section{The ergodic map $B(\\bar{\\alpha})$}\n\\label{app:T}\n\nIn this section we show the ergodicity of the map $B:[0,1\/2]\\to[0,1\/2]$ corresponding to an RG where we project into the lower band at each step. Here $B$ is given by\n\\begin{equation}\n    B(\\bar{\\alpha}) = \\min\\left[ b(\\bar{\\alpha}) , 1 - b(\\bar{\\alpha})\\right], \\qquad b(\\bar{\\alpha}) = \\frac{1}{\\bar{\\alpha}} - \\left\\lfloor \\frac{1}{\\bar{\\alpha}} \\right\\rfloor,\n\\end{equation}\nas defined in the main text in Eq.~\\eqref{eq:beta_scaling}. 
Results for an RG projecting into the middle band at each step, as used in the latter part of the paper, follow by the same methods.\n\nWe employ a numerical approach previously used in Refs.~\\cite{briggs2003precise,flajolet1995gauss} to calculate the spectral gap of the Gauss map. Consider an initial set of values $\\bar{\\alpha}_0^{(i)} \\in [0,1\/2]$ characterized by a smooth distribution $g_0(\\bar{\\alpha})$. Each of these values can be renormalized to yield $\\bar{\\alpha}_n^{(i)} = B^n(\\bar{\\alpha}_0^{(i)})$, which is also characterized by a smooth distribution function $g_n(\\bar{\\alpha})$. As $n$ is taken large, $g_n$ converges to the unique steady state distribution\n\\begin{equation}\n    g_n(\\bar{\\alpha}) \\to f(\\bar{\\alpha}) = \\frac{1}{\\log \\varphi} \\cdot \\frac{\\varphi^3}{\\varphi^3 + \\bar{\\alpha}-\\bar{\\alpha}^2}\n    \\label{eq:f_app}\n\\end{equation}\nwhere $\\varphi = (1 + \\sqrt{5})\/2$ is the golden ratio. Moreover, the deviation from the limiting distribution is exponentially small in $n$,\n\\begin{equation}\n    \\log |g_n(\\bar{\\alpha}) - f(\\bar{\\alpha})| \\sim - n\\Delta,\n    \\label{eq:Delta}\n\\end{equation}\nwhere\n\\begin{equation}\n    \\Delta = 3.7856665519818449128 \\ldots\n\\end{equation}\nis the spectral gap of $B$. The statement~\\eqref{eq:Delta} together with $\\Delta>0$ demonstrates the ergodicity of the map $B$. Moreover, the large value $\\Delta \\gg 1$ indicates that the convergence of $g_n(\\bar{\\alpha})$ to $f(\\bar{\\alpha})$ occurs rapidly, over an $O(1)$ number of steps. Eq.~\\eqref{eq:Delta} is the main result of this section. 
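This rapid convergence is easy to observe directly. In the sketch below (plain NumPy; the initial law and sample sizes are illustrative choices), a sample drawn from a smooth initial distribution is pushed through a few iterations of $B$ and the resulting histogram is compared with $f(\bar{\alpha})$:

```python
import numpy as np

PHI = (1.0 + np.sqrt(5.0)) / 2.0  # golden ratio

def B(a):
    """One RG step: fractional part of 1/a, folded back into [0, 1/2]."""
    frac = 1.0 / a - np.floor(1.0 / a)
    return np.minimum(frac, 1.0 - frac)

def f(a):
    """Claimed steady-state density, Eq. (f_app)."""
    return (1.0 / np.log(PHI)) * PHI**3 / (PHI**3 + a - a**2)

rng = np.random.default_rng(0)
a = rng.uniform(0.01, 0.49, size=200_000)   # sample from a smooth initial law g_0
for _ in range(8):                          # Delta >> 1: a handful of steps suffices
    a = np.clip(B(a), 1e-12, 0.5)           # clip guards against rounding exactly to 0
dens, edges = np.histogram(a, bins=20, range=(0.0, 0.5), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
max_dev = np.max(np.abs(dens - f(mids)))    # residual sampling noise only
```

After of order $1/\Delta$, i.e. a handful of, iterations the histogram is statistically indistinguishable from $f$.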
Before establishing the main result, we derive two further results: (i) we show that $f(\\bar{\\alpha})$ in~\\eqref{eq:f_app} is the steady state, and (ii) we show that the map $B$ is chaotic with maximal Lyapunov exponent\n\\begin{equation}\n    \\Lambda = \\frac{\\pi^2}{6 \\log \\varphi}.\n\\end{equation}\n\n\\subsection{Steady state distribution of $B$}\n\nThe sequence of distributions $g_n$ is defined by recursive application of the map $B$, i.e., $g_{n+1}(\\bar{\\alpha}) = [B g_n](\\bar{\\alpha})$, where the action of $B$ on $g$ (we use the same symbol for the map and the induced transfer operator) is given explicitly by\n\\begin{equation}\n    \\begin{aligned}\n    [B g](\\bar{\\alpha}) & := \\int_0^{1\/2} d\\bar{\\alpha}' \\delta(\\bar{\\alpha} - B(\\bar{\\alpha}')) g(\\bar{\\alpha}') = \\sum_{q=2}^\\infty \\left[ \\frac{g\\left(\\frac{1}{q+\\bar{\\alpha}}\\right)}{(q+\\bar{\\alpha})^2} + \\frac{g\\left(\\frac{1}{q+1-\\bar{\\alpha}}\\right)}{(q+1-\\bar{\\alpha})^2}\\right].\n    \\end{aligned}\n\\end{equation}\nNote that the action of $B$ on the space of distributions $g$ is linear, $[B(g+h)] = [B g] + [B h]$, and thus the steady state distribution $f$ is obtained as the leading eigenfunction of $B$, which has a corresponding eigenvalue of unity:\n\\begin{equation}\n    [B f](\\bar{\\alpha}) = f(\\bar{\\alpha}).\n\\end{equation}\nIt is then straightforward to verify that~\\eqref{eq:f_app} satisfies this relation. 
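The fixed-point relation can also be checked numerically by truncating the sum over $q$. A minimal sketch (plain NumPy; the truncation level is an illustrative choice giving roughly $10^{-4}$ accuracy):

```python
import numpy as np

PHI = (1.0 + np.sqrt(5.0)) / 2.0  # golden ratio

def f(a):
    """Candidate steady-state density, Eq. (f_app)."""
    return (1.0 / np.log(PHI)) * PHI**3 / (PHI**3 + a - a**2)

def apply_B(g, a, q_max=100_000):
    """Transfer-operator action of B on a density g, with the q-sum truncated;
    the neglected tail is of order max(g) / q_max."""
    q = np.arange(2, q_max, dtype=float)[:, None]
    return np.sum(g(1.0 / (q + a)) / (q + a) ** 2
                  + g(1.0 / (q + 1.0 - a)) / (q + 1.0 - a) ** 2, axis=0)

pts = np.linspace(0.02, 0.48, 24)
residual = np.max(np.abs(apply_B(f, pts) - f(pts)))  # [B f] = f up to truncation
```

The same routine applied to a generic density and iterated reproduces the convergence towards $f$ described above.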
The uniqueness of this solution is verified numerically in App.~\\ref{app:ergB}.\n\n\\subsection{Chaoticity of $B$}\n\nThe Lyapunov exponent of a discrete map is given by\n\\begin{equation}\n    \\Lambda = \\lim_{n \\to \\infty} \\frac{1}{n} \\sum_{m = 1}^n \\log |B'(\\bar{\\alpha}_m)|\n\\end{equation}\nwhere $\\bar{\\alpha}_n = B^n(\\bar{\\alpha}_0)$, $B'(\\bar{\\alpha})$ is the derivative of $B(\\bar{\\alpha})$ and the Lyapunov exponent $\\Lambda$ is independent of $\\bar{\\alpha}_0$ due to the ergodicity of $B$.\n\nMoreover, as $B$ is ergodic, $\\Lambda$ may be straightforwardly evaluated using the steady state distribution $f(\\bar{\\alpha})$:\n\\begin{equation}\n\\begin{aligned}\n    \\Lambda & = \\int_0^{1\/2}d \\bar{\\alpha} \\, f(\\bar{\\alpha}) \\, \\log| B'(\\bar{\\alpha})|\n    = - 2 \\int_0^{1\/2}d \\bar{\\alpha} \\, f(\\bar{\\alpha}) \\, \\log \\bar{\\alpha} \n    = \\frac{\\pi^2}{6 \\log \\varphi}.\n    \\end{aligned}\n    \\label{eq:lyapunov_app}\n\\end{equation}\nIn Eq.~\\eqref{eq:lyapunov_app} we have used that $| B'(\\bar{\\alpha})| = \\bar{\\alpha}^{-2}$ except at a measure zero set of points, where the derivative is undefined. As $\\Lambda > 0$, $B$ is chaotic.\n\n\\subsection{Ergodicity of $B$}\n\\label{app:ergB}\n\n\n\n\nAs $B$ is a linear operator, it has a spectrum of eigenvalues $\\beta_k$ with associated eigenfunctions $v_k(\\bar{\\alpha})$ which form a complete basis:\n\\begin{equation}\n    [B v_k](\\bar{\\alpha}) = \\beta_k v_k(\\bar{\\alpha}).\n    \\label{eq:B_eigs}\n\\end{equation}\nIn principle the spectrum of eigenvalues may have discrete and continuous components, though in the present case we find only a discrete spectrum, allowing us to index the eigenvalues in descending order of magnitude, $1 = |\\beta_0| \\geq |\\beta_1| \\geq |\\beta_2| \\geq \\cdots$. The distribution at late times is found by projecting onto the subspace of eigenfunctions with eigenvalues $|\\beta_k| =1$. 
If there is exactly one such eigenvalue, which we denote as $\\beta_0 = 1$ (the eigenvalue cannot have a phase as $g_n(\\bar{\\alpha})$ is strictly non-negative), then the steady state distribution $f(\\bar{\\alpha})$ is unique, independent of $g_0$, and given by the corresponding eigenfunction $f = v_0$. The deviation of $g_n$ from $v_0$ is then determined by the first sub-leading eigenvalue: $|f - g_n| = O(|\\beta_1|^n) = O(\\mathrm{e}^{- \\Delta n})$, where we have defined\n\\begin{equation}\n    \\Delta = - \\log |\\beta_1|\n\\end{equation}\nas the spectral gap of $B$.\n\nThe eigenfunction(s) $v_k(\\bar{\\alpha})$ may be obtained as the solutions to the eigenvalue equation~\\eqref{eq:B_eigs}; however, in the absence of an analytic technique to solve this equation, we resort to numerics. To numerically tackle this problem we first re-write it in terms of the coordinate $y = 1\/2 - \\bar{\\alpha} \\in [0,1\/2]$. In this coordinate $B$ has the action\n\\begin{equation}\n    [B g](y) = \\sum_{h} \\left[ \\frac{g\\left(\\tfrac12 - \\frac{1}{h - y}\\right)}{(h- y)^2} + \\frac{g\\left(\\tfrac12 - \\frac{1}{h+y }\\right)}{(h+y)^2}\\right]\n\\end{equation}\nwhere the sum is taken over the half-integers $h = \\tfrac52, \\tfrac72, \\tfrac92, \\tfrac{11}{2}, \\ldots$. To make the problem numerically tractable we subsequently write $B$ in a basis spanned by a countable set of basis elements. 
For simplicity we choose the basis monomials\n\\begin{equation}\n    u_p(y) = y^p = (1\/2 - \\bar{\\alpha})^p\n\\end{equation}\nupon which $B$ acts as\n\\begin{equation}\n    \\begin{aligned}\n    [B u_p](y) &= \\sum_{h} \\frac{1}{2^p}\\left[ \\frac{\\left(1 - \\frac{2}{h - y}\\right)^p}{(h- y)^2} + \\frac{\\left(1 - \\frac{2}{h+y }\\right)^p}{(h+y)^2}\\right]\n    \\\\\n    & = \\sum_{h} \\frac{1}{2^{p+2}} \\sum_{k = 0}^p \\binom{p}{k}\\left( -\\frac{2}{h}\\right)^{k+2}\\left[ \\left( \\frac{1}{1 - y\/h}\\right)^{k+2} + \\left( \\frac{1}{1+y\/h }\\right)^{k+2}\\right]\n    \\\\\n    & = \\sum_{h} \\frac{1}{2^{p+2}} \\sum_{k = 0}^p \\binom{p}{k}\\left( -\\frac{2}{h}\\right)^{k+2}\\left[ 2 \\sum_{n = 0}^\\infty \\binom{2n + k + 1 }{2n} \\left(\\frac{y}{h}\\right)^{2n}\n    \\right].\n    \\end{aligned}\n\\end{equation}\nRecalling the definition of the Hurwitz zeta function $\\zeta(s,a) = \\sum_{n=0}^\\infty (n+a)^{-s}$, and rearranging, we find\n\\begin{equation}\n    [B u_p](y) = \\sum_{k = 0}^p \\sum_{n = 0}^\\infty \\binom{p}{k} \\binom{2n + k + 1 }{2n} \n    \\frac{(-1)^k}{2^{p - k - 1}} \\zeta(2n + k + 2,5\/2) u_{2n}(y).\n    \\label{eq:Tup}\n\\end{equation}\nWe re-write~\\eqref{eq:Tup} in terms of $M_{pq}$, the matrix of the operator $B$ in the basis of monomials $u_p$,\n\\begin{equation}\n    [B u_p](y) = \\sum_{q = 0}^\\infty M_{pq} u_q(y),\n\\end{equation}\nwhere the matrix elements are given by \n\\begin{equation}\n    M_{pq} = \\begin{cases}\n    \\displaystyle\n    \\frac{1}{2^{p-1}}\\sum_{k = 0}^p \\binom{p}{k} \\binom{q + k + 1 }{q} \n    (-2)^k\\zeta(q + k + 2,5\/2) & \\qquad q \\text{\\,\\, even},\n    \\\\[15pt]\n    0 & \\qquad q \\text{\\,\\, odd}.\n    \\end{cases}\n\\end{equation}\n\nThe spectrum of $M$, and hence $B$, may then be numerically estimated by evaluating $M_{pq}$ up to a cutoff $p,q \\leq p_{\\max}$ and diagonalising. The eigenvalues $\\beta_k$ are found to be discrete, non-degenerate and exponentially decaying in $k$. 
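A compact double-precision implementation of this truncation (NumPy plus the Python standard library; the cutoff $p_{\max}=24$ and the direct Hurwitz-zeta summation are illustrative choices) already resolves the leading eigenvalues:

```python
import numpy as np
from math import comb

def hurwitz_zeta(s, a=2.5, k_cut=2000):
    """zeta(s, a) = sum_{n>=0} (n+a)^{-s}, with the leading Euler--Maclaurin
    tail correction; ample accuracy for the exponents s >= 4 needed here."""
    n = np.arange(k_cut)
    tail = (k_cut + a) ** (1.0 - s) / (s - 1.0) + 0.5 * (k_cut + a) ** (-s)
    return np.sum((n + a) ** (-s)) + tail

def transfer_matrix(p_max):
    """Truncated matrix M_{pq} of B in the monomial basis u_p(y) = y^p."""
    m = np.zeros((p_max + 1, p_max + 1))
    for p in range(p_max + 1):
        for q in range(0, p_max + 1, 2):  # odd columns vanish by symmetry
            m[p, q] = 2.0 ** (1 - p) * sum(
                comb(p, k) * comb(q + k + 1, q) * (-2.0) ** k
                * hurwitz_zeta(q + k + 2) for k in range(p + 1))
    return m

evals = np.sort(np.abs(np.linalg.eigvals(transfer_matrix(24))))[::-1]
gap = -np.log(evals[1])   # spectral gap Delta = -log|beta_1|
```

Double precision suffices at this cutoff; resolving the 20-digit values quoted below requires higher-precision arithmetic for the matrix elements.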
As a result the values of low-order eigenvalues converge exponentially as a function of $p_{\\max}$, allowing them to be accurately numerically estimated. The numerical limitation is the evaluation of the matrix elements, which require high-precision numerics for even moderately large $p_{\\max}$. Numerically extracted values for the magnitudes of the first five sub-leading eigenvalues are given below (to 20 significant figures):\n\\begin{equation}\n    \\begin{aligned}\n    - \\log |\\beta_0| &= 0\n    \\\\\n    \\Delta = \\Delta_1 = - \\log |\\beta_1| & = 3.7856665519818449128\n    \\\\\n    \\Delta_2 = - \\log |\\beta_2| & = 6.7251453074741971174\n    \\\\\n    \\Delta_3 = - \\log |\\beta_3| & = 11.339665867968595165\n    \\\\\n    \\Delta_4 = - \\log |\\beta_4| & = 12.043871233196576668\n    \\\\\n    \\Delta_5 = - \\log |\\beta_5| & = 16.966376007200018885\n    \\end{aligned}\n\\end{equation}\n\n\nIndeed, as expected, the associated leading eigenfunction is found to be\n\\begin{equation}\n    f(\\bar{\\alpha}) \\propto \\sum_{q = 0}^\\infty \\left( \\frac{ 1 - 2 \\bar{\\alpha}}{\\varphi^3} \\right)^{2q}\n    \\label{eq:f_sum_app}\n\\end{equation}\nwhere $\\varphi = (1 + \\sqrt{5})\/2$ is the golden ratio. Performing the sum in~\\eqref{eq:f_sum_app} and normalising yields~\\eqref{eq:f_app}.\n\n\n\n\n\\end{widetext}\n\n\n\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nFor $\\sigma\\dvtx \\mathbb{R}\\to\\mathbb{R}$ and $b\\dvtx\n\\mathbb{R}\\to\\mathbb {R}$, we are interested in the simulation of the\nstochastic differential equation\n\\begin{equation}\ndX_t=\\sigma(X_t)\\,dW_t+b(X_t)\\,dt,\n\\label{sde}\n\\end{equation}\nwhere $X_0=x_0\\in\\mathbb{R}$ and $W=(W_t)_{t\\geq0}$ is a standard Brownian\nmotion. 
We make the standard Lipschitz assumptions on the coefficients,\n\\[\n\\exists K\\in(0,+\\infty), \\forall x,y\\in\\mathbb{R}\\qquad\n\\bigl|\\sigma(x)-\\sigma(y)\\bigr|+\\bigl|b(x)-b(y)\\bigr|\\leq K|x-y|.\n\\]\n\n\nFor $T>0$, we are interested in the approximation of\n$X=(X_t)_{t\\in[0,T]}$ by its Euler scheme\n$\\bar{X}=(\\bar{X}_t)_{t\\in[0,T]}$ with $N\\geq1$ time-steps. We consider\nthe regular grid $\\{0=t_0<t_1<\\cdots<t_N=T\\}$ with $t_i=iT\/N$ and the continuous-time\nEuler scheme defined by $\\bar{X}_0=x_0$ and\n\\begin{equation}\n\\bar{X}_t=\\bar{X}_{t_i}+\\sigma(\\bar{X}_{t_i}) (W_t-W_{t_i})+b(\\bar{X}_{t_i}) (t-t_i)\\qquad\\mbox{for } t\\in[t_i,t_{i+1}].\n\\label{eul}\n\\end{equation}\nFor $p\\geq1$, we denote by $\\mathcal{W}_p$ the Wasserstein distance with index $p$,\ncomputed here between the laws of $X$ and $\\bar{X}$ on the space of continuous paths\nendowed with the supremum norm. The main result of this paper is the construction of\na coupling between $X$ and $\\bar{X}$ leading to the pathwise estimate\n\\[\n\\forall p\\geq1, \\forall\\varepsilon>0, \\exists C<+\\infty, \\forall N\\geq1\n\\qquad\n\\mathcal{W}_p \\bigl(\\mathcal{L}(X),\\mathcal{L}(\\bar{X}) \\bigr)\\leq\n\\frac{C}{N^{2\/3-\\varepsilon}}\n\\]\nproved in Section~\\ref{sec_pathwise} under additional regularity\nassumptions on the coefficients and uniform ellipticity. To construct\nthis coupling, we first obtain in Section~\\ref{sec_marginal} a\ntime-uniform estimate of the Wasserstein distance between the\nrespective laws $\\mathcal{L}(X_t)$ and $\\mathcal{L}(\\bar{X}_t)$ of\n$X_t$ and $\\bar{X}_t$,\n\\[\n\\forall p\\geq1, \\exists C<+\\infty, \\forall N\\geq1\\qquad\n\\sup_{t\\in[0,T]} \\mathcal{W}_p \\bigl(\\mathcal{L}(X_t),\\mathcal{L}(\n\\bar{X}_t) \\bigr)\\leq\\frac{C\\sqrt{\\log(N)}}{N}.\n\\]\nBeforehand, in Section~\\ref{Sec_res_std}, we recall well-known\nresults concerning the moments and the dependence on the initial\ncondition of the solution to the SDE~(\\ref{sde}) and its Euler scheme.\nWe also make explicit the dependence of the strong error estimates\n$\\mathbb{E} [\\sup_{s\\le t}|\\bar{X}_s-X_s|^p]$ with respect to\n$t\\in[0,T]$, which will play a key role in our analysis.\n\n\n\n\\section{Basic estimates on the SDE and its Euler scheme}\\label{Sec_res_std}\n\nWe recall some well-known results concerning the flow defined\nby~(\\ref{sde}) (see, e.g., Karatzas and Shreve~\\cite{KS}, page 306) and\nits Euler approximation.\n\n\n\n\\begin{aprop}\nLet us denote by $(X^{x}_t)_{t\\in[0,T]}$ the solution of (\\ref{sde}),\nstarting from $x\\in\\mathbb{R}$. 
For any $p\\geq1$, there exists a positive constant $C\\equiv C(p,T)$ such that\n\\begin{eqnarray}\n\\forall x \\in\\mathbb{R}\\qquad\n\\mathbb{E} \\Bigl[\\sup_{t\\in[0,T]}\\bigl|X^{x}_t\\bigr|^p\\Bigr]&\\leq& C\\bigl(1+|x|\\bigr)^p, \\label{momenteds}\n\\\\\n\\qquad\\quad \\forall x\\in\\mathbb{R}, \\forall s \\leq t \\leq T\n\\qquad \\mathbb{E} \\Bigl[\\sup _{u\\in[s,t]}\\bigl|X^{x}_{u}-X^{x}_{s}\\bigr|^p\n\\Bigr] &\\leq& C\\bigl(1+|x|\\bigr)^p(t-s)^{p\/2},\\label{accroisseds}\n\\\\\n\\forall x, y\\in\\mathbb{R}\\qquad \\mathbb{E} \\Bigl[\\sup_{t\\in\n[0,T]}\\bigl|X^{x}_t-X^{y}_t\\bigr|^p\n\\Bigr] &\\leq& C|y-x|^p.\\label{cieds}\n\\end{eqnarray}\n\\end{aprop}\n\n\\begin{aprop}\\label{vitfort_prop}\nLet $(\\bar{X}^{x}_t)_{t\\in[0,T]}$\ndenote the\nEuler scheme~(\\ref{eul}) starting from~$x$.\nFor any $p\\in[1,\\infty)$, there exists a positive constant $C\\equiv\nC(p,T)$ such that\n\\begin{eqnarray}\n\\forall N\\geq1, \\forall x\\in\\mathbb{R}\\qquad \\mathbb{E} \\Bigl[\\sup\n_{t\\in[0,T]}\\bigl|\\bar{X}^{x}_t\\bigr|^p\n\\Bigr]&\\leq& C\\bigl(1+|x|\\bigr)^p,\\label{momenteul}\n\\\\\n\\hspace*{30pt}\\forall N\\geq1, \\forall x\\in\\mathbb{R}, \\forall t\\in[0,T]\\qquad \\mathbb{E} \\Bigl[\\sup_{r\\in[0,t]}\\bigl|\\bar{X}^{x}_r-X^{x}_r\\bigr|^p\n\\Bigr]&\\leq&\\frac{C t^{p\/2}(1+|x|)^p}{N^{p\/2}}.\\label{vitfort}\n\\end{eqnarray}\n\\end{aprop}\n\nThe moment bound~(\\ref{momenteul}) for the Euler scheme holds in fact\nas soon as the drift and the diffusion coefficients have at most linear\ngrowth. The strong convergence order is established in\nKanagawa~\\cite{Ka} for Lipschitz and bounded coefficients. 
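This $N^{-1/2}$ strong rate is easy to observe numerically. The sketch below (plain NumPy; the Lipschitz coefficients, horizon and sample sizes are illustrative, not taken from the paper) couples each coarse Euler scheme to a fine-grid reference through the same Brownian increments:

```python
import numpy as np

rng = np.random.default_rng(0)
T, x0 = 1.0, 1.0
sigma = lambda x: 0.4 + 0.1 * np.cos(x)   # toy Lipschitz diffusion coefficient
b = lambda x: 1.0 - x                     # toy Lipschitz drift

def euler_terminal(dw_fine, thin):
    """Euler scheme at time T on the coarse grid obtained by summing groups of
    `thin` consecutive fine Brownian increments (same Brownian path for all grids)."""
    m = dw_fine.shape[1] // thin
    dw = dw_fine.reshape(dw_fine.shape[0], m, thin).sum(axis=2)
    dt, x = T / m, np.full(dw_fine.shape[0], x0)
    for k in range(m):
        x = x + sigma(x) * dw[:, k] + b(x) * dt
    return x

n_fine, n_paths = 2 ** 12, 4000
dw_fine = rng.normal(0.0, np.sqrt(T / n_fine), size=(n_paths, n_fine))
ref = euler_terminal(dw_fine, 1)          # fine-grid proxy for X_T
rms = {n: np.sqrt(np.mean((euler_terminal(dw_fine, n_fine // n) - ref) ** 2))
       for n in (16, 256)}                # RMS strong error, roughly C / sqrt(N)
```

Multiplying $N$ by $16$ should shrink the measured RMS error by roughly a factor $4$, consistent with the $N^{-1/2}$ rate.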
In fact, it\nis straightforward to extend Kanagawa's proof to merely Lipschitz\ncoefficients by using the estimates~(\\ref{momenteds})\nand~(\\ref{momenteul}) and obtain\n\\begin{equation}\n\\label{vitfort2}\n\\qquad\\forall N\\geq1, \\forall x\\in\\mathbb{R}, \\forall t\\in[0,T]\\qquad \\mathbb{E} \\Bigl[\\sup_{r\\in[0,t]}\\bigl|\\bar{X}^{x}_r-X^{x}_r\\bigr|^p\n\\Bigr]\\leq\\frac{C (1+|x|)^p}{N^{p\/2}}.\n\\end{equation}\nThe estimate~(\\ref{vitfort}) makes the dependence on $t$ explicit. This\nslight improvement will in fact play a crucial role in constructing the\ncoupling between the diffusion and the Euler scheme. We prove it for\nthe sake of completeness, even though the arguments are standard.\n\n\n\\begin{pf*}{Proof of~(\\ref{vitfort})}\nLet $\\tau_s=\\sup\\{t_i, t_i\\le s\\}$ denote the last discretization time\nbefore~$s$. We have $\\bar{X}^{x}_t-X^x_t=\\int_0^t\nb(\\bar{X}^{x}_{\\tau_s})-b(X^x_s) \\,ds + \\int_0^t\n\\sigma(\\bar{X}^{x}_{\\tau_s})-\\sigma(X^x_s) \\,dW_s$. By the Jensen and\nBurkholder--Davis--Gundy inequalities,\n\\begin{eqnarray*}\n&& \\mathbb{E} \\Bigl[ \\sup_{r \\in[0,t]} \\bigl|\\bar{X}^{x}_r-X^x_r\\bigr|^p\n\\Bigr]\n\\\\\n&&\\qquad \\le 2^p \\biggl( \\mathbb{E} \\biggl[ \\biggl(\\int\n_0^t \\bigl|b \\bigl(\\bar{X}^{x}_{\\tau_s}\n\\bigr)-b \\bigl(X^x_s \\bigr)\\bigr| \\,ds \\biggr)^p\n\\biggr]\n\\\\\n&&\\hspace*{48pt}{} + C_p\\mathbb{E} \\biggl[ \\biggl(\\int_0^t\n\\bigl(\\sigma\\bigl(\\bar{X}^{x}_{\\tau_s} \\bigr)-\\sigma\n\\bigl(X^x_s \\bigr) \\bigr)^2 \\,ds\n\\biggr)^{p\/2} \\biggr] \\biggr)\n\\\\\n&&\\qquad \\le 2^p \\biggl( t^{p-1}\\int_0^t\n\\mathbb{E} \\bigl[ \\bigl|b \\bigl(\\bar{X}^{x}_{\\tau\n_s} \\bigr)-b\n\\bigl(X^x_s \\bigr)\\bigr|^p \\bigr] \\,ds\n\\\\\n&&\\hspace*{48pt}{}+ C_pt^{p\/2-1}\\int_0^t \\mathbb{E} \\bigl[\n\\bigl|\\sigma\\bigl(\\bar{X}^{x}_{\\tau_s} \\bigr)- \\sigma\\bigl(X^x_s\n\\bigr)\\bigr|^p \\bigr]\\,ds \\biggr).\n\\end{eqnarray*}\n\nDenoting by $\\mathrm{Lip}(\\sigma)$ the finite Lipschitz constant 
of\n$\\sigma$, we have $|\\sigma(\\bar{X}^{x}_{\\tau_s})-\\sigma(X^x_s)|\\le\n\\mathrm{Lip}(\\sigma)(|\\bar{X}^{x}_{\\tau_s}-X^x_{\\tau_s}|+|X^x_{\\tau_s}-X^x_s|)$.\nThus, (\\ref{accroisseds}) and~(\\ref{vitfort2}) yield\\break\n$\\mathbb{E}[|\\sigma(\\bar{X}^{x}_{\\tau_s})-\\sigma(X^x_s)|^p]\\le \\frac{C\n(1+|x|)^p}{N^{p\/2}}, $ and the same bound holds for $b$\nreplacing~$\\sigma$. Since $t^p\\leq T^{p\/2}t^{p\/2}$, we easily conclude.\n\\end{pf*}\n\n\\section{The Wasserstein distance between the marginal laws}\\label{sec_marginal}\n\nIn this section, we are interested in finding an upper bound for the\nWasserstein distance between the marginal laws of the SDE~(\\ref{sde})\nand its Euler scheme. It is well known that the optimal coupling\nbetween two one-dimensional random variables is obtained by the inverse\ntransform sampling. Thus, let $F_t$ and $\\bar{F}_t$ denote the\nrespective cumulative distribution functions of $X_t$ and $\\bar{X}_t$.\nThe $p$-Wasserstein distance between the time-marginals of the\nsolution\\vadjust{\\goodbreak}\nto the SDE and its Euler scheme is given by (see Theorem~3.1.2\nin~\\cite{raru})\n\\begin{equation}\n\\mathcal{W}_p \\bigl(\\mathcal{L}(X_t),\\mathcal{L}(\n\\bar{X}_t) \\bigr)= \\biggl(\\int_0^1\\bigl|F_t^{-1}(u)-\n\\bar{F}_t^{-1}(u)\\bigr|^p\\,du \\biggr)^{1\/p}.\n\\label{wpinv}\n\\end{equation}\nLet us state now the main result of this section. We set\n\\begin{eqnarray*}\nC^k_b&=& \\bigl\\{f\\dvtx \\mathbb{R} \\rightarrow\\mathbb{R}\\ k \\mbox{\ntimes continuously differentiable s.t. }\n\\\\\n&&\\hspace*{115pt}\\bigl\\|f^{(i)} \\bigr\\|_\\infty<\n\\infty, 0\\le i\\le k \\bigr\\}.\n\\end{eqnarray*}\n\n\n\n\\begin{ahyp}\\label{hyp_wass_marginal}\nLet $a=\\sigma^2$. 
We assume that $a, b \\in C^2_b$, $a''$\nis globally \\mbox{$\\gamma$-}H\\\"older continuous with $\\gamma>0$ and\n\\[\n\\exists\\underline{a}>0, \\forall x\\in\\mathbb{R}, a(x)\\geq\\underline\n{a} \\mbox{ (uniform ellipticity)}.\n\\]\n\\end{ahyp}\n\n\nSince $\\sigma$ is Lipschitz continuous, under\nHypothesis~\\ref{hyp_wass_marginal}, we have either\n$\\sigma\\equiv\\sqrt{a}$ or $\\sigma\\equiv-\\sqrt{a}$. From now on, we\nassume without loss of generality that $\\sigma\\equiv\\sqrt{a}$ which is\na $C^2_b$ function bounded from below by the positive constant\n$\\underline{\\sigma}=\\sqrt{\\underline{a}}$.\n\n\n\n\\begin{theorem}\\label{wasun}\nUnder Hypothesis~\\ref{hyp_wass_marginal}, we have for\nany $p\\ge1$,\n\\[\n\\forall N\\geq1\\qquad \\sup_{t\\in[0,T]}\\mathcal{W}_p \\bigl(\n\\mathcal{L}(X_t),\\mathcal{L}(\\bar{X}_t) \\bigr)\\leq\n\\frac{C\\sqrt{\\log(N)}}{N},\n\\]\nwhere $C$ is a positive constant that only depends on $p$, $T$,\n$\\underline{a}$ and ($\\|a^{(i)}\\|_\\infty$, $\\|b^{(i)}\\|_\\infty$, $0\\le\ni\\le2$) and does not depend on the initial condition~$x\\in\\mathbb{R}$.\n\\end{theorem}\n\n\n\\begin{arem}\\label{w1unif}\nWhen $p=1$, the slightly better bound\n$\\sup_{t\\in[0,T]}\\mathcal{W}_1(\\mathcal{L}(X_t),\\allowbreak \\mathcal{L}(\\bar\n{X}_t))\\leq\\frac{C}{N}$ holds if $\\sigma$ is uniformly elliptic,\naccording to~\\cite{thesesbai}, Chapter~3. This is proved in a\nmultidimensional setting for $C^\\infty$ coefficients $\\sigma$ and $b$\nwith bounded derivatives by extending the results of \\cite{gu} but can\nalso be derived from a result of Gobet and Labart~\\cite{goblab} only\nsupposing that $b,\\sigma\\in C^{3}_b$. Let $p_t(x,y)$ and\n$\\bar{p}_t(x,y)$ denote, respectively, the density of~$X^{0,x}_t$ and\n$\\bar{X}^{0,x}_t$. 
Then Theorem~2.3 in~\\cite{goblab} gives the\nexistence of a constant $c>0$ and a finite nondecreasing function $K$\n(depending on the upper bounds of $\\sigma$ and $b$ and their\nderivatives) such that\n\\[\n\\forall(t,x,y) \\in(0,T]\\times\\mathbb{R}^2\\qquad\n\\bigl|p_t(x,y)-\\bar{p}_t(x,y)\\bigr|\\leq\\frac{TK(T)}{N t}\\exp\\biggl(-\\frac\n{c|x-y|^2}{t}\\biggr).\n\\]\nAs remarked in~\\cite{thesesbai}, Chapter~3, for $f\\dvtx \\mathbb{R}\\to\n\\mathbb{R}$ a Lipschitz continuous function with Lipschitz constant not\ngreater than one, one deduces that\n\\begin{eqnarray*}\n\\bigl|\\mathbb{E} \\bigl[f(X_t) \\bigr]-\\mathbb{E} \\bigl[f(\n\\bar{X}_t) \\bigr]\\bigr|&=& \\biggl\\llvert\\int_{\\mathbb\n{R}}\n\\bigl(f(y)-f(x) \\bigr) \\bigl(p_t(x,y)-\\bar{p}_t(x,y)\n\\bigr)\\,dy \\biggr\\rrvert\n\\\\\n&\\leq&\\frac\n{K(T)T}{N t}\\int_{\\mathbb{R}}|y-x|\\exp\\biggl(-\n\\frac{c|x-y|^2}{t} \\biggr)\\,dy\n\\\\\n&=&\\frac{K(T)T}{cN},\n\\end{eqnarray*}\nwhich gives $\\sup_{t\\leq\nT}\\mathcal{W}_1(\\mathcal{L}(X_t),\\mathcal{L}(\\bar{X}_t))\\leq\n\\frac{CK(T)T}{N}$ by the dual formulation of the $1$-Wasserstein\ndistance.\n\\end{arem}\n\nOur approach consists of controlling the time evolution of the\nWasserstein distance. To do so, we need to compute the evolution of\nboth $F_t^{-1}(u)$ and $\\bar{F}_t^{-1}(u)$. 
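The quantile representation~(\ref{wpinv}) of the one-dimensional Wasserstein distance also gives the standard sample-based estimator: sorting two equal-size samples realizes the coupling $u \mapsto (F_t^{-1}(u),\bar{F}_t^{-1}(u))$, which is optimal in one dimension. A minimal sketch (plain NumPy, with illustrative data):

```python
import numpy as np

def wasserstein_p(xs, ys, p=2):
    """Empirical W_p between equal-size samples: the sorted (quantile) pairing
    is the optimal coupling in one dimension."""
    diff = np.abs(np.sort(xs) - np.sort(ys))
    return np.mean(diff ** p) ** (1.0 / p)

# Translating a sample by c shifts every quantile by c, so W_p equals |c|.
rng = np.random.default_rng(1)
sample = rng.normal(size=10_000)
w2 = wasserstein_p(sample, sample + 0.3)
```

The same estimator applied to samples of $X_t$ and $\bar{X}_t$ gives a Monte Carlo check of the marginal bounds of this section.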
In the next two\npropositions, we derive partial differential equations satisfied by\nthese functions by integrating in space the Fokker--Planck equations\nand then applying the implicit function theorem.\n\n\\begin{aprop}\\label{propevolftm1}\nAssume that\nHypothesis~\\ref{hyp_wass_marginal} holds.\nThen for any $t\\in(0,T]$, the cumulative distribution function\n$x\\mapsto F_t(x)$ is invertible with inverse denoted by $F_t^{-1}(u)$.\nMoreover, the function $(t,u)\\mapsto F_t^{-1}(u)$ is $C^{1,2}$ on\n$(0,T]\\times(0,1)$ and satisfies\n\\begin{equation}\n\\partial_t F_t^{-1}(u)=-\\frac{1}{2}\n\\partial_u \\biggl(\\frac\n{a(F_t^{-1}(u))}{\\partial_u F_t^{-1}(u)} \\biggr)+b \\bigl(F_t^{-1}(u)\n\\bigr).\\label{fpinvfr}\n\\end{equation}\n\\end{aprop}\n\n\\begin{aprop}\\label{propevolbarftm1}\nAssume that $\\sigma$ and $b$ have linear growth, $\\exists C>0$, $\\forall\nx\\in\\mathbb{R}$, $|\\sigma(x)|+|b(x)|\\leq C(1+|x|)$, and that uniform\nellipticity holds, $\\exists\\underline{a}>0$, $\\forall x\\in\\mathbb{R}$,\n$a(x)\\geq\\underline{a}$. Then for any $t\\in(0,T]$, $\\bar{X}_t$ admits a\ndensity $\\bar{p}_t(x)$ with respect to the Lebesgue measure and its\ncumulative distribution function $x\\mapsto\\bar{F}_t(x)$ is invertible\nwith inverse denoted by $\\bar{F}_t^{-1}(u)$. Moreover, for each\n$k\\in\\{0,\\ldots,N-1\\}$, the function $(t,u)\\mapsto\\bar{F}^{-1}_t(u)$ is\n$C^{1,2}$ on $(t_k,t_{k+1}]\\times(0,1)$ and, on this set, it is a\nclassical solution of\n\\begin{eqnarray}\n\\label{eqevolbarftm1}\n\\partial_t \\bar{F}_t^{-1}(u)&=&-\n\\frac{1}{2}\\partial_u \\biggl(\\frac\n{\\alpha_t(u)}{\\partial_u \\bar{F}_t^{-1}(u)} \\biggr)+\n\\beta_t(u),\n\\end{eqnarray}\nwhere $\\alpha_t(u)=\\mathbb{E}[a(\\bar{X}_{t_k})|\\bar{X}_t=\\bar\n{F}_t^{-1}(u)]$ and\n$\\beta_t(u)=\\mathbb{E}[b(\\bar{X}_{t_k})|\\bar{X}_t=\\bar{F}_t^{-1}(u)]$.\n\\end{aprop}\n\n\nThe proofs of these two propositions are postponed to\nAppendix~\\ref{App_Sec1}. 
Let us mention here that
Proposition~\\ref{propevolftm1} also holds when $b'$ is only H\\\"older
continuous: the Lipschitz assumption on~$b'$ is needed later to prove
Theorem~\\ref{wasun}. The PDEs (\\ref{fpinvfr})~and~(\\ref{eqevolbarftm1})
enable us to compute the time derivative of the $p$th power of the
Wasserstein distance (\\ref{wpinv}) and prove, again in
Appendix~\\ref{App_Sec1}, the following key lemma.

\\begin{alem}\\label{lemmajoderwp}
Under Hypothesis~\\ref{hyp_wass_marginal}, for $p\\geq2$, the function
$t\\mapsto\\mathcal{W}_p^p(\\mathcal{L}(X_t),\\allowbreak
\\mathcal{L}(\\bar{X}_t))$ is continuous on $[0,T]$, and its first order
distribution\\vadjust{\\goodbreak} derivative\\break
$\\partial_t\\mathcal{W}_p^p(\\mathcal{L}(X_t),\\mathcal{L}(\\bar{X}_t))$ is
an integrable function on $[0,T]$. Moreover, $dt$ a.e.,
\\begin{eqnarray}\\label{majoderwp}
&& \\partial_t\\mathcal{W}_p^p \\bigl(\\mathcal{L}(X_t),\\mathcal{L}(\\bar{X}_t) \\bigr)\\nonumber
\\\\[-1pt]
&&\\qquad \\leq C \\biggl(\\mathcal{W}_p^p \\bigl(\\mathcal{L}(X_t),
\\mathcal{L}(\\bar{X}_t) \\bigr)
\\nonumber\\\\[-9pt]\\\\[-9pt]
&&\\hspace*{45pt}{} +\\int_0^1\\bigl|F_t^{-1}(u)-
\\bar{F}_t^{-1}(u)\\bigr|^{p-1}\\bigl|b \\bigl(
\\bar{F}_t^{-1}(u) \\bigr)-\\beta_t(u)\\bigr|\\,du\\nonumber
\\\\[-1pt]
&&\\hspace*{45pt}{}+\\int_0^1\\bigl|F_t^{-1}(u)-
\\bar{F}_t^{-1}(u)\\bigr|^{p-2} \\bigl(a \\bigl(
\\bar{F}_t^{-1}(u) \\bigr)-\\alpha_t(u)
\\bigr)^2\\,du \\biggr),\\nonumber
\\end{eqnarray}
where $C$ is a positive constant that only depends on $p$,
$\\underline{a}$, $\\|a'\\|_\\infty$ and $\\|b'\\|_\\infty$.
\\end{alem}

The last ingredient of the proof of Theorem~\\ref{wasun} is the next
lemma, the proof of which is also postponed to Appendix~\\ref{App_Sec1}.


\\begin{alem}\\label{malcal} Let $\\tau_t=\\sup\\{t_i, t_i\\le t\\}$ denote
the last discretization time before~$t$. 
Under
Hypothesis~\\ref{hyp_wass_marginal}, we have for all $p\\geq1$,
\\[
\\exists C<+\\infty, \\forall N\\geq1, \\forall t\\in[0,T]\\qquad \\mathbb{E} \\bigl[
\\bigl\\llvert\\mathbb{E} [ W_{t}-W_{\\tau_{t}}|\\bar{X}_{t} ]
\\bigr\\rrvert^{p} \\bigr] \\leq C \\biggl(\\frac{1}{N\\vee(N^2t)}
\\biggr)^{p\/2}.
\\]
\\end{alem}

\\begin{pf*}{Proof of Theorem~\\ref{wasun}}
Since
$\\mathcal{W}_p(\\mathcal{L}(X_t),\\mathcal{L}(\\bar{X}_t))\\le\\mathcal
{W}_{p'}(\\mathcal{L}(X_t),\\mathcal{L}(\\bar{X}_t)) $ for $p\\le p'$, it
is enough to prove the estimation for $p\\geq2$, which we suppose
from now on. Let
$\\psi_p(t)=\\mathcal{W}^2_p(\\mathcal{L}(X_t),\\mathcal{L}(\\bar{X}_t))$
and\\looseness=-1
\\begin{eqnarray}
\\mbox{for any integer } k\\geq1\\qquad h_k(x)=k^{-2\/p}h(kx)\\nonumber
\\\\
\\eqntext{\\mbox{where }h(x)= \\cases{x^{2\/p}, &\\quad if $x\\geq1$,
\\vspace*{2pt}\\cr
1+\\dfrac{2}{p}(x-1), &\\quad otherwise.}}
\\end{eqnarray}\\looseness=0
Since $h_k$ is $C^1$ and nondecreasing, Lemma~\\ref{lemmajoderwp} and
H\\\"older's inequality imply that
\\begin{eqnarray*}
&& h_k \\bigl(\\psi^{p\/2}_p(t) \\bigr)
\\\\[-2pt]
&&\\qquad = h_k
\\bigl(\\mathcal{W}_p^p \\bigl(\\mathcal{L}(X_0),
\\mathcal{L}(\\bar{X}_0) \\bigr) \\bigr)+\\int_0^th_k'
\\bigl(\\psi^{p\/2}_p(s) \\bigr)\\partial_s
\\mathcal{W}_p^p \\bigl(\\mathcal{L}(X_s),
\\mathcal{L}(\\bar{X}_s) \\bigr)\\,ds
\\\\[-2pt]
&&\\qquad \\leq h_k(0)
+C\\int_0^th_k'
\\bigl(\\psi^{p\/2}_p(s) \\bigr)
\\\\[-2pt]
&&\\hspace*{91pt} {}\\times\\biggl[\\psi ^{p\/2}_p(s)
\\\\[-2pt]
&&\\hspace*{107pt}{}
 +\\psi^{(p-1)\/2}_p(s)
\\biggl(\\int _0^1\\bigl|b \\bigl(\\bar{F}_s^{-1}(u)
\\bigr)-\\beta_s(u)\\bigr|^p\\,du \\biggr)^{1\/p} \\nonumber
\\\\
&&\\hspace*{107pt}
{}+\\psi^{(p-2)\/2}_p(s) \\biggl(\\int_0^1\\bigl|a
\\bigl(\\bar{F}_s^{-1}(u) \\bigr)-\\alpha
_s(u)\\bigr|^p\\,du \\biggr)^{2\/p} \\biggr]\\,ds.
\\end{eqnarray*}
Since for fixed $x\\geq0$, the 
sequence $(h'_k(x))_k$ is nondecreasing\nand converges to $\\frac{2}{p}x^{(2\/p)-1}$ as $k\\to\\infty$, one may take\nthe limit in this inequality thanks to the monotone convergence theorem\nand remark that the image of the Lebesgue measure on $[0,1]$ by\n$\\bar{F}_s^{-1}$ is the distribution of $\\bar{X}_s$ to deduce\n\\begin{eqnarray}\\label{pregron}\n\\psi_p(t)&\\leq&\\frac{2C}{p}\\int_0^t\n\\psi_p(s)+\\psi^{1\/2}_p(s)\\mathbb{E}\n^{1\/p} \\bigl(\\bigl|b(\\bar{X}_s)-\\mathbb{E} \\bigl(b(\n\\bar{X}_{\\tau_s})|\\bar{X}_s \\bigr)\\bigr|^p \\bigr)\n\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&{} + \\mathbb{E}^{2\/p} \\bigl(\\bigl|a(\\bar{X}_s)-\\mathbb{E} \\bigl(a(\n\\bar{X}_{\\tau _s})|\\bar{X}_s \\bigr)\\bigr|^p \\bigr) \\,ds.\\nonumber\n\\end{eqnarray}\nOne has\n\\begin{eqnarray*}\na(\\bar{X}_{\\tau_s})-a(\\bar{X}_s)&=&a'(\n\\bar{X}_s)\\sigma(\\bar{X}_{s}) (W_{\\tau_s}-W_s)\n\\\\[-1pt]\n&&{} -a'(\\bar{X}_s) \\bigl[ \\bigl(\\sigma(\\bar{X}_{\\tau\n_s})-\\sigma(\n\\bar{X}_{s}) \\bigr) (W_s-W_{\\tau_s})+b(\n\\bar{X}_{\\tau\n_s}) (s-\\tau_s) \\bigr]\n\\\\[-2pt]\n&&{}+(\\bar{X}_{\\tau_s}-\\bar{X}_s)\\int_0^1a'\n\\bigl(v \\bar{X}_{\\tau_s}+(1-v)\\bar{X}_s\n\\bigr)-a'( \\bar{X}_s)\\,dv.\n\\end{eqnarray*}\nUsing Jensen's inequality, the boundedness assumptions on $a,b$ and\ntheir derivatives and Lemma \\ref{malcal}, one gets\n\\begin{eqnarray*}\n&& \\mathbb{E} \\bigl(\\bigl|a(\\bar{X}_s)-\n\\mathbb{E} \\bigl(a( \\bar{X}_{\\tau_s})|\\bar{X}_s\n\\bigr)\\bigr|^p \\bigr)\n\\\\\n&&\\qquad \\leq C\\mathbb{E} \\bigl(\\bigl|\\sigma a'(\n\\bar{X}_s)\\bigr|^p\\bigl| \\mathbb{E}\\bigl((W_s-W_{\\tau_s})|\n\\bar{X}_s \\bigr)\\bigr|^p \\bigr)\n\\\\\n&&\\quad\\qquad{}+C\\mathbb{E} \\bigl((s-\\tau_s)^p+\\bigl| \\bigl(\\sigma(\\bar\n{X}_{\\tau\n_s})-\\sigma(\\bar{X}_{s}) \\bigr) (W_s-W_{\\tau_s})\\bigr|^p+|\n\\bar{X}_{\\tau\n_s}-\\bar{X}_s|^{2p} \\bigr)\n\\\\\n&&\\qquad \\leq\\frac{C}{N^{p\/2}\\vee(N^ps^{p\/2})}.\n\\end{eqnarray*}\nThe same bound holds with $a$ replaced by $b$. 
With (\\ref{pregron}) and
Young's inequality, one deduces
\\begin{eqnarray*}
\\psi_p(t)&\\leq& C\\int_0^t
\\psi_p(s)+\\frac{\\psi^{1\/2}_p(s)}{\\sqrt{N}\\vee(N\\sqrt{s})}+\\frac
{1}{N\\vee(N^2s)}\\,ds
\\\\
&\\leq& C\\int _0^t\\psi_p(s)+
\\frac{1}{N\\vee(N^2s)}\\,ds.
\\end{eqnarray*}
One concludes by Gronwall's lemma.
\\end{pf*}

\\begin{arem} When $a(x)\\equiv a$ is constant, the term $\\mathbb
{E}^{2\/p}
(|a(\\bar{X}_s)-\\break\\mathbb{E}(a(\\bar{X}_{\\tau_s})|\\bar{X}_s)|^p )$ in
(\\ref{pregron}) vanishes and the above reasoning ensures that $\\bar
{\\psi}_p(t)$ defined as $\\sup_{s\\in[0,t]}\\psi_p(s)$ satisfies
\\begin{eqnarray*}
\\bar{\\psi}_p(t)&\\leq& C\\int_0^t \\bar{\\psi}_p(s) \\,ds
+C\\bar{\\psi}^{1\/2}_p(t) \\int
_0^t\\frac{1}{\\sqrt{N}\\vee(N\\sqrt{s})}\\,ds
\\\\
&\\le& C\\int _0^t\\bar{\\psi}_p(s)
\\,ds + \\frac{1}{2}\\bar{\\psi}_p(t) +\\frac{C^2 (T+1)^2}{2N^2}.
\\end{eqnarray*}
By Gronwall's lemma, we recover the estimation
$\\sup_{t\\in[0,T]}\\mathcal{W}_p(\\mathcal{L}(X_t),\\break \\mathcal{L}(\\bar
{X}_t))\\leq\\frac{C}{N}$,
which is also a consequence of the strong order of convergence of the
Euler scheme when the diffusion coefficient is constant.
\\end{arem}

\\section{The Wasserstein distance between the pathwise laws}\\label{sec_pathwise}
We now state the main result of the paper.
\\begin{ahyp}\\label{hyp_wass_pathwise}
We assume that $a \\in C^4_b, b \\in C^3_b$ and
\\[
\\exists\\underline{a}>0, \\forall x\\in\\mathbb{R}\\qquad a(x)\\geq\\underline{a}
\\mbox{ (uniform ellipticity)}.
\\]
\\end{ahyp}
Clearly, Hypothesis~\\ref{hyp_wass_pathwise} implies
Hypothesis~\\ref{hyp_wass_marginal}.


\\begin{theorem}\\label{main_thm}
Under Hypothesis~\\ref{hyp_wass_pathwise}, we have
\\[
\\forall p\\geq1, \\forall\\varepsilon>0, \\exists C<+\\infty, \\forall N\\geq1
\\qquad \\mathcal{W}_p \\bigl(\\mathcal{L}(X),\\mathcal{L}(\\bar{X}) 
\\bigr)\\leq\n\\frac{C}{N^{2\/3-\\varepsilon}}.\n\\]\n\\end{theorem}\n\nBefore proving the theorem, let us state some of its consequences for\nthe pricing of lookback options. It is well known (see, e.g.,\n\\cite{glas} page 367) that if $(U_k)_{0\\leq k\\leq N-1}$ are independent\nrandom variables uniformly distributed on $[0,1]$ and independent from\nthe Brownian increments $(W_{t_{k+1}}-W_{t_{k}})_{0\\leq k\\leq N-1}$\nthen\n$\\bar{\\hspace*{-1.2pt}\\bar{X}}\\stackrel{\\mathrm{def}}{=}\\frac{1}{2}\\max_{0\\leq\nk\\leq N-1} (\\bar{X}_{t_k}+\\bar{X}_{t_{k+1}}+\\sqrt{(\\bar\n{X}_{t_{k+1}}\\,{-}\\,\\bar{X}_{t_k})^2-2\\sigma^2(\\bar{X}_{t_k})t_1\\ln(U_k)} ) $\nis such\\vspace*{-2pt} that $\n(\\bar{X}_0,\\bar{X}_{t_1},\\ldots,\\bar{X}_{T},\\bar{\\hspace*{-1.2pt}\\bar{X}}\n)\\stackrel{\\mathcal{L}}{=}(\\bar{X}_0,\\bar{X}_{t_1},\\ldots,\\bar\n{X}_{T},\\max_{t\\in [0,T]}\\bar{X}_t)$.\n\\begin{acor}\nIf $f\\dvtx \\mathbb{R}^2\\to\\mathbb{R}$ is Lipschitz continuous, then,\nunder Hypothesis~\\ref{hyp_wass_pathwise},\n\\begin{eqnarray}\\label{vitfaiblook}\n&& \\forall\\varepsilon>0, \\exists C<+\\infty, \\forall N\\geq1\n\\nonumber\\\\[-6pt]\\\\[-12pt]\n&&\\qquad \\Bigl\\llvert \\mathbb{E} \\Bigl[f \\Bigl(X_T,\\max_{t\\in[0,T]}X_t\n\\Bigr) \\Bigr]-\\mathbb{E}\n\\bigl[f(\\bar{X}_T,\\bar{\\hspace*{-1.2pt}\\bar{X}}) \\bigr]\n\\Bigr\\rrvert\\leq\\frac{C}{N^{2\/3-\\varepsilon}}.\\nonumber\n\\end{eqnarray}\n\\end{acor}\n\nTo our knowledge, this result appears to be new. 
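The simulation of the pair $(\\bar{X}_T,\\bar{\\hspace*{-1.2pt}\\bar{X}})$ recalled above from \\cite{glas} can be sketched as follows: on each time step, the maximum of the Brownian-bridge interpolation given the two endpoints is drawn with the explicit square-root formula. The coefficients and the lookback payoff $f(x,m)=m-x$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def euler_with_conditional_max(x0, b, sigma, T, N, rng):
    """One Euler path together with a draw of the conditional running maximum:
    on each step the maximum given the endpoints is simulated with the
    explicit square-root formula quoted in the text."""
    dt = T / N
    x_prev, running_max = x0, x0
    for _ in range(N):
        x_next = x_prev + b(x_prev) * dt + sigma(x_prev) * rng.normal(0.0, np.sqrt(dt))
        u = rng.uniform()
        # conditional step maximum; -log(u) >= 0 keeps the sqrt well defined
        step_max = 0.5 * (x_prev + x_next
                          + np.sqrt((x_next - x_prev) ** 2
                                    - 2.0 * sigma(x_prev) ** 2 * dt * np.log(u)))
        running_max = max(running_max, step_max)
        x_prev = x_next
    return x_prev, running_max

# illustrative coefficients and lookback payoff f(x, m) = m - x (assumptions)
b = lambda x: -x
sigma = lambda x: np.sqrt(1.0 + 0.5 * np.sin(x) ** 2)
rng = np.random.default_rng(0)
draws = [euler_with_conditional_max(0.0, b, sigma, 1.0, 64, rng) for _ in range(2_000)]
price = float(np.mean([m - x for x, m in draws]))
```

By construction each step maximum dominates both endpoints, so the returned maximum dominates the terminal value; a plain discrete maximum over the grid points would systematically underestimate the running maximum of the interpolated scheme.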
Of course, when $f$ is
also differentiable with respect to its second variable, one has
\\begin{eqnarray*}
&& \\mathbb{E} \\Bigl[f \\Bigl(X_T,\\max_{t\\in[0,T]}X_t \\Bigr)
\\Bigr]
\\\\
&&\\qquad =\\mathbb{E} \\bigl[f (X_T,x_0 ) \\bigr]+
\\int_{x_0}^{+\\infty}\\mathbb{E} \\bigl[
\\partial_2f(X_T,x)1_{\\{\\max_{t\\in[0,T]}X_t\\geq x\\}} \\bigr]\\,dx.
\\end{eqnarray*}
One could try to combine the weak error analysis for the first term on
the right-hand side with Theorem 2.3 \\cite{gob2} devoted to barrier
options to obtain the order $N^{-1}$ instead of $N^{-2\/3+\\varepsilon}$
in (\\ref{vitfaiblook}). Unfortunately, one cannot succeed for two main
reasons. First, it is not clear whether the estimation in Theorem 2.3
\\cite{gob2} is preserved by integration over $[x_0,+\\infty)$. More
importantly, for this estimation to hold, a structure condition on the
payoff function implying that $\\partial_2f(x,x)=0$ for all $x\\geq x_0$
is needed.

\\begin{pf*}{Proof of Theorem \\ref{main_thm}}
We first deduce from Theorem \\ref{wasun} some bound on the Wasserstein
distance between the finite-dimensional marginals of the diffusion $X$
and its Euler scheme $\\bar{X}$ on a coarse time-grid. For
$m\\in\\{1,\\ldots,N-1\\}$, we set $n=\\lfloor N\/m\\rfloor$ and define
\\[
s_l=\\frac{lmT}{N}\\qquad\\mbox{for } l\\in\\{0,\\ldots,n-1\\}\\mbox{ and } s_n=T.
\\]
We will use this coarse time-grid $(s_l)_{1\\leq l\\leq n}$ to
approximate the supremum norm on $\\mathcal{C}$ and therefore we
consistently endow $\\mathbb{R}^n$ with the norm $|(x_1,\\ldots,\\break x_n)|=\\max
_{1\\leq l\\leq n}|x_l|$. 
Combining the next proposition, the proof of
which is postponed to Appendix~\\ref{App_Sec2}, with Theorem \\ref{wasun},
one obtains that
\\begin{equation}
\\mathcal{W}_p \\bigl(\\mathcal{L}(X_{s_1},\\ldots,X_{s_n}),\\mathcal{L}(\\bar{X}_{s_1},\\ldots,
\\bar{X}_{s_n}) \\bigr)\\leq\\frac{C\\sqrt{\\log
N}}{m},\\label{wasfindim}
\\end{equation}
where the constant $C$ does not depend on $(m,N)$.
\\begin{aprop}\\label{prop_wass_multi}
Let $\\mathbb{R}^n$ be endowed
with the norm $|(x_1,\\ldots,x_n)|=\\max_{1\\leq l\\leq n}|x_l|$. For any
$p\\geq1$, there is a constant $C$ not depending on $n$ such that
\\[
\\mathcal{W}_p \\bigl(\\mathcal{L}(X_{s_1},\\ldots,X_{s_n}),\\mathcal{L}(\\bar{X}_{s_1},\\ldots,
\\bar{X}_{s_n}) \\bigr)\\leq Cn\\sup_{0\\leq t\\leq T,x\\in
\\mathbb{R}}
\\mathcal{W}_p \\bigl(\\mathcal{L} \\bigl(\\bar{X}^{x}_t
\\bigr),\\mathcal{L} \\bigl(X^{x}_t \\bigr) \\bigr).
\\]
\\end{aprop}
There is a probability measure
$\\pi(dx_1,\\ldots,dx_n,d\\bar{x}_1,\\ldots,d\\bar{x}_n)$ in $\\Pi
(\\mathcal{L}(X_{s_1},\\allowbreak\\ldots,
X_{s_n}),\\mathcal{L}(\\bar{X}_{s_1},\\ldots,\\bar{X}_{s_n}))$ which
attains the Wasserstein distance in the left-hand side of
(\\ref{wasfindim}); see, for instance, Theorem 3.3.11 \\cite{raru},
according to which $\\pi$ is the law of
$(X_{s_1},\\ldots,X_{s_n},\\xi_{s_1},\\ldots,\\xi_{s_n})$ with
$(\\xi_{s_1},\\ldots,\\xi_{s_n})\\in\\partial_{|~|}\\varphi
(X_{s_1},\\ldots,\\break X_{s_n})$ where $\\partial_{|~|}\\varphi$ is the
subdifferential, for the above-defined norm $|~|$ on $\\mathbb{R}^n$, of
some \\mbox{$|~|$-}convex function $\\varphi$. 
Let
$\\tilde{\\pi}(x_1,\\ldots,x_n,d\\bar{x}_1,\\ldots,d\\bar{x}_n)$ denote a
regular conditional probability of $(\\bar{x}_1,\\ldots,\\bar{x}_n)$ given
$(x_1,\\ldots,x_n)$ when $\\mathbb{R}^{2n}$ is endowed with $\\pi$, and let
$(\\bar{Y}_{s_1},\\ldots,\\bar{Y}_{s_n})$ be distributed according to
$\\tilde{\\pi}(X_{s_1},\\ldots,X_{s_n},\\break d\\bar{x}_1,\\ldots,d\\bar{x}_n)$. The
vector $(X_{s_1},\\ldots,X_{s_n},\\bar{Y}_{s_1},\\ldots,\\bar {Y}_{s_n})$
is distributed according to $\\pi$ so that
\\begin{eqnarray}\\label{xybar}
(\\bar{Y}_{s_1},\\ldots,\\bar{Y}_{s_n})&\\stackrel{
\\mathcal{L}} {=}&(\\bar{X}_{s_1},\\ldots,\\bar{X}_{s_n})\\quad\\mbox{and}
\\nonumber\\\\[-8pt]\\\\[-8pt]
\\mathbb{E}^{1\/p} \\Bigl[\\max_{1\\leq l\\leq n}|X_{s_l}-
\\bar{Y}_{s_l}|^p \\Bigr]&\\leq&\\frac{C\\sqrt{\\log N}}{m}.\\nonumber
\\end{eqnarray}
Let $p_t(x,y)$ denote the transition density of the SDE~(\\ref{sde}) and
$\\ell_t(x,y)=\\log(p_t(x,y))$. According to Appendix \\ref{diff_bridge}
devoted to diffusion bridges, the processes
\\[
\\biggl(W^l_t=\\int
_{s_l}^t \\bigl(dW_s-
\\sigma(X_s)\\partial_x\\ell_{s_{l+1}-s}(X_s,X_{s_{l+1}})
\\,ds \\bigr),t \\in[s_l,s_{l+1}) \\biggr) _{0\\leq l\\leq n-1}
\\]
are independent Brownian motions independent from
$(X_{s_1},\\ldots,X_{s_n})$. We suppose from now on that the vector
$(\\bar{Y}_{s_1},\\ldots,\\bar{Y}_{s_n})$ has been generated independently
from these processes, as will be all the random variables and
processes needed in the remainder of the proof (see in particular the
construction of $\\beta$ below). 
Moreover, again by Appendix
\\ref{diff_bridge}, the solution of
\\begin{equation}\\label{defz}
\\cases{
\\displaystyle Z^{x,y}_t=x+
\\int_{s_l}^t\\sigma\\bigl(Z^{x,y}_s
\\bigr)\\,dW^l_s
\\vspace*{2pt}\\cr
\\displaystyle\\hspace*{31pt}{} +\\int_{s_l}^t \\bigl[b \\bigl(Z^{x,y}_s \\bigr)+\\sigma^2 \\bigl(Z^{x,y}_s
\\bigr)\\partial_x\\ell _{s_{l+1}-s} \\bigl(Z^{x,y}_s,y \\bigr) \\bigr]\\,ds,
\\vspace*{2pt}\\cr
\\hspace*{200pt} t\\in[s_l,s_{l+1}),
\\vspace*{4pt}\\cr Z^{x,y}_{s_{l+1}}=y}
\\end{equation}
is distributed according to the conditional law of
$(X_t)_{t\\in[s_l,s_{l+1}]}$ given $(X_{s_l},\\break X_{s_{l+1}})=(x,y)$ and for
each $l\\in\\{0,\\ldots,n-1\\}$, one has $(Z^{X_{s_l},X_{s_{l+1}}}_t)_{t\\in
[s_l,s_{l+1}]}=(X_t)_{t\\in[s_l,s_{l+1}]}$.


In order to construct a good coupling between $\\mathcal{L}(X)$ and
$\\mathcal{L}(\\bar{X})$, a natural idea would be to extend
$(\\bar{Y}_{s_1},\\ldots,\\bar{Y}_{s_n})$ to a process
$(\\bar{Y}_t)_{t\\in[0,T]}$ with law $\\mathcal{L}(\\bar{X})$ by defining
for each $l\\in\\{0,\\ldots,n-1\\}$, $(\\bar{Y}_t)_{t\\in[s_l,s_{l+1}]}$ as
the process obtained by inserting the Brownian motion $W^l$, the
starting point $\\bar{Y}_{s_l}$ and the ending point $\\bar{Y}_{s_{l+1}}$
in the It\\^{o} decomposition of the conditional dynamics of
$(\\bar{X}_t)_{t\\in[s_l,s_{l+1}]}$ given $\\bar{X}_{s_l}=x$ and
$\\bar{X}_{s_{l+1}}=y$. Unfortunately, even if this Euler scheme bridge
is deduced by a simple transformation of the Brownian bridge on a
single time-step, it becomes a complicated process\\vspace*{1pt} when
the difference between the starting and ending times is larger than
$\\frac{T}{N}$ because of the lack of the Markov property. At the end of the
proof, we will choose the difference $s_{l+1}-s_l$ of order
$\\frac{T}{N^{1\/3}}$ and, therefore,\\vspace*{-2pt} much larger than the
time-step $\\frac{T}{N}$. 
In addition, it is not clear how to compare
the paths of the diffusion bridge and the Euler scheme bridge driven by
the same Brownian motion. That is why we introduce a new
process $(\\tilde{\\chi}_t)_{t\\in[0,T]}$ such that the comparison
can be performed at the diffusion bridge level, which remains delicate
but tractable.

To construct $\\tilde{\\chi}$, we are going to exhibit a Brownian motion
$(\\beta_t)_{t\\in[0,T]}$ such that $\\bar{Y}_{s_1},\\ldots,\\bar{Y}_{s_n}$
are the values on the coarse time-grid $(s_l)_{1\\leq l\\leq n}$ of the
Euler scheme~(\\ref{eul}) driven by $\\beta$ instead of $W$. The
extension $(\\bar{Y}_t)_{t\\in[0,T]}$ with law $\\mathcal{L}(\\bar{X})$ is
then simply defined as the whole Euler scheme driven by $\\beta$:
\\begin{eqnarray}
\\bar{Y}_t=\\bar{Y}_{t_k}+\\sigma(\\bar{Y}_{t_k}) (
\\beta_t-\\beta_{t_k})+b(\\bar{Y}_{t_k})
(t-t_k),\\nonumber
\\\\
\\eqntext{ t\\in[t_{k},t_{k+1}], 0\\le k\\le N-1.}
\\end{eqnarray}
The construction of $\\beta$ is postponed to the end of the
present proof. One then defines
\\[
\\chi_t=\\bar{Y}_{s_l}+\\int_{s_{l}}^t
\\sigma(\\chi_{s})\\,d\\beta_s+\\int_{s_{l}}^tb(
\\chi_s)\\,ds,\\qquad t\\in[s_{l},s_{l+1}), 0\\le l\\le
n-1.
\\]

Notice that the process $\\chi=(\\chi_t)_{t\\in[0,T]}$ which evolves
according to the SDE~(\\ref{sde}) with $\\beta$ replacing $W$ on each
time-interval $[s_l,s_{l+1})$ is c\\`adl\\`ag: discontinuities may arise
at the points $\\{s_{l+1}, 0\\leq l\\leq n-1\\}$. We denote by
$\\chi_{s_{l+1}-}$ its left-hand limit at time $s_{l+1}$ and set
$\\chi_{T}=\\chi_{s_n-}$. The strong error estimation (\\ref{vitfort})
will permit us to estimate the difference between the processes
$\\bar{Y}$ and $\\chi$. For the subsequent choice of $\\beta$, we do not
expect the processes $\\chi$ and $X$ to be close. 
Nevertheless, the
process $\\tilde{\\chi}$ obtained by setting
\\[
\\forall l\\in\\{0,\\ldots,n-1\\}, \\forall t\\in[s_l,s_{l+1})
\\qquad
\\tilde{\\chi}_t=Z^{\\chi_{s_l},\\chi
_{s_{l+1}-}}_t\\quad\\mbox{and}\\quad\\tilde{\\chi}_T=\\chi_T,
\\]
where $Z^{x,y}$ is defined in (\\ref{defz}), is such that
$\\mathcal{L}(\\tilde{\\chi})=\\mathcal{L}(\\chi)$ by
Propositions~\\ref{prop_bridge} and~\\ref{prop_bridge2}. On each coarse
time-interval $[s_l,s_{l+1})$ the diffusion bridges associated with $X$
and $\\tilde{\\chi}$ are driven by the same Brownian motion $W^l$.
Moreover, the differences $|X_{s_l}-\\bar{Y}_{s_l}|$ between the starting
points and $|X_{s_{l+1}}-\\chi_{s_{l+1}-}|\\leq
|X_{s_{l+1}}-\\bar{Y}_{s_{l+1}}|+|\\bar{Y}_{s_{l+1}}-\\chi_{s_{l+1}-}|$ between the
ending points are controlled by (\\ref{xybar}) and the above-mentioned
strong error estimation. That is why one may expect to obtain a good
estimation of the difference between the processes $X$ and
$\\tilde{\\chi}$. By the triangle inequality and since
$\\mathcal{L}(\\bar{X})=\\mathcal{L}(\\bar{Y})$ and
$\\mathcal{L}(\\tilde{\\chi})=\\mathcal{L}(\\chi)$,
\\begin{eqnarray}\\label{triangle}
\\qquad\\mathcal{W}_p \\bigl(\\mathcal{L}(\\bar{X}),\\mathcal{L}(X) \\bigr)&\\leq&
\\mathcal{W}_p \\bigl(\\mathcal{L}(\\bar{X}),\\mathcal{L}(\\chi) \\bigr)+
\\mathcal{W}_p \\bigl(\\mathcal{L}(\\chi),\\mathcal{L}(X) \\bigr)
\\nonumber
\\\\
&\\leq&\\mathbb{E}^{1\/p} \\Bigl[\\sup_{t\\in[0,T]}|
\\bar{Y}_t-\\chi_t|^p \\Bigr]+\\mathbb{E}
^{1\/p} \\Bigl[\\sup_{t\\in[0,T]}|X_t-\\tilde{
\\chi}_t|^p \\Bigr],
\\end{eqnarray}
where, for the definition of
$\\mathcal{W}_p(\\mathcal{L}(\\bar{X}),\\mathcal{L}(\\chi))$ and
$\\mathcal{W}_p(\\mathcal{L}(\\chi),\\mathcal{L}(X))$, the space of
c\\`adl\\`ag sample-paths from $[0,T]$ to $\\mathbb{R}$ is endowed with
the supremum norm. Let us first estimate the first term on the
right-hand side. 
From~(\\ref{vitfort}), we get\n\\[\n\\mathbb{E} \\Bigl[\\sup_{t\\in[s_l,s_{l+1})}|\\bar{Y}_t-\\chi\n_t|^{p} \\big|\\bar{Y}_{s_l} \\Bigr]\\leq C\n\\frac{m^{p\/2}(1+|\\bar{Y}_{s_l}|)^{p}}{N^{p}},\n\\]\nwhere the constant $C$ does not depend on $(N,m)$. We deduce that\n\\begin{eqnarray*}\n\\mathbb{E} \\Bigl[\\sup_{t\\in[0,T]}|\\bar{Y}_t-\n\\chi_t|^{p} \\Bigr]\n&=&\\mathbb{E} \\Bigl[\\max _{0\\leq l\\leq n-1}\\sup_{t\\in[s_l,s_{l+1})}|\\bar{Y}_t-\\chi\n_t|^{p} \\Bigr]\n\\\\\n&\\leq& \\sum_{l=0}^{n-1}\n\\mathbb{E} \\Bigl[\\mathbb{E} \\Bigl[\\sup_{t\\in\n[s_l,s_{l+1})}|\n\\bar{Y}_t-\\chi_t|^{p} \\big|\\bar{Y}_{s_l}\n\\Bigr] \\Bigr]\n\\\\\n&\\leq& C\\frac{m^{p\/2}}{N^{p}}\\sum_{l=0}^{n-1}\n\\mathbb{E} \\bigl[\\bigl(1+|\\bar{Y}_{s_l}|\\bigr)^{p} \\bigr]\n\\\\\n&\\leq& C\n\\frac{m^{p\/2-1}}{N^{p-1}},\n\\end{eqnarray*}\nwhere we used (\\ref{momenteul}) for the last inequality. As a\nconsequence,\n\\begin{equation}\n\\label{Wass_chi_se} \\mathbb{E}^{1\/p} \\Bigl[\n\\sup_{t\\in\n[0,T]}|\\bar{Y}_t-\\chi_t|^p\n\\Bigr] \\le C\\frac{m^{1\/2-1\/p}}{N^{1-1\/p}}.\n\\end{equation}\n\nLet us now estimate the second term on the right-hand side of\n(\\ref{triangle}). 
By Proposition~\\ref{prop_bridge2} and since for\n$l\\in\\{0,\\ldots,n-1\\}$, $\\chi_{s_l}=\\bar{Y}_{s_l}$,\n\\begin{eqnarray*}\n\\sup_{t\\leq T}|X_t-\\tilde{\\chi}_t|&=&\\max _{0\\leq l\\leq\nn-1}\\sup_{t\\in\n[s_l,s_{l+1})}\\bigl|Z^{X_{s_l},X_{s_{l+1}}}_t-Z^{\\chi_{s_l},\\chi\n_{s_{l+1}-}}_t\\bigr|\n\\\\\n&\\leq& C\\max_{0\\leq l\\leq n-1}|X_{s_l}-\\bar{Y}_{s_l}| \\vee|X_{s_{l+1}}-\\chi_{s_{l+1}-}|.\n\\end{eqnarray*}\nSince, by the triangle inequality and the continuity of $\\bar{Y}$,\n\\begin{eqnarray*}\n|X_{s_{l+1}}-\\chi_{s_{l+1}-}|&\\leq&|X_{s_{l+1}}-\\bar\n{Y}_{s_{l+1}}|+|\\bar{Y}_{s_{l+1}}-\\chi_{s_{l+1}-}|\n\\\\\n&\\leq&\n|X_{s_{l+1}}-\\bar{Y}_{s_{l+1}}|+\\sup_{t\\in[0,T]}|\n\\bar{Y}_{t}-\\chi_{t}|,\n\\end{eqnarray*}\none deduces that\n\\[\n\\sup_{t\\leq T}|X_t-\\tilde{\\chi}_t|\\leq C\n\\Bigl(\\max_{1\\leq l\\leq\nn}|X_{s_l}-\\bar{Y}_{s_l}|+\n\\sup_{t\\in[0,T]}|\\bar{Y}_{t}-\\chi_{t}| \\Bigr).\n\\]\nCombined with (\\ref{xybar}) and (\\ref{Wass_chi_se}), this implies\n\\begin{eqnarray*}\n\\mathbb{E}^{1\/p} \\Bigl[\\sup_{t\\leq T}|X_t- \\tilde{\\chi}_t|^p\n\\Bigr]&\\leq& C\\mathbb{E} ^{1\/p} \\Bigl[\\max_{1\\leq l\\leq\nn}|X_{s_l}-\\bar{Y}_{s_l}|^p \\Bigr]+C\\mathbb{E}^{1\/p}\n\\Bigl[\\sup_{t\\in[0,T]}| \\bar{Y}_{t}-\\chi_{t}|^p \\Bigr]\n\\\\\n&\\leq& C \\biggl(\n\\frac{\\sqrt{\\log N}}{m}+\\frac{m^{1\/2-1\/p}}{N^{1-1\/p}} \\biggr).\n\\end{eqnarray*}\nPlugging this inequality together with (\\ref{Wass_chi_se}) in\n(\\ref{triangle}), we deduce that\n\\[\n\\mathcal{W}_p \\bigl(\\mathcal{L}(X),\\mathcal{L}(\\bar{X}) \\bigr) \\leq\nC \\biggl(\\frac{\\sqrt{\\log N}}{m}+\\frac{m^{1\/2-1\/p}}{N^{1-1\/p}} \\biggr)\n\\]\nand conclude by choosing $m=\\lfloor N^{2\/3} \\rfloor$ that for\n$p\\geq\\frac{1}{3\\varepsilon}$,\n$\\mathcal{W}_p(\\mathcal{L}(X),\\mathcal{L}(\\bar{X}))\\leq\\frac{C}{N^{2\/3-\\varepsilon}}$.\nWhen $\\frac{1}{3\\varepsilon}>1$, the conclusion follows for\n$p\\in[1,\\frac{1}{3\\varepsilon})$ 
since
$\\mathcal{W}_p(\\mathcal{L}(X),\\allowbreak\\mathcal{L}(\\bar{X}))\\leq\\mathcal
{W}_{1\/3\\varepsilon}(\\mathcal{L}(X),\\mathcal{L}(\\bar{X}))$.

To complete the proof, we still have to construct the Brownian motion
$\\beta$. We first reconstruct on the fine time grid $(t_k)_{1\\leq k\\leq
N}$ an Euler scheme $(\\bar{Y}_{t_k},0\\le k\\le N)$ interpolating the
values on the coarse grid $(s_l)_{1\\leq l\\leq n}$. Let us denote by
$\\bar{p}(x,y)$ the density of the law
$\\mathcal{N}(x+b(x)T\/N,\\sigma(x)^2T\/N)$ of the Euler scheme starting
from~$x$ after one time step~$T\/N$. Thanks to the ellipticity
assumption, we have $\\bar{p}(x,y)>0$ for any $x,y\\in\\mathbb{R}$.
Conditionally on $(\\bar{Y}_{s_1},\\ldots,\\bar{Y}_{s_n})$, we generate
independent random vectors
\\[
(\\bar{Y}_{s_{l-1}+t_1},\\ldots,\\bar{Y}_{s_{l-1}+t_{m-1}})_{1\\leq
l\\leq n-1}\\quad\\mbox{and}\\quad (\\bar{Y}_{s_{n-1}+t_1},\\ldots,\\bar{Y}_{t_{N-1}})
\\]
with respective densities
\\[
\\frac{\\bar{p}(\\bar{Y}_{s_{l-1}},x_1)\\bar{p}(x_1,x_2)\\cdots\\bar
{p}(x_{m-1},\\bar{Y}_{s_{l}})
}{\\int_{\\mathbb{R}^{m-1}}\\bar{p}(\\bar{Y}_{s_{l-1}},y_1)\\bar
{p}(y_1,y_2)\\cdots\\bar{p}(y_{m-1},\\bar{Y}_{s_{l}})\\,dy_1\\cdots
dy_{m-1}}
\\]
and
\\[
\\frac{\\bar{p}(\\bar{Y}_{s_{n-1}},x_1)\\bar{p}(x_1,x_2)\\cdots \\bar
{p}(x_{N-1-m(n-1)},\\bar{Y}_{s_{n}}) }{\\int_{\\mathbb
{R}^{N-1-m(n-1)}}\\bar{p}(\\bar{Y}_{s_{n-1}},y_1)\\bar{p}(y_1,y_2)\\cdots
\\bar{p}(y_{N-1-m(n-1)},\\bar{Y}_{s_{n}})\\,dy_1\\cdots dy_{N-1-m(n-1)}}.
\\]
This ensures that $(\\bar{Y}_{t_k})_{0\\leq k\\leq
N}\\stackrel{\\mathcal{L}}{=}(\\bar{X}_{t_k})_{0\\leq k\\leq N}$. Then we
get, thanks to the ellipticity condition, that
$ (\\frac{1}{\\sigma(\\bar{Y}_{t_{k-1}})}(\\bar{Y}_{t_k}-\\bar
{Y}_{t_{k-1}}-b(\\bar{Y}_{t_{k-1}})T\/N) )_{1\\le k\\le N}$~are
independent centered Gaussian variables with variance~$T\/N$. 
By using
independent Brownian bridges, we can then construct a Brownian motion
$(\\beta_t)_{t\\in[0,T]}$ such that
\\[
\\beta_{t_k}-\\beta_{t_{k-1}}= \\frac{1}{\\sigma(\\bar{Y}_{t_{k-1}})} \\biggl(
\\bar{Y}_{t_k}-\\bar{Y}_{t_{k-1}}-b(\\bar{Y}_{t_{k-1}})\\frac{T}{N}
\\biggr),
\\]
which completes the construction.
\\end{pf*}

\\section*{Conclusion}
In this paper, we prove that the order of convergence of the
Wasserstein distance $\\mathcal{W}_p$ on the space of continuous paths
between the laws of a uniformly elliptic one-dimensional diffusion and
its Euler scheme with $N$ steps is not worse than
$N^{-2\/3+\\varepsilon}$. In view of a possible extension to
multidimensional settings, two main difficulties have to be resolved.
First, we take advantage of the optimality of the inverse transform
coupling in dimension one to obtain a uniform bound on the Wasserstein
distance between the marginal laws with optimal rate $N^{-1}$ up to a
logarithmic factor. In dimension $d>1$, the optimal coupling between
two probability measures on $\\mathbb{R}^d$ is not available, which
makes the estimation of the Wasserstein distance between the marginal
laws much more complicated even if, for $\\mathcal{W}_1$, the order
$N^{-1}$ may be deduced from the results of \\cite{goblab}; see Remark
\\ref{w1unif}. 
Next, one has to generalize the estimation on diffusion
bridges given by Proposition~\\ref{prop_bridge2}, which we deduce from
the Lamperti transform in dimension $d=1$.

From the perspective of the multi-level Monte Carlo method introduced by
Giles~\\cite{giles}, coupling the Euler schemes with $N$ and $2N$ steps
with order of convergence $N^{-2\/3+\\varepsilon}$ would
also be of great interest for variance reduction, especially in
multidimensional situations where the Milstein scheme is not feasible;
see \\cite{js} for the implementation of this idea in the example of a
discretization scheme devoted to usual stochastic volatility models.
But this does not seem obvious from our nonconstructive coupling
between the Euler scheme and its diffusion limit. For both the
derivation of the order of convergence of the Wasserstein distance on
the path space and the explicit construction of the coupling, the limiting step
in our approach is Proposition \\ref{prop_wass_multi}. In this
proposition, we bound the dual formulation of the Wasserstein distance
between $n$-dimensional marginals by the Wasserstein distance between
one-dimensional marginals multiplied by $n$.

Even if the order of convergence of the Wasserstein distance on the
path space obtained in the present paper may not be optimal, it
provides the first significant improvement on the order $N^{-1\/2}$ obtained
with the trivial coupling where the diffusion and the Euler scheme are
driven by the same Brownian motion.
\\begin{appendix}
\\section{Proofs of Section~\\lowercase{\\protect\\texorpdfstring{\\ref{sec_marginal}}{2}}}\\label{App_Sec1}

\\begin{pf*}{Proof of Proposition \\ref{propevolftm1}}
According to \\cite{friedman}, Theorems 5.4 and 4.7, for any
$t\\in(0,T]$, the solution $X_t$ of (\\ref{sde}) starting from $X_0=x_0$
admits a density $p_t(x)$ w.r.t. 
the Lebesgue measure on the real line,
the function $(t,x)\\mapsto p_t(x)$ is $C^{1,2}$ on
$(0,T]\\times\\mathbb{R}$, and on this set,\\vspace*{-1pt} it is a
classical solution of the Fokker--Planck equation
\\begin{equation}
\\partial_t p_t(x)=\\tfrac{1}{2}\\partial
_{xx} \\bigl(a(x)p_t(x) \\bigr)-\\partial_x
\\bigl(b(x)p_t(x) \\bigr). \\label{fp}
\\end{equation}
Moreover, the following Gaussian bounds hold:
\\begin{eqnarray}\\label{gb}
\\bigl|p_t(x)\\bigr|+\\sqrt{t}\\bigl|\\partial_x p_t(x)\\bigr|\\leq\\frac{C}{\\sqrt{t}}e^{-(x-x_0)^2\/Ct}
\\nonumber\\\\[-10pt]\\\\[-10pt]
\\eqntext{\\exists C>0, \\forall t\\in(0,T], \\forall x\\in\\mathbb{R}.}
\\end{eqnarray}
The partial derivatives $\\partial_x F_t(x)=p_t(x)$ and
$\\partial_{xx}F_t(x)=\\partial_xp_t(x)$ exist and are continuous on
$(0,T]\\times\\mathbb{R}$. For $t\\in(0,T]$, integrating the Fokker--Planck
equation (\\ref{fp}) over $(-\\infty,x]$ and using (\\ref{gb}) to cancel the
boundary terms at $-\\infty$, one obtains
\\begin{equation}
\\partial_t F_t(x)=\\tfrac{1}{2}\\partial_{x} \\bigl(a(x)\\partial_x F_t(x) \\bigr)-b(x)\\partial_x F_t(x).\\label{fpfr}
\\end{equation}
Moreover, the density also satisfies the Gaussian lower bound $\\exists c>0, \\forall
(t,x)\\in(0,T]\\times\\mathbb{R}, p_t(x)\\geq
\\frac{c}{\\sqrt{t}}e^{-(x-x_0)^2\/ct}$. This enables us to apply the
implicit function theorem to $(t,x,u)\\mapsto F_t(x)-u$ to deduce that
the inverse $u\\mapsto F_t^{-1}(u)$ of $x\\mapsto F_t(x)$ is $C^{1,2}$ in
the variables $(t,u)\\in(0,T]\\times(0,1)$ and solves
\\begin{eqnarray*}
\\partial_t F_t^{-1}(u)&=&-\\frac{\\partial_t F_t}{\\partial_x
F_t}
\\bigl(F_t^{-1}(u) \\bigr)
\\\\
&=&-\\frac{1}{2}\\partial_{x} \\bigl(a(x)\\partial_x
F_t(x) \\bigr)\\big|_{x=F_t^{-1}(u)}\\partial_u
F_t^{-1}(u)+b \\bigl(F_t^{-1}(u)
\\bigr)
\\\\
&=&-\\frac{1}{2}\\partial_{u} \\biggl(\\frac{a(F_t^{-1}(u))}{\\partial_u
F_t^{-1}(u)}
\\biggr)+b \\bigl(F_t^{-1}(u) \\bigr),
\\end{eqnarray*}
where we used\\vspace*{-1pt} (\\ref{fpfr}) for the second equality and
$\\partial_u F_t^{-1}(u)=\\frac{1}{\\partial_xF_t( F_t^{-1}(u))}$ for both
the second and the third equalities.
\\end{pf*}

\\begin{pf*}{Proof of Proposition \\ref{propevolbarftm1}}
For $t\\in(0,t_1]$,
$\\bar{X}_t$ admits the Gaussian density with mean $x_0+b(x_0)t$ and
variance $a(x_0)t$. 
By induction on $k$ and independence of
$W_t-W_{t_k}$ and $\\bar{X}_{t_k}$ in (\\ref{eul}), one checks that for
$k\\in\\{1,\\ldots,\\allowbreak N-1\\}$, $\\bar{X}_{t_k}$ admits a density
$\\bar{p}_{t_k}(x)$ and that for $t\\in(t_{k},t_{k+1}]$, $(\\bar
{X}_{t_k},\\bar{X}_t)$ admits the density
\\[
\\rho(t_k,t,y,x)=\\bar{p}_{t_k}(y)\\frac{\\exp({-
{(x-y-b(y)(t-t_k))^2}\/{2a(y)(t-t_k)}})}{\\sqrt{2\\pi a(y)(t-t_k)}}.
\\]
The marginal density $\\bar{p}_t(x)=\\int_\\mathbb{R}\\bar
{p}_{t_k}(y)\\frac{\\exp({-{(x-y-b(y)(t-t_k))^2}\/{2a(y)(t-t_k)}})}{\\sqrt{2\\pi
a(y)(t-t_k)}}\\,dy$ of $\\bar{X}_t$ is continuous on
$(t_k,t_{k+1}]\\times\\mathbb{R}$ by Lebesgue's theorem and positive.

Let $N(x)=\\int_{-\\infty}^xe^{-y^2\/2}\\frac{dy}{\\sqrt{2\\pi}}$ denote the
cumulative distribution function of the standard Gaussian law and
$k\\in\\{0,\\ldots,N-1\\}$. Again by the independence structure in
(\\ref{eul}), for $(t,x)\\in(t_k,t_{k+1}]\\times\\mathbb{R}$,
$\\bar{F}_t(x)=\\break \\mathbb{E} (N (\\frac{x-\\bar{X}_{t_k}-b(\\bar
{X}_{t_k})(t-t_k)}{\\sqrt{a(\\bar{X}_{t_k})(t-t_k)}} ) )$. One~has
\\begin{eqnarray*}
\\partial_t N \\biggl(\\frac{x-y-b(y)(t-t_k)}{\\sqrt{a(y)(t-t_k)}} \\biggr
)&=& - \\biggl(\\frac{x-y-b(y)(t-t_k)}{2\\sqrt{2\\pi a(y)(t-t_k)^3}}+\\frac
{b(y)}{\\sqrt{2\\pi a(y)(t-t_k)}} \\biggr)
\\\\
&&{}\\times \\exp\\biggl(-\\frac
{(x-y-b(y)(t-t_k))^2}{2a(y)(t-t_k)}\\biggr).
\\end{eqnarray*}
By the growth assumption on $\\sigma$ and $b$, one easily checks that
$\\forall k\\in\\{0,\\ldots,N\\}$, $\\mathbb{E}(\\bar{X}^2_{t_k})<+\\infty$.
With the uniform ellipticity assumption, one deduces by a standard
uniform integrability argument that $\\bar{F}_t(x)$ is differentiable
w.r.t. 
$t$ with partial\\vadjust{\\goodbreak} derivative\n\\begin{eqnarray}\\label{evolfbart}\n\\qquad \\partial_t\\bar{F}_t(x)&=&-\\mathbb{E} \\biggl[ \\biggl(\n\\frac{x-\\bar{X}_{t_k}-b(\\bar{X}_{t_k})(t-t_k)}{2\\sqrt{2\\pi a(\\bar\n{X}_{t_k})(t-t_k)^3}}+\\frac{b(\\bar{X}_{t_k})}{\\sqrt{2\\pi a(\\bar\n{X}_{t_k})(t-t_k)}} \\biggr)\n\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\hspace*{70pt}\n{}\\times\n\\exp \\biggl(-\\frac{(x-\\bar{X}_{t_k}-b(\\bar\n{X}_{t_k})(t-t_k))^2}{2a(\\bar{X}_{t_k})(t-t_k)}\\biggr) \\biggr]\\nonumber\n\\end{eqnarray}\ncontinuous in $(t,x)\\in(t_k,t_{k+1}]\\times\\mathbb{R}$. In the same way,\none checks smoothness of $\\bar{F}_t(x)$ in the spatial variable $x$ and\nobtains that this function is $C^{1,2}$ on \\mbox{$(t_k,t_{k+1}]\\times\n\\mathbb{R}$.}\n\nWhen $k\\geq1$,\n\\begin{eqnarray*}\n&& \\mathbb{E} \\biggl[b(\\bar{X}_{t_k})\\frac{\\exp (-{(x-\\bar\n{X}_{t_k}-b(\\bar{X}_{t_k})(t-t_k))^2}\/{(2a(\\bar{X}_{t_k})(t-t_k)}))}{\\sqrt{2\\pi a(\\bar\n{X}_{t_k})(t-t_k)}} \\biggr]\n\\\\\n&&\\qquad =\\int_{\\mathbb{R}}b(y)\\rho(t_k,t,y,x)\\,dy\n\\\\\n&&\\qquad =\\mathbb{E} \\bigl[b(\n\\bar{X}_{t_k})|\\bar{X}_t=x \\bigr]\\bar{p}_t(x).\n\\end{eqnarray*}\nFor $k=0$, even if $(\\bar{X}_0,\\bar{X}_t)$ has no density, the\nequality between the opposite sides of this equation remains true.\n\nCombining Lebesgue's theorem and a similar reasoning, one checks that\n\\begin{eqnarray*}\n&& -\\mathbb{E} \\biggl[\\frac{x-\\bar{X}_{t_k}-b(\\bar\n{X}_{t_k})(t-t_k)}{\\sqrt{2\\pi a(\\bar{X}_{t_k})(t-t_k)^3}}\\exp\\biggl({-\\frac\n{(x-\\bar{X}_{t_k}-b(\\bar\n{X}_{t_k})(t-t_k))^2}{2a(\\bar{X}_{t_k})(t-t_k)}}\\biggr) \\biggr]\n\\\\\n&&\\qquad =\\partial\n_x\\mathbb{E} \\biggl[a(\\bar{X}_{t_k})\\frac{\\exp({-{(x-\\bar\n{X}_{t_k}-b(\\bar\n{X}_{t_k})(t-t_k))^2}\/({2a(\\bar{X}_{t_k})(t-t_k)})})}{\\sqrt{2\\pi a(\\bar\n{X}_{t_k})(t-t_k)}}\n\\biggr]\n\\\\\n&&\\qquad =\\partial_x \\bigl[\\mathbb{E} \\bigl(a(\\bar{X}_{t_k})|\n\\bar{X}_t=x \\bigr)\\bar{p}_t(x) \\bigr].\n\\end{eqnarray*}\nWith 
(\ref{evolfbart}), one deduces that\n\begin{equation}\n\label{gyongyfbart}\n\qquad\partial_t\bar{F}_t(x)=\n\tfrac{1}{2}\partial_x \bigl(\mathbb{E} \bigl[a(\bar\n{X}_{t_k})|\bar{X}_t=x \bigr]\partial_x\n\bar{F}_t(x) \bigr)-\mathbb{E} \bigl[b(\bar{X}_{t_k})|\n\bar{X}_t=x \bigr]\partial_x\bar{F}_t(x).\n\end{equation}\nOne checks that the function $(t,u)\mapsto\bar{F}_t^{-1}(u)$ is smooth\nand satisfies the partial differential equation (\ref{eqevolbarftm1})\nby arguments similar to the ones given at the end of the proof of\nProposition \ref{propevolftm1}.\n\end{pf*}\n\n\n\begin{arem}\nIn the same way, for $k\in\{0,\ldots,N-1\}$, one could prove that on\n$(t_k,t_{k+1}]\times\mathbb{R}$, $(t,x)\mapsto\bar{p}_t(x)$ is\n$C^{1,2}$ and satisfies the partial differential equation\n\[\n\partial_t\bar{p}_t(x)=\tfrac{1}{2}\n\partial_{xx} \bigl(\mathbb{E} \bigl[a(\bar{X}_{t_k})|\n\bar{X}_t=x \bigr]\bar{p}_t(x) \bigr)-\n\partial_x \bigl(\mathbb{E} \bigl[b(\bar{X}_{t_k})|\n\bar{X}_t=x \bigr] \bar{p}_t(x) \bigr)\vadjust{\goodbreak}\n\]\nobtained by spatial differentiation of (\ref{gyongyfbart}). 
This shows that\n$(\\bar{X}_t)_{t\\in[0,T]}$ has the same marginal distributions as the\ndiffusion process with coefficients given by the above conditional\nexpectations, which is also a consequence of \\cite{gyongy}.\n\\end{arem}\n\n\n\\begin{pf*}{Proof of Lemma \\ref{lemmajoderwp}}\nBy the continuity of\nthe paths of $X$ and $\\bar{X}$ and the finiteness of\n$\\mathbb{E} [\\sup_{t\\leq T}(|X_t|^{p+1}+|\\bar\n{X}_t|^{p+1}) ]$, one easily checks that\n$t\\mapsto\\mathcal{W}_p^p(\\mathcal{L}(X_t),\\mathcal{L}(\\bar{X}_t))$ is\ncontinuous.\n\nLet $k\\in\\{0,\\ldots,N-1\\}$ and $s,t\\in(t_k,t_{k+1}]$ with $s\\leq t$.\nCombining Propositions \\ref{propevolftm1}~and~\\ref{propevolbarftm1}\nwith a spatial integration by parts, one obtains for $\\varepsilon\\in\n(0,1\/2)$,\n\\begin{eqnarray}\\label{preipp}\n\\hspace*{12pt}&& \\int_\\varepsilon^{1-\\varepsilon}\\bigl|F_t^{-1}(u)-\n\\bar{F}_t^{-1}(u)\\bigr|^p\\,du\\nonumber\n\\\\[-2pt]\n&&\\qquad =\\int_\\varepsilon^{1-\\varepsilon}\\bigl|F_s^{-1}(u)-\\bar\n{F}_s^{-1}(u)\\bigr|^p\\,du\\nonumber\n\\\\[-2pt]\n&&\\quad\\qquad{} +p\\int_s^t\\int_\\varepsilon\n^{1-\\varepsilon}\\bigl|F_r^{-1}(u)-\\bar{F}_r^{-1}(u)\\bigr|^{p-2}\n\\bigl(F_r^{-1}(u)-\\bar{F}_r^{-1}(u)\\bigr)\\nonumber\n\\\\[-2pt]\n&&\\hspace*{92pt}\n{}\\times\\bigl(b \\bigl(F_r^{-1}(u) \\bigr)-\n\\beta_r(u) \\bigr)\\,du\\,dr\\nonumber\n\\\\[-2pt]\n&&\\quad\\qquad{} +\\frac{p(p-1)}{2}\\int_s^t\\int\n_\\varepsilon^{1-\\varepsilon\n}\\bigl|F_r^{-1}(u)-\n\\bar{F}_r^{-1}(u)\\bigr|^{p-2} \\bigl(\n\\partial_u F_r^{-1}(u)-\\partial_u\n\\bar{F}_r^{-1}(u) \\bigr)\n\\nonumber\\\\[-9pt]\\\\[-9pt]\n&&\\hspace*{128pt}\n{}\\times \\biggl(\\frac\n{a(F_r^{-1}(u))}{\\partial_u F_r^{-1}(u)}-\n\\frac{\\alpha_r(u)}{\\partial\n_u \\bar{F}_r^{-1}(u)} \\biggr)\\,du\\,dr\\nonumber\n\\\\[-2pt]\n&&\\quad\\qquad{}+\\frac{p}{2}\\int_s^t\\bigl|F_r^{-1}(1-\n\\varepsilon)-\\bar{F}_r^{-1}(1-\\varepsilon)\\bigr|^{p-2}\n\\bigl(F_r^{-1}(1-\\varepsilon)-\\bar{F}_r^{-1}(1-\n\\varepsilon) 
\\bigr)\\nonumber\n\\\\[-2pt]\n&&\\hspace*{68pt}\n{}\\times\\biggl(\\frac{\\alpha_r(1-\\varepsilon\n)}{\\partial_u \\bar{F}_r^{-1}(1-\\varepsilon)}-\\frac\n{a(F_r^{-1}(1-\\varepsilon))}{\\partial_u F_r^{-1}(1-\\varepsilon\n)} \\biggr)\\,dr\\nonumber\n\\\\[-2pt]\n&&\\quad\\qquad{} -\\frac{p}{2}\\int_s^t\\bigl|F_r^{-1}(\n\\varepsilon)-\\bar{F}_r^{-1}(\\varepsilon)\\bigr|^{p-2}\n\\bigl(F_r^{-1}(\\varepsilon)-\\bar{F}_r^{-1}(\n\\varepsilon) \\bigr)\\nonumber\n\\\\[-2pt]\n&&\\hspace*{68pt}\n{}\\times \\biggl(\\frac{\\alpha_r(\\varepsilon\n)}{\\partial_u \\bar{F}_r^{-1}(\\varepsilon)}-\\frac\n{a(F_r^{-1}(\\varepsilon))}{\\partial_u F_r^{-1}(\\varepsilon)} \\biggr)\\,dr.\\nonumber\n\\end{eqnarray}\n\nWe are now going to take the limit as $\\varepsilon\\to0$. We will check\nat the end of the proof that\n\\begin{eqnarray}\\label{IPP_terms}\n&& \\lim_{u\\rightarrow0^+\\ \\mathrm{or}\\ 1^-}\\ \\sup\n_{r\\in[s,t]}\\frac{a(F_t^{-1}(u))}{\\partial_u\nF_t^{-1}(u) } \\bigl|F_t^{-1}(u)-\n\\bar{F}_t^{-1}(u) \\bigr|^{p-1}\n\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\quad{} + \\sup _{r\\in [s,t]}\\frac{\\alpha_t(u) }{\\partial_u \\bar{F}_t^{-1}(u) }\n\\bigl|F_t^{-1}(u)- \\bar{F}_t^{-1}(u) \\bigr|^{p-1}=0,\\nonumber\n\\end{eqnarray}\nwhich enables us to get rid of the two last boundary terms.\\vadjust{\\goodbreak}\n\nCombining Young's inequality with the uniform ellipticity assumption\nand the positivity of $\\partial_uF_t^{-1}(u)$ and $\\partial_u\n\\bar{F}_t^{-1}(u)$, one obtains\n\\begin{eqnarray*}\n&& \\bigl(\\partial_u F_r^{-1}(u)-\n\\partial_u \\bar{F}_r^{-1}(u) \\bigr) \\biggl(\n\\frac{a(F_r^{-1}(u))}{\\partial_u F_r^{-1}(u)}-\\frac{\\alpha_r(u)}{\\partial\n_u \\bar{F}_r^{-1}(u)} \\biggr)\n\\\\\n&&\\qquad = \\bigl(a \\bigl(F_r^{-1}(u) \\bigr)-\n\\alpha_r(u) \\bigr)\\frac{\\partial_u\nF_r^{-1}(u)-\\partial_u \\bar{F}_r^{-1}(u)}{\\partial_u F_r^{-1}(u)\\vee\n\\partial_u \\bar{F}_r^{-1}(u)}\n\\\\\n&&\\quad\\qquad{} -a \\bigl(F_r^{-1}(u) 
\\bigr)\\frac{((\\partial_u\n\\bar{F}_r^{-1}(u)-\\partial_u F_r^{-1}(u))^+)^2}{\\partial_u\nF_r^{-1}(u)\\partial_u \\bar{F}_r^{-1}(u)}\n\\\\\n&&\\quad\\qquad{}-\\alpha_r(u)\\frac\n{((\\partial_u F_r^{-1}(u)-\\partial_u \\bar\n{F}_r^{-1}(u))^+)^2}{\\partial_u F_r^{-1}(u)\\partial_u \\bar\n{F}_r^{-1}(u)}\n\\\\\n&&\\qquad\\leq\\frac{1}{4\\underline{a}} \\bigl(a \\bigl(F_r^{-1}(u)\n\\bigr)- \\alpha_r(u) \\bigr)^2+\\underline{a}\n\\frac{(\\partial_u F_r^{-1}(u)-\\partial\n_u \\bar{F}_r^{-1}(u))^2}{(\\partial_u F_r^{-1}(u)\\vee\\partial_u \\bar\n{F}_r^{-1}(u))^2}\n\\\\\n&&\\quad\\qquad{} - \\bigl(a \\bigl(F_r^{-1}(u) \\bigr)\\wedge\n\\alpha_r(u) \\bigr)\\frac{(\\partial\n_u \\bar{F}_r^{-1}(u)-\\partial_u F_r^{-1}(u))^2}{\\partial_u\nF_r^{-1}(u)\\partial_u \\bar{F}_r^{-1}(u)}\n\\\\\n&&\\qquad\\leq\\frac{1}{4\\underline{a}} \\bigl(a \\bigl(F_r^{-1}(u)\n\\bigr)- \\alpha_r(u) \\bigr)^2.\n\\end{eqnarray*}\nHence, up to the factor $\\frac{p(p-1)}{2}$, the third term on the\nright-hand side of (\\ref{preipp}) is equal to\n\\begin{eqnarray*}\n&&\\int_s^t\\int_\\varepsilon^{1-\\varepsilon}\\bigl|F_r^{-1}(u)-\n\\bar{F}_r^{-1}(u)\\bigr|^{p-2}\n\\\\[-2pt]\n&&\\hspace*{38pt}{}\\times \\biggl[ \\bigl(\n\\partial_u F_r^{-1}(u)-\\partial_u\n\\bar{F}_r^{-1}(u) \\bigr) \\biggl(\\frac{a(F_r^{-1}(u))}{\\partial_u\nF_r^{-1}(u)}-\n\\frac{\\alpha_r(u)}{\\partial_u \\bar{F}_r^{-1}(u)} \\biggr)\n\\\\\n&&\\hspace*{174pt}{} -\\frac{ (a(F_r^{-1}(u))-\\alpha_r(u) )^2}{4\\underline\n{a}} \\biggr]\\,du\\,dr\n\\\\[-2pt]\n&&\\quad{} +\\frac{1}{4\\underline{a}}\\int_s^t\n\\int_\\varepsilon^{1-\\varepsilon}\\bigl|F_r^{-1}(u)-\n\\bar{F}_r^{-1}(u)\\bigr|^{p-2} \\bigl(a\n\\bigl(F_r^{-1}(u) \\bigr)-\\alpha_r(u)\n\\bigr)^2\\,du\\,dr,\n\\end{eqnarray*}\nwhere the integrand in the first integral is nonpositive. 
Since\n\\begin{eqnarray*}\n\\hspace*{-5pt}&&\\int_s^t\\hspace*{-1pt}\\int_0^1\\bigl|F_r^{-1}(u)-\n\\bar{F}_r^{-1}(u)\\bigr|^{p-2}\n\\\\[-1pt]\n\\hspace*{-5pt}&&\\hspace*{25pt}{} \\times\\hspace*{-0.3pt} \\bigl(\\bigl|F_r^{-1\\hspace*{-0.3pt}}(u)-\\hspace*{-0.2pt}\n\\bar{F}_r^{-1\\hspace*{-0.3pt}}(u)\\bigr|\\bigl|b \\bigl(F_r^{-1\\hspace*{-0.3pt}}(u)\n\\bigr)-\\beta_r(u)\\bigr|\n+ \\bigl(a \\bigl(F_r^{-1\\hspace*{-0.3pt}}(u)\n\\bigr)-\\alpha_r(u) \\bigr)^2 \\bigr)\\,du\\,dr\n\\\\[-1pt]\n\\hspace*{-5pt}&&\\qquad \\leq2\\|b\\|_\\infty\\int_s^t\n\\mathcal{W}_{p}^{p-1} \\bigl(\\mathcal{L}(X_r),\n\\mathcal{L}(\\bar{X}_r) \\bigr)\\,dr\n\\\\[-1pt]\n\\hspace*{-5pt}&&\\quad\\qquad{}+4\\|a\\|^2_\\infty\n\\int_s^t\\mathcal{W}_{p}^{p-2}\n\\bigl(\\mathcal{L}(X_r),\\mathcal{L}(\\bar{X}_r)\n\\bigr)\\,dr<+\\infty,\n\\end{eqnarray*}\none can take the\nlimit $\\varepsilon\\to0$ in (\\ref{preipp}) using Lebesgue's theorem for\nthe second term on the right-hand side and combining Lebesgue's theorem\nwith monotone convergence for the third term to obtain\n\\begin{eqnarray}\\label{vipp}\n&& \\mathcal{W}_{p}^{p} \\bigl(\\mathcal{L}(X_t),\n\\mathcal{L}(\\bar{X}_t) \\bigr)\\nonumber\n\\\\\n&&\\qquad =\\mathcal{W}_{p}^{p}\n\\bigl(\\mathcal{L}(X_s),\\mathcal{L}(\\bar{X}_s) \\bigr)\n\\nonumber\n\\\\\n&&\\quad\\qquad{}+p\\int_s^t\\int_0^{1}\\bigl|F_r^{-1}(u)-\n\\bar{F}_r^{-1}(u)\\bigr|^{p-2} \\bigl(F_r^{-1}(u)-\n\\bar{F}_r^{-1}(u) \\bigr)\n\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\hspace*{82pt}{}\\times \\bigl(b \\bigl(F_r^{-1}(u)\n\\bigr)-\\beta_r(u) \\bigr)\\,du\\,dr\n\\nonumber\n\\\\\n&&\\quad\\qquad{}+\\frac{p(p-1)}{2}\\int_s^t\\int\n_0^{1}\\bigl|F_r^{-1}(u)-\\bar\n{F}_r^{-1}(u)\\bigr|^{p-2} \\bigl(\\partial_u\nF_r^{-1}(u)-\\partial_u \\bar\n{F}_r^{-1}(u) \\bigr)\\nonumber\n\\\\\n&&\\hspace*{118pt}\n{}\\times \\biggl(\\frac{a(F_r^{-1}(u))}{\\partial_u\nF_r^{-1}(u)}-\n\\frac{\\alpha_r(u)}{\\partial_u \\bar{F}_r^{-1}(u)} \\biggr)\\,du\\,dr.\\nonumber\n\\end{eqnarray}\nThe last term which belongs to 
$[-\\infty,+\\infty)$ is finite since so\nare all the other terms. We deduce the integrability of\n\\begin{eqnarray*}\n(r,u)&\\mapsto&\\bigl|F_r^{-1}(u)-\\bar{F}_r^{-1}(u)\\bigr|^{p-2}\n\\bigl(\\partial_u F_r^{-1}(u)-\n\\partial_u \\bar{F}_r^{-1}(u) \\bigr)\n\\\\\n&&\\times{} \\biggl(\\frac{a(F_r^{-1}(u))}{\\partial_u\nF_r^{-1}(u)}-\\frac{\\alpha_r(u)}{\\partial_u \\bar{F}_r^{-1}(u)} \\biggr)\n\\end{eqnarray*}\non $[s,t]\\times(0,1)$. Similar arguments show that the integrability\nproperty and (\\ref{vipp}) remain true for $s=t_k$. By summation, they\nremain true for $0\\leq s\\leq t\\leq T$. So the integrability holds on\n$[0,T]$ for the derivative in the distributional sense\n\\begin{eqnarray*}\n&& \\partial_t\\mathcal{W}_{p}^{p} \\bigl(\n\\mathcal{L}(X_t),\\mathcal{L}(\\bar{X}_t) \\bigr)\n\\\\\n&&\\qquad =p\\int_0^{1}\\bigl|F_t^{-1}(u)-\n\\bar{F}_t^{-1}(u)\\bigr|^{p-2} \\bigl(F_t^{-1}(u)-\n\\bar{F}_t^{-1}(u) \\bigr) \\bigl(b \\bigl(F_t^{-1}(u)\n\\bigr)-\\beta_t(u) \\bigr)\\,du\n\\\\\n&&\\quad\\qquad{}+\\frac{p(p-1)}{2}\\int_0^{1}\\bigl|F_t^{-1}(u)-\n\\bar{F}_t^{-1}(u)\\bigr|^{p-2} \\bigl(\n\\partial_u F_t^{-1}(u)-\\partial_u\n\\bar{F}_t^{-1}(u) \\bigr)\n\\\\\n&&\\hspace*{103pt}\n{}\\times \\biggl(\\frac\n{a(F_t^{-1}(u))}{\\partial_u F_t^{-1}(u)}-\n\\frac{\\alpha_t(u)}{\\partial\n_u \\bar{F}_t^{-1}(u)} \\biggr)\\,du\n\\\\\n&&\\qquad \\leq p\\int_0^{1}\\bigl|F_t^{-1}(u)-\n\\bar{F}_t^{-1}(u)\\bigr|^{p-2}\n\\\\\n&&\\hspace*{53pt}\n{}\\times \\biggl[\n\\bigl(F_t^{-1}(u)-\\bar{F}_t^{-1}(u)\n\\bigr) \\bigl(b \\bigl(F_t^{-1}(u) \\bigr)-\n\\beta_t(u) \\bigr)\n\\\\\n&&\\hspace*{106pt}{} +\\frac{(p-1) (a(F_t^{-1}(u))-\\alpha_t(u) )^2}{8\\underline {a}} \\biggr]\\,du.\n\\end{eqnarray*}\nEquation (\\ref{majoderwp}) follows by remarking that\n\\begin{eqnarray*}\n&& \\bigl(a \\bigl(F_t^{-1}(u) \\bigr)-\\alpha_t(u)\n\\bigr)^2\n\\\\\n&&\\qquad \\leq2 \\bigl(\\bigl\\|a'\\bigr\\|_\\infty\n^2\\bigl|F_t^{-1}(u)-\\bar{F}_t^{-1}(u)\\bigr|^2+\n\\bigl(a \\bigl(\\bar{F}_t^{-1}(u) 
\bigr)-\n\alpha_t(u) \bigr)^2 \bigr)\n\end{eqnarray*}\nand using a similar idea for $|b(F_t^{-1}(u))-\beta_t(u)|$.\n\nTo prove (\ref{IPP_terms}) as $u\rightarrow0^+$ (the limit $u\rightarrow1^-$ is handled symmetrically), possibly after increasing $c_2>0$ and decreasing $c_1>0$, we get\nfrom~(\ref{aronson2}) that\n\begin{eqnarray}\nK_1(x_0-x)\exp\biggl(-\frac{(x-x_0)^2}{2c_1 } \biggr) \le\rho_r(x) \le\nK_2 (x_0-x) \exp\biggl(-\frac\n{(x-x_0)^2}{2c_2 } \biggr)\nonumber\n\\\n\eqntext{\forall r\in[s,t], \forall x \le x_0-1,}\n\end{eqnarray}\nwhich leads to\n\[\n\forall x \le x_0-1\qquad K_1c_1\exp\biggl(-\n\frac{(x-x_0)^2}{2c_1 } \biggr) \le G_r(x) \le K_2c_2\n\exp\biggl(-\frac{(x-x_0)^2}{2c_2 } \biggr),\n\]\nwhere $G_r$ denotes either $F_r$ or $\bar{F}_r$. Thus, the inverse\nfunction satisfies\n\begin{equation}\nx_0- \sqrt{-2c_2 \log\biggl(\frac{u}{K_2c_2}\n\biggr)} \le\bar{F}_r^{-1}(u)\le x_0-\n\sqrt{-2c_1 \log\biggl(\frac{u}{K_1c_1} \biggr)}\label{continvrep}\n\end{equation}\nfor $u$ small\nenough. The last two inequalities imply that when $x\rightarrow-\n\infty$,\n\[\n\forall r\in[s,t]\qquad \bar{F}_r^{-1} \bigl(F_r(x)\n\bigr) \ge x_0 - \sqrt{-2 c_2 \biggl[ \log\biggl(\n\frac{K_1c_1}{K_2c_2} \biggr) -\frac{(x-x_0)^2}{2c_1} \biggr]}\n\]\nand $\sup_{r\in[s,t]}|x-\bar{F}_r^{-1}(F_r(x))\n|\mathop{=}\limits_{x\rightarrow- \infty}O(x)$. 
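The step from the Gaussian-type bound on $G_r$ to (\ref{continvrep}) only uses the explicit inversion of $u=Kc\exp(-(x-x_0)^2/(2c))$ on $\{x\le x_0-1\}$. A minimal numeric sketch, with hypothetical constants $K$, $c$, $x_0$ standing in for $K_ic_i$ and the reference point:

```python
import math

# hypothetical constants standing in for K_i c_i and x_0
K, c, x0 = 0.8, 1.5, 0.0

G = lambda x: K * c * math.exp(-(x - x0) ** 2 / (2 * c))         # Gaussian-type tail bound
Ginv = lambda u: x0 - math.sqrt(-2 * c * math.log(u / (K * c)))  # its explicit inverse on x <= x0 - 1

for x in [-1.0, -2.5, -7.0]:
    assert abs(Ginv(G(x)) - x) < 1e-9
print("closed-form inversion of the Gaussian-type bound checked")
```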
With the boundedness\nof $a$\\vspace*{-1pt} and (\\ref{aronson2}), we easily deduce that\n\\[\n\\lim_{x\\rightarrow- \\infty} \\sup_{r\\in[s,t]}a(x)\np_r(x) \\bigl|x-\\bar{F}_r^{-1}\n\\bigl(F_r(x) \\bigr) \\bigr|^{p-1}=0.\n\\]\nSince, by (\\ref{continvrep}), $\\bar{F}_r^{-1}(u)$\nconverges to $-\\infty$ uniformly in $r\\in[s,t]$ as $u$ tends to $0$,\nwe conclude that\n\\[\n\\lim_{u\\rightarrow0^+}\\sup_{r\\in[s,t]}\\frac\n{a(F_r^{-1}(u))}{\\partial_u\nF_r^{-1}(u) }\n\\bigl|F_r^{-1}(u)-\\bar{F}_r^{-1}(u)\n\\bigr|^{p-1}=0.\n\\]\\upqed\n\\end{pf*}\n\n\\begin{pf*}{Proof of Lemma \\ref{malcal}}\nBy Jensen's inequality,\n\\begin{eqnarray*}\n\\mathbb{E} \\bigl[\\bigl|\\mathbb{E}(W_t-W_{\\tau_t}|\n\\bar{X}_t)\\bigr|^p \\bigr]&\\leq&\\mathbb{E} \\bigl[|W_t-W_{\\tau_t}|^{p}\n\\bigr]\\leq\\frac{C}{N^{p\/2}}.\n\\end{eqnarray*}\nLet us now check that the left-hand side is also smaller than\n$\\frac{C}{t^{p\/2}N^{p}}$. To do this, we will study\n\\[\n{\\mathbb{E}} \\bigl[ (W_{t}-W_{\\tau_{t}})g(\\bar{X}_{t})\n\\bigr],\n\\]\nwhere $g$ is any smooth real valued function.\n\nIn order to continue, we need to do various estimations on the Euler\nscheme and its Malliavin derivative, which we denote by $D_u\\bar{X}_t$.\nLet $\\eta_{t}=\\min\\{t_{i};t\\leq t_{i}\\}$ denote the discretization\ntime just after $t$. 
We have $D_u\\bar{X}_t=0$ for $u>t$, and\n\\begin{eqnarray}\nD_{u}\\bar{X}_{t}&=&1_{\\{t\\leq\n\\eta_u\\}}\\sigma(\\bar{X}_{\\tau_{t}})\\nonumber\n\\\\\n&&{} +1_{\\{t>\\eta_u\\}} \\bigl(1+\\sigma'(\n\\bar{X}_{\\tau_{t}}) (W_t-W_{\\tau_t})+b'(\\bar{X}_{\\tau_{t}}) (t-\\tau_t) \\bigr)D_u\n\\bar{X}_{\\tau_{t}}\\nonumber\n\\\\\n\\eqntext{\\mbox{for }u\\leq t.}\n\\end{eqnarray}\n\nThen by induction, one clearly obtains that for $u\\le t$,\n\\begin{eqnarray*}\nD_{u}\\bar{X}_{t} & =&\\sigma(\\bar{X}_{\\tau_{u}})\n\\bar{\\mathcal{E}}_{u,t},\n\\\\\n\\bar{\\mathcal{E}}_{u,t} & =& \\cases{ 1, &\\quad if $\\tau_{t}\n\\leq\\eta_{u}$,\n\\vspace*{3pt}\n\\cr\n\\bigl( 1+b^{\\prime}(\n\\bar{X}_{\\tau_{t}}) (t-\\tau_{t})+\\sigma^{\\prime}(\n\\bar{X}_{\\tau_{t}}) (W_{t}-W_{\\tau_{t}}) \\bigr), &\\quad if $\n\\eta_{u}=\\tau_{t}$,\n\\vspace*{4pt}\n\\cr\n\\displaystyle\\prod\n_{i=N\\eta_{u}\/T}^{N\\tau_{t}\/T-1} \\bigl( 1+b^{\\prime}(\n\\bar{X}_{t_{i}}) (t_{i+1}-t_{i})+\\sigma^{\\prime}(\\bar{X}_{t_{i}}) (W_{t_{i+1}}-W_{t_{i}})\n\\bigr)\n\\cr\n\\hspace*{33pt}{}\\times\\bigl( 1+b^{\\prime}(\\bar{X}_{\\tau_{t}}) (t-\\tau_{t})+\n\\sigma^{\\prime}(\\bar{X}_{\\tau_{t}}) (W_{t}-W_{\\tau_{t}})\n\\bigr),&\\quad if $\\eta_{u}<\\tau_{t}$.}\n\\end{eqnarray*}\n\nNote that $\\bar{\\mathcal{E}}$ satisfies the following properties: (1)\n$\\bar{\\mathcal{E}}_{u,t}=\\bar{\\mathcal{E}}_{\\eta(u),t}$ and\\break (2)~$\\bar{\\mathcal{E}}_{t_i,t_j}\\bar{\\mathcal{E}}_{t_j,t}=\\bar{\\mathcal\n{E}}_{t_i,t}$ for $t_i\\le t_j\\le t$. 
We also introduce the process\n$\mathcal{E}$ defined by\n\[\n\mathcal{E}_{u,t}=\exp\biggl( \int_{u}^{t} \biggl(b^{\prime}(X_{s})-\frac{1}{2}\sigma^{\prime}(X_{s})^{2} \biggr)\,ds+\n\int_{u}^{t}\sigma^{\prime}(X_{s})\,dW_{s}\n\biggr).\n\]\nThe next lemma, whose proof is postponed to the end of the\npresent proof, states some useful properties of the processes\n$\mathcal{E}$ and $\bar{\mathcal{E}}$.\n\n\n\n\begin{alem}\label{lemme_majorations} Let us assume that $b,\sigma\in\nC^2_b$. Then we have\n\begin{eqnarray}\n\sup_{0\leq s\leq t \le T}{\mathbb{E}} \bigl[\n\mathcal{E}_{s,t}^{-p} \bigr]+{\mathbb{E}} \bigl[\n\mathcal{E}_{s,t}^{p} \bigr] &\leq& C, \label{eq:propE}\n\\\n\sup_{0\leq s\leq t \le T}{\mathbb{E}} \bigl[ \bar{\mathcal{E}}_{s,t}^{p}\n\bigr] &\leq& C,\label{eqA.13}\n\\\n\sup_{0\leq s,u\leq t \le T}{\mathbb{E}} \bigl[ |\nD_u\bar{\mathcal{E}}_{s,t}|^p+|\nD_u \mathcal{E}_{s,t}|^p \bigr] &\leq& C,\n\label{eq:A13}\n\\\n\sup_{0\leq t \le T}{\mathbb{E}} \bigl[ \llvert\mathcal\n{E}_{0,t}-\bar{\mathcal{E}}_{0,t}\rrvert\n^{p} \bigr] &\leq&\frac\n{C}{N^{p\/2}}, \label{vitesse_forte}\n\end{eqnarray}\nwhere $C$ is a positive constant depending only on $p$ and $T$.\n\end{alem}\n\nWe next define the localization given by\n\[\n\psi=\varphi\bigl( \mathcal{E}_{0,t}^{-1} (\n\mathcal{E}_{0,t}-\bar{\mathcal{E}}_{0,t} ) \bigr).\n\]\nHere $\varphi\dvtx \mathbb{R}\rightarrow[0,1]$ is a~$C^\infty$\nsymmetric function such that\n\[\n\varphi(x)=\cases{ 0, &\quad if $|x|>\frac{1}{2}$, \vspace*{2pt}\n\cr\n1, &\n\quad if $|x|<\frac{1}{4}$.}\n\]\nOne has\n\begin{eqnarray*}\n\mathbb{E} \bigl[ (W_{t}-W_{\tau_{t}})g(\bar{X}_{t})\n\bigr] &=&\mathbb{E} \bigl[ (W_{t}-W_{\tau_{t}})g(\n\bar{X}_{t})\psi\bigr]+\mathbb{E} \bigl[ (W_{t}-W_{\tau_{t}})g(\n\bar{X}_{t}) (1-\psi) \bigr]\n\\\n&=&\int_{\tau_t}^t\mathbb{E} \bigl[\psi\ng'(\bar{X}_{t})D_u\bar{X}_{t}\n\bigr] 
\\,du+\\mathbb{E} \\biggl[g(\\bar{X}_{t})\\int_{\\tau_t}^tD_u\n\\psi \\,du \\biggr]\n\\\\\n&&{}+\\mathbb{E} \\bigl[ (W_{t}-W_{\\tau_{t}})g(\n\\bar{X}_{t}) (1-\\psi) \\bigr],\n\\end{eqnarray*}\nwhere the second equality follows from the duality formula; see, for\nexample, Definition 1.3.1 in \\cite{N}. Since for $\\tau_{t}\\leq u\\leq t$\n\\begin{eqnarray*}\n{\\mathbb{E}} \\bigl[ \\psi g^{\\prime}(\\bar{X}_{t})D_{u}\n\\bar{X}_{t} \\bigr] & =& {\\mathbb{E}} \\bigl[ \\psi g^{\\prime}(\n\\bar{X}_{t})\\sigma(\\bar{X}_{\\tau_{t\n}) \\bigr]\n\\\\\n&=&t^{-1}\n\\mathbb{E} \\biggl[\\int_0^t D_sg(\n\\bar{X}_{t})\\frac\n{\\psi\n\\sigma(\\bar{X}_{\\tau_{t\n})}{D_s\\bar{X}_t}\\,ds \\biggr]\n\\\\\n& =&t^{-1}{\\mathbb{E}} \\biggl[ g(\\bar{X}_{t})\\int\n_{0}^{t}\\psi\\sigma(\\bar{X}_{\\tau_{t}})\n\\sigma^{-1} ( \\bar{X}_{\\tau_{s}} ) \\bar{\\mathcal{E\n}_{s,t}^{-1}\\delta W_{s} \\biggr],\n\\end{eqnarray*}\none deduces\n\\begin{eqnarray}\n\\qquad\\qquad\\mathbb{E} [ W_{t}-W_{\\tau_{t}}\\rrvert\\bar{X}_{t} ]\n&=&t^{-1}\\int_{\\tau_{t}}^{t} \\mathbb{E}\n\\biggl[ \\int_{0}^{t}\\psi\\sigma(\n\\bar{X}_{\\tau_{t}})\\sigma^{-1} ( \\bar{X}_{\\tau\n_{s}} )\\bar{\\mathcal{E}}_{s,t}^{-1}\\delta W_{s}\\big|\n\\bar{X}_{t} \\biggr] \\,du\n\\nonumber\\\\[-8pt]\\label{espcond}\\\\[-8pt]\n&&{}+ \\mathbb{E} \\biggl[\\int_{\\tau_t}^t\nD_u\\psi \\,du \\big| \\bar{X}_{t} \\biggr]+\\mathbb{E} \\bigl[ (\nW_{t}-W_{\\tau_{t}} ) (1-\\psi)\\rrvert\\bar{X}_{t}\n\\bigr].\\nonumber\n\\end{eqnarray}\nHere $\\delta W$ denotes the Skorohod integral. In order to obtain the\nconclusion of the lemma, we need to bound the $L^p$-norm of each term\non the right-hand side of~(\\ref{espcond}). 
In particular, we will use\nthe following estimate, which can be found in Proposition 1.5.4 in \cite{N} and also proves the existence of the Skorohod\nintegral on the left-hand side below:\n\begin{equation}\n\label{controle_normep} \biggl\llVert\int_{0}^{t}\n\psi\sigma(\bar{X}_{\tau_{t}})\sigma^{-1} ( \bar{X}_{\tau_{s}}\n) \bar{\mathcal{E}}_{s,t}^{-1}\delta W_{s} \biggr\n\rrVert_{p}\leq C(p) \bigl\llVert\psi\sigma(\bar{X}_{\tau_{t}})\n\sigma^{-1} (\bar{X}_{\tau_{\cdot}} )\bar{\mathcal{E}}_{\cdot,t}^{-1}\n\bigr\rrVert_{1,p},\hspace*{-35pt}\n\end{equation}\nwhere $\|F_\cdot\|_{1,p}^p =\mathbb{E} [ (\int_0^t F_s^2 \,ds )^{p\/2}+\n(\int_0^t \int_0^t (D_uF_s)^2 \,ds\,du )^{p\/2} ]$. By Jensen's\ninequality for $p\ge2$, we have\n\begin{equation}\n\label{upper_bound_1p} \qquad\|F_\cdot\n\|_{1,p}^p \le t^{p\/2-1} \int_0^t\n\mathbb{E} \bigl[|F_s|^p \bigr] \,ds + t^{p-2}\n\int_0^t\int_0^t\n\mathbb{E} \bigl[|D_uF_s|^p \bigr]\,ds\,du\n\end{equation}\nand we will use this inequality to upper bound~(\ref{controle_normep}).\nWhen $1\le p\le2$, we will instead use the upper bound\n$\|F_\cdot\|_{1,p}^p \le( \int_0^t \mathbb{E}[F_s^2] \,ds )^{p\/2}+\n(\int_0^t \int_0^t \mathbb{E}[(D_uF_s)^2] \,ds\,du )^{p\/2}$, which comes\nfrom H\\\"older's inequality.\n\nFor $\psi>0$, we have $\mathcal{E}_{0,t}^{-1} ( \mathcal{E}_{0,t}-\bar{\mathcal{E}}_{0,t} )\leq\frac{1}{2}$ so that\n$\bar{\mathcal{E}}_{0,t}\geq\frac{1}{2}\mathcal{{E}}_{0,t}>0$. 
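A concrete realization of the cutoff $\varphi$ entering the localization $\psi$ (any $C^\infty$ symmetric function with the stated plateau on $[-\frac14,\frac14]$ and support in $[-\frac12,\frac12]$ works) is the standard exponential glue; a minimal sketch:

```python
import math

def _h(t):                     # exp(-1/t) extended by 0: the standard C-infinity glue
    return math.exp(-1.0 / t) if t > 0 else 0.0

def phi(x):
    """Symmetric C^infinity cutoff: 1 on [-1/4, 1/4], 0 outside (-1/2, 1/2).

    One concrete choice of the function varphi used to build psi; the proof
    only uses the stated plateau, support and smoothness."""
    t = 4.0 * abs(x) - 1.0     # maps |x| in [1/4, 1/2] onto [0, 1]
    s = _h(t) / (_h(t) + _h(1.0 - t))
    return 1.0 - s

assert phi(0.1) == 1.0 and phi(0.6) == 0.0
assert 0.0 < phi(0.3) < 1.0 and phi(0.3) == phi(-0.3)
print("smooth cutoff properties checked")
```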
From\nHypothesis~\ref{hyp_wass_pathwise}, there are constants\n$0<\underline{\sigma}\le\bar{\sigma}<\infty$ such that\n$0<\underline{\sigma}\leq\sigma\leq\bar{\sigma}$, and one has\n\begin{eqnarray*}\n&& \int_{0}^{t}{\mathbb{E}} \bigl[ \bigl( \psi\n\sigma(\bar{X}_{\tau_{t}} )\sigma^{-1} ( \bar{X}_{\tau_{s}} )\n\bar{\mathcal{E}}_{s,t} ^{-1} \bigr) ^{p} \bigr]\,ds\n\\\n&&\qquad \leq \biggl(\frac{\bar{\sigma\n}}{\underline{\sigma}} \biggr)^p\int_{0}^{t}{\n\mathbb{E}} \bigl[ \psi^{p}\bar{\mathcal{E}}_{0,t}^{-p}\n\bar{\mathcal{E}}_{0,\eta\n(s)}^{p} \bigr]\,ds\n\\\n&&\qquad \leq \biggl(\frac{2\bar{\sigma}}{\underline{\sigma}} \biggr)^p\sqrt\n{\mathbb{E}\n\bigl[\mathcal{{E}}_{0,t}^{-2p} \bigr]} \int\n_0^t \sqrt{\mathbb{E} \bigl[|\bar{\mathcal{\nE}}_{0,\eta\n(s)}|^{2p} \bigr]}\,ds \leq C t,\n\end{eqnarray*}\nby using the estimates~(\ref{eq:propE}) and (\ref{eqA.13}).\n\nNext, we focus on getting an upper bound for\n\begin{equation}\n\int_0^t \int_{0}^{t}{\n\mathbb{E}} \bigl[ \bigl\llvert D_{u} \bigl( \psi\sigma(\n\bar{X}_{\tau_{t}})\sigma^{-1} ( \bar{X}_{\tau_{s}}\n) \bar{\mathcal{E}}_{s,t}^{-1} \bigr) \bigr\n\rrvert^{p} \bigr] \,ds \,du.\label{eq:Dloc}\n\end{equation}\nTo do so, we compute the derivative using basic differentiation rules, which\ngives\n\begin{eqnarray}\label{eq:ft}\n&& D_{u} \bigl( \psi\sigma(\bar{X}_{\tau_{t}})\n\sigma^{-1} ( \bar{X}_{\tau_{s}} ) \bar{\mathcal{\nE}}_{s,t}^{-1} \bigr)\nonumber\n\\\n&&\qquad =D_u\psi\sigma(\n\bar{X}_{\tau_{t}})\sigma^{-1} ( \bar{X}_{\tau\n_{s}} )\bar{\mathcal{E}}_{s,t}^{-1}+\psi\sigma^{\prime}(\n\bar{X}_{\tau_{t}})D_{u}\bar{X}_{\tau_{t}}\n\sigma^{-1} ( \bar{X}_{\tau_{s}} ) \bar\n{\mathcal{E}}_{s,t}^{-1}\n\nonumber\\[-8pt]\\[-8pt]\n&&\quad\qquad{}-\psi\sigma(\bar{X}_{\tau_{t}}) \sigma^{-2}\sigma'\n( \bar{X}_{\tau_{s}} )\sigma(\bar{X}_{\tau_u})\bar{\mathcal{E}}
_{u,\\tau_s}\\bar{\\mathcal{E}}_{s,t}^{-1}\n\\mathbf{1}_{u\\le\\tau_s}\\nonumber\n\\\\\n&&\\quad\\qquad{}-\\psi\\sigma(\\bar{X}_{\\tau_{u}})\\sigma^{-1} (\n\\bar{X}_{\\tau_{s}} ) \\bar{\\mathcal{E}}_{s,t}^{-2}D_{u}\n\\bar{\\mathcal{E}}_{s,t}.\\nonumber\n\\end{eqnarray}\nOne has then to get an upper bound for the $L^p$-norm of each term. As\nmany of the arguments are repetitive, we show the reader only some of\nthe arguments that are involved. Let us start with the first term. We\nhave\n\\begin{eqnarray*}\nD_u\\psi&=&\\varphi^{\\prime} \\bigl( \\mathcal{E}_{0,t}^{-1}\n( \\mathcal{E\n_{0,t}-\\bar{\\mathcal{E}}_{0,t} )\n\\bigr) D_{u} \\bigl[ \\mathcal{E\n_{0,t}^{-1}\n( \\mathcal{E}_{0,t}-\\bar{\\mathcal{E}}_{0,t} ) \\bigr]\n\\end{eqnarray*}\nand $D_{u} [ \\mathcal{E\n_{0,t}^{-1} ( \\mathcal{E}_{0,t}-\\bar{\\mathcal{E}}_{0,t} ) ]=\n\\mathcal{E\n_{0,t}^{-2}D_{u}\\mathcal{E\n_{0,t}\\bar{\\mathcal{E}}_{0,t}\n-\\mathcal{E\n_{0,t}^{-1}D_u\\bar{\\mathcal{E}}_{0,t} $. From the estimates\nin~(\\ref{eq:propE}), (\\ref{eqA.13}) and~(\\ref{eq:A13}), we obtain\n\\begin{equation}\n\\label{eq:Dest} \\sup_{u\\in[0,t]}\\llVert D_u\\psi\n\\rrVert_{p}\\leq\\bigl\\|\\varphi^{\\prime}\\bigr\\|_\\infty C(p).\n\\end{equation}\nSince $\\bar{\\mathcal{E}}_{s,t}^{-1}=\\bar{\\mathcal{E}}_{0,\\eta(s)}\n\\bar{\\mathcal{E}}_{0,t}^{-1}$ and $\\bar{\\mathcal{E}}_{0,t}\\geq\n\\frac{1}{2}\\mathcal{{E}}_{0,t}>0$ if $\\varphi^{\\prime} (\n\\mathcal{E}_{0,t}^{-1} ( \\mathcal{E\n_{0,t}-\\bar{\\mathcal{E}}_{0,t} ) ) \\neq0$, we have\n\\[\n \\mathbb{E} \\bigl[ \\bigl\\llvert D_u\\psi\\sigma(\n\\bar{X}_{\\tau_{t}})\\sigma^{-1} ( \\bar{X}_{\\tau\n_{s}} )\n\\bar{\\mathcal{E}}_{s,t}^{-1} \\bigr\\rrvert^p\n\\bigr] \\le\\biggl( \\frac{2\\bar{\\sigma}}{\\underline{\\sigma}} \\biggr)^p\n\\llVert\nD_u\\psi\\rrVert_{2p}^p \\mathbb{E} \\bigl[\n\\bigl\\llvert\\mathcal{{E}}_{0,t}^{-1}\\bar{\\mathcal{\nE}}_{0,\\eta(s)} \\bigr\\rrvert^{2p} \\bigr]^{1\/2}.\n\\]\nSimilar bounds hold for the three other terms. 
Note that the highest\nrequirements on the derivatives of~$b$ and~$\sigma$ will come from the\nterms involving $D_u\bar{\mathcal{E}}$ in~(\ref{eq:ft}). Gathering all\nthe upper bounds,\vspace*{-2pt} we get that\break $\llVert\n\psi\sigma(\bar{X}_{\tau_{t}})\sigma^{-1} (\bar {X}_{\tau_{\cdot}}\n)\bar{\mathcal{E}}_{\cdot,t}^{-1}\rrVert _{1,p}^p \le C(t^{p\/2}+t^p)\n\le Ct^{p\/2}$ since $0\le t\le T$. From~(\ref{controle_normep}), we\nfinally obtain\n\[\n\biggl\llVert\int_{0}^{t}\psi\sigma(\n\bar{X}_{\tau_{t}})\sigma^{-1} ( \bar{X}_{\tau_{s}} )\n\bar{\mathcal{E}}_{s,t}^{-1}\delta W_{s} \biggr\n\rrVert_{p}\leq C(p)t^{1\/2}.\n\]\n\nWe are now in a position to conclude. Using Jensen's inequality, the\nresults (\ref{eq:propE}), (\ref{vitesse_forte}), (\ref{espcond}),\n(\ref{eq:Dest}) and the definition of $\varphi$ together with\nChebyshev's inequality, we have for any $k>0$ that\n\begin{eqnarray*}\n&& \mathbb{E} \bigl[ \bigl\llvert\mathbb{E} \bigl[ W_{t}-W_{\tau_{t}}\n|\bar{X}_{t} \bigr] \bigr\rrvert^{p} \bigr]\n\\[-2pt]\n&&\qquad \leq C \biggl( t^{-p}(t-\tau_{t})^{p}\n\biggl\llVert\int_{0}^{t}\psi\sigma(\n\bar{X}_{\tau_{t}})\sigma^{-1} ( \bar{X}_{\tau_{s}}\n) \bar{\mathcal{E}}_{s,t}^{-1}\delta\nW_{s} \biggr\rrVert_{p}^p\n\\[-2pt]\n&&\hspace*{45pt}{} +(t-\tau_{t})^{p-1}\int\n_{\tau_{t}}^{t}\llVert D_u\psi\rrVert\n_{p}^p\,du\n\\[-2pt]\n&&\hspace*{45pt}{}+\sqrt{\mathbb{E} \bigl(|W_t-W_{\tau_t}|^{2p}\n\bigr)} 4^{k\/2} \bigl(\mathbb{E} \bigl(|\mathcal{{E}}_{0,t}-\n\bar{\mathcal{E}}_{0,t}|^{2k} \bigr)\mathbb{E} \bigl(\mathcal\n{{E}}_{0,t}^{-2k} \bigr) \bigr)^{1\/4} \biggr)\n\\[-2pt]\n&&\qquad \leq C \biggl( t^{-p\/2}(t-\tau_{t})^{p}+(t-\tau\n_{t})^{p}+ \biggl(\frac{1}{N} \biggr)^{ ( 2p+k ) \/4}\n\biggr)\n\\[-2pt]\n&&\qquad \leq C \biggl( \frac{1}{t^{p\/2}N^{p}}+\frac{1}{N^{p\/2+k\/4}} 
\\biggr).\n\\end{eqnarray*}\\upqed\n\\end{pf*}\\eject\n\n\\begin{pf*}{Proof of Lemma~\\ref{lemme_majorations}}\nThe upper bounds~(\\ref{eq:propE}) and (\\ref{eqA.13}) on $\\mathcal{{E}}$ and $\\bar{\\mathcal\n{E}}$ are obvious since $b'$ and $\\sigma'$ are bounded. Now, let us\nremark that $\\bar{\\mathcal{E}}$ and $\\mathcal{{E}}$ satisfy\n\\begin{eqnarray*}\n\\mathcal{{E}}_{u,t}&=&1+\\int_u^t\n\\sigma' ({X}_{s})\\mathcal{{E}}_{u,s}\n\\,dW_s+\\int_u^tb'\n({X}_{s})\\mathcal{{E}}_{u,s} \\,ds,\n\\\\\n\\bar{\\mathcal{E}}_{\\eta_u,t}&=&1+\\int_{\\eta_u}^t\n\\sigma' (\\bar{X}_{\\tau_s})\\bar{\\mathcal{E}}_{\\eta_u,\\tau_s}\n\\,dW_s+\\int_{\\eta_u}^tb' (\n\\bar{X}_{\\tau_s})\\bar{\\mathcal{E}}_{\\eta\n_u,\\tau_s} \\,ds.\n\\end{eqnarray*}\nThus, (\\ref{vitesse_forte}) can be easily obtained by noticing that\n$(\\bar{X}_t,\\bar{\\mathcal{E}}_{0,t})$ is the Euler scheme for the SDE\n$(X_t,\\mathcal{E}_{0,t})$ which has Lipschitz coefficients, and by\nusing the strong convergence order of $1\/2$; see, for\nexample,~\\cite{Ka}.\\vadjust{\\goodbreak}\n\nThe estimate (\\ref{eq:A13}) on $D_u{\\mathcal{E}}$ is given, for\nexample, by Theorem 2.2.1 in \\cite{N}. 
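The observation that $(\bar X_t,\bar{\mathcal{E}}_{0,t})$ is the Euler scheme of the pair $(X_t,\mathcal{E}_{0,t})$ can be checked on a single simulated path: the Euler update of the second component coincides, step by step, with the product formula defining $\bar{\mathcal{E}}$. A minimal sketch with hypothetical coefficients (not those of the paper):

```python
import math, random

random.seed(3)
# hypothetical coefficients: b(x) = -x, sigma(x) = 1 + 0.2*cos(x)
b,  sig  = (lambda x: -x), (lambda x: 1.0 + 0.2 * math.cos(x))
db, dsig = (lambda x: -1.0), (lambda x: -0.2 * math.sin(x))

T, N = 1.0, 32
h = T / N
x, e_euler, e_prod = 0.0, 1.0, 1.0
for _ in range(N):
    dw = random.gauss(0.0, math.sqrt(h))
    # Euler step for the pair (X, E):  dE = b'(X) E dt + sigma'(X) E dW
    e_euler = e_euler + db(x) * e_euler * h + dsig(x) * e_euler * dw
    # closed-form product update of bar E_{0,t}
    e_prod *= 1.0 + db(x) * h + dsig(x) * dw
    x = x + b(x) * h + sig(x) * dw

assert abs(e_euler - e_prod) < 1e-12
print("Euler scheme of the pair reproduces the tangent-process product")
```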
On the other hand, we have for\n$\eta(s)\le u\le t$\n\begin{eqnarray*}\nD_u\bar{\mathcal{E}}_{\eta_s,t}&=&\sigma'(\n\bar{X}_{\tau_u}) \bar{\mathcal{E}}_{\eta_s,\tau_u}\n\\\n&&{} +\int _{\eta_u}^t \bigl[ \sigma''(\n\bar{X}_{\tau_r}) \sigma(\bar{X}_{\tau_u}) \bar{\n\mathcal{E}}_{\eta_u,\tau_r} \bar{\mathcal{E}}_{\eta_s,\tau_r} +\n\sigma'(\bar{X}_{\tau_r}) D_u\bar{\n\mathcal{E}}_{\eta_s,\tau_r} \bigr] \,dW_r\n\\\n&&{}+\int_{\eta_u}^t \bigl[ b''(\n\bar{X}_{\tau_r}) \sigma(\bar{X}_{\tau_u}) \bar{\n\mathcal{E}}_{\eta_u,\tau_r} \bar{\mathcal{E}}_{\eta_s,\tau_r} +b'(\n\bar{X}_{\tau_r}) D_u\bar{\mathcal{E}}_{\eta_s,\tau_r}\n\bigr]\,dr.\n\end{eqnarray*}\nIn order to obtain an $L^p(\Omega)$ estimate, we then\nuse~(\ref{eqA.13}), $b,\sigma\in C^2_b$ and Gronwall's lemma.\n\end{pf*}\vspace*{-15pt}\n\n\section{Proofs of Section~\lowercase{\protect\texorpdfstring{\ref{sec_pathwise}}{3}}}\vspace*{-5pt}\label{App_Sec2}\n\n\begin{pf*}{Proof of Proposition \ref{prop_wass_multi}}\nWe use the dual representation of the Wasserstein distance\n(\ref{defwas}) deduced from the Kantorovitch duality theorem (see, e.g.,\nTheorem 5.10, page 58 \cite{villani}),\n\[\n\mathcal{W}^p_p(\mu,\nu)=\sup_{\phi\in L^1(\nu)}\n\biggl(\int_E\tilde{\phi}(x)\mu(dx)-\int\n_E\phi(x)\nu(dx) \biggr),\n\]\nwhere $\tilde{\phi}(x)=\inf_{y\in E} (\phi(y)+|y-x|^p )$.\n\nWe also denote by $(X^{s,x}_t)_{t\in[s,T]}$ the solution to (\ref{sde})\nstarting from $x\in\mathbb{R}$ at time $s\in[0,T]$ and by\n$(\bar{X}^{t_j,x}_t)_{t\in[t_j,T]}$ the Euler scheme starting from $x$\nat time $t_j$ with $j\in\{0,\ldots,N\}$. 
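For empirical measures with equally weighted atoms on $\mathbb{R}$, the optimal coupling behind $\mathcal{W}_p$ pairs order statistics, consistently with the quantile representation used throughout. A toy sketch comparing the sorted pairing with a brute-force search over all couplings of the atoms (the data are arbitrary):

```python
import itertools

# two empirical measures on R with 5 equally weighted atoms each (toy data)
xs = [0.9, -1.2, 0.1, 2.3, -0.4]
ys = [1.5, -0.7, 0.6, -2.0, 0.2]
p = 3

def wpp_sorted(xs, ys, p):
    # quantile coupling: in dimension one, pairing order statistics is optimal
    return sum(abs(a - b) ** p for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# brute force over all couplings of the atoms (permutations) for comparison
best = min(
    sum(abs(a - b) ** p for a, b in zip(xs, perm)) / len(xs)
    for perm in itertools.permutations(ys)
)
assert abs(wpp_sorted(xs, ys, p) - best) < 1e-12
print("sorted pairing attains the brute-force optimum")
```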
It is enough to check that\n\\begin{eqnarray*}\nw_k&\\stackrel{\\mathrm{def}}{=}&\\mathcal{W}_p \\bigl(\\mathcal{L}\n\\bigl(\\bar{X}_{s_1},\\ldots,\\bar{X}_{s_k},X^{s_k,\\bar\n{X}_{s_k}}_{s_{k+1}},\\ldots,X^{s_k,\\bar{X}_{s_k}}_{s_{n}} \\bigr),\n\\\\\n&&\\hspace*{21pt}\n\\mathcal{L} \\bigl(\n\\bar{X}_{s_1},\\ldots,\\bar{X}_{s_{k-1}},X^{s_{k-1},\\bar\n{X}_{s_{k-1}}}_{s_{k}},\\ldots,X^{s_{k-1},\\bar{X}_{s_{k-1}}}_{s_{n}} \\bigr) \\bigr)\n\\end{eqnarray*}\nis smaller\\vspace*{1pt} than $C\\sup_{0\\leq t\\leq\nT,x\\in\\mathbb{R}}\\mathcal{W}_p(\\mathcal{L}(\\bar{X}^{x}_t),\\mathcal{L}(X^{x}_t))$\nsince $\\mathcal{W}_p(\\mathcal{L}(\\bar{X}_{s_1},\\ldots,\\bar{X}_{s_n}),\\allowbreak \\mathcal\n{L}(X_{s_1},\\ldots,X_{s_n}))\\leq\\sum_{k=1}^nw_k$. For $f\\dvtx\n\\mathbb{R}^n\\rightarrow\\mathbb{R}$ a bounded measurable function and\n\\[\n\\tilde{f}(x_1,\\ldots,x_n)=\\inf_{(y_1,\\ldots,y_n)\\in\\mathbb{R}^n}\n\\Bigl\\{ f(y_1,\\ldots,y_n)+\\max_{1\\leq\nj\\leq n}|y_j-x_j|^p\n\\Bigr\\},\n\\]\nwe set\n$f_k(x_1,\\ldots,x_k)=\\mathbb{E}(f(x_1,\\ldots,x_k,X^{s_k,x_k}_{s_{k+1}},\\ldots,X^{s_k,x_k}_{s_n}))$.\nFirst choosing\n\\[\n(y_1,\\ldots,y_{k-1},y_{k+1},\\ldots,y_n)= \\bigl(\\bar{X}_{s_1},\\ldots,\\bar\n{X}_{s_{k-1}},X^{s_k,y_k}_{s_{k+1}},\\ldots,X^{s_k,y_k}_{s_{n}}\n\\bigr),\n\\]\nthen conditioning to $\\sigma(W_s,s\\leq s_{k})$ and using (\\ref{cieds}),\nnext conditioning to $\\sigma(W_s,s\\leq s_{k-1})$ and using the dual\nformulation of the Wasserstein distance, one gets\n\\begin{eqnarray*}\n&&\\mathbb{E} \\bigl(\\tilde{f} \\bigl(\\bar{X}_{s_1},\\ldots,\\bar\n{X}_{s_k},X^{s_k,\\bar\n{X}_{s_k}}_{s_{k+1}},\\ldots,X^{s_k,\\bar{X}_{s_k}}_{s_{n}}\n\\bigr)\n\\\\[-2pt]\n&&\\hspace*{9pt}\n{}-f \\bigl(\\bar{X}_{s_1},\\ldots,\\bar{X}_{s_{k-1}},X^{s_{k-1},\\bar\n{X}_{s_{k-1}}}_{s_{k}},\\ldots,X^{s_{k-1},\\bar\n{X}_{s_{k-1}}}_{s_{n}} \\bigr) \\bigr)\n\\\\[-2pt]\n&&\\qquad \\leq\\mathbb{E} \\Bigl(\\inf_{y_k\\in\\mathbb{R}} \\Bigl\\{f 
\\bigl(\\bar\n{X}_{s_1},\\ldots,\\bar{X}_{s_{k-1}},y_k,X^{s_k,y_k}_{s_{k+1}},\\ldots,X^{s_k,y_k}_{s_{n}} \\bigr)\n\\\\[-2pt]\n&&\\hspace*{125pt}{}+\\max_{k\\leq j\\leq\nn}\\bigl|X^{s_k,y_k}_{s_{j}}-X^{s_k,\\bar{X}_{s_k}}_{s_{j}}\\bigr|^p\n\\Bigr\\}\n\\\\[-2pt]\n&&\\hspace*{10pt}\\quad\\qquad{}-f \\bigl(\\bar{X}_{s_1},\\ldots,\\bar{X}_{s_{k-1}},X^{s_{k-1},\\bar\n{X}_{s_{k-1}}}_{s_{k}},\\ldots,X^{s_{k-1},\\bar{X}_{s_{k-1}}}_{s_{n}} \\bigr) \\Bigr)\n\\\\[-2pt]\n&&\\qquad\\leq\\mathbb{E} \\Bigl(\\inf_{y_k\\in\\mathbb{R}} \\bigl\\{f_k(\\bar\n{X}_{s_1},\\ldots,\\bar{X}_{s_{k-1}},y_k)+C|y_k-\n\\bar{X}_{s_k}|^p \\bigr\\}\n\\\\[-2pt]\n&&\\hspace*{95pt}{}\n-f_k \\bigl(\\bar\n{X}_{s_1},\\ldots,\\bar{X}_{s_{k-1}},X^{s_{k-1},\\bar\n{X}_{s_{k-1}}}_{s_{k}}\n\\bigr) \\Bigr)\n\\\\[-2pt]\n&&\\qquad\\leq C\\mathbb{E} \\bigl(\\mathcal{W}_p^p \\bigl(\\mathcal{L}\n\\bigl(X^{s_{k-1},x}_{s_k} \\bigr),\\mathcal{L} \\bigl(\\bar\n{X}^{s_{k-1},x}_{s_k} \\bigr) \\bigr)\\big|_{x=\\bar{X}_{s_{k-1}}} \\bigr)\n\\\\[-2pt]\n&&\\qquad \\leq C\\sup_{x\\in\\mathbb{R}}\\mathcal{W}^p_p \\bigl(\n\\mathcal{L} \\bigl(\\bar{X}^{x}_{s_k-s_{k-1}} \\bigr),\\mathcal{L}\n\\bigl(X^{x}_{s_k-s_{k-1}} \\bigr) \\bigr)\n\\\\[-2pt]\n&&\\qquad\\leq C\\sup_{0\\leq t\\leq T,x\\in\\mathbb{R}}\\mathcal{W}^p_p\n\\bigl( \\mathcal{L} \\bigl(\\bar{X}^{x}_t \\bigr),\\mathcal{L}\n\\bigl(X^{x}_t \\bigr) \\bigr).\n\\end{eqnarray*}\\upqed\n\\end{pf*}\n\n\\section{Some properties of diffusion bridges}\\label{diff_bridge}\nLet us suppose that the SDE $dX_t=b(X_t)\\,dt+\\sigma(X_t)\\,dW_t$,\n$X_0=x$ has a~transition density $p_t(x,y)$ which is positive and of\nclass $\\mathcal{C}^{1,2}$ with respect to $(t,x)\\in\\mathbb{R}_+^*\n\\times \\mathbb{R}$. We check later in this section that this holds\nunder Hypothesis~\\ref{hyp_wass_pathwise}. 
Then, the law of the\ndiffusion bridge with deterministic time horizon~$\\mathcal{T}$ is given\nby (see, e.g., Fitzsimmons, Pitman and Yor~\\cite{FPY})\n\\begin{eqnarray}\n\\mathbb{E} \\bigl[F(X_u,0\\le u\\le t)|X_\\mathcal{T}=y \\bigr]=\n\\mathbb{E} \\biggl[F(X_u,0\\le u\\le t)\\frac{\np_{\\mathcal{T}-t}(X_t,y)}{p_\\mathcal{T}(x,y)} \\biggr],\\nonumber\n\\\\\n\\eqntext{0\\le t<\\mathcal{T},}\n\\end{eqnarray}\nwhere $F\\dvtx C([0,t],\\mathbb{R})\\rightarrow\\mathbb{R}$ is a bounded\nmeasurable function. Indeed for\\allowbreak $g\\dvtx \\mathbb{R}\\to\\mathbb{R}$\nmeasurable and bounded, using that $X_\\mathcal{T}$ has the density\n$p_\\mathcal{T}(x,y)$, then the Markov property at time $t$, one checks\nthat\n\\begin{eqnarray*}\n&& \\mathbb{E} \\biggl[\\mathbb{E} \\biggl[F(X_u,0\\le u\\le t)\n\\frac{\np_{\\mathcal{T}-t}(X_t,y)}{p_\\mathcal{T}(x,y)} \\biggr] \\bigg|_{y=X_\\mathcal{T}\n}g(X_\\mathcal{T}) \\biggr]\n\\\\\n&&\\qquad =\n\\mathbb{E} \\biggl[F(X_u,0\\le u\\le t)\\int_\\mathbb\n{R}g(y)p_{\\mathcal{T}-t}(X_t,y)\\,dy \\biggr]\n\\\\\n&&\\qquad =\\mathbb{E} \\bigl[F(X_u,0\\le u\\le t)\\mathbb{E}\n\\bigl[g(X_\\mathcal{T})|X_t \\bigr] \\bigr]\n\\\\\n&&\\qquad =\\mathbb{E}\n\\bigl[F(X_u,0\\le u\\le t)g(X_\\mathcal{T}) \\bigr].\n\\end{eqnarray*}\n\n\nWe thus focus on the change of probability measure\n\\[\n\\frac{d\\mathbb{P}^y}{d\\mathbb{P}} \\bigg|_{\\mathcal{F}_t}=\\frac\n{p_{\\mathcal{T}-t}(X_t,y)}{p_\\mathcal{T}(x,y)}=:M_t,\n\\]\nso that $\\mathbb{E}[F(X_u,0\\le u\\le t)|X_\\mathcal{T}=y]=\\mathbb\n{E}^y[F(X_u,0\\le u\\le t)]$ where $\\mathbb{E}^y$ denotes the expectation\nwith respect to $\\mathbb{P}^y$. 
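As a sanity check on this change of measure in the simplest case $b=0$, $\sigma=1$ (standard Brownian motion, where $p_t(x,y)$ is the Gaussian kernel), the reweighted mean $\mathbb{E}[X_t M_t]$ must equal the Brownian-bridge mean $x+(y-x)t/\mathcal{T}$, and $\mathbb{E}[M_t]=1$ by the martingale property. A minimal quadrature sketch, illustrative only (the grid and parameter values are arbitrary choices, not from the paper):

```python
import numpy as np

def gauss(z, mean, var):
    # Gaussian transition kernel: density of N(mean, var) evaluated at z
    return np.exp(-(z - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

x0, y, T, t = 0.0, 1.0, 1.0, 0.4             # start, pinned endpoint, horizon, time
xs = np.linspace(-10.0, 10.0, 20001)         # quadrature grid for X_t
dx = xs[1] - xs[0]

p_t = gauss(xs, x0, t)                       # law of X_t under P
M_t = gauss(y, xs, T - t) / gauss(y, x0, T)  # M_t = p_{T-t}(X_t, y) / p_T(x0, y)

mass = np.sum(p_t * M_t) * dx                # E[M_t]; martingale property gives 1
mean_bridge = np.sum(xs * p_t * M_t) * dx    # E[X_t M_t] = E^y[X_t]

print(mass)         # ~ 1.0
print(mean_bridge)  # ~ x0 + (y - x0) * t / T = 0.4
```

The integrand decays like a Gaussian, so the uniform-grid sum is effectively a trapezoidal rule with negligible endpoint contributions and reproduces the bridge mean to high accuracy.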
We define $\\ell_t(x,y)=\\log p_t(x,y)$.\nThe process $(M_t)_{t\\in[0,\\mathcal {T})}$ is a martingale, and by\nIt\\^o's formula, we get $dM_t=M_t \\partial_x\n\\ell_{\\mathcal{T}-t}(X_t,y) \\sigma(X_t)\\,dW_t$, which gives\n\\[\nM_t=\\exp\\biggl( \\int_0^t\n\\partial_x\\ell_{\\mathcal{T}-s}(X_s,y) \\sigma\n(X_s)\\,dW_s -\\frac{1}{2} \\int\n_0^t \\partial_x\n\\ell_{\\mathcal{T}\n-s}(X_s,y)^2 \\sigma(X_s)^2\\,ds\n\\biggr).\n\\]\nGirsanov's theorem then gives that for all $y\\in\\mathbb{R}$,\n$(W^y_t=W_t-\\int_0^t \\partial_x\\ell_{\\mathcal{T}-s}(X_s,\\break y)\n\\*\\sigma(X_s)\\,ds)_{t\\in [0, \\mathcal{T})}$ is a Brownian motion\nunder~$\\mathbb{P}^y$, so that $(W^{X_\\mathcal{T} }_t)_{t\\in [0,\n\\mathcal{T})}$ is a Brownian motion independent of~$X_\\mathcal{T}$.\nMoreover, we have\n\\begin{equation}\n\\label{bridge_dyn} dX_t= \\bigl[b(X_t)+\n\\partial_x\\ell_{\\mathcal{T}-t}(X_t,y)\n\\sigma(X_t)^2 \\bigr]\\,dt +\\sigma(X_t)\\,dW_t^y,\n\\end{equation}\nwhich gives precisely the diffusion bridge dynamics.\n\nConversely, we would like now to reconstruct the diffusion from the\ninitial and the final value by using diffusion bridges. The following\nresult, stated in dimension one, may be generalized to higher\ndimensions.\n\n\n\\begin{aprop}\\label{prop_bridge} We consider an SDE\n$dX_t=b(X_t)\\,dt+\\sigma(X_t)\\,dW_t$, $X_0=x$ with a transition density\n$p_t(x,y)$ positive and of class $\\mathcal{C}^{1,2}$\nwith respect to $(t,x)\\in\\mathbb{R}_+^* \\times\\mathbb{R}$. Let $(B_t,t\\ge0)$ be a\nstandard Brownian motion and $Z_\\mathcal{T}$ be a~random variable with\ndensity $p_\\mathcal{T}(x,y)$ drawn independently from~$B$. 
We assume\nthat pathwise uniqueness holds for the SDE\n\\begin{eqnarray}\\label{eds_pont}\ndZ^{x,y}_t &=& \\bigl[b\n\\bigl(Z^{x,y}_t \\bigr)+\\partial_x\n\\ell_{\\mathcal{T}-t} \\bigl(Z^{x,y}_t,y \\bigr) \\sigma\n\\bigl(Z^{x,y}_t \\bigr)^2 \\bigr]\\,dt +\\sigma\n\\bigl(Z^{x,y}_t \\bigr)\\,dB_t,\\quad t\\in[0,\\mathcal{T}),\\hspace*{-25pt}\n\\nonumber\\\\[-4pt]\\\\[-4pt]\nZ^{x,y}_0 &=& x,\\nonumber\\hspace*{-25pt}\n\\end{eqnarray}\nfor any $x,y\\in\\mathbb{R}$, and set $Z_t= Z^{x,Z_\\mathcal{T}}_t$ for $t\n\\in\n[0,\\mathcal{T})$. Then, $(Z_t)_{t \\in[0,\\mathcal{T}]}$ and\n$(X_t)_{t\\in[0,\\mathcal{T}]}$ have the same law.\n\\end{aprop}\n\nA consequence of this result is that $(Z_t,t \\in[0,\\mathcal{T}])$ has\ncontinuous paths, which gives that $\\lim_{t\\rightarrow\n\\mathcal{T}-}Z^{x,y}_t = y$ a.s., $dy$-a.e.\n\n\\begin{pf}\nLet $t\\in[0,\\mathcal{T})$ and $F\\dvtx C([0,t],\\mathbb\n{R})\\to\\mathbb{R}$ and $g:\\mathbb{R}\\to\\mathbb{R}$ be bounded and\nmeasurable functions. Since pathwise uniqueness for the\nSDE~(\\ref{eds_pont}) implies weak uniqueness, we get\n\\begin{eqnarray*}\n\\mathbb{E} \\bigl[F \\bigl(Z_{u}^{x,y},0\\leq u\\leq t \\bigr)\n\\bigr]&=&\\mathbb{E}^y \\bigl[F(X_u,0\\leq u\\leq t) \\bigr]\n\\\\\n&=& \\mathbb{E} \\biggl[F(X_u,0\\leq u\\leq t)\\frac\n{p_{\\mathcal{T}\n-t}(X_t,y)}{p_\\mathcal{T}(x,y)} \\biggr].\n\\end{eqnarray*}\nThus we have\n\\begin{eqnarray*}\n\\mathbb{E} \\bigl[F(Z_{u},0\\leq u\\leq t)g(Z_\\mathcal{T})\n\\bigr]&=&\\mathbb{E} \\biggl[F(X_u,0\\leq u\\leq t)\\int\n_\\mathbb{R}p_{\\mathcal{T}\n-t}(X_t,y)g(y)\\,dy \\biggr]\n\\\\\n&=&\n\\mathbb{E} \\bigl[F(X_u,0\\leq u\\leq t)g(X_\\mathcal{T} )\n\\bigr].\n\\end{eqnarray*}\nHence the finite-dimensional marginals of the two processes are equal.\nSince $(X_t)_{t\\in[0,\\mathcal{T}]}$ has continuous paths and\n$(Z_t)_{t\\in[0,\\mathcal{T}]}$ has c\\`adl\\`ag paths (continuous on\n$[0,\\mathcal{T})$ with a possible jump at $\\mathcal{T}$), this\ncompletes the proof.\n\\end{pf}\n\n\nFrom 
now on, we assume that Hypothesis~\\ref{hyp_wass_pathwise} holds.\nWe introduce the Lamperti transformation of the stochastic process\n$(X_t,t\\ge0)$. We\\vspace*{-4pt} define $\\varphi(x)=\\int_0^x\\frac{dy}{\\sigma(y)}$ and\n$\\alpha(y)= (\\frac{b}{\\sigma}-\\frac{\\sigma'}{2} )\\circ\n\\varphi^{-1}(y)$, $\\hat{X}_t\\stackrel{\\mathrm{def}}{=}\\varphi(X_t)$ so\nthat we have\n\\begin{equation}\n\\label{sde_lamperti} d\\hat{X}_t= \\alpha(\n\\hat{X}_t)\\,dt + dW_t,\\qquad t\\in[0,T].\n\\end{equation}\nBy\\vspace*{-2pt} Hypothesis~\\ref{hyp_wass_pathwise}, $\\varphi$ is a $C^5$ bijection,\n$\\alpha\\in C^3_b$ and both $\\varphi$ and $\\varphi^{-1}$ are Lipschitz\ncontinuous. We denote by $\\hat{p}_t(\\hat{x},\\hat{y})$ the transition density\nof~$\\hat{X}$ and $\\hat{\\ell}_t(\\hat{x},\\hat{y})=\\log(\\hat\n{p}_t(\\hat{x},\\hat{y}))$.\n\n\n\\begin{alem}\\label{lem_densite} The density $\\hat{p}_t(\\hat{x},\\hat\n{y})$ is\n$C^{1,2}$ with respect to $(t,\\hat{x})\\in\\mathbb{R}_+^*\\times\n\\mathbb{R}$. 
Besides, we have\n\[\n\partial_{\hat{x}} \hat{\ell}_t(\hat{x},\hat{y})=\n\frac{\hat\n{y}-\hat{x}}{t}-\alpha(\hat{x})+ g_t(\hat{x},\hat{y}),\n\]\nwhere $g_t(\hat{x},\hat{y})$ is a continuous function on $\mathbb\n{R}_+\times\n\mathbb{R}^2$ such that\n$\partial_{\hat{x}} g_t(\hat{x},\hat{y})$ and $\partial_{\hat{y}}\ng_t(\hat{x},\hat{y})$\nexist and\n\[\n\forall T>0\qquad\n\sup_{t\in[0,T], \hat{x},\hat{y}\in\mathbb\n{R}}\bigl|\partial_{\hat{x}}\ng_t(\hat{x},\hat{y})\bigr|+\bigl|\partial_{\hat{y}} g_t(\hat\n{x},\hat{y})\bigr|<\infty.\n\]\n\end{alem}\n\n\n\begin{pf}\nIt is well known that we can express the transition\ndensity~$\hat{p}_t(\hat{x},\hat{y})$ by using Girsanov's theorem as an\nexpectation on a Brownian bridge between $\hat{x}$~and~$\hat{y}$.\nNamely, since $\alpha$ and its derivatives are bounded, we can apply a\nresult stated in Gihman and Skorohod~\cite{gs} (Theorem~1, Chapter~3,\nSection~13) or in Rogers \cite{rog} to get that $\hat{p}_t(\hat{x},\hat{y})$ is positive and\n\begin{eqnarray*}\n\hat{\ell}_t(\hat{x},\hat{y})&=&-\frac{(\hat{x}-\hat\n{y})^2}{2t}+\int\n_{\hat{x}}^{\hat{y}\n}\alpha(z)\,dz\n\\\n&&{} +\log\mathbb{E}\n\biggl(\exp\biggl({-\frac{1}{2}\int_0^t\bigl(\alpha\n'+\alpha\n^2\bigr)\biggl(\hat{x}+W_s+\frac{s}{t}(\hat{y}-\hat{x}-W_t)\biggr)\,ds} \biggr)\biggr)\n\\\n&&{} -\frac{1}{2}\log(2\pi t).\n\end{eqnarray*}\nClearly, $\hat{\ell}_t(\hat{x},\hat{y})$ is $C^{1,2}$ in\n$(t,\hat{x})\in\mathbb{R}_+^*\times\n\mathbb{R}$ (we can freely apply the dominated convergence theorem to\nthe third\nterm since $\alpha\in C^3_b$), and we have\n\begin{eqnarray*}\ng_t(\hat{x},\hat{y})\n&=& -\frac{1}{2}\n\biggl(\mathbb{E} \biggl[\exp\biggl({-\frac\n{1}{2}\int_0^t\bigl(\alpha'+\alpha^2\bigr)\biggl(\hat{x}+W_s+\frac{s}{t}(\hat{y}-\hat\n{x}-W_t)\biggr)\,ds}\biggr)\n\\\n&&\hspace*{33pt}{}\times 
\\int_0^t\\frac{t-s}{t}\\bigl(\\alpha''+2\\alpha\\alpha'\\bigr)\\biggl(\\hat\n{x}+W_s+\\frac\n{s}{t}(\\hat{y}-\\hat{x}-W_t)\\biggr)\\,ds \\biggr]\\biggr)\n\\\\\n&&\\hspace*{9pt}{}\\bigg\/\\biggl({\\mathbb{E} \\biggl[\\exp\\biggl({-\\frac\n{1}{2}\\int\n_0^t\\bigl(\\alpha'+\\alpha^2\\bigr)\\biggl(\\hat{x}+W_s+\\frac{s}{t}(\\hat{y}-\\hat\n{x}-W_t)\\biggr)\\,ds}\\biggr) \\biggr]}\\biggr).\n\\end{eqnarray*}\nThis is a continuous function on~$\\mathbb{R}_+\\times\\mathbb{R}^2$,\nand we easily\nconclude by using the dominated convergence theorem and~$\\alpha\\in\nC^3_b$.\n\\end{pf}\n\nBy straightforward calculations, we have\n\\[\np_t(x,y)=\\frac{1}{\\sigma(y)}\\hat{p}_t \\bigl(\\varphi(x),\n\\varphi(y) \\bigr)\n\\]\nand $p_t(x,y)$ is thus positive and $C^{1,2}$ with respect to $(t,x)$.\nThe diffusion bridge~(\\ref{bridge_dyn}) is thus well defined. Since\n$\\partial_x \\ell_t(x,y) =\\frac{1}{\\sigma(x)}\\partial_{\\hat x}\n\\hat{\\ell}_t(\\varphi(x),\\varphi(y))$, we get by It\\^{o} formula\nfrom~(\\ref{bridge_dyn})\n\\begin{eqnarray*}\nd \\hat{X}_t&=& \\bigl[\\alpha(\\hat{X}_t)+\n\\partial_{\\hat{x}} \\hat{\\ell}_{\\mathcal{T}-t} \\bigl(\\hat{X}_t,\n\\varphi(y) \\bigr) \\bigr]\\,dt+dW^y_t,\n\\\\\ndW^y_t &=&dW_t-\n\\partial_{\\hat{x}} \\hat{\\ell}_{\\mathcal{T}-t} \\bigl(\\hat{X}_t,\n\\varphi(y) \\bigr)\\,dt.\n\\end{eqnarray*}\nTherefore, as one could expect, the Lamperti transform on the diffusion\nbridge coincides with the diffusion bridge on the Lamperti transform.\n\n\\begin{aprop}\\label{prop_bridge2}\nLet Hypothesis~\\ref{hyp_wass_pathwise} hold. 
There exists a\ndeterministic constant~$C$ such that\n\[\n\forall\mathcal{T}\in(0,T], x,x',y,y'\in\mathbb{R}\qquad\n\sup_{t\in[0,\mathcal{T})} \bigl|Z^{x,y}_t-Z^{x',y'}_t\bigr|\n\le C \bigl(\bigl|x-x'\bigr|\vee\bigl|y-y'\bigr| \bigr)\n\]\nand in particular, pathwise uniqueness holds for~(\ref{eds_pont}).\n\end{aprop}\n\n\n\begin{pf}\nFor $\hat{x},\hat{y}\in\mathbb{R}$, we consider the following SDE:\n\begin{eqnarray}\label{pont_Z2}\nd \hat{Z}^{\hat{x},\hat\n{y}}_t&=&dB_t+\n\biggl[\frac{\hat{y}-\hat{Z}^{\hat{x},\hat{y}}_t}{\mathcal\n{T}-t}+g_{\mathcal{T}-t} \bigl(\hat{Z}^{\hat{x},\hat{y}}_t,\n\hat{y} \bigr) \biggr]\,dt,\qquad t\in[0, \mathcal{T}),\n\nonumber\\[-20pt]\\\n\hat{Z}^{\hat{x},\hat{y}}_{0}&=&\hat{x},\nonumber\n\end{eqnarray}\nwhich corresponds to the diffusion bridge on the Lamperti\ntransform~$\hat{X}$. We set\n$\Delta_t=\hat{Z}^{\hat{x},\hat{y}}_t-\hat{Z}^{\hat{x}',\hat\n{y}'}_t$ for\n$t\in[0,\mathcal{T})$ and $\hat{x}',\hat{y}' \in\mathbb{R}$. We have\n\[\nd \Delta_t = \biggl[\frac{\hat{y}-\hat{y}'-\Delta_t}{\mathcal{T}-t}\n+g_{\mathcal{T}-t} \bigl(\n\hat{Z}^{\hat{x},\hat{y}}_t,\hat{y} \bigr)-g_{\mathcal{T}-t} \bigl(\hat\n{Z}^{\hat{x}',\hat{y}'}_t,\hat{y}' \bigr) \biggr]\,dt\n\]\nand thus $d(|\Delta_t| \vee|\hat{y}-\hat{y}'|)=\n\operatorname{sign}(\Delta_t)\mathbf{1}_{|\Delta_t|\ge|\hat{y}-\hat{y}'|}\,d\Delta_t$.\nOn the one hand, we observe that\n$\mathbf{1}_{|\Delta_t|\ge|\hat{y}-\hat{y}'|}[\operatorname{sign}(\Delta_t)\n(\hat{y}-\hat{y}')-|\Delta_t|]\le0$. On the other hand, $g_t$ is\nuniformly Lipschitz w.r.t. $(\hat{x},\hat{y})$ on $t\in[0,T]$ by\nLemma~\ref{lem_densite}, which leads to\n\[\nd \bigl(|\Delta_t| \vee\bigl|\hat{y}-\hat{y}'\bigr| \bigr)\le C\n\bigl(|\Delta_t| \vee\bigl|\hat{y}-\hat{y}'\bigr| \bigr)\,dt\n\]\nfor some positive constant~$C$. 
Gronwall's lemma then gives\n$|\Delta_t|\le e^{CT}(|\hat{x}-\hat{x}'|\vee|\hat{y}-\hat{y}'|)$.\nIn particular, this gives\npathwise uniqueness for~(\ref{pont_Z2}).\n\nNow, let us\vspace*{-1pt} assume that $(Z^{x,y}_t)_{t\in[0,\mathcal{T})}$\nsolves~(\ref{eds_pont}). Then\n$\varphi(Z^{x,y}_t)$ solves~(\ref{pont_Z2}) with $\hat{x}=\varphi\n(x)$ and\n$\hat{y}=\varphi(y)$, and we necessarily have\n$Z^{x,y}_t=\varphi^{-1}(\hat{Z}_t^{\varphi(x),\varphi(y)})$ by pathwise\nuniqueness. Both $\varphi$ and $\varphi^{-1}$ are Lipschitz, and we\ndenote by $K$ a~common Lipschitz constant. Then we get\n\begin{eqnarray*}\n\bigl|Z^{x,y}_t-Z^{x',y'}_t\bigr| &=&\bigl|\n\varphi^{-1} \bigl(\hat{Z}_t^{\varphi(x),\varphi(y)} \bigr) -\n\varphi^{-1} \bigl(\hat{Z}_t^{\varphi(x'),\varphi(y')} \bigr) \bigr|\n\\\n&\le&\nK^2 e^{CT} \bigl(\bigl|x-x'\bigr|\vee\bigl|y-y'\bigr|\n\bigr),\n\end{eqnarray*}\nwhich gives the desired result.\n\end{pf}\n\end{appendix}\n\n\n\n\n\section{Introduction}\n\nUnderstanding the evolution of neutral hydrogen (HI{}) in dark matter\nhaloes is important for models of galaxy formation\n\citep{somerville2015, blanton2009, barkana2016}. The HI{} content\nof dark matter haloes forms an intermediate state in the baryon cycle\nthat connects the hot shock-heated gas and star-forming molecular gas\nin haloes \citep{2010ApJ...718.1001B, 2010MNRAS.409..515F,\n 2012ApJ...753...16K}. 
Constraints on HI{} in galaxies therefore\nreveal the role of gas dynamics, cooling, and regulatory processes\nsuch as stellar feedback and gas inflow and outflow in galaxy\nformation \\citep{prochaska09, 2011MNRAS.414.2458V,\n 2015MNRAS.447.1834B, 2015MNRAS.451..878K, 2016MNRAS.456.1115B}.\nHI{} also traces environmental processes like satellite quenching,\ntidal interactions and ram-pressure stripping\n\\citep{2012MNRAS.427.2841F, 2012MNRAS.424.1471L, 2013MNRAS.429.2191Z,\n lagos2014}. The average HI mass content of dark matter haloes can\nbe expressed as an HI-mass halo-mass (HIHM) relation.\n\nAt low redshifts ($z \\sim 0$), constraints on HI{} in galaxies are\nderived from the observations of the 21~cm emission line of hydrogen\nin large-area blind galaxy surveys like the HI{} Parkes All Sky\nSurvey \\citep[HIPASS,][]{meyer2004} and the Arecibo Fast Legacy ALFA\nsurvey \\citep[ALFALFA,][]{giovanelli2005}, which provide measurements\nof the mass function and clustering of HI{}-selected galaxies. There\nare also targeted surveys such as The HI{} Nearby Galaxy Survey\n\\citep[THINGS,][]{walter2008}, the Galex Arecibo SDSS Survey\n\\citep[GASS,][]{catinella2010}, and the Westerbork HI{} survey of\nSpiral and Irregular Galaxies \\citep[WHISP,][]{vanderhulst2001}, which\nfocus on a smaller number of resolved galaxies. Efforts are also\ncurrently underway to constrain the density and clustering of HI{}\nusing intensity mapping without resolving individual galaxies\n\\citep{chang10, masui13, switzer13}. 
Looking ahead, current and\nupcoming facilities such as MeerKAT \citep{jonas2009}, the Square\nKilometre Array \citep[SKA,][]{2015aska.confE..19S} and its\npathfinders, and the Canadian Hydrogen Intensity Mapping Experiment\n\citep[CHIME,][]{2014SPIE.9145E..22B} will provide unprecedented\ninsight into the evolution of the cosmic neutral hydrogen content\nacross redshifts.\n\n\nUnfortunately, the intrinsic faintness of the 21~cm line and the\nlimits of current radio facilities hamper direct detection of HI{}\nfrom individual galaxies at redshifts above $z \sim 0.1$. Spectral\nstacking has been used to probe the HI{} content of undetected\nsources out to redshifts $z \sim 0.24$ \citep{lah07, lah2009, rhee13,\n delhaize13}. At higher redshifts, therefore, constraints on the\ndistribution and evolution of HI in galaxies come chiefly from high\ncolumn density Lyman-$\alpha$ absorption systems (Damped\nLyman-$\alpha$ Absorbers; DLAs) with column density\n$N_\mathrm{HI}>10^{20.2}$~cm$^{-2}$ in the spectra of bright\nbackground sources such as quasars. DLAs are the main reservoir of HI\nbetween redshifts $z\sim 2$--$5$, containing $> 80 \%$ of the cosmic\nHI content \citep{wolfe1986, lanzetta1991, gardner1997, prochaska09,\n rao06, noterdaeme12, zafar2013}. At low redshift, DLAs have been\nfound to be associated with galaxies \citep{lanzetta1991} and to\ncontain the vast majority ($\sim 81\%$) of the HI{} gas in the local\nuniverse \citep{zwaan2005a}. 
At high redshift, the kinematics of DLAs may\nsupport the hypothesis that they probe HI in large rotating disks\n\\citep{1997ApJ...487...73P, 2001MNRAS.326.1475M, 2015MNRAS.447.1834B} or proto-galactic clumps \\citep{haehnelt1998}.\nThe three-dimensional clustering of DLAs \\citep{fontribera2012} points\nto DLAs being preferentially hosted by dark matter haloes with mass $M\n\\sim 10^{11} M_{\\odot}$ at redshift $z \\sim 3$.\n\nSemi-analytical models and hydrodynamical simulations have provided\nclues towards the evolution of HI{} in galaxies and its relation to\nstar-formation, feedback and galaxy evolution \\citep{dave2013,\n duffy2012, lagos2011, obreschkow2009a, nagamine2007, pontzen2008,\n tescari2009, hong2010, cen2012, fu2012, kim2013, bird2014,\n popping2009, popping2014, eagle2016, kim2016,\n martindale2016}. {Semi-analytical methods \\citep[e.g.,][]{berry2014, popping2014, somerville15} typically reproduce the\nHI{} mass functions and the HI{}-to-stellar-mass scaling relations\nfound in low-redshift HI{} observations and DLA observables.}\nSimulation techniques have also been used to model DLA populations at\nhigher redshifts \\citep{pontzen2008} and their relation to galaxy\nformation and feedback processes \\citep{bird2014, rahmati2013,\n rahmati2014}. Hydrodynamical simulations suggest that DLAs are\nhosted in haloes with mass $10^{10}$--$10^{11} h^{-1}$ M$_\\odot$\n\\citep[e.g.,][]{bird2014}. In the presence of strong stellar\nfeedback, these simulations can reproduce the observed abundance and\nclustering of DLAs but end up having an excess of HI{} at low\nredshifts ($z<3$).\n\nAnalytical techniques offer complementary insight into the processes\ngoverning the HI{} content of dark matter halos. 
Analytical methods\nhave been used for modelling 21~cm intensity mapping observables,\nparticularly the HI{} bias and power spectrum\n\\citep{marin2010,wyithe2010, sarkar2016} as well as DLAs \\citep{haehnelt1996,\n haehnelt1998,barnes2009, barnes2010, 2013ApJ...772...93K,\n barnes2014}. These models use prescriptions for assigning HI{} mass\nto dark matter halos as inputs to the model, either directly or in\nconjunction with cosmological simulations \\citep{bagla2010, marin2010,\n gong2011, guhasarkar2012}. In \\citet{hptrcar2016}, the 21-cm- and\nDLA-based analytical approaches are combined towards a consistent\nmodel of HI{} evolution across redshifts. It is found that a model\nthat is consistent with low-redshift radio as well as high-redshift\noptical\/UV observations requires a fairly rapid transition of HI{}\nfrom low-mass to higher-mass haloes at high redshifts. A more complete\nstatistical {data-driven} approach \\citep{2016arXiv160701021P}\nconstrains the HIHM relation using low- and high-redshift\nobservations {in a halo model framework}.\n\nAn essential ingredient in analytical techniques is therefore the HIHM\nrelation. In this paper, we employ the technique of abundance\nmatching to quantify the observational constraints on the HIHM\nrelation in the post-reionization Universe. Abundance matching has\nbeen widely used to describe the relation between the stellar mass of\ngalaxies and the mass of their host dark matter halos \\citep{vale2004,\n vale2006, conroy2006, behroozi2010, guo2010, shankar2006,\n moster2010, moster2013}. The basic assumption involved is that\nthere is a monotonic relationship between a galaxy property (say,\nstellar mass or galaxy luminosity) and the host dark matter halo\nproperty (say, the host halo mass). In its simplest form, abundance\nmatching involves matching the cumulative abundance of galaxies to\nthat of their (sub)haloes, thereby assigning the most luminous\ngalaxies to the most massive haloes. 
The mapping between the\nunderlying galaxy property and the host halo mass can be derived from\nthis. {A key feature of this approach is that, being completely empirical\footnote{{A caveat is that the halo mass function used is theoretical, and a monotonic matching to the most massive haloes is assumed.}}, it is free from the uncertainties involved in physical models of HI{} and galaxy evolution.} {It is therefore a complementary analysis to forward modelling techniques, including semi-analytical models and hydrodynamical simulations.}\n\nThe HI{} mass function \citep{rao1993} is the radio equivalent of the\noptical luminosity function in galaxies and is an important\nstatistical quantity in the observations of gas-rich galaxies. It\nmeasures the volume density of HI{}-selected galaxies as a function\nof the HI{} mass and simulations suggest that its shape is a more\nsensitive probe of some aspects of galaxy formation physics than the\ngalaxy luminosity function \citep{kim2013}. At low redshifts, the\nHI{} mass function is fairly well-constrained over four decades in HI\nmass \citep{zwaan05, martin10}. \citet{papastergis2013} constrained\nthe HIHM relation at low redshift using ALFALFA data and found that\nthe observed clustering of HI was reproduced well by this approach. In\nthis work, we describe the results of abundance matching HI{} mass to\ndark matter halo mass using the low-redshift radio observations of the\nHI{} mass function \citep{zwaan05, martin10} and then evolve the\nrelation using the complementary information available through DLA\nmeasurements at high redshift. The combination of the radio data at\nlow redshifts and DLA observations at higher redshifts constrains a\nmulti-epoch HI{}-halo mass relation with the available data. We also examine how the results from this approach compare with those of previous studies.\n\nThe paper is organized as follows. 
In Section~\\ref{sec:abmatch}, we\ndetail the abundance matching technique and apply it to three\nlow-redshift HI mass function measurements. We also combine the\nresultant HIHM relation with the stellar-mass halo-mass (SHM) relation\nto discuss the HI-to-stellar-mass ratio in low-redshift galaxies. In\nSection~\\ref{sec:observations}, we extend the low-redshift HIHM\nrelation to higher redshifts using measurements of DLA column density\ndistribution and clustering. We compare the relation so derived with\nother HI{} models in the literature, and conclude in\nSection~\\ref{sec:conc}.\n\n\\section{HIHM relation at low redshift}\n\\label{sec:abmatch}\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/mhiandm.pdf}\n \\end{center}\n \\caption{The blue and red curves show the HI mass functions derived\n from the HIPASS \\citep{zwaan05} and ALFALFA data \\citep{martin10},\n respectively. The shaded region shows the combined uncertainty.\n The black curve shows the halo mass function.}\n \\label{fig:mhiandm}\n\\end{figure} \n\nWe derive the HIHM relation at $z\\sim 0$ by abundance matching dark\nmatter haloes with HI-selected galaxies. We use the HI mass function\nfrom the HIPASS \\citep{meyer2004} and ALFALFA \\citep{martin10} datasets,\nthe latter derived using the $1\/V_{\\rm max}$ as well as the 2DSWML\n(2-Dimensional StepWise Maximum Likelihood) methods:\n\n\\begin{itemize}\n\\item HIPASS: This complete catalogue of HI sources contains 4,315\n galaxies \\citep{meyer2004}. The HI mass function $\\phi(M_{\\rm HI})$ is fitted by a Schechter function using the the 2-Dimensional StepWise Maximum Likelihood (2DSWML) method, with a total of 4010 galaxies. The\n effective volume $V_{\\rm eff}$ is calculated for each galaxy\n individually and the values of $1\/V_{\\rm eff}$ are summed in bins of\n HI mass to obtain the 2DSWML mass function. 
The resultant best-fit\n parameters are $\\alpha = -1.37 \\pm 0.03 \\pm 0.05$,\n $\\log(M_{*}\/M_{\\odot}) = 9.80 \\pm 0.03 \\pm 0.03 h_{75}^{-2}$ and\n $\\phi^* = (6.0 \\pm 0.8 \\pm 0.6) \\times 10^{-3} h_{75}^3$ Mpc$^{-3}$\n (the two error values show statistical and systematic errors,\n respectively; \\citealt{zwaan05}). The distribution of HI masses is\n calculated using 30 equal-sized mass bins spanning $6.4 <\n \\log_{10}M_{\\rm HI} < 10.8$ (in $M_{\\odot}$).\n\n\n\\item ALFALFA: This catalogue contains 10,119 sources to form the\n largest available sample of HI-selected galaxies \\citep{martin10}.\n The ALFALFA survey measures the HI mass function by using both the\n 2DSWML as well as the $1\/V_{\\rm max}$ methods. The HI mass function\n is fitted with the Schechter form, with the best-fitting parameters\n $\\phi^* = (4.8 \\pm 0.3) \\times 10^{-3} h_{70}^3$ Mpc$^{-3}$, $\\log (M_*\/M_{\\odot}) + 2\n \\log(h_{70}) = 9.95 \\pm 0.04$, and $\\alpha = -1.33 \\pm 0.03$ with\n the $1\/V_{\\rm max}$ method, and $\\phi^* = (4.8 \\pm 0.3) \\times\n 10^{-3} h_{70}^3$ Mpc$^{-3}$, $\\log (M_*\/M_{\\odot}) + 2 \\log(h_{70}) = 9.96 \\pm 0.2$,\n and $\\alpha = -1.33 \\pm 0.02$ with the 2DSWML method. The two\n determinations of the HI mass function are in good agreement.\\footnote{In the figures, we only indicate the ALFALFA 2DSWML mass function fit for clarity.}\n\\end{itemize}\n\nTo match HI-selected galaxies to dark matter haloes, we use the\nSheth-Tormen \\citep{sheth2002} form of the dark matter halo mass\nfunction. Figure~\\ref{fig:mhiandm} shows the comparison of the three\nHI mass functions mentioned above with the halo mass function, which\nis shown by the solid black curve. This corresponds to the assumption\nthat each dark matter halo hosts one HI galaxy with its HI mass\nproportional to the host dark matter halo mass. The shaded region in\nFigure~\\ref{fig:mhiandm} shows the combined uncertainty in the\nobserved HI mass functions. 
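The abundance matching just described — equating the cumulative number density of haloes above mass $M$ with that of galaxies above $M_{\rm HI}$ — can be sketched numerically. The snippet below uses the ALFALFA 2DSWML Schechter fit quoted above; the halo mass function, its amplitude, and the mass ranges are toy choices standing in for the Sheth-Tormen form actually used here, so the resulting mapping is qualitative only:

```python
import numpy as np

# ALFALFA 2DSWML Schechter fit (values quoted in the text; h70 factors dropped)
phi_star, log_Mstar, alpha = 4.8e-3, 9.96, -1.33

def hi_mf(logM):
    # Schechter HI mass function, dn / dlog10(M_HI)
    x = 10.0 ** (logM - log_Mstar)
    return np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

def toy_hmf(logM):
    # Toy halo mass function (power law + exponential cutoff); NOT Sheth-Tormen
    x = 10.0 ** (logM - 13.0)
    return np.log(10.0) * 5.0e-4 * x ** (-0.9) * np.exp(-x)

log_Mhi = np.linspace(6.0, 11.0, 2001)   # HI masses, as in the text
log_Mh = np.linspace(10.0, 15.0, 2001)   # halo masses (toy range)

def n_above(logM, mf):
    # cumulative abundance n(>M): sum the mass function downward from the top
    dlog = logM[1] - logM[0]
    return np.cumsum(mf(logM)[::-1])[::-1] * dlog

n_gal, n_halo = n_above(log_Mhi, hi_mf), n_above(log_Mh, toy_hmf)

# Monotone matching: assign to each halo the HI mass of equal abundance.
# n_gal decreases with log_Mhi, so interpolate on the reversed (increasing)
# arrays; np.interp clamps outside the overlap, which is fine for a sketch.
log_Mhi_of_Mh = np.interp(n_halo, n_gal[::-1], log_Mhi[::-1])
```

By construction the mapping is monotonically increasing in halo mass; in the paper the Sheth-Tormen mass function replaces `toy_hmf`, and the matching is performed against both the HIPASS and ALFALFA fits.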
\nMatching the abundance of the halo mass function and the fitted HI\nmass function then leads to the relation between the HI mass and the\nhalo mass \citep[e.g.,][]{vale2004}:\n\begin{equation}\n \int_{M (M_{\rm HI})}^{\infty} \frac{dn}{ d \log_{10} M'} \ d \log_{10} M' = \int_{M_{\rm HI}}^{\infty} \phi(M_{\rm HI}') \ d \log_{10} M_{\rm HI}'\n \label{eqn:abmatch}\n\end{equation}\nwhere $dn \/ d \log_{10} M$ is the number density of dark matter haloes with logarithmic\nmasses between $\log_{10} M$ and $\log_{10}(M + dM)$, and $\phi(M_{\rm HI})$ is the\ncorresponding number density of HI galaxies in logarithmic mass bins. Solving\nEquation~(\ref{eqn:abmatch}) gives a relation between the HI-mass\n$M_{\rm HI}$ and the halo mass $M$. Note that this approach assumes\nthat there is a monotonic relationship between $M_{\rm HI}$ and $M$.\n\n\begin{figure}\n \begin{center}\n \includegraphics[width=\columnwidth]{.\/abmatch_coldgasfrac.pdf}\n \end{center}\n \caption{\textit{Top panel}: The HIHM relation at $z=0$ derived from HIPASS\n (blue curve) and ALFALFA (red curve) HI mass functions. The black\n curve shows a combined fit to the mass functions using the parametric form of\n Equation~(\ref{moster12}). The shaded region shows the error in\n the fit. \textit{Lower panel}: The HI mass fraction, $M_{\rm\n HI}\/M$ as a function of halo mass $M$ at $z=0$. Also shown for\n comparison in both panels is the SHM relation \citep{moster2013}.}\n\label{fig:coldgasfrac}\n\end{figure} \n\nSolving Equation~(\ref{eqn:abmatch}) in the mass range\n$10^6$~M$_{\odot} < M_{\rm HI} < 10^{11}$~M$_{\odot}$, we show the\nresultant HIHM relation in the top panel of\nFigure~\ref{fig:coldgasfrac}. The red curve shows the HIHM relation\nobtained from the ALFALFA data, while the blue curve shows the same\nfor the HIPASS data. 
We find that the HI mass monotonically increases\nas a function of the halo mass and changes slope at a characteristic\nvalue of the halo mass. This behaviour is qualitatively similar to\nthe SHM relation \\citep{moster2013}, which is shown by the dashed red\ncurve in the top panel of Figure~\\ref{fig:coldgasfrac}. For small\nmass haloes, the HI mass is nearly equal to the stellar mass. But the\nHI mass decreases more rapidly than the stellar mass as a function of\nhalo mass, and for high mass haloes the HI mass is down to almost a tenth of\nthe stellar mass. The characteristic mass for the HIHM relation is also slightly smaller ($10^{11.7} M_{\\odot}$) than that for the SHM relation ($\\sim 10^{12} M_{\\odot}$).\nThe HIHM relation is shown as the ratio of the HI and halo masses in\nthe lower panel of Figure~\\ref{fig:coldgasfrac}. The peak HI mass\nfraction is about 1\\%, and this reduces down to 0.01\\% at both high and low\nmasses. The peak HI mass fraction is in good agreement with the\nabundance matching estimates of \\citet{puebla2011, evoli2011,\n baldry2008} and the direct estimate of \\citet{papastergis2012} for\nthe baryonic mass fraction. It had been found that the clustering of\nthe HI selected galaxies in ALFALFA \\citep{papastergis2013} was also\nwell-matched by abundance matching at $z \\sim 0$, and the cold gas\nfraction showed a maximum at halo masses close to $10^{11.1 - 11.3}\nM_{\\odot}$, which was lower than the corresponding peak for the stellar mass\nfraction ($10^{11.8} M_{\\odot}$).\n\nWe parameterise the HIHM relation by a function of the form introduced\nfor the SHM relation by \\citet{moster2013},\n\\begin{equation}\nM_{\\rm HI} = 2 N_{10} M \\left[\\left(\\frac{M}{M_{10}}\\right)^{-b_{10}} + \\left(\\frac{M}{M_{10}}\\right)^{y_{10}}\\right]^{-1}.\n\\label{moster12}\n\\end{equation}\nWe fit the HIHM relation by the function of this form using non-linear\nleast squares. 
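A useful property of this parametrisation is that the peak of $M_{\rm HI}/M$ has a closed form: the ratio is maximised at $M=(b_{10}/y_{10})^{1/(b_{10}+y_{10})}\,M_{10}$, where it equals approximately $N_{10}$. A short numerical check, using the best-fit values of the combined fit reported in the text:

```python
import numpy as np

# Best-fit parameters of the combined HIHM fit (values reported in the text)
M10, N10, b10, y10 = 4.58e11, 9.89e-3, 0.90, 0.74

def hi_fraction(M):
    # M_HI / M for the double power-law parametrisation
    m = M / M10
    return 2.0 * N10 / (m ** (-b10) + m ** y10)

# Closed-form location of the maximum of M_HI / M:
# d/dm [m^-b + m^y] = 0  =>  m = (b/y)^(1/(b+y))
M_peak = (b10 / y10) ** (1.0 / (b10 + y10)) * M10

print(np.log10(M_peak))      # ~ 11.7: the characteristic mass
print(hi_fraction(M_peak))   # ~ 0.01: peak HI fraction of about one per cent

# brute-force cross-check on a logarithmic grid
grid = np.logspace(10.0, 14.0, 40001)
M_peak_grid = grid[np.argmax(hi_fraction(grid))]
```

This is consistent with the peak HI mass fraction of about 1 per cent at a characteristic mass of $10^{11.7}\,{\rm M}_{\odot}$ noted above.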
The best-fitting values of the free parameters are\n$M_{10}=(4.58 \\pm 0.19)\\times 10^{11}$~M$_\\odot$, $N_{10}=(9.89\\pm\n4.89)\\times 10^{-3}$, $b_{10}=0.90 \\pm 0.39$ and $y_{10}=0.74 \\pm\n0.03$.\nThe errors here are estimated by propagating the uncertainties in\nFigure~\\ref{fig:mhiandm}. The best-fit HIHM relations are shown in\nFigure~\\ref{fig:coldgasfrac} (black curves), with the corresponding\nerror indicated by the shaded region.\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/mhimstar.pdf} \n \\end{center}\n \\caption{The HI-mass stellar-mass relation obtained by abundance\n matching combined with the SHM relation determined by\n \\citet{moster2013}, are shown by the solid curves. {The 68\\%\n scatter in the relation is indicated by the blue band.} The\n green band shows the region around the median in which 68\\% of the\n galaxies in the EAGLE reference simulation lie on this plane\n \\citep{eagle2016}. Also shown are the data from individual objects\n detected in the GASS and COLD GASS surveys, and the nearby\n galaxies in HERACLES and THINGS \\citep{leroy2008}.}\n \\label{fig:mhimstar}\n\\end{figure} \n\n\\subsection{The HI-mass stellar-mass relation}\n\nWe can combine our derived HIHM relation with known SHM relations to\nunderstand the relationship between the HI mass and stellar mass in\ndark matter haloes. \\citet{moster2013} use a multi-epoch abundance\nmatching method with observed stellar mass functions (SMFs) to\ndescribe the evolution of the SHM relation across redshifts. At each\nredshift, they parameterise the SHM relation using the functional form\nin Equation~(\\ref{moster12}). At low redshifts, the SMFs of\n\\citet{li2009} based on the Sloan Digital Sky Survey (SDSS) DR7\n\\citep{york2000, abazajian2009} are used, along with the observations\nof \\citet{baldry2008}. 
At higher redshifts, the SMFs by\n\\citet{gonzalez2008} are used for massive galaxies, and those by\n\\citet{santini2012} for the low mass galaxies. From the results of\nabundance matching, the mean SHM relation is obtained, which is then\nused to populate haloes in the Millennium\n\\citep[MS-I;][]{millenium2005} and the Millennium - II\n\\citep[MS-II;][]{boylan2009} simulations with galaxies. From this, the\nmodel stellar mass functions are derived and directly compared to\nobservations to constrain the free parameters in the SHM relation. The\nresulting mean stellar mass fraction at $z \\sim 0$ is shown by the\ndashed line in Figure~\\ref{fig:coldgasfrac}.\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/mhimstarm.pdf} \n \\end{center}\n \\caption{The HI-mass to stellar-mass ratio as a function of the halo\n mass at $z\\sim 0$. The blue and red curves combine our results\n for HIPASS and ALFALFA data, respectively, with the SHM relation\n from \\citet{moster2013}. The parametrized fit is indicated by the black curve. The shaded region shows the uncertainty in the HI-mass to\n stellar-mass ratio obtained by propagating errors from\n Figure~\\ref{fig:coldgasfrac}.}\n \\label{fig:mhimstarm}\n\\end{figure} \n\nWe use the \\citet{moster2013} results for the SHM relation, coupled to\nour abundance matching results for HIHM to arrive at a HI-mass\nstellar-mass relation. This is shown by the solid red and blue curves\nin Figure~\\ref{fig:mhimstar} for HIPASS and ALFALFA\nrespectively. {The 68\\% scatter in the relation is indicated by\n the blue band.} For comparison, we also show the measurements from\n750 galaxies in the redshift range $0.025 < z < 0.05$ and $M_{*} >\n10^{10}$~M$_{\\odot}$ from the GALEX Arecibo SDSS survey\n\\citep[GASS;][]{catinella2010, catinella2013}, and 366 galaxies from\nthe COLD GASS survey \\citep{saintonge2011, saintonge2011a,\n catinella2012}. 
We also show results from \\citet{leroy2008}, which\nis a compilation of individual galaxies detected in the HERA CO Line\nExtragalactic Survey \\citep[HERACLES;][]{leroy2009} that are part of\nThe HI Nearby Galaxy Survey \\citep[THINGS;][]{walter2008}, which\ncovers HI{} masses in the range $(0.01$--$14) \\times\n10^9$~M$_{\\odot}$. These measurements are consistent with our result,\nalthough the observational data exhibit a somewhat large\nscatter. {We note that the HI-stellar mass relation\n from the ALFALFA data and the THINGS data show some discrepancy at\n low stellar masses (also seen in \\citet{popping2015}, which\n matches the data in \\citet{leroy2008}, but has difficulty matching\n the ALFALFA data mass function at low HI\n masses). However, the main\n aim of the present work is to provide an understanding of the\n HI-mass halo-mass relation, and as such, we do not conjecture on\n the observed discrepancy of the \\citet{leroy2008} results with the\n ALFALFA data.} We also compare our HI-mass stellar-mass relation\nwith that found in the EAGLE hydrodynamical simulations\n\\citep{schaye2015, crain2015}. The EAGLE simulations model the\nformation and evolution of galaxies in the presence of various\nfeedback processes. They also model the HI content of galaxies by\nusing calibrated fitting functions from radiative transfer simulations\nto estimate self-shielding, and also employing empirical relations to\ncorrect for molecular gas formation \\citep{eagle2016}. The green band\nin Figure~\\ref{fig:mhimstar} shows the region around the median on the\nHI-mass stellar-mass diagram occupied by 68\\% of galaxies in the\nreference EAGLE simulation (labelled ``L100N1504'' in\n\\citealt{schaye2015}). Our results are in good agreement with the\nEAGLE predictions, except possibly at the highest stellar masses\n($M_*>10^{10}$~M$_\\odot$) where the HI mass in EAGLE galaxies starts\nto decrease. 
This is likely a reflection of AGN feedback in\nEAGLE, in which massive central black holes heat and expel cold gas\nfrom high-mass galaxies \citep{eagle2016}.\n\nFigure~\ref{fig:mhimstarm} shows the HI-mass to stellar-mass ratio as\na function of the halo mass. The blue and red curves show the results\nfor HIPASS and ALFALFA respectively, and the black curve shows the\nparametrized fit. In each case, we obtain the HI-mass to stellar-mass\nratio by combining our HIHM relation with the SHM relation of\n\citet{moster2013}. The HI-mass to stellar-mass ratio is about 25\%\nin a rather broad range of halo masses from $10^{11}$ to\n$10^{13}$~M$_\odot$. {The ratio decreases to about 10\% at halo masses\nabove this range, and is more uncertain below this range, due to the uncertainty in the data and the fitting (Fig. \ref{fig:coldgasfrac} lower panel) at lower masses.} The shaded regions show the\nuncertainty in the HI-mass to stellar-mass ratio, obtained by\npropagating the errors from Figure~\ref{fig:coldgasfrac}. \n\n\section{HIHM relation at high redshift}\n\label{sec:observations}\n\n\begin{table}\n\centering\n\begin{tabular}{cll}\n\hline\n$z$ & Observable & Source \\\n\hline\n$\sim$ 1 & $\Omega_{\rm HI}b_{\rm HI}$ & \citet{switzer13} \\\n & $f_{\rm HI}$ & \citet{rao06} \\\n & $dN\/dX$ & \citet{rao06} \\\n2.3 & $\Omega_{\rm DLA}$ & \citet{zafar2013} \\\n & $f_{\rm HI}$ & \citet{noterdaeme12} \\\n & $b_{\rm DLA}$ & \citet{fontribera2012} \\\n & $dN\/dX$ & \citet{zafar2013} \\\n > 3 & $dN\/dX$ & \citet{zafar2013} \\\n \hline\n\end{tabular}\n\caption{High-redshift data used in this paper. The measurement of\n $\Omega_{\rm HI}b_{\rm HI}$ comes from HI intensity mapping at\n $z\sim 0.8$ by \citet{switzer13}. \citet{rao06} use measurements of\n absorption systems at median redshifts $z \sim 0.609$ and $z \sim\n 1.219$ to derive the DLA parameters. 
All other data come from\n Lyman-$\alpha$ absorption measurements using high-redshift quasar\n spectra.}\n\label{table:data}\n\end{table}\n\nDue to the intrinsic faintness of the 21~cm line, the direct detection\nof HI from resolved galaxies is difficult at redshifts above $z \sim\n0.1$. At higher redshifts (up to $z \sim 5$), therefore, constraints on the\ndistribution and evolution of HI in galaxies mainly come from high\ncolumn density Lyman-$\alpha$ absorption systems (Damped\nLyman-$\alpha$ Absorbers; DLAs) with column densities\n$N_\mathrm{HI}>10^{20.3}$~cm$^{-2}$ in the spectra of bright\nbackground sources such as quasars. The relevant observables at these\nredshifts are the incidence rate $dN\/dX$ of DLAs, the column density\ndistribution $f_\mathrm{HI}(N_\mathrm{HI},z)$ of DLAs at high column\ndensities, the three-dimensional clustering of DLAs as quantified by\ntheir clustering bias relative to the underlying dark matter, and the\ntotal amount of neutral hydrogen in DLAs \citep{wolfe1986,\n lanzetta1991, gardner1997, prochaska09, rao06, noterdaeme12,\n zafar2013}. A detailed summary of the low- and high-redshift HI\nobservables is provided in \citet{hptrcar2015}. We now extend the HIHM\nrelation obtained at $z=0$ to higher redshifts by using these\nobservables. Throughout the analysis, we use the cosmological\nparameters $h = 0.71$, $\Omega_m = 0.281$, $\Omega_{\Lambda} = 0.719$,\n$\sigma_8 = 0.8$, $n_s = 0.964$. \n\n\subsection{Modelling the HI observables}\n\nTo model the distribution of HI density within individual dark matter\nhaloes, we use the redshift- and mass-dependent modified\nNavarro-Frenk-White (NFW; \citealt{1996ApJ...462..563N}) profile\nintroduced by \citet{barnes2014}:\n\begin{equation}\n \rho_{\rm HI}(r) = \frac{\rho_0 r_s^3}{(r + 0.75 r_s) (r+r_s)^2},\n \label{rhodef}\n\end{equation} \nwhere $r_s$ is the scale radius defined as $r_s=R_v(M)\/c(M,z)$, with\n$R_v(M)$ being the virial radius of the halo. 
The halo concentration\nparameter $c(M,z)$ is approximated by:\n\begin{equation}\n c(M,z) = c_{\rm HI} \left(\frac{M}{10^{11} M_{\odot}} \right)^{-0.109} \left(\frac{4}{1+z} \right).\n\end{equation} \nThe profile in Equation~(\ref{rhodef}) is motivated by {the analytical modelling of} cooling in multiphase halo gas by \cite{maller2004}. In the above\nequation, $c_{\rm HI}$ is a free parameter, the concentration\nparameter for the HI, analogous to the dark matter halo concentration\n$c_0 = 3.4$ \citep{maccio2007}. The value of this parameter can be\nconstrained by fitting to the observations. The normalization \n$\rho_0$ in Equation~(\ref{rhodef}) is determined by matching the integrated profile to\nthe total HI mass:\n\begin{equation}\n \int_0^{R_v(M)} 4 \pi r^2 \rho_{\rm HI}(r) dr = M_{\rm HI} (M).\n\end{equation} \nThus, both the HI-halo mass relation and the radial\ndistribution of HI are required for constraining the HI profile.\n\n\begin{figure}\n \begin{center}\n \includegraphics[width = \columnwidth, scale=0.45]{.\/mostercompare.pdf} \n \end{center}\n \caption{The evolution of the parameters of the HIHM relation\n (Equation~\ref{mosterredshiftevol}). The green curves show our\n best-fit parameter inferences with 68\% confidence intervals shown\n by the orange shaded region. For comparison, the evolution of the\n corresponding quantities for the SHM relation of\n \citet{moster2013} is shown in blue.}\n \label{fig:evolution}\n\end{figure} \n\n\n \begin{figure*}\n \begin{center}\n \includegraphics[width=\textwidth]{.\/columndensity.pdf} \n \end{center}\n \caption{The best-fit column density distribution (red curves) in\n our model at redshifts 0, 1 and 2.3, compared to the\n observations. The blue shaded regions show the 68\% confidence\n limits. 
The model fits the high redshift column density\n distributions quite well but has difficulty in fitting the column\n density distribution at $z=0$, especially at low column\n densities.}\n \label{fig:fhibias0}\n\end{figure*} \n\nThe DLA-based quantities at different redshifts can now be computed by\ndefining the column density of a halo at impact parameter $s$ as\n\citep{barnes2014, hptrcar2016}:\n\begin{equation}\n N_{\rm HI}(s) = \frac{2}{m_H} \int_0^{\sqrt{R_v(M)^2 - s^2}}dl \ \rho_{\rm HI}\left(\sqrt{s^2 + l^2}\right)\n \label{coldenss}\n\end{equation} \nwhere $m_H$ is the hydrogen atom mass and $R_v(M)$ is the virial radius\nassociated with a dark matter halo of mass $M$. We define the DLA\ncross-section of the halo as $\sigma_{\rm DLA} = \pi s_*^2$, where\n$s_*$ is defined such that $N_{\rm HI}(s_*) = 10^{20.3}$ cm$^{-2}$. The\nclustering bias of DLAs, $b_{\rm DLA}$, can then be written as \n\begin{equation}\n b_{\rm DLA} (z) = \frac{\int_{0}^{\infty} dM n (M,z) b(M,z) \sigma_{\rm DLA} (M,z)}{\int_{0}^{\infty} dM n (M,z) \sigma_{\rm DLA} (M,z)},\n\end{equation}\nwhere $n(M,z)$ is the comoving halo mass function and $b(M,z)$ is the\nclustering bias factor of haloes \citep{scoccimarro2001}. The DLA\nincidence $dN\/dX$ can be calculated as\n\begin{equation}\n \frac{dN}{dX} = \frac{c}{H_0} \int_0^{\infty} n(M,z) \sigma_{\rm DLA}(M,z) \ dM,\n \label{dndxdef}\n\end{equation} \nand the column density distribution $f_{\rm HI}(N_{\rm HI}, z)$ is given by\n\begin{multline}\n f(N_{\rm HI}, z) \equiv \frac{d^2 n}{dX d N_{\rm HI}} \\\n = \frac{c}{H_0} \int_0^{\infty} n(M,z) \left|\frac{d \sigma}{d N_{\rm HI}} (M,z) \right| \ dM\n \label{coldensdef}\n\end{multline}\nwhere\n\begin{equation}\n \frac{d\sigma}{dN_{\rm HI}}=2\pi s\frac{ds}{dN_{\rm HI}},\n\end{equation}\nwith $N_{\rm HI}(s)$ defined by Equation~(\ref{coldenss}). 
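The profile normalization and the DLA cross-section defined above lend themselves to a direct numerical sketch. In the following, the halo HI mass, virial radius, and scale radius are arbitrary toy values chosen only for illustration (in the full model, $r_s$ would follow from $r_s = R_v(M)/c(M,z)$ rather than being fixed by hand):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

M_SUN = 1.989e33      # solar mass [g]
KPC = 3.086e21        # kiloparsec [cm]
M_H = 1.673e-24       # hydrogen atom mass [g]

def rho_hi(r, rho0, rs):
    """Modified NFW profile: rho0 rs^3 / ((r + 0.75 rs)(r + rs)^2), Eq. (rhodef)."""
    return rho0 * rs**3 / ((r + 0.75 * rs) * (r + rs) ** 2)

def solve_rho0(m_hi, rv, rs):
    """Fix rho0 by requiring int_0^Rv 4 pi r^2 rho_HI(r) dr = M_HI."""
    shape, _ = quad(lambda r: 4.0 * np.pi * r**2 * rho_hi(r, 1.0, rs), 0.0, rv)
    return m_hi / shape

def n_hi(s, rho0, rs, rv):
    """Column density [cm^-2] along a sightline at impact parameter s, Eq. (coldenss)."""
    lmax = np.sqrt(max(rv**2 - s**2, 0.0))
    integral, _ = quad(lambda l: rho_hi(np.hypot(s, l), rho0, rs), 0.0, lmax)
    return 2.0 * integral / M_H

# Toy halo (illustrative values only, not fitted quantities)
rv = 100.0 * KPC
rs = rv / 30.0
m_hi_tot = 5.0e9 * M_SUN
rho0 = solve_rho0(m_hi_tot, rv, rs)

# DLA cross-section: sigma_DLA = pi s_*^2 with N_HI(s_*) = 10^20.3 cm^-2
n_thresh = 10 ** 20.3
s_star = brentq(lambda s: n_hi(s, rho0, rs, rv) - n_thresh, 1e-4 * rv, 0.999 * rv)
sigma_dla = np.pi * s_star**2
```

Since $N_{\rm HI}(s)$ decreases monotonically outward, a simple root bracket between the centre and the virial radius suffices to locate $s_*$.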
The\ndensity parameter for DLAs, $\Omega_{\rm DLA}$, is obtained by\nintegrating the column density distribution \n\begin{equation}\n \Omega_{\rm DLA}(z) = \frac{m_H H_0}{c \rho_{c,0}} \int_{10^{20.3}}^{\infty} f_{\rm HI}(N_{\rm HI}, z) \ N_{\rm HI} \ d N_{\rm HI},\n\end{equation}\nwhere $\rho_{c,0}$ is the present-day critical density.\n\nAt high redshifts, we also use the measurement of $\Omega_{\rm HI}b_{\rm\n HI}$ from HI intensity mapping at $z\sim 0.8$ by \citet{switzer13}.\nTo calculate this quantity in our model, we compute the HI density parameter as\n\begin{equation}\n \Omega_{\rm HI} (z) = \frac{1}{\rho_{c,0}} \int_0^{\infty} n(M, z) M_{\rm HI} (M,z) dM \ .\n \label{omegaHI}\n\end{equation} \nThe bias of HI is given by\n\begin{equation}\nb_{\rm HI} (z) = \frac{\int_{0}^{\infty} dM n(M,z) b (M,z) M_{\rm HI} (M,z)}{\int_{0}^{\infty} dM n(M,z) M_{\rm HI} (M,z)}\n\label{biasHI}\n\end{equation}\nwhere $b(M,z)$ is the dark matter halo bias. We fit the HI density\nprofiles of haloes at $z=0$ by using the column\ndensity distribution at $z=0$ for $N_\mathrm{HI}>10^{20.3}$~cm$^{-2}$,\nderived from the WHISP data by \citet{zwaan2005a}. \n\n\n\n\subsection{Extending the HIHM relation to high redshifts}\n\label{sec:extending}\nWe can now extend the HIHM relation developed in\nSection~\ref{sec:abmatch} to higher redshifts. We do this by\nparameterising the HIHM relation evolution in a manner similar to the\nparameterisation of the SHM relation evolution by \citet{moster2013}.\nWe write the HIHM relation at higher redshifts as\n\begin{equation}\nM_{\rm HI} = 2 N_{1} M \left[\left(\frac{M}{M_{1}}\right)^{-b_{1}} + \left(\frac{M}{M_{1}}\right)^{y_{1}}\right]^{-1},\n\label{mosterredshiftevol}\n\end{equation}\nwhich has the same form as Equation~(\ref{moster12}). 
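To make the structure of the mass integrals in Equations~(\ref{omegaHI}) and~(\ref{biasHI}) concrete, the sketch below evaluates them by quadrature in log-mass. The mass function and halo bias here are crude toy stand-ins (not the calibrated forms used in the analysis); only the HIHM parameters use the $z=0$ best-fit values quoted earlier:

```python
import numpy as np
from scipy.integrate import quad

RHO_C0 = 2.775e11 * 0.71**2   # present-day critical density [Msun / Mpc^3] for h = 0.71

def n_of_m(m):
    """Toy Schechter-like halo mass function dn/dM [Mpc^-3 Msun^-1] (illustrative only)."""
    m_star = 1.0e13
    return 1.0e-4 / m_star * (m / m_star) ** (-1.9) * np.exp(-m / m_star)

def b_of_m(m):
    """Toy halo bias, mildly increasing with mass (illustrative only)."""
    return 0.7 + 0.4 * np.log10(m / 1.0e10)

def m_hi_of_m(m, m1=4.58e11, n1=9.89e-3, b1=0.90, y1=0.74):
    """Double power-law HIHM relation with the quoted z = 0 best-fit parameters."""
    x = m / m1
    return 2.0 * n1 * m / (x ** (-b1) + x ** y1)

def log_mass_integral(f, m_lo=1.0e9, m_hi=1.0e16):
    """int f(M) dM evaluated as int f(e^u) e^u du for numerical stability."""
    val, _ = quad(lambda u: f(np.exp(u)) * np.exp(u), np.log(m_lo), np.log(m_hi))
    return val

# Omega_HI: HI mass density in units of the critical density
omega_hi = log_mass_integral(lambda m: n_of_m(m) * m_hi_of_m(m)) / RHO_C0
# b_HI: HI-mass-weighted halo bias
b_hi = (log_mass_integral(lambda m: n_of_m(m) * b_of_m(m) * m_hi_of_m(m))
        / log_mass_integral(lambda m: n_of_m(m) * m_hi_of_m(m)))
```

The toy inputs only illustrate the structure of the integrals; with realistic ingredients the same weighted averages yield the model predictions compared against the data in the figures.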
The parameters\nin Equation~(\ref{mosterredshiftevol}) are written as:\n\begin{align}\n& \log_{10} M_{1} = \log_{10} M_{10} + \frac{z}{z + 1} M_{11}, \nonumber \\\n& N_{1} = N_{10} + \frac{z}{z + 1} N_{11}, \nonumber \\ \n& b_{1} = b_{10} + \frac{z}{z + 1} b_{11},\ \mathrm{and}\nonumber \\\n& y_{1} = y_{10} + \frac{z}{z + 1} y_{11}. \n\label{eq:evol}\n\end{align}\n\nThe parameters $M_{10}$, $N_{10}$, $b_{10}$ and $y_{10}$ are defined\nin Equation~(\ref{moster12}) for $z=0$. The four additional\nparameters, $M_{11}$, $N_{11}$, $b_{11}$ and $y_{11}$, introduced in\nEquation~(\ref{eq:evol}) govern the evolution of the HIHM at high\nredshift. These four parameters together with the HI density profile\nparameter $c_{\rm HI}$ are to be constrained from the high redshift\nobservations. This is done by using the data available from $z=0$ to\n$5$ as summarised in Table~\ref{table:data}. We use the measurements of the incidence rate $dN\/dX$ of DLAs, the column\ndensity distribution $f_\mathrm{HI}(N_\mathrm{HI},z)$ of DLAs at high\ncolumn densities, the three-dimensional clustering of DLAs as\nquantified by their clustering bias relative to the dark matter,\nand the total amount of neutral hydrogen in DLAs \citep{wolfe1986,\n lanzetta1991, gardner1997, prochaska09, rao06, noterdaeme12,\n zafar2013}, as well as the measurements of the HI column density\ndistribution and clustering from radio data at $z<1$ \citep{zwaan2005a,\n switzer13}.\n \n\n\begin{figure}\n \begin{center}\n \hskip-0.2in \includegraphics[width = \columnwidth, scale=0.6]{.\/omegabiasdndx.pdf} \n \end{center}\n\caption{Our model predictions for the density parameter, clustering\n bias, and DLA incidence rate (red, with 68\% confidence intervals\n indicated by the error bars) compared to the observations. Note\n that at redshift $z \sim 1$, \citet{switzer13} constrain the product\n $\Omega_{\rm HI} b_{\rm HI}$. 
Shown here is the observed\n $\Omega_{\rm HI} b_{\rm HI}$ divided by the model value of $b_{\rm\n HI}$ (top panel) and $\Omega_{\rm HI}$ (second panel). The model\n successfully matches these observations, including the bias at high\n redshifts.}\n\label{fig:panels123}\n\end{figure} \n\nThe best-fitting values for the five parameters $M_{11}, N_{11},\nb_{11}$, $y_{11}$ and $c_{\rm HI}$, and their errors are now estimated\nby a Bayesian Markov Chain Monte Carlo (MCMC) analysis using the\n\textsc{CosmoHammer} package \citep{akeret2013}. The likelihood,\n\begin{equation}\n\mathcal{L} = \exp\left(-\frac{\chi^2}{2}\right) \n\end{equation}\nis maximized with respect to the five free parameters, with:\n\begin{equation}\n\chi^2 = \sum_i\frac{(f_{\rm i} - f_{\rm obs,i})^2}{\sigma^2_{\rm obs,i}}\n\end{equation}\nwhere the $f_{\rm i}$ are the model predictions, $f_{\rm obs,i}$ are\nthe observational data and $\sigma^2_{\rm obs,i}$ are the squares of\nthe associated uncertainties (here assumed independent).\n\nThe best-fitting parameters and their 68\% errors are\n$M_{11}=1.56^{+0.53}_{-2.70}$,\n$N_{11}=0.009^{+0.06}_{-0.001}$, $b_{11}=-1.08^{+1.52}_{-0.08}$,\n$y_{11}=4.07^{+0.39}_{-2.49}$, and $c_{\rm\n HI}=133.66^{+81.39}_{-56.23}$. The inferred evolution of the four\nparameters of the HIHM relation in Equation~(\ref{mosterredshiftevol})\nis shown in Figure~\ref{fig:evolution} together with the 68\%\nerrors. For comparison, the evolution of the corresponding parameters\nin the SHM relation parametrization of \citet{moster2013} is also\nshown. {The model allows for a wide range of parameters in the HIHM relation at high redshifts.} The increase in the {best-fitting} characteristic mass follows the\nincrease in the characteristic halo mass of the SHM relation. The\nevolution of the high mass slope $y_1$ is much more rapid for the HIHM\nrelation than the SHM relation. 
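A compact way to see what these best-fitting values imply is to evaluate the evolving HIHM relation directly. The sketch below implements Equations~(\ref{mosterredshiftevol}) and~(\ref{eq:evol}) using only the quoted central values (the sizeable asymmetric error bars are ignored here):

```python
import numpy as np

# z = 0 anchors from the abundance matching fit
LOG10_M10, N10, B10, Y10 = np.log10(4.58e11), 9.89e-3, 0.90, 0.74
# central values of the evolution parameters from the MCMC fit
M11, N11, B11, Y11 = 1.56, 0.009, -1.08, 4.07

def hihm_params(z):
    """Redshift-dependent HIHM parameters (M_1, N_1, b_1, y_1), Eq. (eq:evol)."""
    nu = z / (z + 1.0)
    return (10.0 ** (LOG10_M10 + nu * M11),
            N10 + nu * N11,
            B10 + nu * B11,
            Y10 + nu * Y11)

def m_hi(m, z):
    """HI mass [Msun] of a halo of mass m at redshift z (double power law)."""
    m1, n1, b1, y1 = hihm_params(z)
    x = m / m1
    return 2.0 * n1 * m / (x ** (-b1) + x ** y1)

# M11 > 0, so the characteristic mass M_1 grows with redshift, while the
# high-mass slope y_1 steepens rapidly, as discussed in the text.
m1_z0, *_ = hihm_params(0.0)
m1_z2, *_ = hihm_params(2.0)
```

Feeding this $M_{\rm HI}(M,z)$ into the mass integrals of the previous subsection then yields the model predictions for $\Omega_{\rm HI}(z)$ and $b_{\rm HI}(z)$.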
As we will see below, the high value\nof the clustering bias factor for DLAs at high redshifts forces the\nincrease in the characteristic halo mass of the HIHM relation but the\nmore gradual increase observed in the DLA incidence rate prevents us\nfrom putting too much HI in high mass halos, which constrains the high\nmass slope to very steep values. \n\nFigure~\ref{fig:fhibias0} shows the column density distribution\nderived from our model at $z\sim 0, 1,$ and $2.3$ together with the\nassociated 68\% statistical error. \n\n{At $z \sim 0$, only the concentration parameter of the profile is used to obtain the column density distribution, since the HIHM relation has been directly fixed by the results of abundance matching. The concentration parameter is assumed to be equal to that obtained from the fit to the higher-redshift data, which is done using the analysis outlined in Sec. \ref{sec:extending}.} The relation fits the available\ndata reasonably well, but leads to an underprediction of the observed\ncolumn density distribution at $z \sim 0$ at low column densities\n($N_\mathrm{HI}<10^{21.4}$~cm$^{-2}$).\footnote{{The two datasets for the column density distribution at $z \sim 0$ (which indicate a systematic offset) are shown only for comparison, and not directly fitted. The parameters involved in the HIHM are obtained from the abundance matching fits, and the concentration parameter is obtained from the results of the higher-redshift column density fitting. \n The steep\nslope of the HIHM relation for $z=0$ leads to a lower column\ndensity distribution than observed, suggesting that {the\n altered NFW profile may not fully describe} the HI density profiles\nof halos at $z=0$, or that there {may be} a possible tension between\nthe HI mass function and the column density distribution at $z=0$. We\nexplore this issue in further detail in future work.}} Figure~\ref{fig:panels123}\ncompares other quantities in our model to their observed values. 
The\nincidence rate of DLAs is fit very well by the model throughout the\nredshift range considered here. {The measurements of the density\nparameters of HI and DLAs, and the clustering bias of $z \sim 2.3$ DLAs are also fit well.} The fit to the measured HI bias at $z=0$ is also good,\nalthough it is somewhat poor at $z=1$.\n\n\begin{figure*}\n \begin{center}\n \includegraphics[scale=0.6, width = \textwidth]{.\/allz.pdf} \n \end{center}\n \caption{\textit{Left panel}: The HIHM relation inferred at redshifts $z=0$,\n $1$, $2$, $3$ and $4$ from the present work. \textit{Right panels}: The\n HIHM relation in the present work compared to the results\n of other approaches in the literature at redshifts $z=0$, $1$, $2$\n and $3$.}\n \label{fig:massevolall}\n\end{figure*} \n \n\subsection{Comparison to other models of HI at high redshift}\n\n\nFigure~\ref{fig:massevolall} shows the inferred best-fitting HIHM at $z=0$, $1$,\n$2$, $3$ and $4$ in the present model, together with their associated uncertainties. In each case, the black curve shows the best-fit HIHM relation\n and the grey band shows the 68\% scatter around it. The figure also presents a\ncomparison of the HIHM obtained from hydrodynamical simulations and\nother approaches in the literature at $z=0$, $1$, $2$ and $3$. {{These are briefly described below:\n\begin{enumerate}\n\n\item At $z=0$, the model that comes closest to the present work is the\nnon-parametric HIHM relation of \citet{marin2010}, although their\nlow-mass slope is shallower. \n\n\item The hydrodynamical simulations of\n\citet{dave2013} produce an HIHM relation with\nhigh-mass and low-mass slopes very similar to those of the present HIHM relation. 
The high characteristic mass of\nthe {average best-fitting} HIHM relation in the present work is a natural consequence of\nmatching the abundance of haloes with HI-selected galaxies, under the\nassumption that the HI mass of dark matter haloes scales monotonically\nwith their virial mass.\n\n\item \citet{bagla2010} used a set of analytical\nprescriptions to populate HI{} in dark matter haloes. In their\nsimplest model, HI{} was assigned to dark matter haloes with a\nconstant fraction $f$ by mass, within a mass range. The maximum and\nminimum masses of haloes that host HI{} were assumed to be\nredshift-dependent. It was also assumed that haloes with virial\nvelocities greater than 200 km\/s or less than 30 km\/s do not host\nany HI. \n\n\item \citet{gong2011} provide nonlinear analytical forms of the\nHIHM relation at $z=1$, $2$ and $3$, derived from the results of the\nsimulations of \citet{obreschkow2009a}. These predict a\n slightly different form for the HIHM relation. \n \n \item The model of\n\citet{barnes2014} uses an HIHM relation that reproduces the observed\nbias of DLA systems at $z \sim 2.3$, and constrains stellar feedback\nin shallow potential wells. \n\n\item \citet{2016arXiv160701021P} used a {statistical data-driven} approach to\nderive the best-fitting HIHM relation and radial distribution profile\n$\rho_{\rm HI}(r)$ for $z=0$--$4$, from a joint analysis combining the\ndata from the radio observations at low redshifts and the Damped\nLyman-Alpha (DLA) system observables at high redshifts, along the\nlines of the present work. This approach also produces results\n consistent with the present work, although the present best-fit HIHM\n relation at high redshifts may prefer a higher characteristic halo mass.\n \n \end{enumerate}\n \n \n It can be seen that all these\n models are consistent with each other and with the data at the 68\% confidence level. 
Tighter constraints on the HIHM relation at high redshifts may be achieved with the availability of better-quality data from upcoming radio telescopes.\n\n}}\n\n\n\n\section{Conclusions}\n\label{sec:conc}\n\nIn this paper, we have explored the evolution of the neutral hydrogen\ncontent of galaxies in the last 12 Gyr (redshifts $z=0$--$4$). At\nredshift $z=0$, this work follows the approach of abundance matching,\nwhich has been widely used for the stellar mass content of galaxies to\nmodel galaxy luminosity functions \citep{vale2004,\n vale2006, conroy2006, shankar2006,\n guo2010, behroozi2010, moster2010, moster2013}.\nA parameterised functional form for a monotonic relationship between\nthe HI{} and halo mass is assumed to obtain the HI-Halo Mass (HIHM)\nrelation. The values of the parameters that best fit the observed\nHI{} mass function from radio data are then obtained. This approach\nof modelling the HIHM relation at $z=0$ from the radio data at low\nredshifts has been followed previously by \citet{papastergis2013}.\nOur abundance-matched HIHM agrees with that derived by these authors.\n\nWe further explore how well the abundance matching approach at $z = 0$\ncan be constrained by fitting to the high redshift data. We extend the\nlow redshift determination of the HIHM relation by postulating that\nthe evolution of the HIHM relation is similar to that of the\nstellar-to-halo-mass (SHM) relation. We parameterize this evolution\nanalogously to the evolution of the SHM relation by\n\citet{moster2013}. {The physical motivation for the\n parametrization is that the HI-follows-stars functional form works\n well at low redshifts, which is in turn a consequence of the fact\n that the underlying mass\/luminosity functions can both be described\n by the Schechter form.} Observational measurements of the HI{} mass\nfunction are not {yet} available at these redshifts. 
Hence, we use\nmeasurements of the HI{} column density distribution function and the\nHI{} clustering from UV\/optical observations of quasar absorption\nspectra. We assume that high column density systems (DLAs;\n$N_\mathrm{HI}>10^{20.3}$~cm$^{-2}$) are high-redshift\nanalogs of HI{} in galaxies detected in radio surveys at low\nredshifts \citep{zwaan2005a}.\n\nOur procedure combines low- and high-redshift measurements\nof the HI{} content of galaxies to obtain the evolution of the HIHM\nrelation from $z=0$ to $2.3$ with the associated uncertainty. This\ntechnique is complementary to the forward modelling approach which\naims to characterize HI{} using a halo model framework similar to\nthat of the underlying dark matter \citep{2016arXiv160701021P}.\n{However, the present work represents a first attempt to characterize\n the HIHM relation empirically, directly from the data. Due to the\n sparse nature of the high-redshift data at present, there is\n considerable scatter in the high-redshift HIHM relation. {As\n a result, other apparently dissimilar models from the\n literature are also consistent with the data and the allowed range of the present work. The scatter in the HIHM relation at higher redshifts can be reduced with tighter constraints on the HI mass functions from upcoming and future radio surveys.}} \n\nOur results provide a useful benchmark to calibrate the HI{} physics\nin hydrodynamical simulations, especially at low redshifts where\ncorrect treatment of star formation and feedback as well as cooling\nand formation of molecular hydrogen are critical. 
They also provide an\nestimate of the uncertainty in the HIHM relation coming from the\nhigh-redshift data, and motivate further work towards possibly tighter\nconstraints on the HIHM relation.\n\n\n\section*{Acknowledgements}\n\nWe thank Alireza Rahmati, Alexandre Refregier and Sergey Koposov for\nuseful discussions, Daniel Lenz for pointing out a minor typo and Robert Crain for kindly providing the data\nfrom the EAGLE simulations. This work has made use of the VizieR\ncatalogue access tool, CDS, Strasbourg, France. The original\ndescription of the VizieR service was published in A\&AS 143, 23.\nHP's research is supported by the Tomalla Foundation. GK gratefully\nacknowledges support from the ERC Advanced Grant 320596 `The\nEmergence of Structure During the Epoch of Reionization'.\n\n\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nConvolutional neural network (CNN) architectures exhibit state-of-the-art performance on a variety of learning tasks dealing with 1D, 2D, and 3D grid-structured data such as acoustic signals, images, and videos, in which\nconvolution serves as a feature extractor~\\cite{lecun2015deep}. 
However, the (usual) convolution operation is not applicable when applying CNN to data that is supported on an arbitrary graph rather than on a regular grid structure, since the number and topology of neighbors of each vertex on the graph vary, and it is difficult to design a fixed-size filter scanning over the graph-structured data for feature extraction.\n\nRecently, there has been an increasing interest in graph CNNs~\\cite{bruna2013spectral, defferrard2016convolutional, thomas2017semi,monti2017geometric, levie2017cayleynets}, attempting to generalize deep learning methods to graph-structured data, specifically focusing on the design of graph convolution.\nIn this paper, we propose the topology adaptive graph convolutional network (TAGCN), a unified convolutional neural network to learn nonlinear representations for the graph-structured data. It slides a set of fixed-size learnable filters on the graph simultaneously, and the output is the weighted sum of these filters' outputs, which extract both vertex features and the strength of correlation between vertices. Each filter is adaptive to the topology of the local region on the graph where it is applied. TAGCN unifies filtering in both the spectrum and vertex domains, and applies to both directed and undirected graphs.\n\n\nIn general, the existing graph CNNs can be grouped into two types: spectral domain techniques and vertex domain techniques. In \\citet{bruna2013spectral}, CNNs have been generalized to graph-structured data, where convolution is achieved by a pointwise product in the spectrum domain according to the convolution theorem.\nLater, \\citet{defferrard2016convolutional} and \\citet{levie2017cayleynets} proposed spectrum filtering based methods that utilize Chebyshev polynomials and Cayley polynomials, respectively. 
\n\\citet{thomas2017semi} simplified this spectrum method and obtained a filter in the vertex domain, which achieves state-of-the-art performance.\nOther researchers~\\citep{diffusion, monti2017geometric} worked on designing feature propagation models in the vertex domain for graph CNNs.\n\\citet{yang2016revisiting, dai2016discriminative, grover2016node2vec, du2016convergence} study transforming graph-structured data into embedding vectors for learning problems.\nMore recently, \\citet{Bengio18} proposed graph attention networks leveraging masked self-attentional layers to address the approximations made by existing graph convolutional networks.\nNevertheless, it still remains open how to extend CNNs from grid-structured data to arbitrary graph-structured data with local feature extraction capability.\n\nWe define rigorously the graph convolution operation on the vertex domain as multiplication by polynomials of the graph adjacency matrix, which is consistent with the notion of convolution in graph signal processing.\nIn graph signal processing \\cite{sandryhaila2013discrete}, polynomials of the adjacency matrix are graph filters, extending to graph based data from the usual concept of filters in traditional time or image based signal processing. Thus, comparing ours with existing work on graph CNNs, our paper provides a solid theoretical foundation for our proposed convolution step instead of an ad-hoc approach to convolution in CNNs for graph structured data. 
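As a concrete illustration of this definition, the following sketch builds a polynomial filter of a normalized adjacency matrix and checks the shift-invariance property $G(Ax) = A(Gx)$; the graph, edge weights, signal, and filter coefficients are arbitrary choices for demonstration only:

```python
import numpy as np

# Toy weighted, directed graph on 5 vertices (arbitrary illustrative weights)
A_bar = np.array([
    [0.0, 1.0, 0.0, 0.0, 0.5],
    [0.2, 0.0, 1.0, 0.0, 0.0],
    [0.0, 0.3, 0.0, 1.0, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.7],
    [0.0, 0.0, 0.4, 1.0, 0.0],
])
d = A_bar.sum(axis=1)                                  # no isolated vertices, so d > 0
A = np.diag(d ** -0.5) @ A_bar @ np.diag(d ** -0.5)    # normalized adjacency (graph shift)

def graph_filter(coeffs, A):
    """Polynomial graph filter G = sum_k g_k A^k."""
    G = np.zeros_like(A)
    for k, g in enumerate(coeffs):
        G += g * np.linalg.matrix_power(A, k)
    return G

x = np.array([1.0, -2.0, 0.5, 3.0, 0.0])   # a graph signal: one scalar per vertex
G = graph_filter([0.5, 1.0, 0.25], A)      # K = 2: g0 I + g1 A + g2 A^2
y = G @ x                                  # graph convolution of x by G

# Shift invariance: a polynomial in A commutes with the graph shift A
assert np.allclose(A @ (G @ x), G @ (A @ x))
```

Because every term of the filter is a power of the shift matrix itself, commutation with the shift holds by construction, which is exactly the shift-invariance property invoked below.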
\n\n{Further, our method avoids computing the spectrum of the graph Laplacian as in \\citet{bruna2013spectral}, or approximating the spectrum using high degree Chebyshev polynomials of the graph Laplacian matrix (in \\citet{defferrard2016convolutional}, it is suggested that one needs a $25^{\\textrm{th}}$ degree Chebyshev polynomial to provide a good approximation to the graph Laplacian spectrum) or using high degree Cayley polynomials of the graph Laplacian matrix (in \\citet{levie2017cayleynets}, $12^{\\textrm{th}}$ degree Cayley polynomials are needed). We also clarify that the GCN method in \\citet{thomas2017semi} is a first order approximation of the Chebyshev polynomials approximation in \\citet{defferrard2016convolutional}, which is very different from our method. Our method has a much lower computational complexity than the spectrum based methods, since our method only uses polynomials of the adjacency matrix with maximum degree $2$ as shown in our experiments. Finally, the method that we propose exhibits better performance than existing methods as no approximation is required.}\nOur contributions are summarized as follows:\n\\begin{itemize}\n\t\\item\n\tThe proposed TAGCN explores a general $K$-localized filter for graph convolution in the vertex domain to extract local features on a set of size-$1$ up to size-$K$ receptive fields.\n\tThe topologies of these filters are adaptive to the topology of the graph as they scan the graph to perform convolution.\n\tIt replaces the fixed square filters in traditional CNNs for the grid-structured input data volumes in traditional CNNs.\n\tThus, the convolution that we define in the convolution step for the vertex domain is consistent with convolution in traditional CNNs.\n\t\\item\n\t{We analyze the mechanisms of the graph convolutional layers and prove that if only a size-k filter is used, as the convolutional layers go deeper under certain condition, the output of the last convolutional layer is the projection of 
the output of the first convolutional layer along the eigenvector corresponding to the eigenvalue of the graph adjacency matrix with the largest amplitude.\n\t\tThis linear approximation leads to information loss and classification accuracy degradation.\n\t\tIn contrast, using a set of size-$1$ up to size-$K$ filters (as in our TAGCN) can avoid the linear approximation and increase the representation capability. Therefore, it leads to improved classification accuracy.}\n\t\\item\n\t{TAGCN is consistent with the convolution in graph signal processing.\n\t\tIt applies to both directed and undirected graphs.\n\t\tMoreover, it has a much lower computational complexity compared with recent methods since it only needs polynomials of the adjacency matrix with maximum degree $2$ compared with the $25^{\\textrm{th}}$ and $12^{\\textrm{th}}$ degree Laplacian matrix polynomials in \\citet{defferrard2016convolutional} and \\citet{levie2017cayleynets}. }\n\t\n\t\\item\n\t{As no approximation to the convolution is needed in TAGCN, it achieves better performance compared with existing methods.}\n\tWe contrast TAGCN with recently proposed graph CNNs, including both spectrum filtering methods \\citep{bruna2013spectral,defferrard2016convolutional} and vertex domain propagation methods \\citep{thomas2017semi,monti2017geometric,diffusion},\n\tevaluating their performances on three commonly used graph-structured data sets.\n\tOur experimental tests show that TAGCN consistently achieves superior performance on all these data sets.\n\\end{itemize}\n\n\\section{Convolution on Graph}\nWe use boldface uppercase and lowercase letters to represent matrices and vectors, respectively.\nThe information and their relationship on a graph $\\mathcal{G}$ can be represented by\n$\\mathcal{G} = (\\mathcal V, \\mathcal {E}, \\bar{\\textbf A})$, where $\\mathcal V$ is the set of vertices, $\\mathcal E$ is the set of edges, and $\\bar{\\textbf A}$ is the weighted adjacency matrix of the graph; the graph can 
be weighted or unweighted, directed or undirected.\nWe assume there is no isolated vertex in $\mathcal G$.\nIf $\mathcal{G}$ is a \emph{directed weighted} graph, the weight $\bar{\textbf A}_{n,m}$ is on the directed edge from vertex $m$ to $n$.\nThe entry $\bar{\textbf A}_{n,m}$ reveals the dependency between nodes $n$ and $m$ and can take arbitrary real or complex values.\nThe graph convolution is general and can be adapted to graph CNNs for particular tasks.\nIn this paper, we focus on the vertex semi-supervised learning problem,\nwhere we have access to only a few labeled vertices, and the task is to classify the remaining unlabeled vertices by feeding the output of the last convolutional layer to a fully connected layer.\n\n\n\subsection{Graph Convolutional Layer for TAGCN}\nWithout loss of generality, we demonstrate graph convolution on the $\ell$-th hidden layer. The results apply to the other hidden layers.\nSuppose on the $\ell$-th hidden layer, the input feature map for each vertex of the graph has $C_{\ell}$ features.\nWe collect the $\ell$-th hidden layer\ninput data on all vertices\nfor the $c$-th feature by the vector $\textbf x^{(\ell)}_{c}\in \mathbb R^{N_{\ell}}$, where\n$c = 1, 2,\ldots, C_{\ell}$ and $N_{\ell}$ is the number of vertices\footnote{Graph coarsening could be used and the number of vertices may vary for different layers.}.\nThe components of $\textbf x^{(\ell)}_{c}$ are indexed by vertices of the data graph representation\n$\mathcal G=(\mathcal V, \mathcal {E}, \bar{\textbf A})$\footnote{We use superscript $(\ell)$ to denote data on the $\ell$th layer and superscript $ {\ell}$ to denote the $\ell$-th power of a matrix.}.\nLet $\textbf G^{(\ell)}_{c,f}\in \mathbb R^{N_{\ell}\times N_{\ell}}$ denote the $f$-th graph filter.\nThe graph convolution is the matrix-vector product, i.e., $\textbf G^{(\ell)}_{c,f}\textbf x^{(\ell)}_{c}$.\nThen the $f$-th output feature map followed by a ReLU function 
is given by\n\\begin{equation}\\label{out_f}\n\\textbf y_f^{(\\ell)} = \\sum_{c=1}^{C_{\\ell}}\\textbf G^{(\\ell)}_{c,f}\\textbf x^{(\\ell)}_{c} + b_f\\textbf 1_{N_{\\ell}},\n\\end{equation}\nwhere $ b_f^{(\\ell)}$ is a learnable bias, and\n$\\textbf 1_{N_{\\ell}}$ is the $N_{\\ell}$ dimension vector of all ones.\nWe design $\\textbf G^{(\\ell)}_{c,f}$ such that $\\textbf G^{(\\ell)}_{c,f}\\textbf x^{(\\ell)}_{c}$ is a meaningful convolution on a graph with arbitrary topology.\n\nIn the recent theory on graph signal processing~\\citep{sandryhaila2013discrete}, the \\emph{graph shift} is defined as a local operation that replaces a graph signal at a graph vertex by a linear weighted combination of the values of the graph signal at the neighboring vertices:\n$$\\tilde{\\textbf x}^{(\\ell)}_{c} = \\bar{\\textbf A} \\textbf x^{(\\ell)}_{c}.$$\nThe graph shift $\\bar{\\textbf A}$ extends the time shift in traditional signal processing to graph-structured data.\nFollowing \\citet{sandryhaila2013discrete},\na graph filter $\\textbf G^{(\\ell)}_{c,f}$ is shift-invariant, i.e.,\nthe shift $\\bar{\\textbf A}$ and the filter\n$\\textbf G^{(\\ell)}_{c,f}$ commute,\n$\\bar{\\textbf A}(\\textbf G^{(\\ell)}_{c,f}\\textbf x_c^{(\\ell)}) =\n\\textbf G^{(\\ell)}_{c,f} (\\bar{\\textbf A} \\textbf x_c^{(\\ell)})$, if under appropriate assumption $\\textbf G^{(\\ell)}_{c,f}$ is a polynomial in $\\textbf A$,\n\\begin{equation}\\label{con}\n\\textbf G^{(\\ell)}_{c,f} = \\sum_{k = 0}^{K} g^{(\\ell)}_{c,f,k}\\textbf A^{k}.\n\\end{equation}\nIn (\\ref{con}),\nthe $g^{(\\ell)}_{c,f,k}$ are the graph filter polynomial coefficients; the quantity\n$\\textbf A = \\textbf D^{-\\frac{1}{2}}\\bar{\\textbf A}\\textbf D^{-\\frac{1}{2}}$ is the normalized adjacency matrix\nof the graph, and $\\textbf D = \\text{diag}[\\textbf d]$ with the $i$th component being $\\textbf d(i) = \\sum_j \\textbf A_{i,j}$.\\footnote{There is freedom to normalize $\\textbf A$ in different ways; here it is assumed that 
$\\bar{\\textbf A}_{m,n} $ is nonnegative and the above normalization is well defined.}\nWe adopt the normalized adjacency matrix to guarantee that\nall the eigenvalues of $\\textbf A$ are inside the unit circle, and therefore $\\textbf G^{(\\ell)}_{c,f}$ is computationally stable.\nThe next subsection shows we will adopt $ 1\\times C_{\\ell}, 2\\times C_{\\ell},\\ldots,$ and $ K\\times C_{\\ell}$ filters sliding on the graph-structured data.\nThis fact coincides with GoogLeNet \\citep{googlenet}, in which a set of filters with different sizes are used in each convolutional layer.\nFurther, It is shown in the appendix that the convolution operator defined in (\\ref{con}) is consistent with that in\nclassical signal processing when the graph is in the 1D cyclic form, as shown in Fig.~\\ref{f1}.\n\\begin{figure}[ht]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.4\\columnwidth, height = 1.6cm]\n\t\t{.\/1D.PNG}\n\t\\end{center}\n\t\\caption{Graph topology of a 1-D cyclic graph.}\n\t\\label{f1}\n\\end{figure}\n\n\nFollowing the CNN architecture, an additional nonlinear operation, e.g, rectified linear unit (ReLU) is used after every graph convolution operation:\n$$\\textbf x_{f}^{(\\ell+1)} = \\sigma\\left(\\textbf y_{f}^{(\\ell)}\\right), $$\nwhere $\\sigma(\\cdot)$ denotes the ReLU activation function applied to the vertex values.\n\n\\subsection{Analysis of Graph Convolutional Layers}\n\n{In the following, we analyze the mechanisms of the graph convolutional layers. 
\n\tWe start with a graph filter in the form of a {monomial} $g_{\ell}\textbf A^{k_{\ell}}$ for the $\ell$-th layer and $C_{\ell} = F_{\ell} = 1$ with $\ell =1,\ldots, L$.\n\tIn the following, we show that when the graph convolutional layers go deeper, the output of the last graph convolutional layer is proportional to the projection of the output data of the first convolutional layer along the eigenvector corresponding to the eigenvalue of the graph adjacency matrix with the largest amplitude.\n\t\n\t\begin{theorem}\n\t\tFor any filters $\textbf A^{k_{\ell}}$, with $k_{\ell}\in \{1,2,3,\ldots\}$, \n\t\t$$\lim_{L\to +\infty} \underbrace{\sigma\left( \sigma\cdots \sigma \left(g_2 \textbf A^{k_2} \sigma\left(g_1 \textbf A^{k_1} \textbf x \right)\right)\right)}_{\textrm{$L$ times $\sigma(\cdot)$}}\n\t\t= m\,\langle \textbf y_1^{(1)}, \textbf v_1 \rangle \textbf v_1,\n\t\t$$\n\t\twhere $m=\prod_{\ell=2}^{L}g_{\ell}$ and $\textbf y_1^{(1)} =\sigma\left(g_1 \textbf A^{k_1} \textbf x\right)$.\n\t\end{theorem}\t\n\t\n\t\noindent {\textbf {Proof}} \quad For any input data $\textbf x\in \mathbb R^N$ on the graph with $N$ vertices, the output of the first graph convolutional layer is \n\t$\textbf y_1^{(1)} = \sigma\left(g_1 \textbf A^{k_1} \textbf x\right).$\n\tBy the definition of the ReLU function, each component of $\textbf y_1^{(1)}$ is nonnegative. \n\tThe data fed to the fully connected layer for classification, which is the output of the $L$-th graph convolutional layer, is \n\t$ \sigma\left( \sigma\cdots\left( \sigma\left(g_2\textbf A^{k_2}\textbf y_1^{(1)} \right)\right)\right). 
$\n\tIt can be observed that all the learned $g_i$ with $i\geq 2$ should be positive; otherwise the output would be $\textbf 0$, resulting in an all-zero vector fed to the fully connected layer for classification.\n\tFurther, since all the components of $\textbf A^{k_{\ell}}$ are nonnegative and $\textbf y_1^{(1)}$ is nonnegative, by induction, the input of the ReLU function in each layer is nonnegative. \n\tTherefore, the output of the $L$-th graph convolutional layer can be written equivalently as \n\t\begin{equation}\n\t\begin{split}\n\t&\underbrace{\sigma\left( \sigma\cdots \sigma \left(g_2 \textbf A^{k_2} \sigma\left(g_1 \textbf A^{k_1} \textbf x \right)\right)\right)}_{\textrm{$L$ times $\sigma(\cdot)$}}\\\n\t=& m\n\t\textbf A^{\sum_{\ell = 2}^{L} k_{\ell}}\textbf y_1^{(1)}\\\n\t{\overset{(a)}{=}} & m\n\t\textbf V\textbf J^{\sum_{\ell = 2}^{L} k_{\ell}}\textbf V^{-1} \textbf y_1^{(1)} \\\n\t{\overset{(b)}{=}}&m\n\t\textbf V\textbf J^{\sum_{\ell = 2}^{L} k_{\ell}}\textbf V^{-1}\left( c_1\textbf v_1 + c_2\textbf v_2+\ldots+ c_N\textbf v_N\right)\\\n\t{\overset{(c)}{=}}& m\n\t\textbf V\textbf J^{\sum_{\ell = 2}^{L} k_{\ell}}\left(c_1\textbf e_1+c_2\textbf e_2+\ldots+c_N\textbf e_N\right).\n\t\end{split}\n\t\end{equation}\n\tIn (a), we use the eigendecomposition of $\textbf A$, where $\textbf V=[\textbf v_1, \ldots, \textbf v_N]$ with $\textbf v_i$ the $i$-th eigenvector of $\textbf A$, and $\textbf J$ is a diagonal matrix with diagonal elements the eigenvalues of $\textbf A$.\footnote{When $\textbf A$ is asymmetric and rank deficient, the Jordan decomposition is adopted in place of the eigendecomposition, and then $\textbf J$ is a block diagonal matrix. 
The remaining analysis also applies with the Jordan decomposition.}\n\tEquation (b) is due to the fact that \n\tthe set of eigenvectors $\left\{\textbf v_i\right\}_{i=1}^N$ forms a basis (orthonormal when $\textbf A$ is symmetric), and one can express $\textbf y_1^{(1)}\in \mathbb R^N$ as a linear combination of those vectors, with $c_i = \langle \textbf y_1^{(1)}, \textbf v_i \rangle$.\n\tIn (c), $\left\{\textbf e_i\right\}_{i=1}^{N}$ is the standard basis.\n\t\n\tWithout loss of generality, the graph is assumed to be strongly connected, so the largest eigenvalue is unique. Then, following the definition of $\textbf A$, we have \n\t$\textbf J =\textrm{diag}\left([1, \lambda_2, \ldots, \lambda_N] \right)$ with $|\lambda_k|<1$ for all $k\geq 2$. Then we obtain \n\t\begin{equation}\n\t\begin{split}\n\t&\lim_{L\to +\infty}\textbf V\textbf J^{\sum_{\ell = 2}^{L} k_{\ell}}\left(c_1\textbf e_1+c_2\textbf e_2+\ldots+c_N\textbf e_N\right)\\\n\t&=\n\tc_1\textbf v_1\n\t= \langle \textbf y_1^{(1)}, \textbf v_1 \rangle \textbf v_1.\n\t\end{split}\n\t\end{equation} \QEDA\n\t\n\tNote that when $k_{\ell}=1$ for all $\ell\in \{1,2,\ldots, L\}$, the graph convolutional filter reduces to $g_{\ell}\textbf A$, which is used in \cite{thomas2017semi}.\n\tDue to this linear approximation (projection along the eigenvector corresponding to the largest eigenvalue amplitude), the information loss would degrade the classification accuracy.\n\tHowever, if we choose the graph filter as a set of filters from size-$1$ to size-$K$, it is not a projection anymore, and \n\tthe representation capability of the graph convolutional layers is improved. }\n\n\n\n\subsection{Filter Design for TAGCN Convolutional Layers}\n{{ In this section, we would like to understand the proposed convolution\n\t\tas a feature extraction operator in a traditional CNN rather than as embedding propagation. 
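The collapse described by the theorem can be checked numerically. A minimal sketch under stated assumptions: a small hypothetical symmetric, connected, non-bipartite graph, all $g_\ell = 1$ and $k_\ell = 1$, so that each layer is $\sigma(\textbf A\,\cdot)$:

```python
import numpy as np

# Hypothetical connected, non-bipartite graph (triangle plus a pendant vertex).
A_bar = np.array([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
d = A_bar.sum(axis=1)
A = np.diag(d**-0.5) @ A_bar @ np.diag(d**-0.5)   # normalized adjacency

relu = lambda z: np.maximum(z, 0.0)
x = np.array([0.3, 1.2, 0.5, 0.8])                # arbitrary nonnegative input
y1 = relu(A @ x)                                  # output of the first layer

# Stack many monomial layers sigma(A . ) on top of y1 (all g_l = 1, k_l = 1).
y = y1.copy()
for _ in range(500):
    y = relu(A @ y)

# Projection of y1 onto the top eigenvector v1 (eigenvalue 1 of A).
w, V = np.linalg.eigh(A)
v1 = V[:, np.argmax(w)]
v1 = v1 if v1[0] > 0 else -v1                     # fix sign: v1 is entrywise positive
proj = (y1 @ v1) * v1

assert np.allclose(y, proj, atol=1e-8)            # deep output collapses to the projection
```

Here the assertion uses the fact that, for nonnegative $\textbf A$ and nonnegative input, the ReLU acts as the identity, so the stacked layers reduce to a power iteration that converges to the projection along $\textbf v_1$.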
Taking this point of view helps us to profit from the design knowledge and experience of traditional CNNs and apply it to graph-structured data. Our definition of the weight of a path and of the filter size for graph convolution in this section makes it possible to design a graph CNN architecture similar to GoogLeNet \citep{googlenet}, in which a set of filters with different sizes is used in each convolutional layer. In fact, we found that a combination of size-$1$ and size-$2$ filters gives the best performance in all three data sets studied, which is a polynomial with maximum order $2$. }\n\t\n\t\n\tIn traditional CNNs, a $K\times K\times C_{\ell}$ filter scans over the input grid-structured data for feature extraction.\n\tFor image classification problems, the value $K$ varies for different CNN architectures and tasks to achieve better performance.\n\tFor example, in the VGG-Verydeep-16 CNN model \citep{vgg}, only $3\times 3\times C_{\ell}$ filters are used; in the ImageNet CNN model \citep{imagenet}, $11\times 11\times C_{\ell}$ filters are adopted; and in GoogLeNet \citep{googlenet}, rather than using the same size filter in each convolutional layer, filters of different sizes, for example, $1\times 1\times C_{\ell}$, $3\times 3\times C_{\ell}$ and $5\times 5\times C_{\ell}$ filters, are concatenated in each convolution layer.\n\tSimilarly, we propose a general $K$-localized filter for graph CNNs.\n\t\n\tFor graph-structured data, we cannot\n\tuse a square filter window since the graph topology is no longer a grid.\n\tIn the following, we demonstrate that the convolution operation $ \textbf G^{(\ell)}_{c,f}\textbf x^{(\ell)}_{c}$ with $\textbf G^{(\ell)}_{c,f}$ a polynomial filter\n\t$\textbf G^{(\ell)}_{c,f} = \sum_{k = 0}^{K} g^{(\ell)}_{c,f,k}\textbf A^{k}$ is equivalent to using a set of filters with filter sizes from $1$ up to $K$.\n\tEach $k$-size filter, which is used for local feature extraction on the graph, is\n\t$k$-localized in the 
vertex domain.\n\t\n\tDefine a \emph{path} of length $m$ on a graph $\mathcal G$ as a sequence $v = (v_0,v_1,\ldots,v_m)$ of vertices $v_k \in \mathcal V $ such that\n\teach step of the path $(v_k,v_{k+1})$ corresponds to a (directed) edge of the graph, i.e., $(v_k,v_{k+1}) \in \mathcal {E}$.\n\tA path may\n\tvisit the same vertex or cross the same edge multiple times.\n\tThe following adjacency matrix $\textbf A $ gives one such example:\n\t\[\textbf A =\n\t\begin{bmatrix}\n\t\begin{smallmatrix}\n\t0 & 1 & 0 & 2&3&0&0&\cdots \\\n\t1 & 0 & 4 & 5&0&0&0&\cdots \\\n\t0 & 1 & 0 & 0&0&0&1&\cdots \\\n\t1 & 1 & 0 & 0&6&0&0&\cdots \\\n\t1 &0 & 0 & 1&0&1&0&\cdots\\\n\t0 &0 & 0 & 0&1&0&0&\cdots\\\n\t0 &0 & 0 & 1&0&0&0&\cdots\\\n\t\vdots &\vdots & \vdots & \vdots &\vdots &\vdots &\vdots &\ddots\n\t\end{smallmatrix}\n\t\end{bmatrix}.\]\n\tSince $\textbf A$ is asymmetric, it represents a directed graph, given in Fig. \ref{f2}.\n\tIn this example, there are six different paths of length $3$ on the graph from vertex $2$ to vertex $1$, namely, $(2,1,4,1)$, $(2,1,2,1)$, $(2,1,5,1)$, $(2,3,2,1)$, $(2,4,2,1)$, and $(2,4,5,1)$.\n\t\n\tWe further define the\n\t\emph{weight of a path} to be the product of the edge weights along the path, i.e.,\n\t$\phi( p_{0,m}) = \prod_{k=1}^{m}\textbf A_{v_{k-1},v_k}$, where $p_{0,m}=(v_0, v_1,\ldots, v_m)$.\n\tFor example,\n\tthe weight of the path $(2,1,4,1)$ is $1\times 2\times 1=2$.\n\tThen, the $(i,j)$th entry of $\textbf A^k$ in (\ref{con}), denoted by $\omega (p_{j,i}^k) $, can be interpreted as\n\tthe sum of the weights of all the length-$k$ paths from $j$ to $i$:\n\t$$\omega (p_{j,i}^k) = \sum_{p_{j,i}}\phi( p_{j,i}),$$\n\twhere the sum runs over all length-$k$ paths $p_{j,i}$ from vertex $j$ to vertex $i$.\n\tIn the above example, it can be easily verified that\n\t$\textbf A^3_{1,2} = 18$ by summing up\n\tthe weights of the above six paths of length $3$ from vertex $2$ to vertex $1$.\n\tThen, 
the $i$th component of\n\t$\textbf A^k\textbf x^{(\ell)}_{c}$\n\tis the weighted sum of the entries of $\textbf x^{(\ell)}_{c}$ at the vertices that are length-$k$ paths away from vertex $i$.\n\tHere,\n\t$k$ is defined as the\n\t\emph{filter size}.\n\tThe output feature map is a vector with each component given by the size-$k$ filter sliding on the graph following a fixed order of the vertex indices.\n\t\n\tThe output at the $i$-th component\n\tcan be written explicitly as\n\t$\n\t\sum_{c=1}^{C_{\ell}}\sum_{j}\n\tg^{(\ell)}_{c,f,k}\omega (p_{j,i}^k) \textbf x^{(\ell)}_{c}(j)$.\n\tThis weighted sum is similar to the dot product used for convolution on grid-structured data in traditional CNNs.\n\tFinally, the output feature map is a weighted sum of the convolution results from filters with different sizes, which is\n\t\begin{equation}\label{tagcn}\n\t\textbf y_f^{(\ell)}(i) = \sum_{k=1}^{K_{\ell}} \sum_{c=1}^{C_{\ell}}\sum_{j\in \{\tilde j \,|\, \tilde j \textrm{ has a length-$k$ path to } i\} }\!\!\!\!\!\!\!\!\!\!\!\n\tg^{(\ell)}_{c,f,k}\omega (p_{j,i}^k) \textbf x^{(\ell)}_{c}(j) + b_f^{(\ell)}.\n\t\end{equation}\n\tThe above equation shows that\n\teach neuron in the graph convolutional layer is connected only to a local region (local vertices and edges) in the vertex domain of the input data volume, which is adaptive to the graph topology.\n\tThe strength of correlation is explicitly utilized in $\omega (p_{j,i}^k)$.\n\tWe refer to this method as the topology adaptive graph convolutional network (TAGCN).\n\t\n\t\n\tIn Fig.~\ref{f2}, we show TAGCN with an example of a $2$-size filter sliding from vertex $1$ (figure on the left-hand side) to vertex $2$ (figure on the right-hand side).\n\tThe filter is first placed at vertex $1$. Since paths $(1,2,1)$, $(5,4,1)$, and so on (paths with red glow) are all length-$2$ paths to vertex $1$, they are covered by this $2$-size filter. 
Since edges $(2,3)$ and $(7,3)$ are not on any length-$2$ path to vertex $1$, they are not covered by this filter.\n\tFurther, when this $2$-size filter moves to vertex $2$, edges $(1,5)$, $(4,5)$ and $(6,5)$ are no longer covered, but edges $(2,3)$ and $(7,3)$ are covered for the first time and contribute to the convolution output at vertex $2$.\n\t\n\tFurther, $\textbf y_f^{(\ell)}(i)$\n\tis the weighted sum of the input features in $\textbf x^{(\ell)}_{c}$ at vertices that are within $k$ paths of vertex $i$, for $k=0, 1,\ldots, K$, with weights given by the products of the components of $\textbf A^k$ and $g^{(\ell)}_{c,f,k}$.\n\tThus the output is the weighted sum of the feature maps given by the filtered results from $1$-size up to $K$-size filters.\n\tIt is evident that the vertex convolution on the graph using $K$th order polynomials is $K$-paths localized.\n\tMoreover, different vertices on the graph share $g^{(\ell)}_{c,f,k}$.\n\tThe above local convolution and weight sharing properties of the convolution (\ref{tagcn}) on a graph are very similar to those in traditional CNNs.\n\t\n\t{Though the convolution operator in (\ref{con}) is defined in the vertex domain, it can also be understood as a filter in the spectrum domain, and it is consistent with the definition of convolution in graph signal processing. 
We provide a detailed discussion in the appendix.}\n\n\t\begin{figure}[t]\n\t\t\begin{center}\n\t\t\t\includegraphics[width=0.8\columnwidth, height =5.5cm]\n\t\t\t{.\/Conv.PNG}\n\t\t\end{center}\n\t\t\caption{An example of a directed graph with weights along directed edges corresponding to $\textbf A$.\n\t\t\tThe glowing parts on the left- and right-hand sides represent filters at different locations.\n\t\t\tThe figure on the left-hand side shows the filtering\/convolution starting from vertex $1$; the filter then slides to vertex $2$ as shown on the right-hand side, with the filter topology adapting to the new local region.\n\t\t}\n\t\t\label{f2}\n\t\end{figure}\n\n\t\section{Relation with Other Existing Formulations}\n\n\tIn general, there are two types of graph convolution operators for the CNN architecture.\n\tOne defines the convolution in the spectrum domain, whose output feature map is the multiplication of the inverse Fourier transform matrix with the filtered results in the spectrum domain \citep{bruna2013spectral, defferrard2016convolutional, levie2017cayleynets}.\n\tBy making further approximations based on this spectrum domain operator, a simplified convolution was obtained in~\citet{thomas2017semi}.\n\tThe other defines convolution by a feature propagation model in the vertex domain, such as MoNet in~\citet{monti2017geometric} and the diffusion CNN (DCNN) in~\cite{diffusion}.\n\tWe examine each alternative in detail.\n\t\n\t\n\tIn \citet{bruna2013spectral,defferrard2016convolutional, levie2017cayleynets}, the convolution operation was defined using the convolution theorem and a filtering operation in the spectrum domain by computing the eigendecomposition of the normalized Laplacian matrix of the graph.\n\tThe Laplacian matrix $\textbf L$ is defined as\n\t$\textbf L = \textbf D - \textbf A$ with the further assumption that 
$\textbf A$ is symmetric to guarantee that $\textbf L$ is positive semi-definite.\n\tThe convolution defined by multiplication in the spectrum domain is approximated in \citet{defferrard2016convolutional} by\n\t\begin{equation}\label{Cheby}\n\t\textbf U g_{\theta} \textbf U^T\textbf x\n\t\approx \sum_{k=0}^{K}\n\t\theta_kT_k\left[\frac{2}{\lambda_{\textrm{max}}}\textbf L - \textbf I\right]\n\t\textbf x^{(\ell)}_{c},\n\t\end{equation}\n\twhere $T_k\left[\cdot\right]$ is the $k$th order matrix Chebyshev polynomial~\citep{shuman2013emerging}, defined by the recurrence\n\t\begin{equation}\label{ChePoly}\n\tT_k[\textbf L] = 2\textbf L T_{k-1}[\textbf L] - T_{k-2}[\textbf L],\n\t\end{equation}\n\twith the initial values $T_0[\textbf L]=\textbf I$ and $T_1\left[\textbf L\right] = \textbf L.$\n\tWe refer to this method later as ChebNet for performance comparison.\n\tNote that the Laplacian matrix can be seen as\n\ta differentiator operator.\n\tThe assumption of symmetric $\textbf A$ restricts the application to undirected graphs.\n\tNote that in \citet{defferrard2016convolutional}, Laplacian matrix polynomials with maximum order $K =25$ are needed to approximate the convolution operation on the left-hand side of (\ref{Cheby}), which imposes a computational burden.\n\tIn contrast, TAGCN only needs adjacency matrix polynomials with maximum order $2$ to achieve better performance, as shown in the experiments section.\n\t\n\tIn \citet{thomas2017semi}, a graph convolutional network (GCN) was obtained by { a first order approximation of (\ref{Cheby}).}\n\tIn particular, let $K=1$ and make the further assumptions that $\lambda_{\textrm{max}} =2$ and $\theta_0 = \theta_1=\theta$.\n\tThen a simpler convolution operator that does not depend on the spectrum knowledge is obtained as\n\t\begin{equation}\label{Cheby2}\n\t\textbf U g_{\theta} \textbf U^T\textbf x\n\t\approx \sum_{k=0}^{1}\n\t\theta_kT_k\!\!\left[\frac{2\textbf 
L}{\\lambda_{\\textrm{max}}} \\!- \\textbf I\\right]\\!\\!\n\t\\textbf x^{(\\ell)}_{c}\n\t\\!\\approx\n\t\\theta(\\textbf I + \\textbf D^{-\\frac{1}{2}}\n\t\\textbf A \\textbf D^{-\\frac{1}{2}} )\\textbf x_c^{(\\ell)}.\\nonumber\n\t\\end{equation}\n\tNote that $\\textbf I + \\textbf D^{-\\frac{1}{2}}\n\t\\textbf A \\textbf D^{-\\frac{1}{2}}$ is a matrix with eigenvalues in $[0,2]$.\n\tA renormalization trick is adopted here by letting\n\t$\\widetilde{\\textbf A} = \\textbf A + \\textbf I$ and $\\widetilde{\\textbf D}_{i,i} = \\sum_{j}\\widetilde{\\textbf A}_{i,j}$.\n\tFinally, the convolutional operator is approximated by\n\t\\begin{equation}\\label{Cheby22}\n\t\\textbf U g_{\\theta} \\textbf U^T\\textbf x\n\t\\approx\n\t\\theta \\widetilde{\\textbf D}^{-\\frac{1}{2}}\n\t\\widetilde{\\textbf A} \\widetilde{\\textbf D}^{-\\frac{1}{2}}\n\t\\textbf x_c^{(\\ell)}= \\theta \\widehat{\\textbf A},\n\t\\end{equation}\n\twhere $ \\widehat{\\textbf A} = \\widetilde{\\textbf D}^{-\\frac{1}{2}}\n\t\\widetilde{\\textbf A} \\widetilde{\\textbf D}^{-\\frac{1}{2}} $.\n\tIt is interesting to observe that this method though obtained by simplifying the spectrum method has a better performance than the spectrum method \\citep{defferrard2016convolutional}.\n\tThe reason may be because the simplified form is equivalent to propagating vertex features on the graph, which can be seen as a special case of our TAGCN method, though there are other important differences.\n\t\n\tAs we analyzed in Section 2.2, GCN as in (\\ref{Cheby22}) or even extending it to higher order, i.e., $\\theta \\widehat{\\textbf A}^k$ only project the input data to the graph eigenvector of the largest eigenvalue when the convolutional layers go deeper.\n\t\n\t\n\t\n\t\n\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\n\t\n\n\n\n\n\n\n\n\n\t\n\t\n\t\n\t{ Our TAGCN is able to leverage information at a farther distance, but it is not a simple extension of GCN \\citet{thomas2017semi}.\n\t\tFirst, the graph convolution in GCN is 
defined as a first order Chebyshev polynomial of the graph Laplacian matrix, which is an approximation to the graph convolution defined in the spectrum domain in \citet{defferrard2016convolutional}.\n\t\tIn contrast, our graph convolution is rigorously defined as multiplication by polynomials of the graph adjacency matrix; this is not an approximation; rather, it is simply filtering with graph filters as defined in, and consistent with, graph signal processing.}\t\n\t\n\t{Next, we show the difference between our work and the GCN method in \citet{thomas2017semi} when using second-order ($K=2$, i.e., two steps away from the central node) Chebyshev polynomials of the Laplacian matrix.\n\t\tIn the GCN paper \citet{thomas2017semi}, it has been shown that $\sum_{k=0}^{1} \theta_k T_k[\textbf L] \approx \widehat{\textbf A}$ as repeated in (\ref{Cheby22}), and $T_2[\textbf L] =2\textbf L^2-\textbf I$\n\t\tby the definition of the Chebyshev polynomial. Then, extending GCN to the second order Chebyshev polynomials (two steps away from a central node) can be obtained from the original definition in T. Kipf's GCN \citep[eqn (5)]{thomas2017semi} as $\sum_{k=0}^{2} \theta T_k[\textbf L]= \widehat{\textbf A} + 2\textbf L^2 -\textbf I$, which is different from our definition in (\ref{con}). Thus, it is evident that our method is not a simple extension of GCN. We apply graph convolution as proposed from basic principles in graph signal processing, with no approximations involved, while GCN in \citet{thomas2017semi} as well as \citet{defferrard2016convolutional} and \citet{levie2017cayleynets} are based on approximating the convolution defined in the spectrum domain.\n\t\tIn our approach, the degrees of freedom are the design of the graph filter: its degree and its coefficients. Ours is a principled approach and provides a generic methodology. The performance gains we obtain are the result of capturing the underlying graph structure with no approximation in the convolution operation. 
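The Chebyshev identities used in this comparison can be checked numerically; the matrix below is a hypothetical symmetric stand-in for the (rescaled) Laplacian:

```python
import numpy as np

# Hypothetical symmetric matrix standing in for the rescaled Laplacian.
L = np.array([[ 0.5, -0.2,  0.0],
              [-0.2,  0.3, -0.1],
              [ 0.0, -0.1,  0.4]])
I = np.eye(3)

# Matrix Chebyshev polynomials: T_0 = I, T_1 = L, T_k = 2 L T_{k-1} - T_{k-2}.
T = [I, L]
for k in range(2, 4):
    T.append(2 * L @ T[-1] - T[-2])

# Second order: T_2[L] = 2 L^2 - I, the identity used in the comparison with GCN.
assert np.allclose(T[2], 2 * L @ L - I)
```

The recurrence also gives $T_3[\textbf L] = 4\textbf L^3 - 3\textbf L$, matching the scalar Chebyshev polynomials applied to a matrix argument.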
}\n\t\n\t\n\t\n\t\\citet{simonovsky2017dynamic} proposed the edge convolution network (ECC) to extend the convolution operator from regular grids to arbitrary graphs.\n\tThe convolution operator is defined similarly to (\\ref{Cheby2}) as\n\t\\begin{equation}\n\t\\textbf y_f^{(\\ell)}(i)\n\t=\n\t\\sum_{j\\in \\mathcal N(i)} \\frac{1}{\\left|\\mathcal {N}(i)\\right|} \\bm\\Theta^{(\\ell)}_{j,i}\\textbf x^{(\\ell)}_{c}(j) + b_{f}^{(\\ell)},\\nonumber\n\t\\end{equation}\n\twith $\\bm\\Theta^{(\\ell)}_{j,i}$ is the weight matrix that needs to be learned.\n\t\n\t\n\t\n\tA mixture model network (MoNet) was proposed in \\cite{monti2017geometric}, with convolution defined as\n\t\\begin{equation}\n\t\\textbf y^{(\\ell)}_{f} (i)\n\t=\n\t\\sum_{f=1}^F\\sum_{j\\in \\mathcal N(i)}\n\tg_f \\kappa_f\n\t\\textbf x^{(\\ell)}_{c}(j),\\nonumber\n\t\\end{equation}\n\t\\vspace{-0.1cm}where\n\t$\\kappa_f$ is a Gaussian kernel with\n\t$\\kappa_f = \\exp\\left\\{-\\frac{1}{2}(\\textbf u - \\bm\\mu_f)^T\\mathbf\\Sigma_f^{-1}(\\textbf u - \\bm\\mu_f)\\right\\}$\n\tand\n\t$g_f$ is the weight coefficient for each Gaussian kernel $\\kappa_f$.\n\tIt is further assumed that $\\mathbf\\Sigma_f$ is a $2\\times 2$ diagonal matrix.\n\t\n\tGCN, ECC, and MoNet all design a propagation model on the graph; their\n\tdifferences are on the weightings used by each model.\n\t\n\t\n\n\t\n\t\n\t\n\t\\cite{diffusion} proposed a diffusion CNN (DCNN) method that considers a diffusion process over the graph.\n\tThe transition probability of a random walk on a graph is given by $\\textbf P =\\textbf D^{-1}\\textbf A$, which is equivalent to the normalized adjacency matrix.\n\t\\begin{equation}\n\t\\textbf y_{c,f}^{(\\ell)} = \\textbf g^{(\\ell)}_{c,f}\\textbf P^f\\textbf x^{(\\ell)}_{c}.\\nonumber\n\t\\end{equation}\n\n\t\n\tBy comparing the above methods with TAGCN in (\\ref{tagcn}), it can be concluded that\n\tGCN, ECC and MoNet can be seen as a special case of TAGCN because in (\\ref{tagcn}) the item with $k=1$ 
can be seen as an information propagation term.\n\tHowever, as shown in Section~2.2, when the convolutional layers go deeper, the outputs of the last convolutional layer of GCN, ECC and MoNet are all linear approximations of the output of the corresponding first convolutional layer, which degrades the representation capability. TAGCN overcomes this by\n\tdesigning a set of fixed-size filters that are adaptive to the input graph topology when performing convolution on the graph.\n\tFurther, compared with existing spectrum methods \citep{bruna2013spectral, defferrard2016convolutional, levie2017cayleynets}, TAGCN satisfies the convolution theorem, as shown in the previous subsection, and is implemented in the vertex domain, which avoids performing costly and numerically unstable eigendecompositions.\n\t\n\tWe further compare the number of weights that need to be learned in each hidden layer for these different methods in Table 1.\n\tAs we show later, $K=2$ is selected in our experiments\n\tusing cross validation.\n\tHowever, for ChebNet in \citep{defferrard2016convolutional}, it is suggested that one needs a $25^{\textrm{th}}$ degree Chebyshev polynomial to provide a good approximation to the graph Laplacian spectrum.\n\tThus we have a moderate number of weights to be learned.\n\tIn the following, we show that our method achieves the best performance on each of these commonly used graph-structured data sets.\n\t\n\t\n\t\section{Experiments}\n\t\label{others}\begin{table}[t]\n\t\t\caption{Number of weights that need to be learned for the $\ell$-th layer. 
}\n\t\t\\label{sample-table}\n\t\t\\begin{center} \\fontsize{9}{9}\n\t\t\t\\begin{tabular}{llllll}\n\t\t\t\t\\multicolumn{1}{c}{\\bf DCNN} &\\multicolumn{1}{c}{\\bf ECC}\n\t\t\t\t&\\multicolumn{1}{c}{\\bf ChebNet}\n\t\t\t\t&\\multicolumn{1}{c}{\\bf GCN}\n\t\t\t\t&\\multicolumn{1}{c}{\\bf MoNet}\n\t\t\t\t&\\multicolumn{1}{c}{\\bf TAGCN}\n\t\t\t\t\\\\ \\hline \\\\\n\t\t\t\t$F_{\\ell}C_{\\ell}$ &$F_{\\ell}C_{\\ell}$ & $25F_{\\ell}C_{\\ell}$ &\n\t\t\t\t$ F_{\\ell}C_{\\ell}$ &$4F_{\\ell}C_{\\ell}$ &$2F_{\\ell}C_{\\ell}$\\\\\n\t\t\t\\end{tabular}\n\t\t\\end{center}\n\t\\end{table}\n\tThe proposed TAGCN is general and can be fit to the general graph CNN architectures for different tasks. In the experiments, we focus on the vertex semisupervised learning problem, where we have access to only a few labeled vertices, and the task is to classify the remaining unlabeled vertices. To compare the performance of TAGCN with that of existing methods, we extensively evaluate TAGCN on three graph-structured datasets, including the Cora, Citesser and Pubmed datasets. The datasets split and experiments setting closely follow the standard criteria in \\cite{yang2016revisiting}.\n\t\n\t\n\n\n\tIn each data set, the vertices are the documents and the edges are the citation links. Each document is represented by sparse bag-of-words feature vectors, and the citation links between documents are provided. Detailed statistics of these three data sets are summarized in Table \\ref{data}. It shows the number of nodes and edges that corresponding to documents and citation links, and the number of document classes in each data set. Also, the number of features at each vertex is given. 
Label rate denotes the number of labeled documents that are used for training divided by the total number of documents in each data set.\n\t\n\t\n\t\begin{table}[t]\n\t\t\caption{Dataset statistics, following \cite{yang2016revisiting}}\n\t\t\label{data}\n\t\t\begin{center}\fontsize{9}{9}\n\t\t\t\begin{tabular}{llllll}\n\t\t\t\t\multicolumn{1}{c}{\bf Dataset} &\multicolumn{1}{c}{\bf Nodes}\n\t\t\t\t&\multicolumn{1}{c}{\bf Edges}\n\t\t\t\t&\multicolumn{1}{c}{\bf Classes}\n\t\t\t\t&\multicolumn{1}{c}{\bf Features}\n\t\t\t\t&\multicolumn{1}{c}{\bf Label Rate}\n\t\t\t\t\\ \hline \\\n\t\t\t\tPubmed &19,717 & 44,338 & 3&500&0.003\\\n\t\t\t\tCiteseer &3,327 & 4,732 &6 & 3,703&0.036\\\n\t\t\t\tCora &2,708 & 5,429 & 7 & 1,433&0.052\\\n\t\t\t\end{tabular}\n\t\t\end{center}\n\t\end{table}\n\t\begin{table}\n\t\t\caption{Summary of results in terms of percentage classification accuracy with standard deviation}\n\t\t\label{result}\n\t\t\begin{center}\fontsize{9}{9}\n\t\t\t\begin{tabular}{lllll}\n\t\t\t\t\multicolumn{1}{c}{\bf Dataset} &\multicolumn{1}{c}{\bf Pubmed }\n\t\t\t\t&\multicolumn{1}{c}{\bf Citeseer}\n\t\t\t\t&\multicolumn{1}{c}{\bf Cora}\n\t\t\t\n\t\t\t\t\\ \hline \\\n\t\t\t\tDeepWalk &65.3 & 43.2 &67.2\\\n\t\t\t\tPlanetoid & 77.2 &64.7 & 75.7 \\\n\t\t\t\tDCNN & 73.0$\pm$0.5 &- & 76.8$\pm$0.6 \\\n\t\t\t\n\t\t\t\tChebNet & 74.4 &69.8 & 79.5 \\\n\t\t\t\tGCN & \n\t\t\t\t{{79.0}} &{{70.3} }& 81.5 \\\n\t\t\t\n\t\t\t\tMoNet & 78.81$\pm$0.4 &- & {{81.69$\pm$0.5}} \\\n\t\t\t\tGAT &{79.0$\pm$0.3} &\textbf{72.5$\pm$ 0.7} & {83.0$\pm$0.7} \\\n\t\t\t\tTAGCN (\textrm{ours}) &\textbf{81.1$\pm$0.4} &{71.4$\pm$ 0.5} & \textbf{83.3$\pm$0.7} \\\n\t\t\t\n\t\t\t\end{tabular}\n\t\t\end{center}\n\t\end{table}\n\t\n\t\n\t\n\t\subsection{Experimental Settings}\n\tWe construct a graph for each data set with nodes representing documents and undirected edges\footnote{We use undirected graphs here 
as a citation relationship indicates positive correlation between two documents. However, in contrast with the other approaches surveyed here, the TAGCN method is not limited to undirected graphs if directed graphs are better suited to the applications. } linking two papers if there is a citation relationship. We obtain the adjacency matrix $\\bar{\\textbf A}$ with $0$ and $1$ components and further obtain the normalized matrix $\\textbf A$.\n\t\n\tIn the following experiments, we design a TAGCN with two hidden layers (obtained from cross validation) for the semi-supervised node classification. In each hidden layer, the proposed TAGCN is applied for convolution, followed by a ReLU activation. $16$ hidden units (filters) are designed for each hidden layer, and dropout is applied after each hidden layer. The softmax activation function is applied to the output of the second hidden layer for the final classification. For the ablation study, we evaluate the performance of TAGCN with different filter sizes from 1 to 4. To investigate the performance for different numbers of parameters, we also design a TAGCN with $8$ filters for each hidden layer and compare its classification accuracy with all the baselines and TAGCN with $16$ filters. We train our model using Adam \\citep{Adam} with a learning rate of $0.01$ and early stopping with a window size\n\tof $45$. 
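The convolution step used in each hidden layer above admits a compact summary: with the normalized adjacency matrix $\textbf A$, a filter of size $K$ acts on an input feature vector as the graph polynomial $\sum_{k=0}^{K} g_k \textbf A^k$. The following NumPy sketch is our own illustration of this filtering step (the function names and the symmetric normalization are assumptions for the example, not the authors' released code):

```python
import numpy as np

def normalize_adjacency(A_bar):
    # Symmetric normalization D^{-1/2} A_bar D^{-1/2} of a 0/1 adjacency matrix.
    d = A_bar.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1.0))
    return A_bar * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def tag_filter(A, x, g):
    # Polynomial graph filter: sum_{k=0}^{K} g[k] * A^k @ x, filter size K = len(g) - 1.
    out = np.zeros_like(x, dtype=float)
    Ak_x = x.astype(float)
    for k, gk in enumerate(g):
        if k > 0:
            Ak_x = A @ Ak_x  # advance to the next power of A applied to x
        out += gk * Ak_x
    return out
```

A size-2 filter, for instance, combines a node's own feature with its 1-hop and 2-hop neighbourhoods through the learnable weights $g_0, g_1, g_2$.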
Hyperparameters of the networks (filter size, dropout rate, and number of hidden layers) are selected by cross validation.\n\t\n\tTo make a fair comparison, we closely follow the same split of training, validation, and testing sets as in \\cite{thomas2017semi, yang2016revisiting}, i.e.,\n\t$500$ labeled examples are used for hyperparameter (filter size, dropout rate, and number of hidden layers) optimization, and cross-entropy error is used for classification accuracy evaluation.\n\tThe performance results of the proposed TAGCN method are an average over $100$ runs.\n\t\n\t\\subsection{Quantitative Evaluations}\n\t\n\tWe compare the classification accuracy with other recently proposed graph CNN methods as well as the graph embedding methods DeepWalk and Planetoid \\cite{perozzi2014deepwalk, yang2016revisiting}. \n\tThe recently published graph attention network (GAT) \\cite{Bengio18}, which leverages masked self-attentional layers, is also compared.\n\t\n\tThe quantitative results are summarized in Table~\\ref{result}. Reported numbers denote classification accuracy in percentage. Results for DeepWalk, Planetoid, GCN, and ChebNet are taken from \\citet{thomas2017semi}, and results for DCNN and MoNet are taken from \\citet{monti2017geometric}. All the experiments for different methods are based on the same data statistics shown in Table~\\ref{data}. The dataset splits and experiment settings closely follow the standard criteria in \\cite{yang2016revisiting,thomas2017semi}. Table~\\ref{result} shows that our method outperforms all the recent state-of-the-art graph CNN methods (DCNN, ChebNet, GCN, MoNet) by clear margins on all three datasets. 
\n\t\n\t{These experimental results \n\t\tcorroborate our analyses in Section~2.2 and Section~3: since no approximation to the convolution is needed in TAGCN, it achieves better performance compared with spectrum approximation methods such as ChebNet and GCN.\n\t\tFurther, using a set of size-1 up to size-2 filters avoids the linear approximation by a single size-1 filter (\\textbf A in GCN \\cite{thomas2017semi}), which further verifies the efficacy of the proposed TAGCN.\n\t\tCompared with the most recent GAT method \\cite{Bengio18}, our method exhibits a clear advantage on the largest dataset, Pubmed.\n\t\tIt should be noted that, as explained by its authors, GAT suffers from storage limitations and does not scale well to large-scale graphs. }\n\t\n\tFor the ablation study, we further compare the performance of different filter sizes from $K=1$ to $K=4$ in Table \\ref{order}. It shows that the performance for filter size $K=2$ is consistently better than that for the other filter sizes. The value $K=1$ gives the worst classification accuracy. Since for $K=1$ the filter is a monomial, this further validates the analysis in Section 2.2 that a monomial filter results in a very rough approximation. In Table \\ref{order}, we also compare the performance of different numbers of filters, which correspond to different numbers of network parameters.\n\tNote that we also choose filter size $K=2$ and filter number $F_{\\ell} =8$, which results in the same number of network parameters as that in GCN, MoNet, ECC and DCNN according to Table~1.\n\tIt shows that the classification accuracy using $8$ filters is comparable with that using $16$ filters in each hidden layer for TAGCN. Moreover, TAGCN with $8$ filters can still achieve higher accuracy than the GCN, MoNet, ECC and DCNN methods.\n\t{ This proves that, even with a similar number of parameters or architecture, our method still exhibits superior performance to GCN. 
}\n\t\n\t\n\t{ As we analyzed and explained in Section 2.2 and Section 3, TAGCN in our paper does not simply extend GCN \\citep{thomas2017semi} to $k$-th order. Nevertheless, we implement $\\textbf A^2$ and compare its performance with ours. For the data sets Pubmed, Cora, and Citeseer, the classification accuracies are $79.1 (80.8)$, $81.7(83.0)$ and $70.8 (71.2)$, where the numbers in parentheses are the results obtained with our method. Our method still achieves a noticeable performance advantage over $\\textbf A^2$; in particular, we note the significant performance gain with the Pubmed database, which has the largest number of nodes among these three data sets. }\n\t\n\t\n\t\\begin{table}[t]\n\t\t\\caption{ TAGCN classification accuracy (ACC) with different parameters}\n\t\t\\label{order}\n\t\t\\begin{center} \\fontsize{9}{9}\n\t\t\t\\begin{tabular}{llll}\n\t\t\t\t\\multicolumn{1}{c}{\\bf Data Set} &\\multicolumn{1}{c}{\\bf Filter Size}\n\t\t\t\t&\\multicolumn{1}{c}{\\bf Filter Number}\n\t\t\t\t&\\multicolumn{1}{c}{\\bf ACC}\n\t\t\t\n\t\t\t\t\\\\ \\hline \\\\[-1.5ex]\n\t\t\t\tCiteseer\n\t\t\t\t&1 & 16&68.9 \\\\\n\t\t\t\t&2 & 16&\\textbf{71.4} \\\\\n\t\t\t\t&3 & 16&70.0 \\\\\n\t\t\t\t&4 & 16&69.8 \\\\\n\t\t\t\t&2 & \\textbf{8}&\\textbf{71.2} \\\\\n\t\t\t\t\\hline \\\\[-1.5ex]\n\t\t\t\tCora\n\t\t\t\t&1 & 16&81.4 \\\\\n\t\t\t\t&2 & 16&\\textbf{83.3} \\\\\n\t\t\t\t&3 & 16&82.1 \\\\\n\t\t\t\t&4 & 16&81.8 \\\\\n\t\t\t\t&2 & \\textbf{8}&\\textbf{83.0} \\\\\n\t\t\t\t\\hline \\\\[-1.5ex]\n\t\t\t\tPubmed\n\t\t\t\t&1& 16&79.4 \\\\\n\t\t\t\t&2 & 16&\\textbf{81.1} \\\\\n\t\t\t\t&3 & 16&80.9 \\\\\n\t\t\t\t&4 & 16&79.5 \\\\\n\t\t\t\t&2 & \\textbf{8}&\\textbf{80.8} \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{center}\n\t\\end{table}\n\t\n\t\n\t\\section{Conclusions}\n\tWe have defined a novel graph convolutional network that rearchitects the CNN architecture for graph-structured data.\n\tThe proposed method, known as 
TAGCN, is adaptive to the graph topology as the filter scans the graph.\n\tFurther, TAGCN inherits properties of the convolutional layer in classical CNN, i.e., local feature extraction and weight sharing.\n\tOn the other hand, by the convolution theorem, TAGCN, although implemented in the vertex domain, admits an equivalent interpretation in the spectrum domain, unifying graph CNNs in both the spectrum domain and the vertex domain.\n\tTAGCN is consistent with convolution in graph signal processing.\n\tThese nice properties lead to a noticeable performance advantage in classification accuracy on\n\tsemi-supervised graph vertex classification problems with low computational complexity.\n\t\t\t\n\t\t\t\n\t\t\t\\section{Appendix: Spectrum response of TAGCN}\n\t\t\tIn classical signal processing \\citep{oppenheim1999discrete}, convolution in the time domain is equivalent to multiplication in the spectrum domain.\n\t\t\tThis relationship is known as the convolution theorem.\n\t\t\t\\citet{sandryhaila2013discrete} showed\n\t\t\tthat the graph filtering defined in the vertex domain satisfies the generalized convolution theorem naturally and can also be interpreted as spectrum filtering for both directed and undirected graphs.\n\t\t\tRecent work \\citep{bruna2013spectral, defferrard2016convolutional} used the convolution theorem for undirected graph-structured data and designed spectrum graph filters.\n\t\t\t\n\t\t\t\\begin{figure}[h]\n\t\t\t\t\\begin{center}\n\t\t\t\t\t\\includegraphics[width=0.4\\columnwidth, height = 1.8cm]\n\t\t\t\t\t{.\/1D.PNG}\n\t\t\t\t\\end{center}\n\t\t\t\t\\caption{Graph topology of a 1-D cyclic graph.}\n\t\t\t\t\\label{f1}\n\t\t\t\\end{figure}\n\t\t\t\n\t\t\tAssume that the adjacency matrix $\\textbf A$ for a graph is diagonalizable, i.e.,\n\t\t\t$\\textbf A = \\textbf F^{-1}\\textbf J \\textbf F$ with $\\textbf J$ a diagonal matrix.\n\t\t\tThe components on the diagonal of $\\textbf J$ are eigenvalues of $\\textbf A$, and the column vectors of $\\textbf F^{-1}$ are the 
right eigenvectors of $\\textbf A$; the row vectors of $\\textbf F$ are the left eigenvectors of $\\textbf A$ \\footnote{\n\t\t\t\tWhen $\\textbf A$ is not diagonalizable,\n\t\t\t\tthe columns of $\\textbf F^{-1}$ and the rows of $\\textbf F$ are the generalized right and left eigenvectors of $\\textbf A$, respectively.\n\t\t\t\tIn this case, $\\textbf F$ is no longer a unitary matrix. Also, $\\textbf J$ is a block diagonal matrix; it is then in Jordan form. Interested readers may refer to \\citet{sandryhaila2013discrete} or \\citet{Deri} and references therein.}.\n\t\t\tBy diagonalizing $\\textbf A$ in (\\ref{con}) for TAGCN, we obtain\n\t\t\t\\begin{equation} \\label{generalization}\n\t\t\t\\textbf G^{(\\ell)}_{c,f} \\textbf x_c^{(\\ell)}=\n\t\t\t\\textbf F^{-1} \\left( \\sum_{k = 0}^{K} g^{(\\ell)}_{c,f,k} \\textbf J^k \\right)\\textbf F \\textbf x_c^{(\\ell)}.\n\t\t\t\\end{equation}\n\t\t\tThe expression on the left-hand-side of the above equation represents the filtering\/convolution on the vertex domain.\n\t\t\tMatrix $\\textbf F$ defines the graph Fourier transform \\citep{sandryhaila2013discrete, SPM}, and $ \\textbf F \\textbf x_c^{(\\ell)}$\n\t\t\tis the input feature spectrum map, which is a linear mapping from the input feature on the vertex domain to the spectrum domain.\n\t\t\tThe polynomial $ \\sum_{k = 0}^{K} g^{(\\ell)}_{c,f,k} \\textbf J^k$ is the spectrum of the\n\t\t\tgraph filter.\n\t\t\tRelation (\\ref{generalization}), which is equation (27) in \\citet{sandryhaila2013discrete} generalizes the classical convolution\n\t\t\ttheorem to graph-structured data: convolution\/filtering on the vertex domain becomes multiplication in the spectrum domain.\n\t\t\tWhen the graph is in the 1D cyclic form, as shown in Fig. 
\\ref{f1}, the corresponding adjacency matrix is of the form\n\t\t\t\\[\\textbf A =\n\t\t\t\\begin{bmatrix}\n\t\t\t& & & 1 \\\\\n\t\t\t1 & & & \\\\\n\t\t\t&\\ddots & & \\\\\n\t\t\t& & 1 &\n\t\t\t\\end{bmatrix}.\\]\n\t\t\tThe eigendecomposition of $\\textbf A$\n\t\t\tis\n\t\t\t$$\\textbf A =\n\t\t\t\\frac{1}{N} \\textrm{DFT}^{-1}\n\t\t\t\\begin{bmatrix}\n\t\t\t& e^{-j\\frac{2\\pi 0}{N}} & \\\\\n\t\t\t&\\ddots & \\\\\n\t\t\t& & e^{-j\\frac{2\\pi (N-1)}{N}}\n\t\t\t\\end{bmatrix} \\textrm{DFT},\n\t\t\t$$\n\t\t\twhere $\\textrm{DFT}$ is the discrete Fourier transform matrix.\n\t\t\tThe convolution operator defined in (\\ref{con}) is consistent with that in\n\t\t\tclassical signal processing.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\subsection{Physics context and motivation for quantitative analysis}\n\n\nUnderstanding the dependence of material properties of continuous media on frequency is a natural and practically relevant task, stemming from the theoretical and experimental studies of ``metamaterials'', {\\it e.g.} materials that exhibit negative refraction of propagating wave packets. Indeed, it was noted as early as the pioneering work \\cite{Veselago} that negative refraction is only possible under the assumption of frequency dispersion, {\\it i.e.} when the material parameters (permittivity and permeability in electromagnetism, elastic moduli and mass density in acoustics) are not only frequency-dependent, but also become negative in certain frequency bands.\n\nIndependently of the search for metamaterials, in the course of the development of the theory of electromagnetism, it has transpired in modern physics that the Maxwell equations need to be considered with time-nonlocal ``memory'' terms, see {\\it e.g.} \\cite[Section 7.10]{Jackson} and also \\cite{Cessenat}, \\cite{Tip_1998}. 
The related generalised system (in the absence of charges and currents in the domain of interest) has the form\n\\begin{equation}\n\\rho\\partial_tu+\\int_{-\\infty}^ta(t-\\tau)u(\\tau)d\\tau+{\\rm i}Au=0,\\qquad A=\\left(\\begin{array}{cc}0 & {\\rm i}\\,{\\rm curl}\\\\[0.3em] {\\rm -i}\\,{\\rm curl}& 0\\end{array}\\right),\n\\label{gen_Maxwell}\n\\end{equation}\n\\noindent where $u$ represents the (time-dependent) electromagnetic field $(H, E)^\\top$, the matrix $\\rho$ depends on the electric permittivity and magnetic permeability,\nand $a$ is a matrix-valued ``susceptibility\" operator, set to zero in the more basic form of the system.\\footnote{From the rigorous operator-theoretic point of view, $A$ in (\\ref{gen_Maxwell}) is treated as a self-adjoint operator in a Hilbert space $\\mathbb H$ of functions of $x\\in\\Omega,$ for example $\\mathbb H=L^2(\\Omega; {\\mathbb R}^6),$ where $\\Omega$ is the part of the space occupied by the medium.}\n\n\nApplying the Fourier transform in time $t$ to (\\ref{gen_Maxwell}), an equation in the frequency domain is obtained:\n\\begin{equation}\n\\bigl(i\\omega\\rho+\\widehat{a}(\\omega)\\bigr)\\widehat{u}(\\cdot,\\omega)+iA\\widehat{u}(\\cdot, \\omega)=0,\n\\label{frequency_dep}\n\\end{equation}\n\\noindent where $\\widehat{u}$ is the Fourier transform of $u,$ and $\\omega$ is the frequency.\nEquation (\\ref{frequency_dep}) is often interpreted as a ``non-classical'' version of Maxwell's system of equations, where the permittivity and\/or permeability are frequency-dependent. 
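The passage from \eqref{gen_Maxwell} to \eqref{frequency_dep} rests on the convolution theorem: under the Fourier transform in $t$, the memory term $\int a(t-\tau)u(\tau)\,d\tau$ becomes the multiplication $\widehat{a}(\omega)\widehat{u}(\omega)$. A minimal numerical check of this fact for discrete (circular) convolution, purely illustrative and with randomly sampled stand-ins for the kernel and the field:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
a = rng.standard_normal(n)  # sampled memory kernel a(t)
u = rng.standard_normal(n)  # sampled field history u(t)

# Memory term evaluated directly as a (circular) convolution in the time domain ...
direct = np.array([sum(a[(k - j) % n] * u[j] for j in range(n)) for k in range(n)])

# ... and as pointwise multiplication of the transforms in the frequency domain.
via_fft = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(u)))

assert np.allclose(direct, via_fft)
```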
The existence of such media (commonly known as Lorentz materials) and the analysis of their properties go back several decades and have also attracted considerable interest quite recently, {\\it e.g.} in the study of plasma in tokamaks, see \\cite{Weder} and references therein.\n\n\n\nSimultaneously with the above developments in the physics literature, recent mathematical evidence, see \\cite{Jikov}, \\cite{Bullshitte}, suggests that such novel material behaviour, which is incompatible (see \\cite{Birman,ChKisYe_PDE,ChKisYe}) with the mathematical assumption of uniform ellipticity of the corresponding differential operators\n(such as $A$ in (\\ref{gen_Maxwell})), may be explained by means of the asymptotic analysis (``homogenisation'') of operator families with rapidly oscillating, and non-uniformly elliptic, coefficients.\n\nIt is therefore reasonable to ask the question of whether frequency dispersion laws such as those pertaining to (\\ref{frequency_dep}), which in turn may provide one with metamaterial behaviour in appropriate frequency intervals \\cite{Veselago}, can be derived by some process of homogenisation of composite media with contrast (or, as we shall suggest below, any other microscopic degeneracies resonating with the macroscopic wavefields).\n\n\\subsection{Basis for the mathematical framework} If one were to look for an asymptotic expansion of eigenmodes of a high-contrast composite, {\\it restricted} to the soft component of the medium, one would notice (see, e.g., \\cite{CherKis}) that their leading order terms can be understood as the eigenmodes of boundary-value problems with impedance (i.e., frequency dependent) boundary conditions. Such problems have been considered in the past (see, e.g., \\cite{PavlovFaddeev}), motivated by the analysis of the wave equation. 
On the other hand, by the celebrated analysis of the so-called generalised resolvents of \\cite{Naimark,Naimark1943} one knows that a problem of this type admits a conservative dilation, which is constructed by adding the hidden degrees of freedom. In fact, precisely this latter observation has been used in \\cite{Figotin_Schenker_2005,Figotin_Schenker_2007b} in devising a conservative ``extension'' of a time-dispersive system of the type \\eqref{gen_Maxwell}. The substance of the argument that is proposed in the present paper is that the aforementioned conservative dilation is in fact precisely the asymptotic model of the original high-contrast composite. Furthermore, the leading order terms of its eigenmodes restricted to the {\\it stiff} component are solutions to a problem of the type \\eqref{frequency_dep} with frequency dispersion. They can be easily expressed in terms of the above impedance boundary value problems, thus yielding an explicit description of the link between the resonant soft inclusions and the macroscopic time-dispersive properties.\n\nTherefore, models of continuous media with frequency-dependent effective boundary conditions can be seen as natural building blocks for media with frequency dispersion.\n\nIt is of considerable value to relate these ideas to the earlier works \\cite{KuchmentZeng, KuchmentZeng2004, Exner}, where similar limiting impedance-type problems are obtained in the spectral analysis of ``thin\" periodic structures, converging to metric graphs. Here, one obtains the aforementioned impedance setup (see Fig. 
\\ref{fig:exner}) on the limiting graph as the asymptotics of the eigenmodes of a Neumann Laplacian, when the ``thickness'' of the structure vanishes for one particular (resonant) scaling between the ``edge'' and ``vertex'' volumes of the structure.\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[scale=1.0]{hren3.pdf}\n\\end{center}\n\\caption{{\\scshape An example of a resonant thin network} {\\small Edge volumes are asymptotically of the same order as vertex volumes. The stiffness of the material of the structure is of the order period-squared. }}\n\\label{fig:exner}\n\\end{figure}\n\n\n\n\nIt is instructive to point out that the results of \\cite{CherKis} establish a thrilling relationship between the analysis of thin structures and the homogenisation theory of high-contrast composites.\n\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[scale=1.0]{hren2.pdf}\n\\end{center}\n\\caption{{\\scshape High-contrast superlattice} {\\small The problem for a superlattice is reduced to a one-dimensional high-contrast problem. This is asymptotically equivalent to an impedance-type problem on the soft component.}}\n\\label{chain}\n\\end{figure}\nNamely, the paper \\cite{CherKis} deals with the case of the so-called superlattices \\cite{Tsu} with high contrast, see Fig. \\ref{chain}.\n While simple to set up, the related system of ordinary differential equations (subject to the appropriate conditions of continuity of fields and fluxes) is nontrivial from the point of view of quantitative analysis, see also \\cite{ChCG}. It is shown that the asymptotic model for this system is precisely the one derived in \\cite{KuchmentZeng, KuchmentZeng2004, Exner} in the case of a resonant thin structure converging to a chain-graph, see Fig. 
\\ref{fig:exner}.\nAs we shall argue in the present article, such superlattices (and the corresponding chain-graphs) offer a simple prototype for a metamaterial, via the mathematical approach outlined above.\n\nThe result described above suggests that thin networks might acquire the same asymptotic properties as those of the corresponding high-contrast composites. It is therefore a viable conjecture that the metamaterial properties of a medium can be attained via a version of geometric contrast instead of relying upon the contrast between material components. This is especially promising when the required material contrast cannot be guaranteed, as is commonly the case in elasticity and electromagnetism. The corresponding thin networks, on the other hand, have become available in the study of graphene and related areas. This subject will be further pursued in a forthcoming publication.\n\nThe above exposition vindicates the value of quantum graph models in the analysis of high-contrast composites, where we follow the well-established convention, see \\cite{Kuchment2}, to use the term {\\it quantum graph} for an ordinary differential operator of second order defined on a metric graph. These graph-based models are seen as natural limits of composite thin networks consisting of a large number of channels (for, say, acoustic or electromagnetic waves), where a combination of high contrast and rapid oscillations becomes increasingly taxing at small scales and often leads to impractical numerical costs.\nFor channels with low cross-section-to-length ratios,\nthe material response of such a system, see Fig.\\,\\ref{thick_chain}, is closely approximated by a quantum graph as described above.\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[scale=0.7]{hren1.pdf}\n\\end{center}\n\\caption{{\\scshape Thin network} {\\small An example of a high-contrast periodic network. 
Stiff channels are in grey, soft channels are in blue.}}\n\\label{thick_chain}\n\\end{figure}\nSystems of this type are a particular example of high-contrast composites and thus, as explained above, they possess resonant properties at the microscale, which leads to macroscopic dispersion by the above argument. At a very crude level, this is similar to the way in which particle motion on the atomic scale leads to Lorentz-type electromagnetism, see {\\it e.g.} \\cite[Chapter 1]{Nussenzweig} for the analysis of a related model of a damped harmonic oscillator.\n\nFurthermore, periodic quantum graphs with vanishing period can serve as realistic explicitly solvable ODE models for multidimensional continuous media, as demonstrated\\footnote{We remark that it was Professor Pavlov who pioneered the mathematical study of quantum graphs, see \\cite{Pavlov_old}.}, e.g., in \\cite{MelnikovPavlov}, where an $h-$periodic cubic lattice is shown to be close (up to and including the scattering properties) to the Laplacian in $\\mathbb R^d$. More involved periodic graphs can be used to model non-trivial media, including anisotropic ones.\n\nAs a particular realistic example of a thin network with high contrast, consider the problem of modelling acoustic wave propagation in a system of channels $\\Omega^{\\varepsilon, \\delta}$, $\\varepsilon$-periodic in one direction, of thickness $\\delta\\ll\\varepsilon$, and with contrasting material properties (cf. Fig. \\ref{thick_chain}). 
To simplify the presentation, we assume the antiplane shear wave polarisation (the so-called S-waves), which leads to a scalar wave equation for the only non-vanishing component $W,$ of the form\n\\[\nW_{tt}-\\nabla_x\\cdot (a^\\varepsilon(x)\\nabla_xW)=0,\\qquad W=W(x,t),\\quad x, t\\in{\\mathbb R},\n\\]\nwhere the coefficient $a^\\varepsilon$ takes values one and $\\varepsilon^2$ in different channels of the $\\varepsilon$-periodic structure.\nLooking for time-harmonic solutions $W(x,t)=U(x)\\exp({\\rm i}\\omega t),$ $\\omega>0,$ one arrives at the spectral problem\n\\begin{equation}\n-\\nabla\\cdot (a^\\varepsilon\\nabla U)=\\omega^2U.\n\\label{spectral}\n\\end{equation}\nAs we argue below, the behaviour of (\\ref{spectral}) is close, in a quantitatively controlled way as $\\varepsilon\\to0,$ to that of an ``effective medium'' on ${\\mathbb R}$ described by an equation of the form\n\\begin{equation}\n-U''=\\beta(\\omega)U,\n\\label{dispersive}\n\\end{equation}\nfor an appropriate function $\\beta=\\beta(\\omega)$, explicitly given in terms of the material parameters $a^\\varepsilon$ and the topology of the original system of channels.\n\nThe goal of the present paper is to derive an explicit general formula for the function $\\beta$ in (\\ref{dispersive}), in terms of the topology of the graph representing the original domain of wave propagation, which is no longer restricted to the example shown in Fig.\\,\\ref{thick_chain}. As noted above, the presence of both rapid oscillations and high contrast makes the task mathematically nontrivial. In our approach, which is new, we call upon some recently developed machinery in the operator-theoretic analysis of abstract boundary-value problems (which in our case take the form of boundary-value problems for differential operators of interest). 
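The spectral problem \eqref{spectral} is easy to probe numerically. The sketch below is our own illustration (the finite-difference discretization, the two-phase geometry, and all parameter values are assumptions made for the example, not the method of this paper): it assembles $-\frac{d}{dx}\bigl(a^\varepsilon\frac{d}{dx}\bigr)$ on an interval of several periods with periodic boundary conditions, with $a^\varepsilon$ equal to $1$ on the stiff half of each cell and $\varepsilon^2$ on the soft half:

```python
import numpy as np

def lowest_eigenvalues(eps, n_cells=8, pts_per_cell=64, n_eigs=5):
    # Finite-difference matrix for -(a u')' with periodic boundary conditions on
    # an interval of n_cells periods; a = 1 on the "stiff" half of each eps-cell
    # and a = eps**2 on the "soft" half (the critical high-contrast scaling).
    n = n_cells * pts_per_cell
    h = eps / pts_per_cell                   # grid step; total length n_cells * eps
    x_mid = (np.arange(n) + 0.5) * h         # midpoints at which a is sampled
    a = np.where((x_mid / eps) % 1.0 < 0.5, 1.0, eps**2)
    H = np.zeros((n, n))
    for i in range(n):                       # assemble the sum over edges (i, i+1)
        ip = (i + 1) % n
        H[i, i] += a[i] / h**2
        H[ip, ip] += a[i] / h**2
        H[i, ip] -= a[i] / h**2
        H[ip, i] -= a[i] / h**2
    return np.linalg.eigvalsh(H)[:n_eigs]
```

The lowest eigenvalue is zero (the constant mode); the behaviour of the next few eigenvalues as $\varepsilon$ decreases is the kind of spectral information that the effective model \eqref{dispersive} is designed to capture.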
In the subsequent work \\cite{ChKisYe_PDE} we develop the corresponding analysis for the multidimensional case, which is neither included in nor an extension of the analysis for graphs presented in this article. However, it is based on the same set of mathematical ideas, which makes us hope that the foundations for (\\ref{dispersive}) in the case of PDEs are clear from what follows.\n\nUnlike the approach aimed at the derivation of norm-resolvent convergence, which we adopt in \\cite{ChKisYe,ChKisYe_PDE}, in the present paper, having the convenience of the more physically inclined reader in mind, we systematically treat the subject from the point of view of spectral problems and in particular of the asymptotic analysis of eigenmodes. We refer the interested reader to the aforementioned papers, which contain further mathematical details that we think are out of scope here.\n\n\n\nThe present paper can be viewed as following in the footsteps of \\cite{CherKis} in that it relies upon the analysis of the fibre representations (obtained via the Floquet-Gelfand transform) of the original periodic operator. This is carried out using the boundary triples theory (see, e.g., \\cite{Gor,DM}), which generalises the classical methods based on the Weyl-Titchmarsh $m-$coefficient, applied to self-adjoint extensions of symmetric operators. This allows us to develop a novel approach to the homogenisation of a class of periodic high-contrast problems on ``weighted quantum graphs'', {\\it i.e.} one-dimensional versions of thin composite media where the material parameters on one of the components are much lower than on the others and scaled in a ``critical'' way with respect to the period of the composite. We reiterate that the idea that such media can be viewed as idealised models of thin periodic critical-contrast networks has been explored in the mathematics literature, see \\cite{KuchmentZeng2004}, \\cite{Exner}, \\cite{Zhikov_singular_structures} and elsewhere. 
The backbone of our approach is, as explained above, the study of eigenfunctions of the problem restricted to one (``soft'') component of the composite only. After the asymptotics for these is obtained, it proves possible to reconstruct the ``complete'' eigenfunctions, where we implicitly rely upon the classical results of operator theory, in particular dealing with out-of-space self-adjoint extensions of symmetric operators and associated generalised resolvents.\n\n\\subsection {Physics interpretation and relevance to metamaterials}\n\nOur argument leads to the understanding of the phenomenon of critical-contrast homogenisation limit as a manifestation of a frequency-converting device: if one restricts the eigenfunctions to the ``stiff'' component, they prove to be close to those of the medium where the soft component has been replaced with voids, \\emph{but} correspond to non-trivially shifted eigenfrequencies. This is precisely what one would expect in the setting of time-dispersive media after the passage to the frequency domain, {\\it cf.} (\\ref{frequency_dep}).\n\nFrom the physics perspective, this link between homogenisation and frequency conversion can be viewed as a justification of an ``asymptotic equivalence'' between eigenvalue problems for periodic composites with high contrast and problems with nonlinear dependence on the spectral parameter, which in the frequency domain characterise ``time-dispersive media'', as in (\\ref{gen_Maxwell}), see also \\cite{Tip_1998, Tip_2006, Figotin_Schenker_2005,\nFigotin_Schenker_2007b}.\n\nAs we mentioned above, the phenomenon of frequency dispersion emerging as a result of homogenisation has been observed in the two-scale formulation applied to critical-contrast PDEs in, {\\it e.g.}, \\cite{Jikov, Bullshitte}. Our approach goes beyond the results of \\cite{Jikov, Bullshitte} in several ways. 
First, being based on an explicit asymptotic analysis of operators, using the recent developments in the theory of abstract boundary-value problems (see {\\it e.g.} \\cite{Ryzh_spec}), it provides an explicit procedure for recovering the dispersion relation and does not draw upon the well-known two-scale asymptotic techniques.\n\nThe approach we develop in the present paper thus offers a new perspective on frequency-dispersive (time non-local) continuous media in the sense that it provides a recipe for the construction of such media with prescribed dispersive properties from periodic composites whose individual components are non-dispersive.\nIt has been known that time-dispersive media \\cite{Figotin_Schenker_2005} in the frequency domain can be realised as a ``restriction'' of a conservative Hamiltonian defined on a space which adds the ``hidden'' degrees of freedom.\\footnote{\nThis is based on the observation that the equation (\\ref{frequency_dep}) can be written in the form of an eigenvalue problem ${\\mathcal A}U=\\omega U,$ $U\\in{\\mathcal H},$ for a suitable self-adjoint ``dilation\" ${\\mathcal A}$ of the operator $A,$ so that ${\\mathcal A}$ acts in a space ${\\mathcal H}\\supset{\\mathbb H}.$ The vector field $U$ has a natural physical interpretation in terms of additional electromagnetic field variables, the so-called polarisation $P$ and magnetisation $M,$ so that the full (12-dimensional) field vector is $(H, E, P, M)^\\top.$ }\n\nIn summary, the existing belief in the engineering and physics literature that time-dispersive properties often arise as the result of complex microstructure of composites suggests to look for a rather concrete class of such conservative Hamiltonian dilations, namely, those pertaining to differential operators on composites with critical contrast. 
Our results can be viewed as laying foundations for rigorously solving this problem.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Infinite-graph setup}\n\nConsider a graph ${\\mathbb G}_\\infty,$ periodic in one direction, so that ${\\mathbb G}_\\infty+\\ell={\\mathbb G}_\\infty,$ where $\\ell$ is a fixed vector, which defines the graph axis.\nLet the periodicity cell ${\\mathbb G}$ be a finite compact graph of total length $\\varepsilon\\in(0,1),$ and denote by\n$e_j,$ $j=1,2,\\dots n,$ $n\\in{\\mathbb N}$ its edges. For each $j=1,2,\\dots, n,$ we identify $e_j$ with the interval $[0,\\varepsilon l_j],$ where $\\varepsilon l_j$ is the length of $e_j.$ We associate with the graph ${\\mathbb G}_\\infty$ the Hilbert space\n$$\nL_2({\\mathbb G}_\\infty):=\\bigoplus\\limits_{{\\mathbb Z}}\\bigoplus\\limits_{j=1}^n L_2(0, \\varepsilon l_j).\n$$\n\nConsider a sequence of operators $A^\\varepsilon,$ $\\varepsilon>0,$ in $L_2({\\mathbb G}_\\infty),$ generated by second-order differential expressions\n\\begin{equation}\n-\\frac{d}{dx}\\left(\\bigl(a^\\varepsilon\\bigr)^2\\frac{d}{dx}\\right),\n\\label{diff_operator}\n\\end{equation}\nwith positive ${\\mathbb G}$-periodic coefficients $(a^\\varepsilon)^2$ defined on ${\\mathbb G}_\\infty,$\nwith the domain ${\\rm dom}(A^\\varepsilon)$ that describes the coupling conditions at the vertices of\n${\\mathbb G}_\\infty:$\n\\begin{equation}\n{\\rm dom}(A^\\varepsilon)=\n\\left\\{\nu\\in\\bigoplus\\limits_{e\\in{\\mathbb G}_\\infty}W^{2,2}\\bigl(e)\\Big|\\ u\n\\text{\\ continuous,}\\ \\sum_{e\\ni V}\\sigma_e(a^\\varepsilon)^2u'(V)=0\\\n\\forall\\ V\\in{\\mathbb G}_\\infty\\right\\},\n\\label{Atau}\n\\end{equation}\nIn the formula (\\ref{Atau}) the summation is carried out over the edges $e$\nsharing the vertex $V,$ the coefficient $(a^\\varepsilon)^2$ in the vertex condition is calculated on the edge $e,$ and $\\sigma_{ e}=-1$ or $\\sigma_{e}=1$ for $e$ incoming or outgoing for $V,$ respectively.\nThe matching conditions \\eqref{Atau} 
represent the so-called standard, or Kirchhoff, conditions of combined continuity of the function and equality to zero of sums of co-normal derivatives at all vertices.\n\n\section{Gelfand transform}\n\label{Gelfand_section}\nWe seek to apply the one-dimensional Gelfand transform\n\begin{equation}\label{1dGelfand}\nv(x)=\sqrt{\frac{\varepsilon}{2\pi}}\sum\limits_{n\in \mathbb{Z}}u(x+\varepsilon n) {\rm e}^{-it(x+\varepsilon n)}\n\end{equation}\nto the operator $A^\varepsilon$ defined on ${\mathbb G}_\infty$ in order to obtain the direct fibre integral for the operator $A^\varepsilon:$\n\begin{equation}\label{vonNeumann}\nA^{\varepsilon}=\int_{\oplus}A^\varepsilon_t dt.\n\end{equation}\nIn order to achieve this goal, we first note that the geometry of ${\mathbb G}_\infty$ is encoded in the matching conditions \eqref{Atau} \emph{only}. This opens up the possibility of embedding the graph ${\mathbb G}_\infty$ into $\mathbb R^1$ by rearranging its edges\nas consecutive segments of the real line (leading to a one-dimensional $\varepsilon$-periodic chain graph). In doing so we depart from the customary practice of drawing graphs in a way reflecting matching conditions ({\it i.e.}, so that these are local relative to graph vertices). The above embedding leads to rather complex non-local matching conditions, but, on the positive side, allows us to use the Gelfand transform as required by \eqref{1dGelfand}, \eqref{vonNeumann}.\n\nThe Gelfand transform leads to periodic conditions on the boundary of the cell $\mathbb G$ and thus in our case identifies the ``left'' boundary vertices of the graph $\mathbb G$ with their translations by $\ell$, which results in a modified graph $\widehat{\mathbb G}$.
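The effect of the Gelfand transform can be illustrated on a discrete caricature (our own illustrative sketch, not the operator of \eqref{vonNeumann} itself): for a weighted Laplacian on a periodic ring of $pq$ sites, the analogue of \eqref{1dGelfand} block-diagonalises the operator into $p\times p$ Bloch fibres, with the wrap-around bond carrying the quasimomentum phase:

```python
import numpy as np

p, q = 3, 8                     # cell size and number of cells (illustrative)
N = p * q
rng = np.random.default_rng(1)
a = rng.uniform(1.0, 2.0, size=p)   # p-periodic bond weights
aN = np.tile(a, q)                  # aN[n] is the weight of the bond (n, n+1)

# Full weighted graph Laplacian on the ring of N sites
H = np.zeros((N, N))
for n in range(N):
    w, k = aN[n], (n + 1) % N
    H[n, n] += w; H[k, k] += w
    H[n, k] -= w; H[k, n] -= w

# Bloch fibre at quasimomentum theta: p x p matrix, with the wrap-around
# bond carrying the phase e^{i theta}
def fibre(theta):
    Ht = np.zeros((p, p), dtype=complex)
    for j in range(p):
        w, k = a[j], (j + 1) % p
        phase = np.exp(1j * theta) if j == p - 1 else 1.0
        Ht[j, j] += w; Ht[k, k] += w
        Ht[j, k] -= w * phase; Ht[k, j] -= w * np.conj(phase)
    return Ht

# The spectrum of H is the union of the fibre spectra over the quasimomenta
bloch = np.sort(np.concatenate(
    [np.linalg.eigvalsh(fibre(2 * np.pi * k / q)) for k in range(q)]))
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), bloch)
```

In the continuum graph setting the unimodular weights $w_V(e)$ introduced below play the role of these phase factors, and the identification of the boundary vertices described above is the counterpart of the wrap-around bond.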
Apart from this, the matching conditions for the internal vertices of $\mathbb G$ take the same form as for $A^\varepsilon$, except that the Kirchhoff matching is replaced by a Datta-Das Sarma one (the latter can be viewed as a weighted Kirchhoff), see below in \eqref{Atau1}. The unimodular weights appearing in the Datta-Das Sarma conditions are precisely due to the non-locality, mentioned above, of the matching conditions under the embedding of $\mathbb G_\infty$ into $\mathbb R^1$.\n\nThe image of the Gelfand transform is described as follows. There exists a unimodular list $\{w_V(e)\}_{e\ni V},$ {\it cf.} \cite{ChKisYe}, defined at each vertex $V$ of $\widehat{\mathbb G}$ as a finite collection of values corresponding to the edges adjacent to $V$.\nFor each $t\in[-\pi\/\varepsilon,\pi\/\varepsilon)$, the fibre operator $A^\varepsilon_t$ is generated by the differential expression\n\begin{equation}\n\left(\frac 1i \frac d{dx}+t\right)(a^\varepsilon)^2\left(\frac 1i \frac d{dx}+t\right)\n\label{diff_expr}\n\end{equation}\non the domain\n\begin{multline}\n{\rm dom}(A^\varepsilon_t)=\n\Bigg\{\nv\in\bigoplus\limits_{e\in {\mathbb G}}W^{2,2}(e)\ \Big|\ w_V(e)v|_e(V)=w_V(e')v|_{e'}(V)\n\text{\ for all } e,e' \\ \text{ adjacent to } V, \ \sum_{e\ni V}\partial^{(t)}v(V)=0\ \ \\n{\rm for\ each\ vertex}\ V\Bigg\},\n\label{Atau1}\n\end{multline}\nwhere $\partial^{(t)}v(V)$ is the weighted ``co-derivative'' $\sigma_{e}w_V(e)(a^\varepsilon)^2(v'+{\rm i}t v)$ of the function $v$ on the edge $e,$ calculated at $V.$\n\n\n\n\section{Boundary triples for extensions of symmetric operators}\n\label{triples_section}\n\nIn the analysis of the asymptotic behaviour of the fibres of the original operator representing the quantum graph, we employ the framework of boundary triples for a symmetric operator with equal deficiency indices to describe a class of its extensions.
Part of the toolbox of the theory of boundary triples is the generalisation of the classical Weyl-Titchmarsh $m$-function to the case of matrices (finite deficiency indices) and operators (infinite deficiency indices).\n\nThe theory of boundary triples, originating in the works of M.\,G. Kre\u\i n, is a very convenient toolbox for dealing with extensions of symmetric operators. In essence, it is an operator-theoretic interpretation of the second Green's identity. As such, it allows one to pass over from the consideration of functions in Hilbert spaces to a formulation in which one deals with objects in the boundary spaces (such as traces of functions and traces of their normal derivatives), which in the context of quantum graphs are finite-dimensional. Furthermore, it allows one to use explicit concise formulae for the resolvents of operators under scrutiny and for other related objects. Thus it facilitates the analysis by expressing familiar objects, commonly used in this area, in a concise way.\n\n\begin{definition}[\cite{Gor,Ko1,DM}]\nSuppose that $A_{\rm max}$ is the adjoint of a densely defined symmetric operator on a separable Hilbert space $H$ and let $\Gamma_0,$ $\Gamma_1$ be linear mappings of ${\rm dom}(A_{\max})\subset H$\nto a separable Hilbert space $\mathcal{H}.$\n\nA. The triple\n$(\mathcal{H}, \Gamma_0,\Gamma_1)$ is called \emph{a boundary\ntriple} for the operator $A_{\max}$ \nif the following two conditions hold:\n\begin{enumerate}\n\item For all $u,v\in {\rm dom}(A_{\max})$ one has the second Green's identity\n\begin{equation}\n\langle A_{\max} u,v \rangle_H -\langle u, A_{\max} v \rangle_H = \langle \Gamma_1 u, \Gamma_0\nv\rangle_{\mathcal{H}}-\langle\Gamma_0 u, \Gamma_1 v\rangle_{\mathcal{H}}.\n\label{Green_identity}\n\end{equation}\n\item The mapping\n${\rm dom}(A_{\max})\ni u\longmapsto (\Gamma_0 u,\n\Gamma_1 u)\in{\mathcal H}\oplus{\mathcal H}$\nis onto.\n\end{enumerate}\n\nB.
A restriction ${A}_B$ of the operator $A_{\rm max}$ such\nthat $A_{\rm max}^*=:A_{\min}\subset A_B\subset A_{\max}$ is called\nalmost solvable if there exists a boundary triple\n$(\mathcal{H}, \Gamma_0,\Gamma_1)$ for $A_{\max}$ and a bounded\nlinear operator $B$ defined on $\mathcal{H}$ such that\n\[\n{\rm dom}({A_B})=\bigl\{u\in{\rm dom}(A_{\rm max}):\ \Gamma_1u=B\Gamma_0u\bigr\}.\n\]\n\n\nC. The operator-valued Herglotz\footnote{For a definition and properties of Herglotz functions, see {\it e.g.} \cite{Nussenzweig}.} function $M=M(z),$ defined by\n\begin{equation}\n\label{Eq_Func_Weyl}\nM(z)\Gamma_0 u_{z}=\Gamma_1 u_{z}, \ \\nu_{z}\in \ker (A_{\max}-z),\ \ z\in\n\mathbb{C}_+\cup{\mathbb C}_-,\n\end{equation}\nis called the Weyl-Titchmarsh $M$-function of the operator\n$A_{\max}$ with respect to the corresponding boundary triple.\n\end{definition}\n\n\nLet $A_B$ be a self-adjoint almost solvable restriction of $A_{\rm max}$ with compact resolvent. Then $M(z)$ is analytic on the real line away from the eigenvalues of $A_\infty,$ where $A_\infty$ is the restriction of $A_{\rm max}$ to the domain $\dom(A_\infty)=\dom(A_{\rm max})\cap\ker(\Gamma_0).$ It is a key observation for what follows that $u\in{\rm dom}(A_B)$ is an eigenvector of $A_B$ with eigenvalue $z_0\in{\mathbb C}\setminus{\rm spec}(A_\infty)$ if and only if\n\begin{equation}\n\bigl(M(z_0)-B\bigr)\Gamma_0u=0.\n\label{eigeneq}\n\end{equation}\n\n\nIn the next section we introduce a particular operator $A_{\rm max}$ and a boundary triple $({\mathcal H}, \Gamma_0, \Gamma_1),$ which we then use to analyse the resolvents of the operators on quantum graphs\nintroduced earlier.\n\n\n\section{Graph with high contrast: prototype for time-dispersive media}\n\label{our_graph}\nIn what follows we develop a general approach to the analysis of weighted quantum graphs with critical contrast.
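Before specialising, the identity \eqref{eigeneq} can be illustrated on the simplest example (a sketch under our own illustrative choices: the operator $-d^2/dx^2$ on $[0,1]$, the classical triple $\Gamma_0 u=(u(0),u(1))$, $\Gamma_1 u=(u'(0),-u'(1))$, and a Robin parameter $b$, none of which appear in the main text):

```python
import numpy as np

# Weyl-Titchmarsh M-matrix of -u'' on [0,1] for the triple
# Gamma_0 u = (u(0), u(1)), Gamma_1 u = (u'(0), -u'(1)), z = k^2
def M(k):
    return np.array([[-k / np.tan(k), k / np.sin(k)],
                     [ k / np.sin(k), -k / np.tan(k)]])

b = 1.0
B = b * np.eye(2)   # Robin conditions u'(0) = b u(0), -u'(1) = b u(1)

# Independent computation of a Robin eigenvalue by shooting:
# u(x) = cos(kx) + (b/k) sin(kx) satisfies the condition at x = 0;
# enforcing it at x = 1 gives the scalar equation f(k) = 0.
f = lambda k: (k - b**2 / k) * np.sin(k) - 2 * b * np.cos(k)
lo, hi = 1.0, 2.0               # f changes sign on this interval
for _ in range(80):             # plain bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
k_star = 0.5 * (lo + hi)

# The same k_star must make M(k) - B singular, in accordance with the
# eigenvalue criterion of the boundary-triple theory
assert abs(np.linalg.det(M(k_star) - B)) < 1e-9
```

Here the eigenvalue is found by shooting, entirely without the $M$-matrix, and the singularity of $M(z_0)-B$ is then confirmed at that point. We now return to the general approach announced above.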
We demonstrate it on one particular example, which, as we show in Appendix A, exhibits all the properties of the generic case. We have therefore chosen to present the analysis in terms that are immediately applicable to the general case and, whenever advisable, we provide statements that carry over without modification. By the ``general'' case we mean an operator of the class introduced in Section 2, where some of the edges $e_{\text{soft}}$ of the cell graph $\mathbb G$ carry the weight $a^\varepsilon=\varepsilon$, with the remaining edges carrying weights of order 1 uniformly in $\varepsilon$.\n\nThe rationale of the present section in fact extends to an even more general setup (including that of periodic high-contrast PDEs), which we treat in \cite{ChKisYe_PDE}. However, in the present paper we consider a rather simplified model, in order to keep technicalities to a bare minimum and thus, hopefully, to make the matter transparent to the reader.\n\nConsider the graph ${\mathbb G}_\infty$ with the periodicity cell ${\mathbb G}$ shown in Figure \ref{infinite_graph_figure}.\n\begin{figure}[h!]\n\begin{center}\n\includegraphics[scale=1.5]{gebra0.png}\n\end{center}\n\caption{{\scshape Periodicity cell $\mathbb G.$} {\small The intervals of lengths $\varepsilon l_1$ and $\varepsilon l_3$ are ``stiff'', {\it i.e.} they carry the weights $a_1^2$ and $a_3^2$, respectively, whereas the interval of length $\varepsilon l_2$ is ``soft'', with weight $\varepsilon^2.$}}\n\label{infinite_graph_figure}\n\end{figure}\nThe Gelfand transform, see Section \ref{Gelfand_section}, applied to this graph, yields the graph $\widehat{\mathbb G}$ of Figure \ref{compact_fig}.
In the present section we show that there exists a boundary triple such that $A^\varepsilon_t$ is an almost solvable extension of the corresponding $A_{\min}$, and the\n\begin{figure}[h!]\n\begin{center}\n\includegraphics[scale=0.7]{gebra00.png}\n\end{center}\n\caption{{\scshape The graph $\widehat {\mathbb G}.$} {\small The left and right boundary vertices have been identified.}}\n\label{compact_fig}\n\end{figure}\n$M$-function (which is in our case a matrix-valued function; for convenience, it is written as a function of $k:=\sqrt{z}$, with the branch chosen so that $\Im k>0$) of $A_{\max}$ is given by\n\begin{equation}\n\label{Msplit}\nM(k,\varepsilon, t)=k\widetilde{M}^{\rm stiff}(\varkappa,\tau)+\varepsilon\widetilde{M}^{\rm soft}(k, \tau),\quad \varkappa:=\varepsilon k,\quad\tau:=\varepsilon t,\n\end{equation}\nwhere\n\[\n\widetilde{M}^{\rm stiff}(\varkappa,\tau):=\left(\begin{array}{cc}\n-a_1\cot\dfrac{\varkappa l_1}{a_1} -a_3\cot\dfrac{\varkappa l_3}{a_3}\ &\\na_1\dfrac{{\rm e}^{-i(l_1+l_3)\tau}}{\sin\dfrac{\varkappa l_1}{a_1}}+a_3 \dfrac{{\rm e}^{il_2\tau}}{\sin\dfrac{\varkappa l_3}{a_3}}\\[3.3em]\na_1\dfrac{{\rm e}^{i(l_1+l_3)\tau}}{\sin\dfrac{\varkappa l_1}{a_1}}+a_3 \dfrac{{\rm e}^{-il_2\tau}}{\sin\dfrac{\varkappa l_3}{a_3}}\ &\\n-a_1\cot\dfrac{\varkappa l_1}{a_1}-a_3\cot\dfrac{\varkappa l_3}{a_3}\n\end{array}\right),\n\]\n\begin{equation}\n\widetilde{M}^{\rm soft}(k,\tau):=k\left(\begin{array}{cc}-\cot k l_2\ &\ \dfrac{{\rm e}^{il_2\tau}}{\sin k l_2}\\[1.6em]\n\dfrac{{\rm e}^{-il_2\tau}}{\sin k l_2}\ &\ -\cot k l_2\n\end{array}\right).\n\label{M_soft_tilde}\n\end{equation}\n\nNote that for all $\tau\in[-\pi, \pi),$ the function $\widetilde{M}^{\rm soft}(\cdot,\tau)$ is meromorphic and regular at zero.\n\n\nEssentially, the claim made is a straightforward consequence of a double integration by parts, followed by a simple
rearrangement of terms.\nIn the rest of this section we sketch the construction applicable in the general case, which in particular yields the above claim for the model graph considered. Under the definitions of Section \ref{triples_section}, the maximal operator\n$A_{\rm max}=A_{\rm min}^*$ is defined by the same differential expression (\ref{diff_expr}) on the domain\n\begin{multline}\label{domAmax}\n{\rm dom}(A_{\rm max})=\n\biggl\{\nv\in\bigoplus\limits_{e\in \widehat{\mathbb G}}W^{2,2}(e)\ \Big|\ w_V(e)v|_e(V)=w_V(e')v|_{e'}(V)\n\\\n\text{\ for all } e,e' \text{ adjacent to } V,\ \\n\forall\,V \in\widehat{\mathbb G}\n\biggr\}.\n\end{multline}\nIn what follows we use the triple $({\mathbb C}^m, \Gamma_0, \Gamma_1),$ where $m$ is the number of vertices in the graph $\widehat{\mathbb G}$, and\n\begin{equation}\n\Gamma_0v=\bigl\{v(V)\bigr\}_V,\n\qquad \Gamma_1v=\Bigl\{\sum_{e\ni V}\partial^{(t)}v(V)\Bigr\}_V,\qquad v\in{\rm dom}(A_{\rm max}),\n\label{boundary_operators}\n\end{equation}\nwhere $v(V)$ is defined as the common value of $w_V(e)v|_e(V)$ for all edges $e$ adjacent to $V$.\n\n\nBy definition of the $M$-matrix one has\n$\Gamma_1v=M\Gamma_0v,$ $v\in\ker (A_{\rm max} - z).$\nFunctions $v\in\ker (A_{\rm max} - z)$\nhave the form\n$$\nv(x)=\exp(-{\rm i}xt)\biggl\{A_e\exp\biggl(-\frac{{\rm i}kx}{a^\varepsilon}\biggr)+B_e\exp\biggl(\frac{{\rm i}kx}{a^\varepsilon}\biggr)\biggr\},\quad x\in e,\quad A_e, B_e\in{\mathbb C},\n$$\nwhere $k:=\sqrt{z}$, and the co-derivative is given by\n$$\n(a^\varepsilon)^2\bigl(v'(x)+{\rm i}t v(x)\bigr)\n= {\rm i}ka^\varepsilon\exp(-{\rm i}xt)\n\biggl\{-A_e\exp\biggl(-\frac{{\rm i}kx}{a^\varepsilon}\biggr)+B_e\exp\biggl(\frac{{\rm i}kx}{a^\varepsilon}\biggr)\biggr\}, \qquad x\in e.\n$$\nFor the vertex $V$\nand for every ``Dirichlet data'' vector $\Gamma_0v$ one of whose entries is unity and the other entries vanish,\nthe
``Neumann data'' vector $\Gamma_1v$ gives the column of the $M$-matrix corresponding to $V.$\nThe corresponding $\Gamma_1v$ has diagonal and off-diagonal entries of the form, respectively,\n$$\n-\sum_{e\ni V} ka^\varepsilon\cot\left(\dfrac{k \varepsilon l_e}{a^\varepsilon}\right),\qquad\n\qquad \sum_{e\ni V} ka^\varepsilon \widetilde w_V(e)\biggl(\sin \dfrac{k \varepsilon l_e}{a^\varepsilon}\biggr)^{-1},\n$$\nwhere $\{\widetilde w_V(e)\}_{e\ni V}$ is a unimodular list uniquely determined by the list $\{w_V(e)\}_{e\ni V}$.\nThe resulting $M$-matrix is constructed from these columns over all vertices $V.$\n\nIn particular, for the example of Figs. \ref{infinite_graph_figure}--\ref{compact_fig}, we have the following:\nthe boundary space $\mathcal H$ pertaining to the graph $\widehat{\mathbb G}$ is chosen as $\mathcal H=\mathbb C^2$. The unimodular list functions $w_{V_1}$ and $w_{V_2}$ are chosen as follows:\n\begin{equation*}\label{eq:1-weights}\n\begin{gathered}\n\{w_{V_1}(e^{(j)})\}_{j=1}^3=\{1,1,e^{i\tau(l^{(2)}+l^{(3)})}\},\quad \{w_{V_2}(e^{(j)})\}_{j=1}^3=\{e^{i\tau l^{(3)}},1,1\}\n\end{gathered}\n\end{equation*}\nand\n\begin{equation*}\n\begin{gathered}\n\{\widetilde w_{V_1}(e^{(j)})\}_{j=1}^3=\{e^{-i\tau(l^{(1)}+l^{(3)})},e^{i\tau l^{(2)}},e^{i\tau l^{(2)}}\},\\ \{\widetilde w_{V_2}(e^{(j)})\}_{j=1}^3=\{e^{i\tau(l^{(1)}+l^{(3)})},e^{-i\tau l^{(2)}},e^{-i\tau l^{(2)}}\},\n\end{gathered}\n\end{equation*}\nyielding the formula \eqref{M_soft_tilde}.\n\n\n\section{Asymptotic diagonalisation of the $M$-matrix and the eigenvector asymptotics}\label{sect:asymp_diag}\n\nThe present section is the centrepiece of our approach. The major difficulty to overcome is of course the fact that the operator $A^\varepsilon_t$ entangles in a non-trivial way the stiff and soft components of the medium.
On the level of the analysis of the operator itself this problem admits no obvious solution, unless one is prepared to introduce a two-scale asymptotic ansatz. On the other hand, the $M$-matrix calculated above will be shown to be additive with respect to the decomposition of the medium (hence the notation $M^{\\rm soft}$ and $M^{\\rm stiff}$). Thus, via the representation \\eqref{eigeneq}, it proves possible to use the asymptotic expansion of $M^{\\rm stiff}$, which is readily available, to recover the asymptotics of eigenmodes, restricted to the soft component. This way, the homogenisation task at hand can be viewed as a version of the perturbation analysis in the boundary space pertaining to the problem.\n\n\n\n\n\n\nIn the example considered (and in the general case in view of Appendix A)\nit follows from (\\ref{eigeneq}), (\\ref{Msplit})\nthat $u_\\varepsilon$ is an eigenfunction of the operator $A^\\varepsilon_t,$ see (\\ref{diff_expr})--(\\ref{Atau1}), if and only if\n\\begin{equation}\nM^{\\rm soft}\\Gamma_0u_\\varepsilon=-M^{\\rm stiff}\\Gamma_0u_\\varepsilon,\\qquad M^{\\rm soft}:=\\varepsilon\\widetilde{M}^{\\rm soft},\\quad\nM^{\\rm stiff}:=k\\widetilde{M}^{\\rm stiff}.\n\\label{bc1}\n\\end{equation}\nIn writing (\\ref{bc1}), we assume, without loss of generality, that the eigenvalue $z_\\varepsilon=k^2$ corresponding to the eigenfunction $u_\\varepsilon$ does not belong to the spectrum of the Dirichlet decoupling $A_\\infty^t,$ defined according to the general theory of Section \\ref{triples_section} for the operators we introduce in Section \\ref{Gelfand_section}. Indeed, in any compact subset of $\\mathbb C,$ for small enough $\\varepsilon,$ this spectrum coincides with the $\\varepsilon$-independent set of poles of the matrix\n$\\widetilde{M}^{\\rm soft},$ see (\\ref{M_soft_tilde}). For the same reason, we can safely assume that the eigenvalues $z_\\varepsilon$ do not belong to the spectrum of the Dirichlet operator on the soft inclusion. 
These assumptions ensure that the condition $z_0\in{\mathbb C}\setminus{\rm spec}(A_\infty)$ for the validity of (\ref{eigeneq}) is satisfied in both cases: for the $M$-matrix of the operator $A^\varepsilon_t,$ where $B=0,$ and for the $M$-matrix of the operator on the soft component represented by (\ref{bc1}), where the role of $B$ is played by the matrix $-M^{\rm stiff}.$\n\nWe proceed by observing that the matrices $M^{\text{soft}}$ and $M^{\text{stiff}}$ in (\ref{bc1}) can be treated as $M$-matrices of certain triples on their own. In particular, it will be instrumental in what follows to attribute this meaning to $M^{\text{soft}}$. To this end, consider the decomposition of the graph $\widehat{\mathbb G}$ into its ``soft'' $\mathbb G^{\text{soft}}$ and ``stiff'' $\mathbb G^{\text{stiff}}$ components (each of these is treated as a graph, so that $\widehat{\mathbb G}=\mathbb G^{\text{soft}}\cup \mathbb G^{\text{stiff}}$) and the operator $A_{\max}^{\text{soft}}$ defined by \eqref{diff_expr}, \eqref{domAmax}, with $\widehat{\mathbb G}$ replaced by $\mathbb G^{\text{soft}}$. The boundary space for $A_{\max}^{{\rm soft}}$ can be defined as $\mathcal H$, the same as the boundary space for the operator $A_{\max}$ (again by Appendix A in the general case). The boundary operators $\Gamma_j^{\text{soft}}$, $j=0,1,$ are defined as in \eqref{boundary_operators} for the graph $\mathbb G^{\text{soft}}$. Then, by inspection, the $M$-matrix for the operator $A_{\max}^{\text{soft}}$ is nothing but $M^{\text{soft}}$ (see \cite{CherKisSilva} for further details).\n\nFor each $v\in{\rm dom}(A_{\rm max}),$ define $\widetilde{v}$ to be the restriction of $v$ to the soft component $\mathbb G^{\text{soft}}$.
It is obvious that $\widetilde{v}\in\dom(A_{\max}^{{\rm soft}}).$\n\n\n\n\nWe notice that (\ref{bc1}) implies, in particular, that\n\begin{equation}\nM^{\rm soft}\Gamma_0^{\rm soft}\widetilde{u}_\varepsilon=B^\varepsilon\Gamma_0^{\rm soft}\widetilde{u}_\varepsilon,\qquad\qquad B^\varepsilon:=-M^{\rm stiff}.\n\label{suggested_bound_cond}\n\end{equation}\nFurthermore, since $M^{\rm soft}$ is the $M$-matrix for the pair $(\Gamma^{\rm soft}_0, \Gamma^{\rm soft}_1),$ one has\n\[\nM^{\rm soft}\Gamma_0^{\rm soft}\widetilde{u}_\varepsilon=\Gamma_1^{\rm soft}\widetilde{u}_\varepsilon,\n\]\nso the condition (\ref{suggested_bound_cond}) takes a form similar to (\ref{Eq_Func_Weyl}):\n\begin{equation}\n\Gamma_1^{\rm soft}\widetilde{u}_\varepsilon=B^\varepsilon\Gamma_0^{\rm soft}\widetilde{u}_\varepsilon.\n\label{eq_eigenvector3}\n\end{equation}\n\nThis condition involves the Dirichlet data of the solution to the spectral equation for $A_{\max}^{\text{soft}},$ which is an ODE on the graph $\mathbb G^{\text{soft}}$ with a constant coefficient. The Dirichlet data $\Gamma_0^{\rm soft}\widetilde{u}_\varepsilon$ determine the vector $\widetilde{u}_\varepsilon$ uniquely. This vector is interpreted as a solution to the spectral equation on the soft component of the graph $\widehat{\mathbb G}$ subject to\n$z$-dependent boundary conditions, encoded in \eqref{eq_eigenvector3}. On the other hand, this vector can also be used to reconstruct the vector $u_\varepsilon$: indeed, from $\Gamma_0 u_\varepsilon=\Gamma_0^{\text{soft}}\widetilde{u}_\varepsilon$ it follows that $u_\varepsilon$, which is by assumption an eigenvector of $A^\varepsilon_t$ at the point $z$, is nothing but a continuation of $\widetilde{u}_\varepsilon$ to the rest of the graph $\widehat{\mathbb G}$ based on its Dirichlet data at the boundary of the soft component. It follows, cf.
\eqref{eq_eigenvector3}, that the asymptotic analysis can be reduced to the soft component, with the information about the presence of the stiff component fed into the related asymptotic procedure by means of the stiff-soft interface.\n\nBefore we proceed further, let us take another look at the equation $M\Gamma_0u_\varepsilon=0,$ {\it cf.} (\ref{bc1}), which is equivalent to $u_\varepsilon$ being an eigenvector of $A^\varepsilon_t$ at the value of spectral parameter $z$. Using the fact that $M=M^{\text{soft}}+M^{\text{stiff}}$ as well as the explicit expressions for the matrices $M^{\text{soft}},$ $M^{\text{stiff}},$ {\it cf.} \eqref{Msplit}, it is easily seen that the leading-order term of $\Gamma_0 u_\varepsilon$, and thus of $u_\varepsilon$, does not depend on the soft component of the medium, since the elements of $M^{\text{soft}}$ are $\varepsilon$-small. On the other hand, the situation is drastically different from the viewpoint of the associated dispersion relation, which must be guaranteed for the \emph{solvability} of $M \Gamma_0 u_\varepsilon=0$. The dispersion relation follows from the condition $\det M=0$, and it is \emph{here, and here only}, that the soft component of the medium makes its presence felt in the problem. Due to the fact that the leading-order term of $M^{\text{stiff}}$ is rank one at $\tau=0$, it transpires that the leading-order term of the equation $\det M=0$ \emph{in the case of critical contrast only} blends together in a non-trivial way the stiff and soft components of the medium. Bearing this in mind, the phenomenon of critical-contrast homogenisation can be seen as a manifestation of a frequency-converting device: if one restricts the eigenfunctions to the stiff component, they are $\varepsilon$-close to those of the medium where the soft component has been replaced with voids, \emph{but} correspond to non-trivially shifted eigenfrequencies.
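The rank-one property invoked above is easy to confirm numerically from the explicit matrix of \eqref{Msplit} (a sketch; the values of $a_1$, $a_3$, $l_1$, $l_2$, $l_3$ are our own illustrative choices):

```python
import numpy as np

a1, a3 = 1.0, 2.0
l1, l2, l3 = 0.3, 0.5, 0.2   # illustrative edge lengths

def Mstiff_tilde(kappa, tau):   # the "stiff" matrix of (Msplit)
    d = -a1 / np.tan(kappa * l1 / a1) - a3 / np.tan(kappa * l3 / a3)
    o = a1 * np.exp(-1j * (l1 + l3) * tau) / np.sin(kappa * l1 / a1) \
      + a3 * np.exp( 1j * l2 * tau)        / np.sin(kappa * l3 / a3)
    return np.array([[d, o], [np.conj(o), d]])

def leading(tau, kappa=1e-6):   # kappa * Mtilde^stiff approximates the leading term
    return kappa * Mstiff_tilde(kappa, tau)

s0 = np.linalg.svd(leading(0.0), compute_uv=False)
assert s0[0] > 1.0 and s0[1] < 1e-6       # rank one at tau = 0
s1 = np.linalg.svd(leading(1.0), compute_uv=False)
assert s1[1] > 1e-2                        # full rank away from tau = 0
```

At $\tau=0$ the two columns of the leading-order term are collinear, and it is this degeneracy that underlies the frequency-converting behaviour just described.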
This is precisely what one would expect in the setting of time-dispersive media after the passage to the frequency domain, {\it cf.} (\ref{gen_Maxwell}), (\ref{frequency_dep}). We will come back to this discussion in Section 8.\n\nLet us return to the analysis of \eqref{eq_eigenvector3}, which, as explained above, contains all the information on the asymptotic behaviour of $A^\varepsilon_t$. We notice that this equation corresponds to a homogeneous ODE; the non-trivial dependence on $\varepsilon$ is concealed in the right-hand side, which describes $\varepsilon$- \emph{and} frequency-dependent boundary conditions. The problem of asymptotic analysis of eigenfunctions of $A^\varepsilon_t$ is thus effectively reduced to the analysis of the asymptotic behaviour of these boundary conditions. This analysis, however, is greatly simplified by the fact that $B^\varepsilon$ is equal to $-M^{\text{stiff}}$, where $M^{\text{stiff}}$ is shown to be the $M$-matrix of $A_{\max}^{\text{stiff}}$ (see Appendix A) by a similar argument to that applied above to $M^{{\rm soft}}$.
Hence, the asymptotics sought for $M^{{\rm stiff}}$ is simply the asymptotics of the Dirichlet-to-Neumann map of a uniformly elliptic problem at zero frequency, which allows one to use well-known elliptic techniques.\n\nFirstly, we notice that the results of Section 5 combined with the asymptotic formulae\n\begin{equation*}\na_e \cot \frac{\varkappa l_e}{a_e} = \frac{a_e^2}{\varkappa l_e}-\frac 13\varkappa l_e+ O(\varkappa^3),\quad\quad\quad a_e\biggl(\sin\dfrac{\varkappa l_e}{a_e}\biggr)^{-1} = \frac{a_e^2}{\varkappa l_e}+\frac{1}{6}\varkappa l_e+ O(\varkappa^3),\n\end{equation*}\nyield the following statement.\n\n\begin{lemma}\n\label{M_expansion}\nSuppose that $K\subset{\mathbb C}$ is compact.\nOne has\n\begin{equation*}\n\widetilde{M}^{\rm stiff}(\varkappa, \tau)\n=\varkappa^{-1}M_0(\tau)+\varkappa M_1(\tau)+O(\varkappa^3),\quad \tau\in[-\pi, \pi),\ \varkappa=\varepsilon k,\ \varepsilon\in(0,1),\ k\in K,\n\end{equation*}\nwhere $M_0$ and $M_1$ are analytic matrix functions of $\tau$.\n\end{lemma}\n\n\nIt follows from Lemma \ref{M_expansion} that, for all $\tau\in[-\pi,\pi),$\n\begin{equation}\nB^\varepsilon(z)=\varepsilon^{-1}B_0\n+\varepsilon zB_1+O(\varepsilon^3z^2),\qquad\varepsilon\in(0,1),\ \sqrt{z}\in K,\n\label{asympt}\n\end{equation}\nwhere $B_0,$ $B_1$ are Hermitian matrices that depend on $\tau$ only.\nThe following two lemmata carry over to the general case with minor modifications, since they only pertain to the stiff component of the medium and therefore rely upon the general uniformly elliptic properties of the latter.\n\begin{lemma}\n\label{mu_lemma}\nThere exist $\gamma\geq0$ (where $\gamma=0$ if and only if the graph $\mathbb{G}^{\rm stiff}$ is a tree\footnote{Recall that a tree is a connected forest \cite{Cvetkovich}.}) and an eigenvalue branch $\mu^{(\tau)}$ for the matrix $B_0,$ such that\n$\dim\Ker\bigl(B_0-\mu^{(\tau)}\bigr)=1,$
$\\tau\\in[-\\pi,\\pi),$ and\n\\begin{equation}\n\\mu^{(\\tau)}=\\gamma\\tau^2 + O(\\tau^4).\n\\label{mu_asymptotics}\n\\end{equation}\n\\end{lemma}\n\n\nWe denote by $\\psi^{(\\tau)}$ the normalised eigenvector for the eigenvalue $\\mu^{(\\tau)},$ so that\n\n $\\psi^{(0)}=(1\/\\sqrt{2})(1,1)^\\top,$ {\\it i.e.} the trace of the first eigenvector of the Neumann problem on the stiff component at zero quiasimomentum, which is clearly constant.\n Let $\\mathcal P:=\\langle \\cdot, \\psi^{(\\tau)}\\rangle \\psi^{(\\tau)}$ and $\\mathcal P_{\\bot}$ be the orthogonal projections in the boundary space onto $\\psi^{(\\tau)}$ and its orthogonal complement, respectively.\n\n\\begin{lemma}\n\\label{bound_below_lemma}\nThere exists $C_\\perp>0$ such that\n\\begin{equation}\n\\mathcal P_{\\bot} B_0 \\mathcal P_{\\bot}\\ge C_\\perp\\mathcal P_{\\bot},\n\\label{bound_below}\n\\end{equation}\nin the sense that the operator $\\mathcal P_{\\bot}(B_0-C_\\perp)\\mathcal P_{\\bot}$ is non-negative.\n\\end{lemma}\n\n\n\nWe use Lemma \\ref{bound_below_lemma} to solve \\eqref{eq_eigenvector3} asymptotically. The overall idea is to diagonalise the leading order term $\\varepsilon^{-1} B_0$ of the asymptotic expansion of $B^\\varepsilon$ in \\eqref{eq_eigenvector3}. From Lemma \\ref{mu_lemma} we infer that $B_0$ has precisely one eigenvalue quadratic in $\\tau$ (which thus gets close to zero), while Lemma \\ref{bound_below_lemma} provides us with a bound below on the remaining eigenvalue. The fact that the eigenvalue $\\mu^{(\\tau)}$ degenerates requires that the next term in the asymptotics of $B^\\varepsilon$ be taken into account in the related eigenspace. This additional term is easily seen to be $z-$dependent (in fact, linear in $z$).\n\nWe start with an auxiliary rescaling of the soft component. Namely, we introduce the unitary operator\n$\\Phi_\\varepsilon$ mapping $v\\mapsto \\widehat{v}$ according to the formula $\\widehat{v}(\\cdot)=\\sqrt{\\varepsilon}{v}(\\varepsilon\\cdot)$. 
Under this mapping, the length of the soft component loses its dependence on $\\varepsilon$. The operator $\\widehat{A}_{\\rm max}^{\\text{soft}}$ is defined as the unitary image of $A_{\\rm max}^{\\text{soft}}$ under the mapping $\\Phi_\\varepsilon$, and $\\widehat{\\Gamma}_0^{\\rm soft},$ $\\widehat{\\Gamma}_1^{\\rm soft}$ are the boundary operators for the rescaled soft component:\n\\[\n\\widehat{\\Gamma}_0^{\\rm soft}\\widehat{v}:=\\bigl\\{\\widehat{v}(V)\\bigr\\}_V,\n\\qquad \\widehat{\\Gamma}_1^{\\rm soft}\\widehat{v}:=\\biggl\\{\\sum_{e\\ni V}\\widehat{\\partial}^{(\\tau)}\\widehat{v}(V)\\biggr\\}_V,\\qquad \\widehat{v}\\in {\\rm dom}\\bigl(\\widehat{A}_{\\rm max}^{\\text{soft}}\\bigr),\n\\]\nwhere we set $\\widehat{v}(V)$ as the common value of $w_V(e)\\widehat{v}|_e(V)$ for all $e$ adjacent to $V,$ and\n$\\widehat{\\partial}^{(\\tau)} \\widehat{v}(V)$ is the expression $\\sigma_{e}w_V(e)(\\widehat{v}'+{\\rm i}\\tau \\widehat{v})$ on the edge $e,$ calculated at $V.$ Note that $\\widehat{\\Gamma}_1^{\\rm soft}$ does not depend on $\\varepsilon$.\n\nUnder the rescaling $\\Phi_\\varepsilon$ the equation \\eqref{eq_eigenvector3} becomes\n\\begin{equation}\n\\widehat{\\Gamma}_1^{\\rm soft}\\widehat{u}_\\varepsilon=\\varepsilon^{-1} B^\\varepsilon\\widehat{\\Gamma}_0^{\\rm soft}\\widehat{u}_\\varepsilon,\n\\label{eq_eigenvector_resc}\n\\end{equation}\nwhere in accordance with the above convention $\\widehat{u}_\\varepsilon=\\Phi_\\varepsilon \\widetilde{u}_\\varepsilon$.\n\n\nWe start our diagonalisation procedure by considering the non-degenerate ei\\-gen\\-spa\\-ce of $B^\\varepsilon$.\nApplying $\\mathcal P_{\\bot}$ to both sides of \\eqref{eq_eigenvector_resc}, replacing $B^\\varepsilon$ by its asymptotics (\\ref{asympt}) and using (\\ref{bound_below}) yields\n\\begin{equation}\n\\mathcal P_{\\bot} \\widehat{\\Gamma}_1^{\\rm soft} \\widehat{u}_\\varepsilon =\\varepsilon^{-2}\\mathcal P_{\\bot} B_0 \\mathcal P_{\\bot} \\widehat{\\Gamma}_0^{\\rm soft} 
\\widehat{u}_\\varepsilon + O(1)\\ge \\varepsilon^{-2}C_\\perp\\mathcal P_{\\bot} \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon + O(1),\n\\label{asym_rel}\n\\end{equation}\nwhere we assume that $u_\\varepsilon$ is $L^2$-normalised.\nMultiplying by $\\varepsilon^2$ both sides of (\\ref{asym_rel}) and applying the Sobolev embedding theorem to the left-hand side of (\\ref{asym_rel}),\nwe infer\n\\begin{equation}\\label{eq_part-solution}\n\\mathcal P_{\\bot} \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon = O(\\varepsilon^2).\n\\end{equation}\nPlugging this partial solution back into \\eqref{eq_eigenvector_resc}, to which $\\mathcal P$ is applied on both sides, we obtain\n\\begin{align*}\n\\mathcal P \\widehat{\\Gamma}_1^{\\rm soft} \\widehat{u}_\\varepsilon\n&= \\varepsilon^{-2}\\mathcal P B_0 \\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon + z\\mathcal P B_1\\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon + O(\\varepsilon^2)\\\\[0.4em]\n&=\\varepsilon^{-2}\\mu^{(\\tau)}\\mathcal P\\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon + z\\mathcal P B_1\\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon + O(\\varepsilon^2).\n\\end{align*}\n\nWe have proved that up to an error term admitting a uniform estimate $O(\\varepsilon^2)$ one has the following asymptotically equivalent problem for the eigenvector $\\widehat{v}_\\varepsilon$:\n\\begin{equation}\\label{eq:eigenvector_asymp}\n \\mathcal P_{\\bot} \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon =0,\\quad\n \\mathcal P \\widehat{\\Gamma}_1^{\\rm soft} \\widehat{u}_\\varepsilon = \\varepsilon^{-2}\\mu^{(\\tau)}\\mathcal P\\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon + z\\mathcal P B_1\\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon.\n\\end{equation}\n\nWe use Lemma \\ref{mu_lemma} and expand $\\mathcal P B_1 \\mathcal P$ in powers of $\\tau=\\varepsilon t$ as follows\\footnote{In 
the example considered in the present paper, as opposed to the general case, one can prove that $\\mathcal P B_1 \\mathcal P= \\mathcal P B_1^{(0)}\\mathcal P+O(\\tau^2)$, see the calculation in \\cite[Appendix B]{ChKisYe} for details. This yields the error bound $O(\\varepsilon^2)$ in the statement of Theorem \\ref{eff_thm} below.}: $\\mathcal P B_1 \\mathcal P= \\mathcal P B_1^{(0)}\\mathcal P+O(\\tau)$.\nThe second equation in \\eqref{eq:eigenvector_asymp} admits the form\n\\begin{equation}\n\\label{eq_eigenvector4}\n\\mathcal P \\widehat{\\Gamma}_1^{\\rm soft} \\widehat{u}_\\varepsilon= \\gamma t^2 \\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon+z \\mathcal P B_1^{(0)}\\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon +\n(O(\\tau)+O(\\tau^4\/\\varepsilon^2))\\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon.\n\\end{equation}\nExpressing $\\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon$ from the latter equation, it is easily seen based on embedding theorems that \\eqref{eq_eigenvector4} is asymptotically equivalent, up to an error uniformly estimated as $O(\\varepsilon)$, to the following equation:\n\\begin{equation}\n\\label{eq_eigenvector5}\n\\mathcal P \\widehat{\\Gamma}_1^{\\rm soft} \\widehat{u}_\\varepsilon= \\gamma t^2 \\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon+z \\mathcal P B_1^{(0)}\\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}_\\varepsilon.\n\\end{equation}\n\n\n We formulate the above result as the following theorem.\n\n\\begin{theorem}\n\\label{eff_thm}\nLet $\\widehat{u}$ solve the following equation on the re-scaled soft component:\n\\begin{equation*}\\label{eq:eq}\n\\begin{aligned}\n\\widehat{A}_{\\rm max}^{\\rm soft}\\widehat{u}(x)&=z\\widehat{u}(x),\\\\[0.3em]\n\\mathcal P_{\\bot} \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u} &= 0,\\\\[0.3em]\n\\mathcal P \\widehat{\\Gamma}_1^{\\rm soft} \\widehat{u}&= \\gamma t^2 
\\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}+z \\mathcal P B_1^{(0)}\\mathcal P \\widehat{\\Gamma}_0^{\\rm soft} \\widehat{u}.\n\\end{aligned}\n\\end{equation*}\n\nThen the eigenvalues $z_\\varepsilon$ and their corresponding eigenfunctions $u_\\varepsilon$ of the operators $A^\\varepsilon_t$\nare $O(\\varepsilon)$-close uniformly in $t\\in[-\\pi\/\\varepsilon, \\pi\/\\varepsilon)$, in the sense of $\\mathbb C$ and in the sense of the $L^2$ norm, respectively, to the values $z$ as above and functions ${u}_{\\rm eff}$ defined as follows. On the soft component ${\\mathbb G}^{\\rm soft}$ we set $u_{\\rm eff}(\\cdot):=(1\/\\sqrt{\\varepsilon})\\widehat{u}({\\varepsilon}^{-1}\\cdot)$. On the stiff component ${\\mathbb G}^{\\rm stiff}$ the function $u_{\\rm eff}$ is obtained as the extension by $(1\/\\sqrt{\\varepsilon})v,$\nwhere $v$ is the solution of the operator equation\n\\[\nA_{\\rm max}^{\\rm stiff} v=0,\n\\]\ndetermined by the Dirichlet data of $\\widehat{u}(\\varepsilon^{-1}\\cdot),$ where $A_{\\rm max}^{\\rm stiff}$ is defined by (\\ref{maternoe_slovo}), Appendix A.\n\\end{theorem}\n\n\\begin{remark}\nIt is straightforward to see that the eigenvalue $\\mu^{(\\tau)}$ in Lemma \\ref{mu_lemma} is the Steklov eigenvalue of $A_{\\max}^{\\rm stiff}$ that is least in absolute value, {\\it i.e.} the least $\\kappa$ such that the problem\n$$\n\\begin{aligned}\nA_{\\max}^{{\\rm stiff}}\\breve v&=0, \\quad \\breve v\\in W^{2,2}(\\mathbb G^{\\rm stiff}),\\\\[0.3em]\n\\Gamma_1^{\\rm stiff}\\breve v&=\\kappa \\Gamma_0^{\\rm stiff}\\breve v\n\\end{aligned}\n$$\nadmits a non-trivial solution $\\breve v.$ Note that for this solution $\\breve v$ one has $\\Gamma_0^{\\rm stiff}\\breve v=\\psi^{(\\tau)}.$ It follows that for the function $v$ of Theorem \\ref{eff_thm} one has $v=c\\breve v,$ where $c$ is a constant determined by $\\widehat{u}.$\n\\end{remark}\n\n\n\n\n\\section{Eigenvalue and eigenvector asymptotics in the example of Section 
\\ref{our_graph}}\n\nHere we provide the result of an explicit calculation applying the general procedure described in the previous section to the specific example of Section \\ref{our_graph} (see \\cite{ChKisYe} for details).\nWe start by expanding the matrix $B^\\varepsilon$\nas a series in powers of $\\varepsilon$:\n$$\n\\widehat{B}:=\\varepsilon^{-1}B^\\varepsilon=\n\\widehat{B}_0+z\\widehat{B}_1+O(\\varepsilon^2z^2),\\\n\\widehat{B}_0:= { \\frac{1}{\\varepsilon^2 }\\begin{pmatrix} D&\\overline{\\xi}\\\\[0.7em]\n \\xi& D\n \\end{pmatrix}},\\ \\widehat{B}_1:=\n {\\begin{pmatrix} E & \\overline{\\eta}\\\\[0.7em]\n \\eta& E\n \\end{pmatrix}},\n$$\nwhere\n\\begin{align}\n\\xi:&=-\\frac{a_1^2}{l_1}\\exp\\bigl({\\rm i}\\tau(l_1+l_3)\\bigr)-\\frac{a_3^2}{l_3}\\exp(-{\\rm i}\\tau l_2),\\quad\\quad\\quad D:=\\frac{a_1^2}{l_1}+\\frac{a_3^2}{l_3},\n\\label{xi_def}\\\\[0.75em]\n\\eta:&=\\dfrac{1}{6}\\Bigl(l_1\\exp\\bigl({\\rm i}\\tau(l_1+l_3)\\bigr)+l_3\\exp(-{\\rm i}\\tau l_2)\\Bigr),\\quad\\quad\\quad E:=\\dfrac{1}{3}(l_1+l_3).\\nonumber\n\\end{align}\n\n The matrix $\\varepsilon^2 \\widehat{B}_0$ is Hermitian and has two distinct eigenvalues, $\\mu=D-|\\xi|$ and $\\mu_\\bot=D+|\\xi|$. The eigenvalue branch $\\mu$ is singled out by the condition $\\mu\\vert_{\\tau=0}=0$. 
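As a quick numerical sanity check (not part of the argument; the parameter values below are arbitrary), one can verify directly that $(1,\mp\xi/|\xi|)^\top$ are eigenvectors of the Hermitian matrix $\varepsilon^2\widehat{B}_0$ for the eigenvalues $D\mp|\xi|$, and that the branch $\mu=D-|\xi|$ indeed vanishes at $\tau=0$:

```python
import cmath

# Illustrative (arbitrary) parameters of the graph
a1, a3 = 1.0, 0.7
l1, l2, l3 = 0.3, 0.4, 0.3

def check(tau):
    # xi and D as in (xi_def); the matrix below is eps^2 * B0_hat
    xi = -(a1**2 / l1) * cmath.exp(1j * tau * (l1 + l3)) \
         - (a3**2 / l3) * cmath.exp(-1j * tau * l2)
    D = a1**2 / l1 + a3**2 / l3
    B0 = [[D, xi.conjugate()], [xi, D]]
    # eigenpairs (D - |xi|, (1, -xi/|xi|)) and (D + |xi|, (1, xi/|xi|))
    for sign, vec in ((-1, (1, -xi / abs(xi))), (+1, (1, xi / abs(xi)))):
        mu = D + sign * abs(xi)
        for row, comp in zip(B0, vec):
            image = row[0] * vec[0] + row[1] * vec[1]
            assert abs(image - mu * comp) < 1e-12
    return D - abs(xi)

assert abs(check(0.0)) < 1e-12      # the branch mu vanishes at tau = 0
check(0.5); check(-1.3)             # generic quasimomenta
```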
In order to diagonalise the matrix $\\widehat{B}_0$, consider the normalised eigenvectors $\\psi^{(\\tau)}=(1\/\\sqrt{2})(1,-\\xi\/|\\xi|)^\\top$ and $\\psi^{(\\tau)}_\\bot=(1\/\\sqrt{2})(1,\\xi\/|\\xi|)^\\top$ corresponding to the eigenvalues $\\mu$ and $\\mu_\\bot$, respectively, and the matrix $X:=\\bigl(\\psi^{(\\tau)}, \\psi^{(\\tau)}_\\bot\\bigr).$\nThe projections ${\\mathcal P},$ ${\\mathcal P}_\\bot$ introduced in the previous section are as follows:\n\\[\n{\\mathcal P}=\\frac{1}{2}\\left(\\begin{array}{cc}1&-\\dfrac{\\overline{\\xi}}{\\vert\\xi\\vert}\\\\[1.3em] -\\dfrac{\\xi}{\\vert\\xi\\vert} &1\\end{array}\\right),\\quad\\quad {\\mathcal P}_\\bot=\\frac{1}{2}\\left(\\begin{array}{cc}1 &\\dfrac{\\overline{\\xi}}{\\vert\\xi\\vert}\\\\[1.3em] \\dfrac{\\xi}{\\vert\\xi\\vert} & 1\\end{array}\\right).\n\\]\n\nIt follows by a straightforward calculation that the effective spectral problem\nis given by\n\\begin{equation}\n-\\biggl(\\frac d{dx}+{\\rm i}\\tau\\biggr)^2u=zu\n\\label{res_eq}\n\\end{equation}\nsubject to the boundary conditions\n\\begin{multline}\nu(0)=-\\frac {\\overline{\\xi}}{|\\xi|}u(l_2),\\\\\n(u'+{\\rm i}\\tau u)(0)+\\frac {\\overline{\\xi}}{|\\xi|}(u'+{\\rm i}\\tau u)(l_2)=\\Biggl(\\biggl(\\dfrac{l_1}{a_1^2}+\\dfrac{l_3}{a_3^2}\\biggr)^{-1}\\biggl(\\frac{\\tau}{\\varepsilon}\\biggr)^2-(l_1+l_3)z\\Biggr)u(0).\n\\label{res_bc}\n\\end{multline}\n\nBy invoking Theorem \\ref{eff_thm}, the problem (\\ref{res_eq})--(\\ref{res_bc}) on the scaled soft component provides the asymptotics, as $\\varepsilon\\to0,$ of the eigenvalue problems for the family $A^\\varepsilon_t,$ $t=\\tau\/\\varepsilon\\in[-\\pi\/\\varepsilon,\\pi\/\\varepsilon).$ Its spectrum, {\\it i.e.} the set of values $z$ for which (\\ref{res_eq})--(\\ref{res_bc}) has a non-trivial solution, as well as the corresponding eigenfunctions approximate, up to terms of order $O(\\varepsilon^2),$ the corresponding spectral information for the family $A^\\varepsilon_t,$ and consequently, $A^\\varepsilon.$ Notice that the stiff 
component of the original graph (where the eigenfunctions converge to a constant, in a suitable scaled sense), appears in this limit problem through the boundary datum $u(0).$ In the next section we show that an appropriate extension of the function space for (\\ref{res_eq})--(\\ref{res_bc}) by the (one-dimensional) complementary space of constants leads to an eigenvalue problem for a self-adjoint operator, describing a conservative system. Solving this latter eigenvalue problem for the element in the complementary space yields the frequency-dispersive formulation announced in the introduction.\n\n\n\n\n\n\n\n\\section{Frequency dispersion in a ``complementary\" medium}\n\n\\subsection{Self-adjoint out-of-space extension}\n\n\\label{Ahom}\n\nFollowing the strategy outlined at the end of the last section, we treat $u(0)$ in (\\ref{res_bc}) as an additional field variable, and reformulate (\\ref{res_eq})--(\\ref{res_bc}) as an eigenvalue problem in a space of pairs $(u, u(0)),$ see (\\ref{spectral_eq}).\n\nMore precisely, for all values $\\tau\\in[-\\pi, \\pi),$ consider an operator $A^{\\rm hom}_\\tau$ in the space $L^2(0, l_2)\\oplus \\mathbb{C}$\ndefined as follows. The domain $\\text{\\rm dom}\\bigl(A^{\\rm hom}_\\tau\\bigr)$ consists of all pairs $(u,\\beta)$ such that $u\\in W^{2,2}(0, l_2)$ and the quasiperiodicity condition\n\\begin{equation}\nu(0)=\\overline{w_\\tau}u(l_2)=:\\frac{\\beta}{\\sqrt{l_1+l_3}},\\qquad w_\\tau\\in{\\mathbb C},\n\\label{quasi_cond}\n\\end{equation}\nis satisfied. 
On $\\text{\\rm dom}\\bigl(A^{\\rm hom}_\\tau\\bigr)$\nthe action of the operator is set by\n\\begin{equation}\nA^{\\rm hom}_\\tau\\left(\\begin{matrix}u\\\\[0.3em] \\beta\\end{matrix}\\right)=\n\\left(\\begin{array}{c}\\biggl(\\dfrac{1}{\\rm i}\\dfrac{d}{dx}+\\tau\\biggr)^2u\\\\[1.1em]\n\\dfrac{1}{\\sqrt{l_1+l_3}}\\Gamma_\\tau\\left(\\begin{matrix}u\\\\[0.3em] \\beta\\end{matrix}\\right)\n\\end{array}\\right),\n\\label{lim_form}\n\\end{equation}\nwhere $\\Gamma_\\tau: W^{2,2}(0, l_2)\\oplus{\\mathbb C}\\to{\\mathbb C}$ is bounded. We set\n\\begin{equation}\n\\Gamma_\\tau\\left(\\begin{matrix}u\\\\[0.3em] \\beta\\end{matrix}\\right)=-(u'+{\\rm i}\\tau u)(0)+\\overline{w_\\tau}\n(u'+{\\rm i}\\tau u)(l_2)+\n\\frac{(\\sigma t)^2}{\\sqrt{l_1+l_3}}\\beta, \\quad\\sigma^2:=\\biggl(\\dfrac{l_1}{a_1^2}+\\dfrac{l_3}{a_3^2}\\biggr)^{-1},\n\\label{Gamma_part}\n\\end{equation}\nwhere $w_\\tau=-{\\xi}\/{|\\xi|}$ (see \\eqref{xi_def} for the definition of $\\xi$), in which case $A^{\\rm hom}_\\tau$ is a self-adjoint operator on the domain described by (\\ref{quasi_cond}). Moreover, (\\ref{res_eq})--(\\ref{res_bc}) is the problem for the first component of the spectral problem for the operator $A^{\\rm hom}_\\tau:$\n\\begin{equation}\nA^{\\rm hom}_\\tau\\left(\\begin{matrix} u\\\\[0.3em] \\beta\\end{matrix}\\right)=z\\left(\\begin{matrix} u\\\\[0.3em] \\beta\\end{matrix}\\right).\n\\label{spectral_eq}\n\\end{equation}\n\nWe now re-write this spectral problem in terms of the complementary component $\\beta\\in{\\mathbb C}.$ In order to do this, we represent the function\n$u$ in (\\ref{spectral_eq}) as a sum of two functions: one of them is a solution to the related inhomogeneous Dirichlet problem, while the other takes care of the boundary condition. 
More precisely, consider the solution $v$ to the problem\n\\begin{equation*}\n-\\biggl(\\frac{d}{dx}+{\\rm i}\\tau\\biggr)^2v=0,\\qquad\\qquad\nv(0)=1,\\ \\ \\ \\ \\ v(l_2)=w_\\tau,\n\\end{equation*}\n{\\it i.e.}\n\\begin{equation}\nv(x)=\\Bigl\\{1+l_2^{-1}\\Bigl(w_\\tau\\exp({\\rm i}\\tau l_2)-1\\Bigr)x\\Bigr\\}\\exp(-{\\rm i}\\tau x),\\quad\\quad x\\in(0, l_2).\n\\label{function_v}\n\\end{equation}\nThe function\n\\[\n\\widetilde{u}:=u-\\frac{\\beta}{\\sqrt{l_1+l_3}}v\n\\]\nsatisfies\n\\begin{equation*}\n-\\biggl(\\frac{d}{dx}+{\\rm i}\\tau\\biggr)^2\\widetilde{u}-z\\widetilde{u}=\\frac{z\\beta}{\\sqrt{l_1+l_3}}v,\\quad\\qquad\n \\widetilde{u}(0)=\\widetilde{u}(l_2)=0.\n\\end{equation*}\nIn other words, one has\n\\begin{equation*}\n\\widetilde{u}=\\frac{z\\beta}{\\sqrt{l_1+l_3}}(A_{\\rm D}-zI)^{-1}v,\n\\end{equation*}\nwhere $A_{\\rm D}$ is the Dirichlet operator in $L^2(0, l_2)$ associated with the differential expression\n\\[\n-\\biggl(\\frac{d}{dx}+{\\rm i}\\tau\\biggr)^2.\n\\]\nWe can now write the ``boundary'' part of the spectral equation (\\ref{spectral_eq}) as\n\\begin{equation}\nK(\\tau, z)\\beta=z\\beta,\\quad K(\\tau, z):=\\dfrac{1}{l_1+l_3}\\left\\{z\\Gamma_\\tau\\left(\\begin{matrix}\n(A_{\\rm D}-zI)^{-1}v\\\\[0.3em] 0\\end{matrix}\\right)+\n\\Gamma_\\tau\\left(\\begin{matrix}v\\\\[0.3em] \\sqrt{l_1+l_3}\\end{matrix}\\right)\\right\\}.\n\\label{K_expr}\n\\end{equation}\nIn accordance with the rationale for introducing the component $\\beta,$ the effective dispersion relation for the operator $A_{\\tau\/\\varepsilon}^\\varepsilon,$\n$\\tau\\in[-\\pi,\\pi),$ is given by\n\\[\nK(\\tau, z)=z.\n\\]\nThe explicit expression for this relation that we have obtained, see (\\ref{K_expr}), is new, and it quantifies explicitly the r\\^ole of the soft component of the composite in the macroscopic frequency-dispersive properties. 
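As an aside, the formula (\ref{function_v}) is immediate to verify: $\exp({\rm i}\tau x)v(x)$ is affine in $x$, so $-(d/dx+{\rm i}\tau)^2v=0$, while $v(0)=1$ and $v(l_2)=w_\tau$ by construction. A minimal numerical sketch (the values of $\tau$, $l_2$ and of the unimodular constant $w_\tau$ below are arbitrary):

```python
import cmath

tau, l2 = 0.7, 0.4                      # illustrative values
w = cmath.exp(1j * 2.1)                 # any unimodular w_tau

def v(x):
    # the function (function_v)
    return (1 + (w * cmath.exp(1j * tau * l2) - 1) * x / l2) * cmath.exp(-1j * tau * x)

# boundary values
assert abs(v(0.0) - 1) < 1e-12
assert abs(v(l2) - w) < 1e-12

# g(x) = e^{i tau x} v(x) is affine, hence (d/dx + i tau)^2 v = e^{-i tau x} g'' = 0;
# a central second difference of g vanishes identically for an affine function
g = lambda x: cmath.exp(1j * tau * x) * v(x)
h = 0.1
for x in (0.05, 0.2, 0.3):
    assert abs(g(x + h) - 2 * g(x) + g(x - h)) < 1e-12
```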
In particular, the expression (\\ref{K_expr}) shows that the soft inclusions enter the macroscopic equations via a Dirichlet-to-Neumann map on the boundary of the inclusions.\n\n\\subsection{Explicit formula for the time-dispersion kernel}\n\nHere we compute explicitly the kernel $K(\\tau, z)$ entering the effective dispersion relation for $A_{\\tau\/\\varepsilon}^\\varepsilon.$ In view of possible generalisations, and recalling the pioneering formula in \\cite[Section 8]{Jikov} for effective dispersion in double-porosity media, we represent the action of the resolvent $(A_{\\rm D}-zI)^{-1}$ as a series in terms of the normalised eigenfunctions\n\\begin{equation}\n\\phi_j(x)=\\sqrt{\\frac{2}{l_2}}\\exp(-{\\rm i}\\tau x)\\sin\\frac{\\pi jx}{l_2},\\qquad x\\in(0,l_2),\\qquad\\qquad j=1,2,3,\\dots,\n\\label{function_phi}\n\\end{equation}\nof the operator $A_{\\rm D}.$ This yields\n\\begin{equation}\nK(\\tau, z)=\\dfrac{1}{l_1+l_3}\\left\\{z\\sum_{j=1}^\\infty\\dfrac{\\langle v, \\phi_j\\rangle}{\\mu_j-z}\\Gamma_\\tau\\left(\\begin{matrix}\n\\phi_j\\\\[0.3em] 0\\end{matrix}\\right)+\n\\Gamma_\\tau\\left(\\begin{matrix}v\\\\[0.3em] \\sqrt{l_1+l_3}\\end{matrix}\\right)\\right\\},\n\\label{K_general1}\n\\end{equation}\nwhere $\\mu_j=(\\pi j\/l_2)^2,$ $j=1,2,3,\\dots,$ are the eigenvalues corresponding to (\\ref{function_phi}). 
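The eigenpairs entering this series can likewise be checked directly: $\exp({\rm i}\tau x)\phi_j(x)$ is proportional to $\sin(\pi jx/l_2)$, so each $\phi_j$ satisfies the Dirichlet conditions and $-(d/dx+{\rm i}\tau)^2\phi_j=\mu_j\phi_j$. A numerical sketch with arbitrary illustrative values of $\tau$ and $l_2$:

```python
import cmath, math

tau, l2 = 0.7, 0.4                      # illustrative values

def phi(j, x):
    # normalised eigenfunction (function_phi) of the Dirichlet operator A_D
    return math.sqrt(2 / l2) * cmath.exp(-1j * tau * x) * math.sin(math.pi * j * x / l2)

for j in (1, 2, 3):
    mu_j = (math.pi * j / l2) ** 2
    # Dirichlet boundary conditions
    assert abs(phi(j, 0.0)) < 1e-12 and abs(phi(j, l2)) < 1e-9
    # -(d/dx + i tau)^2 phi_j = e^{-i tau x} (-g''), with g = e^{i tau x} phi_j;
    # check -g'' = mu_j g by a central second difference
    g = lambda x: cmath.exp(1j * tau * x) * phi(j, x)
    h = 1e-4
    for x in (0.1, 0.25):
        lhs = -(g(x + h) - 2 * g(x) + g(x - h)) / h**2
        assert abs(lhs - mu_j * g(x)) < 1e-3 * mu_j
```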
For the choice (\\ref{Gamma_part}) of $\\Gamma_\\tau$ we obtain (see (\\ref{function_v}), (\\ref{function_phi}))\n\\[\n\\Gamma_\\tau\\left(\\begin{matrix}v\\\\[0.3em] \\sqrt{l_1+l_3}\\end{matrix}\\right)\n=\\frac{2}{l_2}\\bigl(1-\\Re\\theta(\\tau)\\bigr)+\\biggl(\\frac{\\sigma\\tau}{\\varepsilon}\\biggr)^2,\\qquad \\theta(\\tau):=\\frac{\\dfrac{a_1^2}{l_1}{\\rm e}^{-{\\rm i}\\tau}+\\dfrac{a_3^2}{l_3}}{\\biggl|\\dfrac{a_1^2}{l_1}{\\rm e}^{-{\\rm i}\\tau}+\\dfrac{a_3^2}{l_3}\\biggr|},\n\\]\n\\[\n\\Gamma_\\tau\\left(\\begin{matrix}\n\\phi_j\\\\[0.3em] 0\\end{matrix}\\right)=-\\sqrt{\\frac{2}{l_2}}\\frac{\\pi j}{l_2}\\bigl((-1)^{j+1}\\overline{\\theta(\\tau)}+1\\bigr),\\\n\\langle v, \\phi_j\\rangle=\\frac{\\sqrt{2l_2}}{\\pi j}\\bigl((-1)^{j+1}\\theta(\\tau)+1\\bigr),\\ j=1,2,\\dots\n\\]\nSubstituting the above expressions into (\\ref{K_general1}) and making use of the formulae, see {\\it e.g.} \\cite[p.\\,48]{Gradshteyn_Ryzhik},\n\\begin{equation*}\n\\sum_{j=1}^\\infty\\frac{1}{(\\pi j)^2-x^2}=\\frac{1}{2}\\biggl(\\frac{1}{x^2}-\\frac{\\cos x}{x\\sin x}\\biggr),\\quad\n\\sum_{j=1}^\\infty\\frac{(-1)^j}{(\\pi j)^2-x^2}=\\frac{1}{2}\\biggl(\\frac{1}{x^2}-\\frac{1}{x\\sin x}\\biggr),\\quad x\\notin\\pi{\\mathbb Z},\n\\end{equation*}\nwe obtain\n\\begin{equation}\nK(\\tau, z)=\\frac{1}{l_1+l_3}\\biggl\\{\n\\frac{2\\sqrt{z}\\cos(l_2\\sqrt{z})}{\\sin(l_2\\sqrt{z})}\n-\\frac{2\\sqrt{z}}{\\sin(l_2\\sqrt{z})}\\Re\\theta(\\tau)+\\biggl(\\frac{\\sigma\\tau}{\\varepsilon}\\biggr)^2\\biggr\\}.\n\\label{K_example}\n\\end{equation}\n\n\n\\subsection{Asymptotically equivalent model on the real line}\n\nIn this section we are going to treat (\\ref{K_expr}), (\\ref{K_example}) as a nonlinear eigenvalue problem in the space of second components of pairs\n$(u, \\beta)\\in L^2(0, l_2)\\oplus{\\mathbb C}.$ As is evident from above, this problem is closely related to (\\ref{res_eq})--(\\ref{res_bc}), via the construction presented in Section \\ref{Ahom}.\nWe show next that the 
aforementioned macroscopic field is governed by a certain frequency-dispersive formulation. In order to obtain the latter,\nwe will use a suitable inverse Gelfand transform.\n\nOur strategy can be seen as motivated by the following elementary observation, closely linked with the Birman--Suslina study of homogenisation in the moderate contrast case, albeit understood in terms of spectral equations. Starting with the spectral problem\n\\begin{equation}\\label{eq:b-s-problem}\n-\\frac {d^2u}{dx^2}=zu \\text{\\ \\ on\\ \\ } L_2(\\mathbb R),\n\\end{equation}\none applies the Gelfand transform\\footnote{Recall, {\\it cf.} Section \\ref{Gelfand_section}, that the Gelfand transform is a map\n$L^2({\\mathbb R})\\to L^2\\bigl((0, \\varepsilon)\\times(-\\pi\/\\varepsilon,\\pi\/\\varepsilon)\\bigr)$ given by\n\\begin{equation*}\n{\\mathcal G}u(x, t)=\\sqrt{\\frac{\\varepsilon}{2\\pi}}\\sum_{n\\in{\\mathbb Z}}u(x+\\varepsilon n)\\exp\\bigl(-{\\rm i}t(x+\\varepsilon n)\\bigr),\\qquad t\\in\\bigl[-\\pi\/\\varepsilon, \\pi\/\\varepsilon\\bigr),\\qquad x\\in(0,\\varepsilon).\n\\end{equation*}} (well-defined on generalised eigenvectors due to the rigging procedure, see, {\\it e.g.,} \\cite{Berezansky,BS}) to obtain for $\\widetilde u:=\\mathcal G u$\n$$\n-\\biggl(\\frac d {dx}+i t\\biggr)^2 \\widetilde u(x,t) = z \\widetilde u(x,t), \\quad x\\in(0,\\varepsilon),\\quad t\\in[-\\pi\/\\varepsilon,\\pi\/\\varepsilon).\n$$\nWe compute the inner products of both sides in $L_2(0,\\varepsilon)$ with the normalised constant function $(1\/\\sqrt{\\varepsilon})\\mathbbm 1$, which yields the dispersion relation of the original problem via the equation\n$$\nt^2 \\widehat u (t)=z \\widehat u(t),\n$$\nwhere $\\widehat u$ is the Fourier transform of the function $u\\in L_2(\\mathbb R)$. 
The latter equation is then solved in the distributional sense,\n\\begin{equation}\\label{eq:beta}\n\\beta (t)=\\sum_{m} c_m \\delta(t-t_m),\n\\end{equation}\nwhere $\\beta (t):=\\widehat u(t)$ and the sum in \\eqref{eq:beta} is taken over $m=1,2$, with $t_1, t_2$ being the zeroes of the equation $t^2=z$ and $c_m$ arbitrary constants. Ultimately, one applies the inverse Gelfand transform\n\\[\n({\\mathcal G}^*f)(x)=\\sqrt{\\frac{\\varepsilon}{2\\pi}}\\int\\limits_{-\\pi\/\\varepsilon}^{\\pi\/\\varepsilon}f(t)\\exp({\\rm i}tx)dt,\\quad f\\in L^2\\biggl(-\\frac{\\pi}{\\varepsilon}, \\frac{\\pi}{\\varepsilon}\\biggr),\\qquad x\\in{\\mathbb R},\n\\]\nto the function $\\mathfrak B (x,t):=(1\/\\sqrt{\\varepsilon})\\beta(t)\\mathbbm 1(x),$ {\\it i.e.}\n$$\nv(x):=\\sqrt{\\frac{\\varepsilon}{2\\pi}}\\int_{-\\pi\/\\varepsilon}^{\\pi\/\\varepsilon} \\mathfrak B(x,t) \\exp(i t x) dt, \\qquad x\\in{\\mathbb R}.\n$$\nIt is easily seen that this function is precisely a solution to \\eqref{eq:b-s-problem}.\n\nWe emulate the above argument for the case of interest to us, starting from the eigenvalue problem\n$K(\\tau, z)\\beta=z\\beta,$ which we now treat as an equation in the distributional sense with $K$ given by (\\ref{K_example}). 
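In passing, the closed-form kernel (\ref{K_example}) can be cross-checked against the series representation (\ref{K_general1}): substituting the explicit expressions above, the $j$th term of the series equals $-\frac{2z}{l_2}\bigl(2+2(-1)^{j+1}\Re\theta(\tau)\bigr)/(\mu_j-z)$, and the truncated sum reproduces (\ref{K_example}) numerically. A hedged sketch with arbitrary illustrative parameters (the series is truncated, hence the loose tolerance):

```python
import cmath, math

# arbitrary illustrative parameters
a1, a3 = 1.0, 0.7
l1, l2, l3 = 0.3, 0.4, 0.3
tau, z, t = 0.6, 2.0, 1.5          # t plays the role of tau/epsilon

sigma2 = 1.0 / (l1 / a1**2 + l3 / a3**2)            # sigma^2 as in (Gamma_part)
num = (a1**2 / l1) * cmath.exp(-1j * tau) + a3**2 / l3
re_theta = (num / abs(num)).real                    # Re theta(tau)

# series (K_general1): the j-th term is assembled from
# <v, phi_j> * Gamma_tau(phi_j, 0) = -(2/l2) * (2 + 2*(-1)**(j+1)*re_theta)
series = sum(
    -z * (2 / l2) * (2 + 2 * (-1) ** (j + 1) * re_theta)
    / ((math.pi * j / l2) ** 2 - z)
    for j in range(1, 200001)
)
K_series = (series + (2 / l2) * (1 - re_theta) + sigma2 * t**2) / (l1 + l3)

# closed form (K_example)
s = math.sqrt(z)
K_closed = (2 * s * math.cos(l2 * s) / math.sin(l2 * s)
            - 2 * s / math.sin(l2 * s) * re_theta
            + sigma2 * t**2) / (l1 + l3)

assert abs(K_series - K_closed) < 1e-3
```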
It takes the form\n\\begin{equation}\n(\\sigma t)^2\\beta=\\biggl\\{(l_1+l_3)z-\\frac{2\\sqrt{z}\\cos(l_2\\sqrt{z})}{\\sin(l_2\\sqrt{z})}+\\frac{2\\sqrt{z}}{\\sin(l_2\\sqrt{z})}\\Re \\theta(\\varepsilon t)\\biggr\\}\\beta,\\qquad t=\\frac{\\tau}{\\varepsilon}.\n\\label{spectral_final}\n\\end{equation}\nThe solution is defined by \\eqref{eq:beta}, where $\\{t_m\\}$ is the set of zeroes of the equation $K(\\varepsilon t,z)=z$.\n\nSecond, we argue that the function $\\mathfrak B(x,t)$ as defined above\nis the $\\varepsilon$-periodic Gelfand transform of the solution to a spectral equation on ${\\mathbb R}$ for a differential operator with constant coefficients, where the conventional spectral parameter $z$ is replaced by an expression nonlinear in $z$, as on the right-hand side of (\\ref{spectral_final}).\n\nIndeed, expand the function $\\Re\\theta(\\tau)$ into a Fourier series\n\\[\n\\Re\\theta(\\tau)=\\frac{1}{\\sqrt{2\\pi}}\\sum_{n=-\\infty}^\\infty c_n\\exp({\\rm i}n\\tau),\\qquad\nc_n:=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\pi}^{\\pi}\\Re\\theta(\\tau)\\exp(-{\\rm i}n\\tau)d\\tau,\\qquad n\\in{\\mathbb Z},\n\\]\nand apply to $\\mathfrak B(x,t)$ the inverse\nGelfand transform ${\\mathcal G}^*:$\n\\[\n({\\mathcal G}^*f)(x)=\\sqrt{\\frac{\\varepsilon }{2\\pi}}\\int\\limits_{-\\pi\/\\varepsilon}^{\\pi\/\\varepsilon}f(t)\\exp({\\rm i}tx)dt,\\quad f\\in L^2\\biggl(-\\frac{\\pi}{\\varepsilon }, \\frac{\\pi}{\\varepsilon }\\biggr),\\qquad x\\in{\\mathbb R}.\n\\]\nWe denote $U:={\\mathcal G}^*\\mathfrak B$ and notice that\n\\[\n\\sqrt{\\frac{\\varepsilon }{2\\pi}}\\int\\limits_{-\\pi\/\\varepsilon}^{\\pi\/\\varepsilon}t^2\\mathfrak B(x,t)\\exp({\\rm i}tx)dt=-\\frac{d^2}{dx^2}\\Biggl(\\sqrt{\\frac{\\varepsilon }{2\\pi}}\\int\\limits_{-\\pi\/\\varepsilon}^{\\pi\/\\varepsilon}\\mathfrak B(x,t)\\exp({\\rm i}tx)dt\\Biggr)=-U''(x)\n\\]\nand\n\\begin{align*}\n&\\sqrt{\\frac{\\varepsilon }{2\\pi}}\\int\\limits_{-\\pi\/\\varepsilon}^{\\pi\/\\varepsilon}\\Re\\theta(\\varepsilon 
t)\\mathfrak B(x,t)\\exp({\\rm i}tx)dt=\\sum_{n=-\\infty}^\\infty c_n{\\frac{\\sqrt{\\varepsilon} }{2\\pi}}\\int\\limits_{-\\pi\/\\varepsilon}^{\\pi\/\\varepsilon}\\mathfrak B(x,t)\\exp\\bigl({\\rm i}t(x+\\varepsilon n)\\bigr)dt\\\\\n&=\\frac{1}{\\sqrt{2\\pi}}\\sum_{n=-\\infty}^\\infty c_n U(x+\\varepsilon n)\n\\sim\n\\frac{1}{\\sqrt{2\\pi}}\\sum_{n=-\\infty}^\\infty c_n U(x)\n=\\Re\\theta(0) U(x)=U(x),\\qquad \\varepsilon\\to0.\n\\end{align*}\n\nThe above asymptotic relation as $\\varepsilon\\to0$ is understood in the sense of $W^{-2,2}(\\mathbb R).$ It can be demonstrated, see \\cite{ChKisYe}, that the order of convergence is $O(\\varepsilon^{2})$ (and $O(\\varepsilon)$ in the general case); however, we do not dwell on the complete proof here. The idea of the proof, which is standard, is as follows. Instead of the function $\\beta,$ define $\\beta^0$ by the expression (\\ref{eq:beta}), where the sequence $\\{t_m\\}$ is replaced by the sequence $\\{t_m^0\\}$ of zeros of the equation $K^0(\\tau, z)=z.$ Here $K^0$ is defined by (\\ref{K_example}) with $\\Re\\theta(\\tau)$ replaced by $\\Re\\theta(0)=1.$ It is then shown that $\\beta$ is $O(\\varepsilon^{2})$-close, in the sense of distributions, to $\\beta^0,$ from which one obtains the claim by taking the inverse Gelfand transform of the function ${\\mathfrak B}^0(x, t)=(1\/\\sqrt{\\varepsilon})\\beta^0(t){\\mathbbm 1}(x).$\n\nIt follows that the limit equation for the function $U$ takes the form\n\\begin{equation}\n-\\sigma^2 \\,U''(x)\n=\\biggl\\{(l_1+l_3)z+2\\sqrt{z}\\tan\\biggl(\\frac{l_2\\sqrt{z}}{2}\\biggr)\\biggr\\}U(x),\\qquad x\\in{\\mathbb R}.\n\\label{limit_spectral}\n\\end{equation}\nIn particular, the limit spectrum is given by the set of $z\\in{\\mathbb R}$ for which the expression in brackets on the right-hand side of (\\ref{limit_spectral}) is non-negative, see 
Fig.\\,\\ref{fig:tangens}.\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[scale=1]{tangens.pdf}\n\\end{center}\n\\caption{{\\scshape Dispersion function.} {\\small The plot of the dispersion function on the right-hand side of (\\ref{limit_spectral}), for $L=0.2.$ The spectral gaps are highlighted in bold.}}\n\\label{fig:tangens}\n\\end{figure}\n\n\n\n\n\n\n\\section*{Appendix A: The reduction of the general case to the one treated in Section \\ref{sect:asymp_diag}}\\label{App_Kis}\n\nWe proceed as follows. First, we decompose the graph $\\widehat{\\mathbb G}$ into the union of its stiff and soft components, $\\widehat{\\mathbb G}=\\mathbb G^{\\text{soft}}\\cup \\mathbb G^{\\text{stiff}}$, each of these being a graph on its own. Their common boundary is $\\partial \\mathbb G:=\\mathbb G^{\\text{soft}}\\cap \\mathbb G^{\\text{stiff}},$ and it is treated as a set of vertices. Second, we consider two maximal operators $\\breve A_{\\max}^{\\text{soft}}$ and $\\breve A_{\\max}^{\\text{stiff}},$ which are densely defined in $L_2(\\mathbb G^{\\text{soft}})$ and $L_2(\\mathbb G^{\\text{stiff}})$, respectively, by \\eqref{diff_expr}, \\eqref{domAmax} applied to $\\mathbb G^{\\text{soft}}$ and $\\mathbb G^{\\text{stiff}}$. Furthermore, we introduce the orthogonal projections $P^{\\text{soft}}, P^{\\text{stiff}}$ in the boundary space $\\mathcal H$ onto the subspaces pertaining to vertices of $\\mathbb G^{\\text{soft}}$ and $\\mathbb G^{\\text{stiff}}$, respectively. 
Finally, we construct boundary triples for $\\breve A_{\\max}^{\\text{soft (stiff)}}$ with boundary spaces $P^{\\text{soft (stiff)}}\\mathcal H$ and boundary operators $\\breve\\Gamma_j^{\\text{soft (stiff)}}$, $j=0,1$ ({\\it cf.} \\eqref{boundary_operators}), respectively.\n\nNow consider the restrictions\n\\begin{equation}\n\\begin{aligned}\n&A_{\\max}^{\\text{soft (stiff)}}=\\breve A_{\\max}^{\\text{soft (stiff)}}\\big|_{\\dom(A_{\\max}^{\\text{soft (stiff)}})},\\\\[0.4em]\n&\\dom\\bigl(A_{\\max}^{\\text{soft (stiff)}}\\bigr):=\\Bigl\\{u\\in \\dom\\bigl(\\breve A_{\\max}^{\\text{soft (stiff)}}\\bigr)\\Big| (1-P_{\\partial \\mathbb G})\\breve\\Gamma_1^{\\text{soft (stiff)}}u=0\\Bigr\\},\n\\end{aligned}\n\\label{maternoe_slovo}\n\\end{equation}\nwhere $P_{\\partial \\mathbb G}$ is defined as an orthogonal projection in $\\mathcal H$ onto the subspace pertaining to the vertices belonging to $\\partial \\mathbb G$. For these two maximal operators, one has the common boundary space $P_{\\partial\\mathbb G}\\mathcal H$ and boundary operators defined by\n$$\n\\Gamma_j^{\\text{soft (stiff)}}:=P_{\\partial\\mathbb G} \\breve\\Gamma_j^{\\text{soft (stiff)}},\\quad j=0,1.\n$$\nThe corresponding $M$-matrices $M^{\\text{soft (stiff)}}$ are computed as inverses of the matrices $$P_{\\partial \\mathbb G}\\bigl(\\breve M^{\\text{soft (stiff)}}\\bigr)^{-1}P_{\\partial \\mathbb G},$$ where the latter are considered in the reduced space $P_{\\partial \\mathbb G} \\mathcal H$ and $\\breve M^{\\text{soft (stiff)}}$ are $M$-matrices of $\\breve A_{\\max}^{\\text{soft (stiff)}}$ relative to the boundary triples\n$\\bigl(P^{\\text{soft (stiff)}}\\mathcal H, \\breve\\Gamma_0^{\\text{soft (stiff)}}, \\breve\\Gamma_1^{\\text{soft (stiff)}}\\bigr)$.\n\nIt is easily shown that the operator $A^\\varepsilon_t$ is expressed as an almost solvable extension parameterised by the matrix $B=0$ relative to a triple which has the $M$-matrix $M=M^{\\text{soft}}+M^{{\\rm stiff}}$. 
It follows that all the prerequisites of the analysis carried out in Section \\ref{sect:asymp_diag} are met.\n\n\n\n\n\n\\section*{Appendix B: Proof of Lemma \\ref{mu_lemma}}\n\n\nThe proof can be carried out on the basis of \\cite{Yorzh3}, \\cite{Yorzh4} and is rather elementary. Nevertheless, in the present paper we have elected to follow an alternative approach to this proof, which has the advantage of carrying over to the PDE case with minor modifications.\n\nFor simplicity we set $w_V(e)=1$ for all $e, V$ in (\\ref{Atau1}), as the argument below is unaffected by the concrete choice of the list $\\{w_V(e)\\}_{e\\ni V},$ $V\\in\\widehat{\\mathbb G},$ in the construction of Section \\ref{Gelfand_section}. For convenience, we also assume that the unitary rescaling to a graph of length one has been applied to the operator family $A_t^\\varepsilon$. For brevity, we keep the same notation for the unitary images of graphs $\\widehat{\\mathbb{G}}$, $\\mathbb{G}^{\\rm stiff}$ and $\\partial \\mathbb G$ under this transform.\n\nFor each $\\tau\\in[-\\pi, \\pi),$ the eigenvalues of $B_0(\\tau)$ are those $\\mu\\in{\\mathbb C}$ for which there exists $u\\neq0$ satisfying\n\\begin{equation}\n\\left\\{\\begin{array}{ll}\\biggl(\\dfrac{d}{dx}+{\\rm i}\\tau\\biggr)^2u=0\\quad{\\rm in}\\ {\\mathbb G}^{\\rm stiff}, \\\\[1.2em]\n-\\sum_{e\\ni V}\\sigma_e\\bigl(u'_{e}(V)+{\\rm i}\\tau u(V)\\bigr)=\\mu u(V),\\quad V\\in\\partial{\\mathbb G},\\\\[1em]\nu\\ {\\rm continuous\\ on\\ }{\\mathbb G}^{\\rm stiff},\n\\end{array}\\right.\n\\label{problem}\n\\end{equation}\nwhere $u'_{e}(V)$ is the derivative of $u$ along the edge $e$ of ${\\mathbb G}^{\\rm stiff}$ evaluated at $V\\in\\partial{\\mathbb G},$ and, as before,\n$\\sigma_{e}=-1$ or $\\sigma_{e}=1,$ depending on whether $e$ is incoming or outgoing for $V,$ respectively.\nIt is known that the spectrum of (\\ref{problem}) is discrete and the least eigenvalue, which clearly coincides with $\\mu^{(\\tau)},$ is simple.\n\n{\\it Formal 
series.} In order to show (\\ref{mu_asymptotics}), we first consider series in powers of ${\\rm i}\\tau:$\n\\begin{equation}\n\\mu=\\sum_{k=1}^\\infty\\alpha_{2k}({\\rm i}\\tau)^{2k},\\qquad\n u=\\sum_{j=0}^\\infty u_j({\\rm i}\\tau)^j,\n \\label{expansion}\n\\end{equation}\nwhere $u_j,$ $j=0,1,\\dots,$ are continuous on ${\\mathbb G}^{\\rm stiff}.$\n\n\nNote that the expansion for $\\mu$ contains only even powers of the parameter $\\tau,$ as it is an even function of $\\tau.$ Indeed, the function obtained from the eigenfunction $u$ in (\\ref{problem}) by changing the directions of all edges of the graph is clearly an eigenfunction for (\\ref{problem}) with $\\tau$ replaced by $-\\tau.$ (On such a change of edge direction, the weights $w_V(e),$ ${e\\ni V},$ $V\\in\\widehat{\\mathbb G},$ are replaced by their complex conjugates.) In view of the fact that for all $\\tau\\in[-\\pi,\\pi)$ the eigenvalue $\\mu^{(\\tau)}$ is simple, we obtain $\\mu^{(-\\tau)}=\\mu^{(\\tau)}.$\n\nSubstituting the expansion (\\ref{expansion}) into (\\ref{problem}) and equating the coefficients of the different powers of\n$\\tau,$ we obtain a sequence of recurrence relations for $u_j,$ $j=0,1,\\dots$ In particular, the problem for $u_0$ is obtained by comparing the coefficients of $\\tau^0:$\n\\[\n\\left\\{\\begin{array}{lll}u_0''=0\\ \\ \\ {\\rm on}\\ \\ {\\mathbb G}^{\\rm stiff}, \\\\[0.5em]\n\\sum_{e\\ni V}\\sigma_e(u_0)_e'(V)=0,\\quad V\\in\\partial{\\mathbb G},\\\\[0.6em]\nu_0\\ {\\rm continuous\\ on\\ }{\\mathbb G}^{\\rm stiff}.\n\\end{array}\\right.\n\\]\nIf ${\\mathbb G}^{\\rm stiff}$ contains a loop, it follows that\n$u_0$ is a constant, which we set to be unity. 
In the opposite case, i.e., when ${\\mathbb G}^{\\rm stiff}$ is a tree, $\\mu^{(\\tau)}\\equiv 0$ for all $\\tau$, and the claim of the Lemma follows trivially.\n\nWe impose the condition of vanishing mean of $u_j,$ $j=1,2,\\dots$ over ${\\mathbb G}^{\\rm stiff}.$ This is justified by the convergence estimates below as well as the fact that the eigenvalue $\\mu$ is simple. The choice $u_0=1$ thus corresponds to the ``normalisation\" condition that the mean over ${\\mathbb G}^{\\rm stiff}$ of the eigenfunction $u$ for (\\ref{problem}) is close to unity\\footnote{The eigenfunction $u$ clearly does not vanish identically, at least for small values of $\\tau.$} for small values of $\\tau.$\n\n\nProceeding with the asymptotic procedure, the problem for $u_1$ is obtained by comparing the coefficients of $\\tau^1:$\n\\[\n\\left\\{\\begin{array}{ll}u''_1=0\\ \\ {\\rm on}\\ \\ {\\mathbb G}^{\\rm stiff}, \\\\[0.7em]\n\\sum_{e\\ni V}\\sigma_e\\bigl((u_1)_e'(V)+1\\bigr)=0,\\quad V\\in\\partial{\\mathbb G},\\\\[0.8em]\nu_1\\ {\\rm continuous\\ on\\ }{\\mathbb G}^{\\rm stiff},\\\\[0.8em]\n\\int_{{\\mathbb G}^{\\rm stiff}}u_1=0.\n\\end{array}\\right.\n\\]\nFurther, the equation for $u_2$ is obtained by comparing the coefficients of $\\tau^2:$\n\\begin{equation}\n\\left\\{\\begin{array}{ll}u''_2=-2u'_1-1 \\ \\ {\\rm on}\\ \\ {\\mathbb G}^{\\rm stiff}, \\\\[1.1em]\n-\\sum_{e\\ni V}\\sigma_e\\bigl((u_2)_e'(V)+u_1(V)\\bigr)=\\alpha_2,\\quad V\\in\\partial{\\mathbb G},\\\\[1.2em]\nu_2\\ {\\rm continuous\\ on\\ }{\\mathbb G}^{\\rm stiff},\\\\[0.8em]\n\\int_{{\\mathbb G}^{\\rm stiff}}u_2=0.\n\\end{array}\\right.\n\\label{u_2}\n\\end{equation}\n The condition for solvability of the problem (\\ref{u_2}) yields the expression for $\\alpha_2,$ as follows:\n\\[\n\\int_{{\\mathbb G}^{\\rm stiff}}(-2u'_1-1)=\\int_{{\\mathbb G}^{\\rm stiff}}u''_2=-\\sum_{V\\in\\partial{\\mathbb G}}\\ \\sum_{e\\ni V}\\sigma_e(u_2)_e'(V)\n=\\sum_{V\\in\\partial{\\mathbb G}}\\Bigl(\\sum_{e\\ni V}\\sigma_e 
u_1(V)+\\alpha_2\\Bigr).\n\\]\nRe-arranging the terms in the last equation, we obtain\n\\[\n\\alpha_2=-\\bigl\\vert\\partial{\\mathbb G}\\bigr\\vert^{-1}\\int_{{\\mathbb G}^{\\rm stiff}}(u'_1+1).\n\\]\nThe above asymptotic procedure is continued, to obtain the terms of all orders in (\\ref{expansion}). In particular, for the term $u_3$ in the expansion for $u$ we obtain\n\\begin{equation*}\n\\left\\{\\begin{array}{ll}u''_3=-2u'_2-u_1 \\ \\ {\\rm on}\\ \\ {\\mathbb G}^{\\rm stiff}, \\\\[1.1em]\n-\\sum_{e\\ni V}\\sigma_e\\bigl((u_3)_e'(V)+u_2(V)\\bigr)=\\alpha_2u_1,\\quad V\\in\\partial{\\mathbb G},\\\\[1.2em]\nu_3\\ {\\rm continuous\\ on\\ }{\\mathbb G}^{\\rm stiff},\\\\[0.8em]\n\\int_{{\\mathbb G}^{\\rm stiff}}u_3=0.\n\\end{array}\\right.\n\\label{u_3}\n\\end{equation*}\n\n\n{\\it Error estimates.}\nWe write\n\\[\nu=1+{\\rm i}\\tau u_1+({\\rm i}\\tau)^2u_2+({\\rm i}\\tau)^3u_3+R,\\qquad \\mu^{(\\tau)}=\\alpha_2({\\rm i}\\tau)^2+r,\n\\]\nso that $R,$ $r$ satisfy\n\\begin{empheq}[right=\\empheqrbrace]{align}\n&\\biggl(\\dfrac{d}{dx}+{\\rm i}\\tau\\biggr)^2R=-({\\rm i}\\tau)^4(2u_3'+u_2)-({\\rm i}\\tau)^5u_3\\quad \\text{ on } \\mathbb{G}^{\\rm stiff},\\label{R_equation}\n\\\\[0.4em]\n&-\\sum_{e\\ni V}\\sigma_e (R'_{e}(V)+{\\rm i}\\tau R(V))=\\label{bc}\\\\\n&=\\bigl(r+\\alpha_2({\\rm i}\\tau)^2\\bigr)\n\\bigl(1+{\\rm i}\\tau u_1+({\\rm i}\\tau)^2u_2+({\\rm i}\\tau)^3u_3+R\\bigr)\\nonumber\\\\[0.5em]\n&-\\alpha_2({\\rm i}\\tau)^2(1+{\\rm i}\\tau u_1),\\quad V\\in\\partial \\mathbb{G}\\nonumber\\\\[0.4em]\n&R\\ {\\rm continuous\\ on\\ }{\\mathbb G}^{\\rm stiff},\\nonumber\\\\[0.4em]\n&\\int_{{\\mathbb G}^{\\rm stiff}}R=0.\\ \\ \\ \\ \\ \\ &\\nonumber\n\\end{empheq}\n\nNotice first that\n\\begin{multline}\nr+\\alpha_2({\\rm i}\\tau)^2=\\mu^{(\\tau)}=\\min_{u\\in W^{2,2}({\\mathbb G}^{\\rm stiff})}\\biggl(\\sum_{\\partial{\\mathbb G}}\\vert u\\vert^2\\biggr)^{-1}\\int_{{\\mathbb G}^{\\rm stiff}}\\Biggl\\vert\\biggl(\\dfrac{d}{dx}+{\\rm i}\\tau\\biggr)u\\Biggr\\vert^2 
\\le\\bigl\\vert\\partial{\\mathbb G}\\bigr\\vert^{-1}\\bigl\\vert {\\mathbb G}^{\\rm stiff}\\bigr\\vert\\tau^2.\n\\label{mu_est}\n\\end{multline}\nMultiplying (\\ref{R_equation}) by $R$, integrating by parts, and using (\\ref{bc}), we obtain the estimate\n\\begin{equation}\n\\Vert R\\Vert_{L^2({\\mathbb G}^{\\rm stiff})}^2\\le C\\bigl(\\vert\\tau\\vert\\vert r\\vert\\Vert R\\Vert_{L^2({\\mathbb G}^{\\rm stiff})}+\\vert\\tau\\vert^4\\Vert R\\Vert_{L^2({\\mathbb G}^{\\rm stiff})}+\\vert r\\vert^2\\bigr),\\qquad C>0,\n\\label{R_est}\n\\end{equation}\nand hence, by virtue of (\\ref{mu_est}), we obtain\n\\begin{equation}\n\\Vert R\\Vert_{L^2({\\mathbb G}^{\\rm stiff})}\\le C\\tau^2.\n\\label{first_R_estimate}\n\\end{equation}\n\nNext, we re-arrange the right-hand side of (\\ref{bc}):\n\\begin{multline*}\n\\bigl(r+\\alpha_2({\\rm i}\\tau)^2\\bigr)\n\\bigl(1+{\\rm i}\\tau u_1+({\\rm i}\\tau)^2u_2+({\\rm i}\\tau)^3u_3+R\\bigr)-\\alpha_2({\\rm i}\\tau)^2(1+{\\rm i}\\tau u_1)\\\\[0.5em]=r\\bigl(1+{\\rm i}\\tau u_1+({\\rm i}\\tau)^2u_2+({\\rm i}\\tau)^3u_3+R\\bigr)+\\alpha_2({\\rm i}\\tau)^2\\bigl(({\\rm i}\\tau)^2u_2+({\\rm i}\\tau)^3u_3+R\\bigr).\n\\end{multline*}\nMultiplying (\\ref{R_equation}) by $1$, integrating by parts, and using (\\ref{bc}) once again yields the existence of $C>0$ such that\n\\begin{equation}\n\\vert r\\vert\\le C\\bigl(\\vert\\tau\\vert\\Vert R\\Vert_{L^2({\\mathbb G}^{\\rm stiff})}+\\vert\\tau\\vert^4\\bigr).\n\\label{r_estimate}\n\\end{equation}\nCombining this with (\\ref{first_R_estimate}) yields $\\vert r\\vert\\le C\\vert\\tau\\vert^3,$ which, by virtue of (\\ref{R_est}) again, implies\n\\begin{equation}\n\\Vert R\\Vert_{L^2({\\mathbb G}^{\\rm stiff})}\\le C\\vert\\tau\\vert^3.\n\\label{second_R_estimate}\n\\end{equation}\nFinally, the inequalities (\\ref{r_estimate}) and (\\ref{second_R_estimate}) together yield\n\\begin{equation}\n\\vert r\\vert\\le C|\\tau|^4,\n\\label{r_final}\n\\end{equation}\nas claimed.\\footnote{Combining (\\ref{r_final}) with 
(\\ref{mu_est}), we also obtain the estimate $\\Vert R\\Vert_{L^2({\\mathbb G}^{\\rm stiff})}\\le C\\tau^4.$}\n\n\n\n\n\\section*{Appendix C: Proof of Lemma \\ref{bound_below_lemma}}\n\n\nFor all $\\tau\\in[-\\pi,\\pi),$ using the formula for the second eigenvalue $\\mu_2^{(\\tau)}$ of the problem (\\ref{problem}) via the Rayleigh quotient, we obtain\n\\begin{align*}\n\\mu_2^{(\\tau)}&=\\min\\Biggl\\{\\biggl(\\sum_{\\partial{\\mathbb G}}\\vert u\\vert^2\\biggr)^{-1}\\int_{{\\mathbb G}^{\\rm stiff}}\\Biggl\\vert\\biggl(\\dfrac{d}{dx}+{\\rm i}\\tau\\biggr)u\\Biggr\\vert^2: u\\in W^{2,2}({\\mathbb G}^{\\rm stiff}), \\int_{{\\mathbb G}^{\\rm stiff}}u=0\\Biggr\\}\\\\[0.9em]\n&\\ge \\min\\Biggl\\{\\biggl(\\sum_{\\partial{\\mathbb G}}\\vert u\\vert^2\\biggr)^{-1}\\int_{{\\mathbb G}^{\\rm stiff}}\\vert u'\\vert^2: u\\in W^{2,2}({\\mathbb G}^{\\rm stiff}), \\int_{{\\mathbb G}^{\\rm stiff}}u=0\\Biggr\\}=\\mu_2^{(0)}>0,\n\\end{align*}\nfrom which the claim follows by setting $C_\\perp=\\mu_2^{(0)}.$\n\n\n\\section*{Acknowledgements}\n\nWe are grateful to Professor S. Naboko for suggesting a calculation in Section 8.\n\n\\bibliographystyle{siamplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}